Andrew/Mickey: Which contrast to use?
- effort vs performance (original, reviewers/editor didn't comment on them, so stick with them?)
- effort vs neutral
Mickey: Given the pandemic, are we planning on running this online or in person? If in person, do we need to perhaps give ourselves more time? We might not be able to start in-person testing until the fall, and I wonder if a word or two in the cover letter about the pandemic would be wise...
Andrew: More on equating of rewards
- But it occurs to me that it might be advantageous to have the difference in expected reward between performing well vs poorly in the performance condition roughly match the average difference in reward for choosing the hard vs the easy option in the effort condition. This balance would be nice for ruling out differences in expected reward driving things differentially across conditions.
- The average difference in reward for choosing the hard vs easy option in the effort condition is 140 (370 - 230). Probably being dense here... but are you suggesting that in the performance condition, we do something like first split performance into two groups of trials (e.g., median-split trials based on performance; i.e., split at the dotted vertical line in Fig. 1c's middle panel to bin trials into "poor" and "good" performance)? Then we try to ensure the average difference in reward between the poor and good performance trials is also 140? If we do it this way, the average difference will always be less than 140, right? Or maybe I've misunderstood and you have something else in mind...
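To make the median-split idea concrete, here is a minimal sketch of the computation I have in mind. All the numbers and variable names are illustrative (made-up per-trial data, not the pilot values); the point is just the good-minus-poor reward difference after a median split:

```python
import numpy as np

# Hypothetical per-trial data: a performance score and an earned reward per trial.
rng = np.random.default_rng(0)
performance = rng.normal(0.5, 0.15, size=200)               # e.g., accuracy per trial
reward = 230 + 280 * performance + rng.normal(0, 20, 200)   # reward increases with performance

# Median-split trials into "poor" vs "good" bins
# (the dotted vertical line in Fig. 1c's middle panel).
median = np.median(performance)
poor = reward[performance <= median]
good = reward[performance > median]

# Average reward difference between good- and poor-performance trials;
# the question is whether this can be pushed up to the 140-point
# hard-minus-easy difference (370 - 230) from the effort condition.
diff = good.mean() - poor.mean()
print(round(diff, 1))
```

With data like these the split-based difference comes out well under 140, which is what makes me think the average difference will always fall short unless the reward curves themselves are changed.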
---
Reading this the first time through, I stumbled a bit wondering how these conclusions differ from those at the start of this paragraph. Why would you look to different sets of hypotheses (3&4 vs 3&6) to reach the same conclusion (that the data support near-transfer effects)? Perhaps just cut the material following "Second,...", or make it more clearly distinct from what follows "First,...".
---
As for your next point: in the pilot data, the rewards for the easy and hard tasks in the performance condition are nearly identical (about 300), which is to be expected since the two curves (the easy and hard S-curves) are basically the same line. So it seems we might have to change the forms/locations of the curves if we want to equate the difference in expected reward for performing the easy vs hard task to the difference in choice-based rewards?
Here are two other forms we piloted previously that address this problem: https://www.dropbox.com/s/abjonv8ijndrzch/forms.png?dl=0 We eventually decided to sacrifice incentive compatibility but introduced other elements (e.g., tight, personalized RT deadlines) to partly compensate for the lack of it. The main reason was that the "gaps" between the two lines in the performance condition seemed to confuse participants by blurring the distinction between the effort and performance conditions, i.e., potentially reducing the effectiveness of our manipulations.
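As a sketch of why the curves' locations matter for equating rewards: assuming logistic (S-shaped) reward schedules over performance and a normal performance distribution (all parameter values here are illustrative, not the actual pilot settings), overlapping easy/hard curves yield identical expected rewards, and only separating their midpoints opens a gap:

```python
import numpy as np

def expected_reward(loc, scale, max_reward=600, n=100_000, seed=0):
    """Monte-Carlo expected reward when reward is a logistic (S-) curve of performance.

    loc/scale set the curve's midpoint and steepness; performance is assumed
    ~Normal(0.5, 0.15). All numbers are illustrative placeholders.
    """
    rng = np.random.default_rng(seed)
    perf = rng.normal(0.5, 0.15, n)
    return (max_reward / (1 + np.exp(-(perf - loc) / scale))).mean()

# Easy and hard curves sitting on the same line -> equal expected rewards.
easy = expected_reward(loc=0.45, scale=0.05)
hard_same = expected_reward(loc=0.45, scale=0.05)

# Shifting the hard curve's midpoint rightward separates the expected rewards.
hard_shifted = expected_reward(loc=0.65, scale=0.05)

print(easy - hard_same)     # 0: overlapping curves give identical expected reward
print(easy - hard_shifted)  # positive gap once the midpoints are separated
```

This is the trade-off behind the forms in the linked figure: separating the curves buys a reward gap but introduces the confusing visual "gaps" between the lines.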