Second, based on pilot data, the effort-based reward manipulation had its intended effect (i.e., pilot participants chose the high-effort task more frequently, which is unsurprising given that the goal of the effort-based reward function was precisely to get participants to prefer the high-effort task), whereas the performance-based reward manipulation did not appear to shift effort choices in either direction, suggesting that it can serve as a good control condition (though this pattern may change with the larger sample sizes in the actual study).
Moreover, in addition to the covariate models we have pre-registered in Table 1, we can run exploratory models that could provide further insight into whether and how our manipulations add to or take away from each other. Recall that participants perform the task both before and after training, allowing us to determine whether the training increased or decreased effort choices relative to baseline. For example, we could examine within-subject changes over time by fitting interaction models (time [pre/post-training] x condition [effort/performance/mixed]) and change-score models (post-training minus pre-training) (for examples, see models in Supplementary information). We have opted to pre-register covariate models (Table 1) because they are the most appropriate for testing our central hypotheses, namely whether our experimental manipulations will lead to group differences (for details, see Pearl, 2016, http://dx.doi.org/10.1515/jci-2016-0021).
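To make the exploratory models concrete, the sketch below illustrates how the interaction and change-score models could be specified in Python with statsmodels. It is a minimal sketch rather than our registered analysis code, and it assumes a hypothetical long-format data frame with illustrative column names (subject, condition, time, effort_choice) and purely made-up placeholder data.

```python
# Illustrative sketch only (not the registered analysis code).
# Assumes hypothetical columns: subject, condition, time, effort_choice.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Placeholder data: 30 simulated subjects per condition, measured pre and post.
rows = []
for c_idx, cond in enumerate(["effort", "performance", "mixed"]):
    for s in range(30):
        subject = f"{cond}_{s}"
        for time in ["pre", "post"]:
            # Proportion of high-effort choices; effect sizes are made up.
            mu = 0.5 + (0.15 * c_idx if time == "post" else 0.0)
            rows.append({"subject": subject,
                         "condition": cond,
                         "time": time,
                         "effort_choice": np.clip(rng.normal(mu, 0.1), 0, 1)})
df_long = pd.DataFrame(rows)

# 1) Interaction model: time (pre/post) x condition, with a random
#    intercept per subject to account for the repeated measurements.
interaction_model = smf.mixedlm(
    "effort_choice ~ C(time, Treatment('pre')) * C(condition)",
    data=df_long,
    groups=df_long["subject"],
).fit()
print(interaction_model.summary())

# 2) Change-score model: regress (post - pre) effort choices on condition.
df_wide = df_long.pivot(index=["subject", "condition"],
                        columns="time",
                        values="effort_choice").reset_index()
df_wide["change"] = df_wide["post"] - df_wide["pre"]
change_model = smf.ols("change ~ C(condition)", data=df_wide).fit()
print(change_model.summary())
```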
Critically, even if the performance-based reward manipulation does not add to or take away from the effect on rewarded trials during the training block (i.e., Hypothesis 1), the more important questions are whether this manipulation changes people’s choices on unrewarded/probe trials during training (i.e., Hypothesis 2) and on unrewarded trials on the same and a different task in the post-training block (i.e., Hypotheses 3 and 4, respectively).