- [[20211031_211621 decode results - 4 features - effort-reward rt taskacc taskrt|analyses/results with 4 features - no choice]]
Features used in these analyses: `["rewardS|effortS", "rtS", "choiceEffortS", "accUpdatingS", "rtUpdatingS"]`
I reran the original analyses. Because there is some randomness in the machine learning analyses, the numbers are not identical to the original values, but the results and conclusions are the same as before. Robust results.
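Since the decoding involves stochastic steps (e.g., cross-validation fold assignment or permutations), a minimal sketch of how reruns could be made exactly reproducible, assuming base R's RNG drives the resampling (the seed value is arbitrary):

```r
# fix the RNG state so fold splits / permutations are identical across reruns
set.seed(2021)
```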
# choice ~ matrix similarity correlations
## decode pairwise effort
```r
> # correlations
> dt_dist_choice[, summaryh(lm(choice_diff ~ fisherz)), by = comparison][term != '(Intercept)']
comparison term results
1: charity vs intragroup stranger fisherz b = −0.10, SE = 0.05, t(43) = −1.79, p = .081, r = −0.26
2: self vs charity fisherz b = 0.10, SE = 0.05, t(43) = 2.02, p = .050, r = 0.29
3: self vs intragroup stranger fisherz b = 0.27, SE = 0.05, t(43) = 5.72, p < .001, r = 0.66
# interaction
F(2, 129) = 13.49, p < .001
```
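The per-comparison slopes above come from `summaryh(lm(...))` run within each level of `comparison`; the interaction F-test is consistent with a pooled model where the `fisherz` slope is allowed to vary by comparison (6 coefficients on 3 × 45 = 135 rows give the 129 residual df). A minimal sketch, assuming the same `dt_dist_choice` data and that `comparison` is a factor:

```r
# does the fisherz-choice_diff slope differ across the three comparisons?
fit <- lm(choice_diff ~ fisherz * comparison, data = dt_dist_choice)
anova(fit)  # the fisherz:comparison row corresponds to F(2, 129)
```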
![[study2_effortmatrix.jpg]]
![[matrix-behav-correlations.png]]
![[feature_weights 1.png]]
## decode pairwise reward
```r
# correlations
comparison term results
1: charity_otherperson fisherz b = −0.04, SE = 0.11, t(43) = −0.42, p = .680, r = −0.06
2: charity_self fisherz b = −0.02, SE = 0.09, t(43) = −0.20, p = .846, r = −0.03
3: otherperson_self fisherz b = 0.05, SE = 0.09, t(43) = 0.57, p = .573, r = 0.09
# interaction
F(2, 129) = 0.29, p = .751
```
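In both models the `fisherz` predictor is presumably each participant's matrix-similarity correlation after the Fisher z-transform, which variance-stabilizes r before it enters the regression. A minimal sketch (`r_matrix` is a hypothetical vector of similarity correlations):

```r
# Fisher z-transform: atanh(r) = 0.5 * log((1 + r) / (1 - r))
fisherz <- atanh(r_matrix)
```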
![[study2_rewardmatrix.jpg]]
![[matrix-behav-correlations-yReward.png]]
![[feature_weights-yReward.png]]