decoded whether any given trial was self or other (i.e., self vs. charity, self vs. intragroup stranger)

- features for classifier: `["effort", "reward", "choice_rt", "task_rt", "task_acc"]` (no `choice`)
- also tried excluding `choice_rt` from the models (i.e., no `choice` and no `choice_rt`); results are very similar
	- features: `["effort", "reward", "task_rt", "task_acc"]`
	- here, the model literally isn't trained on anything related to the choice itself: just the reward/effort shown on that trial and how people performed on the math task

rationale for this analysis

- If I think I'm very similar to the target (e.g., self is similar to charity), then self-target representations should overlap a lot, and the classifier should be **worse** at decoding whether I was choosing for self or other on any given trial. What classifiers try to do is find the decision boundary that best separates two classes (e.g., self/charity; see image below for visual intuition). If the two classes overlap a lot, decoding accuracy will be around chance level because the data points for the two classes overlap in feature space, making them difficult to separate.
- **Thus, if decoding accuracies are high, then self-other overlap is low.**
- **If decoding accuracies are close to chance, then self-other overlap is high.**

![[Pasted image 20220320214246.png]]

# results

Lower decoding accuracies for charity than for stranger (relative to self). Thus, charity-self representations overlap more than stranger-self representations; i.e., self overlaps more with charity than with the stranger. Consistent with the behavioral finding that people are more willing to exert effort for the charity than for the stranger.

```r
          term                                                  results
1: (Intercept)  b = 0.54, SE = 0.02, t(59) = 31.34, p < .001, r = 0.97  # charity-vs-self decoding accuracy is 54%
2:    stranger  b = 0.06, SE = 0.01, t(45) = 4.36, p < .001, r = 0.54   # stranger-vs-self decoding accuracy is 6 percentage points higher than charity-vs-self

# when choice_rt is also excluded as a feature (remember, choice is already excluded from all models)
          term                                                  results
1: (Intercept)  b = 0.52, SE = 0.01, t(64) = 38.16, p < .001, r = 0.98
2:    stranger  b = 0.04, SE = 0.01, t(45) = 3.10, p = .003, r = 0.42

# all subjects - ignore exclusion criteria (model with choice_rt)
          term                                                  results
1: (Intercept)  b = 0.51, SE = 0.01, t(127) = 48.17, p < .001, r = 0.97
2:    stranger  b = 0.04, SE = 0.008, t(93) = 4.23, p < .001, r = 0.40
```
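For concreteness, here's a minimal sketch of how the per-subject decoding accuracies above could be computed. This is not the actual analysis code: the table name `dt_trials`, its column names, and the plain 10-fold cross-validated logistic-regression classifier are assumptions for illustration.

```r
# minimal sketch (assumed pipeline, not the actual code)
# `dt_trials` is assumed to be a data.table with one row per trial and columns:
# subject, target ("self", "charity", "intragroup stranger"),
# effort, reward, choice_rt, task_rt, task_acc
library(data.table)

features <- c("effort", "reward", "choice_rt", "task_rt", "task_acc")  # no `choice`

decode_subject <- function(d, k = 10) {
  y <- as.integer(d$target != "self")          # 1 = other, 0 = self
  X <- scale(as.matrix(d[, ..features]))       # standardize features
  folds <- sample(rep_len(1:k, nrow(d)))       # random k-fold assignment
  acc <- sapply(1:k, function(f) {
    train <- folds != f
    fit <- glm(y[train] ~ X[train, , drop = FALSE], family = binomial)
    pred <- as.integer(cbind(1, X[!train, , drop = FALSE]) %*% coef(fit) > 0)
    mean(pred == y[!train])                    # held-out accuracy for this fold
  })
  mean(acc)                                    # cross-validated decoding accuracy (chance ~ .50)
}

# per-subject decoding accuracy for the charity-vs-self comparison
decode_acc_charity <- dt_trials[target %in% c("self", "charity"),
                                .(decode_acc = decode_subject(.SD)), by = subject]
```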
## decoding accuracy correlates with difference in effortful choice between targets

Negative relationship between decoding accuracy and the self-other difference in effortful choice. Each dot is one subject. Subjects with higher decoding accuracy (x-axis; i.e., less self-other overlap) are less willing to exert effort for others.

```r
> dt3[, summaryh(lm(choice_diff ~ decode_acc)), comparison]
            comparison        term                                                    results
2:             charity  decode_acc  b = −1.08, SE = 0.29, t(44) = −3.72, p < .001, r = −0.49
4: intragroup stranger  decode_acc  b = −1.24, SE = 0.28, t(44) = −4.43, p < .001, r = −0.56

# when choice_rt is also excluded as a feature (remember, choice is already excluded from all models)
> dt3[, summaryh(lm(choice_diff ~ decode_acc)), comparison]
            comparison        term                                                    results
2:             charity  decode_acc  b = −1.40, SE = 0.33, t(44) = −4.20, p < .001, r = −0.54
4: intragroup stranger  decode_acc  b = −1.73, SE = 0.37, t(44) = −4.70, p < .001, r = −0.58

# all subjects - ignore exclusion criteria (model with choice_rt)
> dt3[, summaryh(lm(choice_diff ~ decode_acc)), comparison]
            comparison        term                                                    results
2:             charity  decode_acc  b = −1.26, SE = 0.20, t(92) = −6.25, p < .001, r = −0.55
4: intragroup stranger  decode_acc  b = −1.53, SE = 0.18, t(92) = −8.36, p < .001, r = −0.66
```

![[decode_target.png|800]]

all subjects - ignore exclusion criteria

![[decode_target_allsubjs.png|800]]

## decoding accuracy correlates with big five compassion

Negative relationship between decoding accuracy and the big five aspects compassion subscale. Each dot is one subject. Subjects with higher decoding accuracy (x-axis; i.e., less self-other overlap) are less compassionate.

```r
> dt4[, summaryh(lm(decode_acc ~ bfas_agree_compassion_score)), comparison][term != "(Intercept)"]
            comparison                         term                                                    results
1:             charity  bfas_agree_compassion_score  b = −0.08, SE = 0.02, t(44) = −4.07, p < .001, r = −0.52
2: intragroup stranger  bfas_agree_compassion_score  b = −0.07, SE = 0.03, t(44) = −2.41, p = .020, r = −0.34

# when choice_rt is also excluded as a feature (remember, choice is already excluded from all models)
            comparison                         term                                                    results
1:             charity  bfas_agree_compassion_score  b = −0.05, SE = 0.02, t(44) = −2.64, p = .011, r = −0.37
2: intragroup stranger  bfas_agree_compassion_score  b = −0.03, SE = 0.02, t(44) = −1.35, p = .183, r = −0.20  # no longer significant but still negative

# all subjects - ignore exclusion criteria (model with choice_rt)
> dt4[, summaryh(lm(decode_acc ~ bfas_agree_compassion_score)), comparison][term != "(Intercept)"]
            comparison                         term                                                    results
1:             charity  bfas_agree_compassion_score  b = −0.05, SE = 0.01, t(92) = −3.55, p < .001, r = −0.35
2: intragroup stranger  bfas_agree_compassion_score  b = −0.04, SE = 0.02, t(92) = −2.19, p = .031, r = −0.22
```

![[decode_compassion.png|800]]

all subjects - ignore exclusion criteria

![[decode_compassion_allsubjs.png|800]]
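The scatterplots above could be reproduced with something like the following ggplot2 sketch. This is not the original plotting code; it assumes `dt4` has one row per subject per comparison, with columns `decode_acc`, `bfas_agree_compassion_score`, and `comparison`.

```r
# minimal ggplot2 sketch of the decoding-accuracy vs. compassion scatterplots (assumed column names)
library(ggplot2)

ggplot(dt4, aes(x = decode_acc, y = bfas_agree_compassion_score)) +
  geom_point(alpha = 0.6) +                # one dot per subject
  geom_smooth(method = "lm", se = TRUE) +  # linear fit with confidence band
  facet_wrap(~ comparison) +               # one panel per target comparison
  labs(x = "decoding accuracy (self vs. other)", y = "BFAS compassion")
```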