## Subjects per condition and counterbalancing

- recruitment $N = 496$
- `treat_img_veracity` is whether the single treatment headline is fake or real.

```r
   condition treat_img_veracity   n    choice
1:  accuracy               fake 118  5.084746  # 5% said it's real
2:  accuracy               real 128 78.125000  # 78% said it's real
3:     funny               fake 126 24.603175  # 24% said it's funny
4:     funny               real 124  2.419355  # 2% said it's funny
```

People find the fake news headline funny? Is fake content deemed funnier?

- [ ] any RT differences? maybe faster to make funny (vs. accuracy) judgments?

## Exclusion criteria

- [osf prereg](https://osf.io/d28ec)
- total trials: 31744; remaining trials: 29265 (excluded 7.81% of trials for too-fast/too-slow RTs)
- 6 subjects had fewer than 20 trials per block left (490 subjects remain)
- final $n = 395$ (included only subjects who passed at least 2 attention checks)

## Discernment analysis

- [[20210414_103645 exclude subjects for diff reasons]]

For each block, discernment = proportion "yes" for real headlines minus proportion "yes" for fake headlines (a computation sketch follows the covariate models below). At baseline (block 1, `discern_1`), subjects in the accuracy condition have lower discernment than those in the funny condition.

```r
   condition  discern_1  discern_2
1:     funny 0.09479062 0.08674513
2:  accuracy 0.06036296 0.08149903  # lower in both blocks relative to funny condition, but increased from block 1 to 2
```

![[share_discernment.png]]

Correct direction, but weak effects? See also [[20210406_161405 results without excluding subjects who fail attention checks|without excluding subjects who fail attention checks]]

### Covariate model

- all trials in block 2

```r
> m_discern <- lm(discern_2 ~ condition + discern_1, data = dtchoiceavg_wide)
> summaryh(m_discern)
                term                                                  results
1:       (Intercept)  b = 0.02, SE = 0.01, t(392) = 2.02, p = .044, r = 0.10
2: conditionaccuracy  b = 0.02, SE = 0.02, t(392) = 1.07, p = .287, r = 0.05  # directionally better discernment for accuracy condition (n.s.)
3:         discern_1  b = 0.66, SE = 0.04, t(392) = 17.49, p < .001, r = 0.66
```

Add the `crt*condition` interaction, still controlling for block 1 discernment.

```r
> m_discern_crt <- lm(discern_2 ~ conditionEC * crtC + discern_1C, data = dtchoiceavg_wide)
> summaryh(m_discern_crt)
               term                                                    results
1:      (Intercept)  b = 0.08, SE = 0.008, t(390) = 10.33, p < .001, r = 0.46
2:      conditionEC  b = 0.02, SE = 0.02, t(390) = 0.99, p = .324, r = 0.05
3:             crtC  b = 0.03, SE = 0.03, t(390) = 1.25, p = .211, r = 0.06  # higher CRT = better discernment (n.s.)
4:       discern_1C  b = 0.65, SE = 0.04, t(390) = 17.18, p < .001, r = 0.66
5: conditionEC:crtC  b = −0.02, SE = 0.05, t(390) = −0.32, p = .747, r = −0.02
```

- first 16 trials in block 2

```r
                term                                                  results
1:       (Intercept)  b = 0.02, SE = 0.02, t(391) = 1.52, p = .130, r = 0.08
2: conditionaccuracy  b = 0.04, SE = 0.02, t(391) = 1.80, p = .073, r = 0.09
3:         discern_1  b = 0.67, SE = 0.05, t(391) = 13.86, p < .001, r = 0.57
```

- first 10 trials in block 2

![[share_discernment_10trials.png]]

```r
> summaryh(m_discern)
                term                                                  results
1:       (Intercept)  b = 0.01, SE = 0.02, t(391) = 0.73, p = .468, r = 0.04
2: conditionaccuracy  b = 0.05, SE = 0.03, t(391) = 1.92, p = .056, r = 0.10
3:         discern_1  b = 0.70, SE = 0.06, t(391) = 11.17, p < .001, r = 0.49
```

- first 5 trials in block 2

```r
> summaryh(m_discern)
                term                                                  results
1:       (Intercept)  b = 0.006, SE = 0.03, t(345) = 0.21, p = .835, r = 0.01
2: conditionaccuracy  b = 0.08, SE = 0.04, t(345) = 2.22, p = .027, r = 0.12
3:         discern_1  b = 0.76, SE = 0.09, t(345) = 8.81, p < .001, r = 0.43
```
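A minimal data.table sketch of how per-subject, per-block discernment (and the first-N-trial variants above) could be computed. The trial-level table `dtchoice` and the column names `subject`, `block`, `trial_in_block`, `veracity`, `said_yes` are assumptions, not the actual variable names in the analysis scripts.

```r
library(data.table)

# assumed trial-level data: one row per subject x trial, with
# veracity in c("fake", "real") and said_yes coded 0/1
compute_discernment <- function(dt, first_n = Inf) {
  dt_sub <- dt[trial_in_block <= first_n]  # e.g., first_n = 16, 10, or 5
  # discernment = P(yes | real) - P(yes | fake), per subject and block
  dt_sub[, .(discernment = mean(said_yes[veracity == "real"]) -
                           mean(said_yes[veracity == "fake"])),
         by = .(subject, block)]
}

# discern_all <- compute_discernment(dtchoice)               # all trials
# discern_10  <- compute_discernment(dtchoice, first_n = 10) # first 10 trials per block
```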
### Interaction model

Failed random assignment?! In block 1, participants in the accuracy condition (coded 0.5) have lower discernment than those in the funny condition (coded -0.5). But in block 2, discernment in the accuracy condition increased.

```r
> m_discern_interact <- lm(discernment ~ conditionEC * blockEC, data = dtchoiceavg_long)
> summaryh(m_discern_interact)
                  term                                                    results
1:         (Intercept)  b = 0.08, SE = 0.008, t(786) = 10.52, p < .001, r = 0.35
2:         conditionEC  b = −0.02, SE = 0.02, t(786) = −1.29, p = .197, r = −0.05
3:             blockEC  b = 0.007, SE = 0.02, t(786) = 0.43, p = .670, r = 0.01
4: conditionEC:blockEC  b = 0.03, SE = 0.03, t(786) = 0.95, p = .343, r = 0.03
```

![[Pasted image 20210406170618.png]]

## Sharing behavior

### Sharing intentions

- [[20210406_161405 results without excluding subjects who fail attention checks]]

```r
   condition block veracity      share       rt
1:  accuracy     1     fake 0.13246720 3.936147
2:  accuracy     1     real 0.19283016 3.983882
3:  accuracy     2     fake 0.09110207 3.487578
4:  accuracy     2     real 0.17260110 3.627360
5:     funny     1     fake 0.13199205 3.833018
6:     funny     1     real 0.22678267 3.971096
7:     funny     2     fake 0.11192663 3.448411
8:     funny     2     real 0.19867176 3.621401
```

![[share.png]]
![[share_nopoints.png]]

```r
# preregistered model
y_post ~ condition * veracity + trial_number + y_pre +
  (1 + condition * veracity + trial_number + y_pre | subject) +
  (1 + condition + trial_number + y_pre | item)
```

```r
# recode variables
dtchoice_post[, trialC := (trial - 48.5) / 10]       # center to median(33:64)
dtchoice_post[, rt1C := (rt1 - mean(rt1)) / 10]
dtchoice_post[, share1C := (share1 - mean(share1))]
dtchoice_post[, veracityEC := ifelse(veracity == "fake", -0.5, 0.5)]
dtchoice_post[, conditionEC := ifelse(condition == "funny", -0.5, 0.5)]

   condition veracity conditionEC veracityEC
1:     funny     fake        -0.5       -0.5
2:     funny     real        -0.5        0.5
3:  accuracy     fake         0.5       -0.5
4:  accuracy     real         0.5        0.5
```

Non-significant condition-veracity interaction:

```r
m_share_full <- glmer(share ~ conditionEC * veracityEC + trialC + share1C +
                        (1 + conditionEC * veracityEC + trialC + share1C || subject) +
                        (1 | img_idx),
                      family = "binomial", data = dtchoice_post,
                      control = glmerControl(optimizer = 'bobyqa', optCtrl = list(maxfun = 2e4)))
> summaryh(m_share_full)
                     term                                                   results
1:            (Intercept)  b = −2.83, SE = 0.10, z = −27.46, p < .001, r = −0.62
2:            conditionEC  b = −0.17, SE = 0.12, z = −1.42, p = .155, r = −0.05  # no main effect of condition
3:             veracityEC  b = 1.03, SE = 0.18, z = 5.83, p < .001, r = 0.27  # real news shared more
4:                 trialC  b = −0.14, SE = 0.04, z = −3.88, p < .001, r = −0.04  # less sharing over time
5:                share1C  b = 6.58, SE = 0.29, z = 23.00, p < .001, r = 0.88  # block 1 as covariate
6: conditionEC:veracityEC  b = 0.24, SE = 0.21, z = 1.18, p = .239, r = 0.07  # see note below

# very similar to the condition effect from the discernment covariate model:
# 2: conditionaccuracy  b = 0.02, SE = 0.02, t(392) = 1.07, p = .287, r = 0.05
```
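The next model adds the current trial's RT (`rtC`) as a predictor. `rtC` isn't defined in the recode block above; one plausible construction (an assumption, mirroring the centering and scaling used for `rt1C`, with `rt` as the block-2 sharing RT) would be:

```r
# assumed construction of the trial-level RT covariate used in m_share_rt below
dtchoice_post[, rtC := (rt - mean(rt, na.rm = TRUE)) / 10]
```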
Add RT as a predictor.

```r
> m_share_rt <- glmer(share ~ conditionEC * veracityEC + trialC + share1C + rtC +
+                       (1 + conditionEC * veracityEC + trialC + share1C + rtC || subject) +
+                       (1 | img_idx),
+                     family = "binomial", data = dtchoice_post,
+                     control = glmerControl(optimizer = 'bobyqa', optCtrl = list(maxfun = 2e4)))
> summaryh(m_share_rt)
                     term                                                   results
1:            (Intercept)  b = −3.03, SE = 0.11, z = −27.38, p < .001, r = −0.64
2:            conditionEC  b = −0.16, SE = 0.12, z = −1.28, p = .200, r = −0.04
3:             veracityEC  b = 1.05, SE = 0.18, z = 5.75, p < .001, r = 0.28
4:                 trialC  b = −0.08, SE = 0.04, z = −2.02, p = .043, r = −0.02
5:                share1C  b = 6.81, SE = 0.30, z = 22.41, p < .001, r = 0.88
6:                    rtC  b = 33.31, SE = 2.92, z = 11.42, p < .001, r = 0.99  # longer RTs, more likely to share (duh! people are quick to say no)
7: conditionEC:veracityEC  b = 0.28, SE = 0.21, z = 1.32, p = .186, r = 0.08  # smaller p than without RT (.186 vs. .239), still not significant
```

![[Pasted image 20210406181728.png]]

### Sharing reaction times

- Slower RTs when sharing real headlines.
- [ ] will RT predict sharing intentions, given that RTs are slower for real (vs. fake) headlines?

```r
> m_shareRT_full <- lmer(rt ~ conditionEC * veracityEC + trialC + rt1C +
+                          (1 + conditionEC * veracityEC + rt1C || subject),
+                        data = dtchoice_post,
+                        control = lmerControl(optimizer = 'bobyqa', optCtrl = list(maxfun = 3e5)))
> summaryh(m_shareRT_full)
                     term                                                       results
1:            (Intercept)  b = 0.36, SE = 0.003, t(356) = 112.42, p < .001, r = 0.99
2:            conditionEC  b = −0.005, SE = 0.006, t(353) = −0.84, p = .403, r = 0.04
3:             veracityEC  b = 0.01, SE = 0.004, t(371) = 3.36, p < .001, r = 0.17  # slower RTs for real headlines
4:                 trialC  b = −0.02, SE = 0.002, t(11319) = −12.36, p < .001, r = 0.12  # faster over time
5:                   rt1C  b = 0.87, SE = 0.02, t(221) = 40.30, p < .001, r = 0.94  # block 1 as covariate
6: conditionEC:veracityEC  b = 0.003, SE = 0.009, t(372) = 0.38, p = .707, r = 0.02  # no interaction
```

## Headline attributes

- trial number is included to control for time/trial effects

![[Pasted image 20210407152627.png]]

## Headline attributes (recoded)

- `favors_r` and `benefits_r` are re-coded to mean political (in)consistency

![[Pasted image 20210414130209.png]]

## Correlations

Only p < .05 (uncorrected) correlations are shown (empty squares: p > .05).

![[Pasted image 20210407144649.png]]

## CRT ~ seen CRT before

- 0 is never, 0.5 is maybe, 1.0 is yes

![[Pasted image 20210406142733.png]]

## Attention checks

400 subjects passed at least two attention checks (a tabulation sketch is at the end of this note).

> Participants who fail two out of three attention checks will be excluded. As a robustness check, we will re-analyze the full dataset.

```R
   0 0.33    1
   8   88  400
```

## Randomization/condition-assignment

![[Pasted image 20210406190359.png]]

## Questions

At the end, many people selected "I didn't think of any source". Is this normal? What should we do?

> "Earlier on, when you thought about whether you would share the content, which social media source were you primarily thinking about? (You can select more than one option if you were thinking about multiple sources.)"
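Back to the attention-check counts above: a minimal sketch of how the pass-rate table and the two-out-of-three exclusion flag could be produced. The subject-level table `dtsubject` and the column `attention_prop` (proportion of the three checks passed) are assumptions.

```r
# assumed: dtsubject has one row per subject; attention_prop is the proportion
# of the three attention checks passed (0, 0.33, 0.67, or 1)
table(round(dtsubject$attention_prop, 2))  # should reproduce the 8 / 88 / 400 counts above

# main analysis: exclude subjects who failed two or more of the three checks
# (i.e., passed at most one); the robustness check re-runs the models without this filter
dtsubject$exclude_attention <- dtsubject$attention_prop < 2/3
```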