- mturk
- 499 subjects
- 24 covid headlines (12 false, 12 true)
- clustered by subject and headline
- [prereg 1.5 mturk covid headlines](https://osf.io/8cnyx)
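The "clustered by subject and headline" note is consistent with crossed random intercepts for both grouping factors. A minimal sketch of that specification, assuming a long-format data frame `d` with columns `accuracy`, `demrep_c`, `veracity`, `subject`, and `headline` (all names hypothetical apart from the predictors visible in the output below):

```r
# Sketch, not the actual analysis script: crossed random intercepts
# for subject and headline. lmerTest is assumed here because the
# printed output includes p-values, which plain lme4 does not report.
library(lmerTest)

m0c <- lmer(
  accuracy ~ demrep_c * veracity +   # partisanship x veracity interaction
    (1 | subject) + (1 | headline),  # crossed clustering
  data = d
)
summary(m0c)
```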
# Basic models
Finally, an ideology-veracity interaction!
```r
> sum2(m0c)
term res
1: (Intercept) b = 0.30 [0.26, 0.34], p < .001
2: demrep_c b = 0.10 [0.07, 0.14], p < .001
3: veracity b = 0.22 [0.16, 0.29], p < .001
4: demrep_c:veracity b = -0.18 [-0.22, -0.13], p < .001 # significant interaction
> sum2(m0c)  # same call name as above, but this output uses continuous ideology; the object name is likely a typo
term res
1: (Intercept) b = 0.30 [0.26, 0.34], p < .001
2: ideology b = 0.14 [0.11, 0.17], p < .001
3: veracity b = 0.22 [0.16, 0.29], p < .001
4: ideology:veracity b = -0.19 [-0.23, -0.15], p < .001 # significant interaction
```
# Model 1 (only false headlines)
```r
> sum2(m1_1c)
term res
1: (Intercept) b = -0.94 [-1.17, -0.71], p < .001
2: ideology b = 0.72 [0.55, 0.88], p < .001
3: bfi_c b = -0.13 [-0.28, 0.02], p = .083
4: ideology:bfi_c b = -0.04 [-0.17, 0.10], p = .603 # BF01 = 67.45
```
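The BF01 annotations quantify evidence for the null on the key interaction term. One common way to obtain such values is the BIC approximation to the Bayes factor, comparing the fitted model to one with the interaction dropped; a sketch under that assumption (the actual method used here is not stated in these notes):

```r
# Sketch (assumed method): BIC approximation to the Bayes factor.
# BF01 > 1 favors the model *without* ideology:bfi_c.
m_null <- update(m1_1c, . ~ . - ideology:bfi_c)  # drop the interaction
bf01 <- exp((BIC(m1_1c) - BIC(m_null)) / 2)
bf01
```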
# Model 2 (all headlines; key test 1)
```r
> sum2(m2_1c)
term res
1: (Intercept) b = -0.36 [-0.61, -0.11], p = .005
2: ideology b = 0.21 [0.01, 0.41], p = .040
3: bfi_c b = -0.16 [-0.26, -0.07], p < .001
4: ideology:bfi_c b = -0.03 [-0.11, 0.05], p = .422 # BF01 = 79.07
```
# Model 3 (all headlines)
```r
> sum2(m3_1c)
term res
1: (Intercept) b = -0.94 [-1.16, -0.71], p < .001
2: ideology b = 0.72 [0.56, 0.88], p < .001
3: bfi_c b = -0.13 [-0.28, 0.02], p = .082
4: veracity b = 1.03 [0.73, 1.33], p < .001
5: ideology:bfi_c b = -0.04 [-0.17, 0.10], p = .602
6: ideology:veracity b = -0.89 [-1.07, -0.71], p < .001
7: bfi_c:veracity b = -0.08 [-0.23, 0.07], p = .314
8: ideology:bfi_c:veracity b = 0.01 [-0.12, 0.14], p = .886 # BF01 = 108.1
```
# Model 4 (only false headlines)
- negative `ideology:attention_score` interaction
- positive `ideology:ctsq_aot` interaction
```r
> sum2(m4_1c)
term res
1: (Intercept) b = -0.99 [-1.25, -0.73], p < .001
2: ideology b = 0.68 [0.48, 0.89], p < .001
3: bfi_c b = -0.03 [-0.22, 0.16], p = .743
4: bfi_e b = 0.14 [-0.04, 0.33], p = .119
5: bfi_a b = -0.21 [-0.37, -0.04], p = .015
6: bfi_n b = 0.08 [-0.13, 0.29], p = .477
7: bfi_o b = 0.02 [-0.14, 0.19], p = .781
8: age b = -0.15 [-0.33, 0.03], p = .112
9: gender b = -0.04 [-0.20, 0.11], p = .590
10: edu b = -0.06 [-0.21, 0.09], p = .413
11: attention_score b = -0.12 [-0.29, 0.05], p = .169
12: ctsq_aot b = -0.71 [-0.90, -0.52], p < .001 ##
13: ideology:bfi_c b = -0.05 [-0.21, 0.10], p = .502 # BF01 = 60.84
14: ideology:bfi_e b = -0.16 [-0.34, 0.02], p = .074
15: ideology:bfi_a b = 0.12 [-0.04, 0.28], p = .130
16: ideology:bfi_n b = -0.07 [-0.25, 0.11], p = .452
17: ideology:bfi_o b = 0.15 [0.01, 0.29], p = .048
18: ideology:age b = 0.01 [-0.16, 0.17], p = .942
19: ideology:gender b = 0.07 [-0.08, 0.22], p = .365
20: ideology:edu b = -0.10 [-0.25, 0.04], p = .164
21: ideology:attention_score b = -0.22 [-0.41, -0.03], p = .025 ###
22: ideology:ctsq_aot b = 0.19 [0.01, 0.36], p = .039 ###
```
# Model 5 (all headlines; key test 2)
- negative `ideology:attention_score` interaction (note veracity is coded 0/1)
- negative `veracity:ideology:ctsq_aot` interaction
```r
> sum2(m5_1c)
term res
1: (Intercept) b = -0.99 [-1.24, -0.73], p < .001
2: veracity b = 1.09 [0.77, 1.41], p < .001
3: ideology b = 0.68 [0.48, 0.89], p < .001
4: bfi_c b = -0.03 [-0.22, 0.16], p = .742
5: bfi_e b = 0.14 [-0.04, 0.33], p = .118
6: bfi_a b = -0.21 [-0.37, -0.04], p = .015
7: bfi_n b = 0.08 [-0.13, 0.29], p = .476
8: bfi_o b = 0.02 [-0.14, 0.19], p = .781
9: age b = -0.15 [-0.33, 0.03], p = .110
10: gender b = -0.04 [-0.20, 0.11], p = .589
11: edu b = -0.06 [-0.21, 0.09], p = .412
12: attention_score b = -0.12 [-0.29, 0.05], p = .168
13: ctsq_aot b = -0.71 [-0.90, -0.52], p < .001
14: veracity:ideology b = -0.89 [-1.11, -0.67], p < .001
15: veracity:bfi_c b = -0.12 [-0.32, 0.07], p = .216
16: veracity:bfi_e b = -0.03 [-0.20, 0.14], p = .710
17: veracity:bfi_a b = 0.29 [0.12, 0.45], p < .001
18: veracity:bfi_n b = 0.18 [-0.03, 0.38], p = .091
19: veracity:bfi_o b = -0.06 [-0.22, 0.10], p = .483
20: veracity:age b = 0.32 [0.15, 0.49], p < .001
21: veracity:gender b = 0.10 [-0.06, 0.25], p = .225
22: veracity:edu b = 0.15 [0.01, 0.28], p = .035
23: veracity:attention_score b = 0.09 [-0.04, 0.22], p = .176
24: veracity:ctsq_aot b = 0.61 [0.42, 0.81], p < .001 ##
25: ideology:bfi_c b = -0.05 [-0.21, 0.10], p = .501
26: ideology:bfi_e b = -0.16 [-0.34, 0.02], p = .073
27: ideology:bfi_a b = 0.12 [-0.03, 0.28], p = .128
28: ideology:bfi_n b = -0.07 [-0.25, 0.11], p = .451
29: ideology:bfi_o b = 0.15 [0.01, 0.29], p = .047
30: ideology:age b = 0.01 [-0.16, 0.17], p = .942
31: ideology:gender b = 0.07 [-0.08, 0.22], p = .363
32: ideology:edu b = -0.10 [-0.25, 0.04], p = .163
33: ideology:attention_score b = -0.22 [-0.41, -0.03], p = .024 ##
34: ideology:ctsq_aot b = 0.19 [0.01, 0.36], p = .038
35: veracity:ideology:bfi_c b = 0.05 [-0.11, 0.21], p = .537 # BF01 = 89.05
36: veracity:ideology:bfi_e b = 0.21 [0.04, 0.39], p = .017
37: veracity:ideology:bfi_a b = -0.18 [-0.34, -0.01], p = .036
38: veracity:ideology:bfi_n b = 0.07 [-0.10, 0.24], p = .426
39: veracity:ideology:bfi_o b = -0.09 [-0.25, 0.07], p = .257
40: veracity:ideology:age b = -0.06 [-0.22, 0.10], p = .452
41: veracity:ideology:gender b = 0.07 [-0.07, 0.22], p = .320
42: veracity:ideology:edu b = 0.03 [-0.09, 0.16], p = .636
43: veracity:ideology:attention_score b = 0.16 [-0.01, 0.33], p = .068
44: veracity:ideology:ctsq_aot b = -0.17 [-0.33, -0.01], p = .034 ##
```
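Because veracity is dummy-coded (0 = false, 1 = true), the lower-order `ideology:ctsq_aot` term in Model 5 is the simple interaction for false headlines; the simple interaction for true headlines is that estimate plus the three-way term. A quick check using the printed coefficients:

```r
# With veracity coded 0/1, lower-order terms are simple effects at
# veracity = 0 (false headlines).
b_false <- 0.19             # ideology:ctsq_aot (false headlines)
b_3way  <- -0.17            # veracity:ideology:ctsq_aot
b_true  <- b_false + b_3way # simple interaction for true headlines
b_true                      # 0.02: the interaction is largely absent for true headlines
```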
![[Pasted image 20220128212755.png|900]]
# Item analysis (all studies)
![[item_demrep-bfi 1.png|1200]]