- MTurk study
- [prereg](https://osf.io/4hwcv)
- n = 484
- see [[20211206_211627 results - first 500 without counterbalancing - study 2 (aka study 1)|lucid study n = 500]]
- three-way interaction `veracity:demrep_c:bfi_c` in Model 3 is our preregistered Bayes-factor test (the BF must be either < 0.1 or > 10 to be conclusive; see the BIC sketch after the Model 3 output)
# Basic models
```r
                term  res
1:       (Intercept)  b = 0.21 [0.17, 0.25], p < .001
2:          demrep_c  b = 0.02 [-0.02, 0.06], p = .297
3:          veracity  b = 0.07 [0.02, 0.12], p = .007
4: demrep_c:veracity  b = -0.00 [-0.06, 0.06], p = .982
```
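The table above shows only the fitted estimates; below is a minimal sketch of the kind of call that could produce it, assuming a linear probability model of `share` on `demrep_c` × `veracity` with SEs clustered by respondent and headline (the actual specification is not shown in this note).
```r
library(lmtest)
library(sandwich)

# Assumed specification: linear probability model of sharing on
# partisanship x veracity with two-way clustered standard errors
m0  <- lm(share ~ demrep_c * veracity, data = dt1)
vc0 <- vcovCL(m0, ~ responseid + headline_id)

coeftest(m0, vc0)        # b and p-values as in the table above
coefci(m0, vcov. = vc0)  # 95% CIs shown in brackets
```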
# Model 1 (false headlines)
```r
m1_1 <- glm(share ~ demrep_c * bfi_c, family = binomial, data = dt1[veracity == 0])
m1_1c <- coeftest(m1_1, vcovCL(m1_1, ~ responseid + headline_id, NULL, fix = FALSE))
m1_1c
z test of coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -1.328872 0.139176 -9.5481 < 2.2e-16 ***
demrep_c 0.147585 0.119322 1.2369 0.216136
bfi_c -0.169720 0.064486 -2.6319 0.008491 ** BF10 = 0.42
demrep_c:bfi_c -0.045001 0.059264 -0.7593 0.447662 BF01 = 56.88
```
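For interpretation, the Model 1 log-odds can be converted to odds ratios with Wald CIs based on the same clustered vcov; a sketch, not part of the original output:
```r
# Odds ratios with 95% Wald CIs from the clustered vcov used above
se_cl <- sqrt(diag(vcovCL(m1_1, ~ responseid + headline_id)))
round(exp(cbind(
  OR    = coef(m1_1),
  lower = coef(m1_1) - 1.96 * se_cl,
  upper = coef(m1_1) + 1.96 * se_cl
)), 3)
```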
![[bf_model1 6.png|600]]
# Model 2 (false headlines)
```r
m2_1 <- glm(share ~ demrep_c * (bfi_c + bfi_e + bfi_a + bfi_n + bfi_o +
age + gender + edu + attention_score + ctsq_aot),
family = binomial, data = dt1[veracity == 0])
m2_1c <- coeftest(m2_1, vcovCL(m2_1, ~ responseid + headline_id, NULL))
m2_1c
z test of coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -1.3874004 0.1653468 -8.3908 < 2.2e-16 ***
demrep_c -0.0230917 0.1245625 -0.1854 0.85293
bfi_c -0.0517031 0.0816802 -0.6330 0.52674
bfi_e 0.0340839 0.0881002 0.3869 0.69885
bfi_a 0.0264817 0.0989298 0.2677 0.78894
bfi_n -0.0046578 0.0969157 -0.0481 0.96167
bfi_o -0.1542615 0.0741734 -2.0797 0.03755 *
age -0.0476999 0.0744010 -0.6411 0.52145
gender 0.1572078 0.0757217 2.0761 0.03788 *
edu 0.0400790 0.0679466 0.5899 0.55528
attention_score -0.1634524 0.0696147 -2.3480 0.01888 *
ctsq_aot -0.4727769 0.1028295 -4.5977 4.272e-06 *** BF10 = 517.7233
demrep_c:bfi_c 0.0430586 0.0738184 0.5833 0.55969 BF01 = 63.41879
demrep_c:bfi_e 0.0548467 0.0868866 0.6312 0.52788
demrep_c:bfi_a -0.0460973 0.1050524 -0.4388 0.66080
demrep_c:bfi_n 0.1224392 0.1009114 1.2133 0.22500
demrep_c:bfi_o 0.0328074 0.0825490 0.3974 0.69105
demrep_c:age 0.0902465 0.0693083 1.3021 0.19288
demrep_c:gender 0.0146045 0.0565385 0.2583 0.79617
demrep_c:edu -0.0684614 0.0500714 -1.3673 0.17154
demrep_c:attention_score 0.1141094 0.0727950 1.5675 0.11699
demrep_c:ctsq_aot 0.1097718 0.0677259 1.6208 0.10506
```
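The AOT effect (BF10 ≈ 518) is the strongest predictor in Model 2. A quick sketch of what it implies on the probability scale, holding all other centered predictors at 0; the ±2 range for `ctsq_aot` is an assumption about its scale:
```r
# Predicted probability of sharing a false headline across ctsq_aot (Model 2),
# with every other (centered) predictor fixed at 0
nd <- data.frame(
  demrep_c = 0, bfi_c = 0, bfi_e = 0, bfi_a = 0, bfi_n = 0, bfi_o = 0,
  age = 0, gender = 0, edu = 0, attention_score = 0,
  ctsq_aot = seq(-2, 2, by = 0.5)
)
nd$p_share <- predict(m2_1, newdata = nd, type = "response")
nd[, c("ctsq_aot", "p_share")]
```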
![[bf_model2 6.png|600]]
# Model 3 (false and true headlines)
- all variables are centered (except `veracity`)
```r
m3_1 <- glm(share ~ veracity * demrep_c * (bfi_c + bfi_e + bfi_a + bfi_n + bfi_o +
age + gender + edu + attention_score + ctsq_aot),
family = binomial, data = dt1)
m3_1c <- coeftest(m3_1, vcovCL(m3_1, ~ responseid + headline_id, NULL, fix = TRUE))
m3_1c
z test of coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -1.3874004 0.1622268 -8.5522 < 2.2e-16 ***
veracity 0.4217043 0.1773405 2.3779 0.0174099 *
demrep_c -0.0230917 0.1227225 -0.1882 0.8507496
bfi_c -0.0517031 0.0812808 -0.6361 0.5247080
bfi_e 0.0340839 0.0877520 0.3884 0.6977117
bfi_a 0.0264817 0.0983997 0.2691 0.7878347
bfi_n -0.0046578 0.0963538 -0.0483 0.9614448
bfi_o -0.1542615 0.0740183 -2.0841 0.0371510 *
age -0.0476999 0.0738834 -0.6456 0.5185320
gender 0.1572078 0.0750571 2.0945 0.0362147 *
edu 0.0400790 0.0677471 0.5916 0.5541210
attention_score -0.1634524 0.0693980 -2.3553 0.0185083 *
ctsq_aot -0.4727769 0.1016299 -4.6519 3.288e-06 ***
veracity:demrep_c 0.0242043 0.1586075 0.1526 0.8787099
veracity:bfi_c -0.0617514 0.0538756 -1.1462 0.2517190
veracity:bfi_e -0.0417316 0.0667313 -0.6254 0.5317293
veracity:bfi_a 0.0771335 0.0692787 1.1134 0.2655453
veracity:bfi_n -0.0013681 0.0756347 -0.0181 0.9855682
veracity:bfi_o 0.0732713 0.0437618 1.6743 0.0940674 .
veracity:age 0.1772302 0.0603611 2.9362 0.0033230 **
veracity:gender 0.0616753 0.0552666 1.1160 0.2644396
veracity:edu 0.0658811 0.0508681 1.2951 0.1952740
veracity:attention_score 0.1730346 0.0465645 3.7160 0.0002024 ***
veracity:ctsq_aot 0.2032278 0.0892313 2.2775 0.0227540 *
demrep_c:bfi_c 0.0430586 0.0733795 0.5868 0.5573423
demrep_c:bfi_e 0.0548467 0.0866051 0.6333 0.5265401
demrep_c:bfi_a -0.0460973 0.1048004 -0.4399 0.6600397
demrep_c:bfi_n 0.1224392 0.1004199 1.2193 0.2227409
demrep_c:bfi_o 0.0328074 0.0823925 0.3982 0.6904943
demrep_c:age 0.0902465 0.0688560 1.3107 0.1899744
demrep_c:gender 0.0146045 0.0567255 0.2575 0.7968242
demrep_c:edu -0.0684614 0.0504484 -1.3571 0.1747634
demrep_c:attention_score 0.1141094 0.0723185 1.5779 0.1145949
demrep_c:ctsq_aot 0.1097718 0.0673766 1.6292 0.1032650
veracity:demrep_c:bfi_c -0.0534293 0.0500507 -1.0675 0.2857445 BF01 = 60.13994
veracity:demrep_c:bfi_e -0.0453967 0.0540112 -0.8405 0.4006247
veracity:demrep_c:bfi_a 0.0367838 0.0644294 0.5709 0.5680563
veracity:demrep_c:bfi_n -0.0938422 0.0722190 -1.2994 0.1938029
veracity:demrep_c:bfi_o 0.0039249 0.0576459 0.0681 0.9457163
veracity:demrep_c:age -0.0623370 0.0517140 -1.2054 0.2280424
veracity:demrep_c:gender -0.0194503 0.0390515 -0.4981 0.6184367
veracity:demrep_c:edu 0.0123786 0.0365030 0.3391 0.7345256
veracity:demrep_c:attention_score -0.0688420 0.0556228 -1.2377 0.2158432
veracity:demrep_c:ctsq_aot -0.1005459 0.0563468 -1.7844 0.0743566 .
```
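The reported BF01 for the preregistered three-way term could be approximated by comparing Model 3 against the same model without `veracity:demrep_c:bfi_c`. Whether the plotted BFs were actually computed this way (BIC approximation, Wagenmakers 2007) is an assumption; this is only a sketch:
```r
# Approximate Bayes factor for the preregistered three-way interaction,
# BF01 ~ exp((BIC(full) - BIC(reduced)) / 2)
m3_red <- update(m3_1, . ~ . - veracity:demrep_c:bfi_c)
exp((BIC(m3_1) - BIC(m3_red)) / 2)
```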
![[bf_model3 7.png|600]]
Opposite effect?!? Are more conscientious conservatives **more likely** to share fake news?? (see the predicted-probability sketch below the plot)
![[Pasted image 20211209231515.png]]
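A hypothetical probe of what the plot shows (not necessarily how it was generated): predicted sharing probabilities for false headlines from Model 1 across conscientiousness, at the two ends of `demrep_c` (±1 on the centered scale is an assumed spread):
```r
# Predicted probability of sharing false headlines (Model 1)
# by conscientiousness and partisanship
grid <- expand.grid(demrep_c = c(-1, 1), bfi_c = seq(-2, 2, by = 1))
grid$p_share <- predict(m1_1, newdata = grid, type = "response")
grid[order(grid$demrep_c, grid$bfi_c), ]
```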