- [preregistration](https://doi.org/10.17605/OSF.IO/76YJE)
- [[20211206_211627 results - first 500 without counterbalancing - study 2 (aka study 1)|lucid study without counterbalancing]]
- n = 251
- counterbalance 0: news task, personality
- counterbalance 1: personality, news task
# Model 1 (only false headlines)
`demrep_c:bfi_c` is significant here, but in the opposite direction from the predicted effect!
```r
m1_1 <- glm(share ~ demrep_c * bfi_c, family = binomial, data = dt1[veracity == 0])
m1_1c <- coeftest(m1_1, vcovCL(m1_1, ~ responseid + headline_id, NULL, fix = FALSE))
m1_1c
z test of coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -0.45169 0.11544 -3.9127 9.126e-05 ***
demrep_c -0.18046 0.12654 -1.4261 0.153831
bfi_c -0.33759 0.10822 -3.1195 0.001812 **
demrep_c:bfi_c 0.21041 0.10450 2.0136 0.044054 * BF01 = 6.75 (data 6.75x more likely under the null than the alternative)
```
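For the record, a minimal sketch of the Model 1 pipeline (binomial `glm` with two-way cluster-robust SEs from `sandwich`/`lmtest`). Since `dt1` is not bundled with this note, simulated data stand in; variable names mirror the real ones but the numbers are fake.

```r
# Sketch of the m1_1 pipeline on simulated data (dt1 itself is not available
# in this note). Requires the sandwich and lmtest packages.
library(sandwich)
library(lmtest)

set.seed(42)
n_subj <- 251; n_head <- 20
sim <- expand.grid(responseid = seq_len(n_subj), headline_id = seq_len(n_head))
sim$demrep_c <- rnorm(n_subj)[sim$responseid]  # subject-level predictors
sim$bfi_c    <- rnorm(n_subj)[sim$responseid]
sim$share    <- rbinom(nrow(sim), 1, plogis(-0.45 - 0.34 * sim$bfi_c))

m_sim <- glm(share ~ demrep_c * bfi_c, family = binomial, data = sim)
# Two-way clustering on respondent and headline, as in m1_1c above
m_simc <- coeftest(m_sim, vcovCL(m_sim, ~ responseid + headline_id))
m_simc
```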
![[bf_model1 3.png]]
![[Pasted image 20211208234513.png]]
# Model 2 (only false headlines)
```r
m2_1 <- glm(share ~ demrep_c * (bfi_c + bfi_e + bfi_a + bfi_n + bfi_o +
age + gender + edu + counterbalanceZ + attention_score + ctsq_aot),
family = binomial, data = dt1[veracity == 0])
m2_1c <- coeftest(m2_1, vcovCL(m2_1, ~ responseid + headline_id, NULL))
m2_1c
z test of coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -0.881146 0.146717 -6.0058 1.904e-09 ***
demrep_c 0.179464 0.167442 1.0718 0.2838108
bfi_c -0.321092 0.167548 -1.9164 0.0553129 .
bfi_e 0.094476 0.131044 0.7209 0.4709422
bfi_a 0.122997 0.144455 0.8515 0.3945134
bfi_n 0.012818 0.133167 0.0963 0.9233165
bfi_o -0.112720 0.138230 -0.8155 0.4148130
age -0.916561 0.141167 -6.4927 8.429e-11 ***
gender 0.614718 0.166841 3.6844 0.0002292 ***
edu 0.419725 0.133918 3.1342 0.0017233 **
counterbalanceZ -0.430445 0.119745 -3.5947 0.0003248 ***
attention_score 0.152120 0.153671 0.9899 0.3222194
ctsq_aot -0.265145 0.137797 -1.9242 0.0543322 .
demrep_c:bfi_c 0.164225 0.180305 0.9108 0.3623895 BF01 = 32.83824
demrep_c:bfi_e 0.061416 0.140298 0.4378 0.6615635
demrep_c:bfi_a -0.014013 0.139187 -0.1007 0.9198047
demrep_c:bfi_n 0.083704 0.122595 0.6828 0.4947556
demrep_c:bfi_o 0.034682 0.139149 0.2492 0.8031695
demrep_c:age 0.223263 0.134962 1.6543 0.0980747 .
demrep_c:gender -0.197637 0.141274 -1.3990 0.1618247
demrep_c:edu -0.263232 0.116369 -2.2620 0.0236947 *
demrep_c:counterbalanceZ 0.097677 0.108589 0.8995 0.3683787
demrep_c:attention_score 0.198325 0.139896 1.4177 0.1562899
demrep_c:ctsq_aot 0.101853 0.128528 0.7925 0.4280927
```
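The note does not record how the BF01 values were computed; one common route is the BIC approximation to the Bayes factor (Wagenmakers, 2007), `exp((BIC_full - BIC_null)/2)` for dropping the interaction. Whether that matches the method behind the reported numbers is an assumption; simulated data stand in for `dt1`.

```r
# BIC approximation to BF01 for dropping the demrep_c:bfi_c interaction
# (Wagenmakers, 2007). Whether the BFs reported above were computed this way
# is an assumption; the data here are simulated, not dt1.
set.seed(1)
n <- 2000
sim <- data.frame(demrep_c = rnorm(n), bfi_c = rnorm(n))
sim$share <- rbinom(n, 1, plogis(-0.9 - 0.3 * sim$bfi_c))

m_full <- glm(share ~ demrep_c * bfi_c, family = binomial, data = sim)
m_null <- update(m_full, . ~ . - demrep_c:bfi_c)
bf01 <- exp((BIC(m_full) - BIC(m_null)) / 2)  # > 1 favours the null
bf01
```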
![[bf_model2 3.png]]
# Model 3 (false and true headlines)
```r
m3_1 <- glm(share ~ veracity * demrep_c * (bfi_c + bfi_e + bfi_a + bfi_n + bfi_o +
age + gender + edu + counterbalanceZ + attention_score + ctsq_aot),
family = binomial, data = dt1)
m3_1c <- coeftest(m3_1, vcovCL(m3_1, ~ responseid + headline_id, NULL, fix = TRUE))
m3_1c
z test of coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -0.8811456 0.1461921 -6.0273 1.667e-09 ***
veracity 0.2138753 0.1029550 2.0774 0.0377677 *
demrep_c 0.1794639 0.1660872 1.0805 0.2799019
bfi_c -0.3210916 0.1672297 -1.9201 0.0548499 .
bfi_e 0.0944761 0.1312377 0.7199 0.4715956
bfi_a 0.1229974 0.1442920 0.8524 0.3939808
bfi_n 0.0128182 0.1334456 0.0961 0.9234763
bfi_o -0.1127198 0.1381273 -0.8161 0.4144673
age -0.9165608 0.1409187 -6.5042 7.812e-11 ***
gender 0.6147178 0.1664305 3.6935 0.0002212 ***
edu 0.4197251 0.1338063 3.1368 0.0017080 **
counterbalanceZ -0.4304454 0.1194687 -3.6030 0.0003146 *** BF10 = 9.37301940 (participants shared less fake news when the CTSQ was presented before the news sharing task)
attention_score 0.1521199 0.1536057 0.9903 0.3220144
ctsq_aot -0.2651454 0.1379539 -1.9220 0.0546075 .
veracity:demrep_c -0.0768971 0.1302666 -0.5903 0.5549860
veracity:bfi_c 0.0954817 0.0945544 1.0098 0.3125878
veracity:bfi_e 0.1225556 0.0282475 4.3386 1.434e-05 ***
veracity:bfi_a -0.0960627 0.0539379 -1.7810 0.0749144 .
veracity:bfi_n -0.0399907 0.0633052 -0.6317 0.5275750
veracity:bfi_o -0.0329164 0.0688873 -0.4778 0.6327716
veracity:age 0.1951785 0.0725917 2.6887 0.0071727 **
veracity:gender -0.2017590 0.0665677 -3.0309 0.0024384 **
veracity:edu -0.0595992 0.0681235 -0.8749 0.3816446
veracity:counterbalanceZ 0.0474119 0.0475919 0.9962 0.3191442
veracity:attention_score -0.0616351 0.0791357 -0.7789 0.4360660
veracity:ctsq_aot -0.0439011 0.0429452 -1.0223 0.3066589
demrep_c:bfi_c 0.1642254 0.1805192 0.9097 0.3629600
demrep_c:bfi_e 0.0614161 0.1401878 0.4381 0.6613146
demrep_c:bfi_a -0.0140133 0.1397746 -0.1003 0.9201409
demrep_c:bfi_n 0.0837038 0.1225553 0.6830 0.4946146
demrep_c:bfi_o 0.0346825 0.1396557 0.2483 0.8038693
demrep_c:age 0.2232629 0.1344465 1.6606 0.0967922 .
demrep_c:gender -0.1976374 0.1417329 -1.3944 0.1631859
demrep_c:edu -0.2632318 0.1174564 -2.2411 0.0250194 *
demrep_c:counterbalanceZ 0.0976774 0.1091160 0.8952 0.3706960
demrep_c:attention_score 0.1983248 0.1407825 1.4087 0.1589145
demrep_c:ctsq_aot 0.1018531 0.1289295 0.7900 0.4295332
veracity:demrep_c:bfi_c 0.0292207 0.0698807 0.4182 0.6758371 BF01 = 64.42747
veracity:demrep_c:bfi_e -0.1120075 0.0463804 -2.4150 0.0157363 *
veracity:demrep_c:bfi_a 0.1452162 0.0687936 2.1109 0.0347811 *
veracity:demrep_c:bfi_n 0.0747988 0.0798460 0.9368 0.3488673
veracity:demrep_c:bfi_o -0.0205171 0.0849457 -0.2415 0.8091425
veracity:demrep_c:age -0.0572215 0.0697172 -0.8208 0.4117794
veracity:demrep_c:gender 0.1355919 0.0512230 2.6471 0.0081187 **
veracity:demrep_c:edu 0.0346152 0.0546029 0.6339 0.5261169
veracity:demrep_c:counterbalanceZ -0.0052657 0.0392089 -0.1343 0.8931670
veracity:demrep_c:attention_score 0.0191628 0.0615265 0.3115 0.7554543
veracity:demrep_c:ctsq_aot 0.0786948 0.0505354 1.5572 0.1194183
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
```
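The estimates above are log-odds; a quick conversion to odds ratios with cluster-robust 95% CIs can be done with `lmtest::coefci` fed the same `vcovCL` sandwich. A sketch on simulated data (the real `dt1` and full covariate set are not reproduced here):

```r
# Odds ratios with two-way cluster-robust 95% CIs via lmtest::coefci,
# using the same vcovCL sandwich as the models above. Simulated data;
# only the veracity * demrep_c part of the real model is sketched.
library(sandwich)
library(lmtest)

set.seed(7)
n_subj <- 251; n_head <- 40
sim <- expand.grid(responseid = seq_len(n_subj), headline_id = seq_len(n_head))
sim$veracity <- rep(0:1, each = n_head / 2)[sim$headline_id]  # half true, half false
sim$demrep_c <- rnorm(n_subj)[sim$responseid]
sim$share    <- rbinom(nrow(sim), 1, plogis(-0.9 + 0.2 * sim$veracity))

m  <- glm(share ~ veracity * demrep_c, family = binomial, data = sim)
vc <- vcovCL(m, ~ responseid + headline_id)
or_tab <- exp(cbind(OR = coef(m), coefci(m, vcov. = vc)))
round(or_tab, 3)
```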
![[bf_model3 5.png]]
## BFs for counterbalancing effect
![[Pasted image 20211208232430.png]]