- pilot 1: gpt4turbo
- pilot 2: perplexity llama-3-sonar-large-32k-online
# descriptives
- pilot 1: 50/50 dem/repub
- pilot 2: messed up/forgot to enforce the 50/50 dem/repub split, so there's an imbalance!
```r
# pilot 1
# no. of participants dem/rep/independent based on this question:
# Which of the following best describes your political preference?
demrep demrepF N prop
<num> <fctr> <int> <num>
1: 1 dem 42 0.4827586 # responded 1,2,3
2: 2 rep 45 0.5172414 # responded 5,6,7
# pilot 2
demrep demrepF N
<num> <fctr> <int>
1: 1.0 dem 51 # responded 1,2,3
2: 1.5 ind 14 # responded 4
3: 2.0 rep 25 # responded 5,6,7
```
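The dem/ind/rep buckets above come from the 7-point preference item (responses 1-3 = dem, 4 = ind, 5-7 = rep). A minimal sketch of that recode in data.table; the item name `pref7` and the scale anchors are assumptions, not the actual survey column:

```r
library(data.table)

# toy data: assumed 7-point preference item
# (lower = more Democratic, 4 = neither, higher = more Republican)
dt <- data.table(pref7 = c(1, 2, 3, 4, 5, 6, 7))

# recode to the demrep values printed above: 1-3 -> 1 (dem),
# 4 -> 1.5 (ind), 5-7 -> 2 (rep)
dt[, demrep := fcase(pref7 <= 3, 1.0,
                     pref7 == 4, 1.5,
                     pref7 >= 5, 2.0)]
dt[, demrepF := factor(demrep, levels = c(1, 1.5, 2),
                       labels = c("dem", "ind", "rep"))]
dt[, .N, by = .(demrep, demrepF)]
```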
In pilot 2, we also let everyone else (non-Biden/Trump voters) take part. A sizable chunk (~25%) would not vote for Biden or Trump.
```r
# pilot 1
Key: <vote>
vote N prop
<char> <int> <num>
1: Donald Trump 41 0.4712644
2: Joe Biden 46 0.5287356
# pilot 2
vote N prop
<char> <int> <num>
1: Joe Biden 44 0.48888889
2: Donald Trump 25 0.27777778
3: I would not vote out of protest 8 0.08888889
4: Prefer not to say 2 0.02222222
5: I would not vote for reasons outside of my control 2 0.02222222
6: Cornel West 2 0.02222222
7: I would not vote, but I could have 2 0.02222222
8: write in candidate 1 0.01111111
9: Robert Kennedy 1 0.01111111
10: Rfk 1 0.01111111
11: Kennedy 1 0.01111111
12: Jill Stein 1 0.01111111
```
## top issues
```r
# pilot1
topissue N prop
<char> <int> <num>
1: Economy 25 0.28735632
2: Climate change 12 0.13793103
3: Immigration 11 0.12643678
4: Democracy and election integrity 8 0.09195402
5: Abortion rights 8 0.09195402
6: Leadership style 5 0.05747126
7: Healthcare 4 0.04597701
8: Supreme court and judicial appointments 4 0.04597701
9: Crime 2 0.02298851
10: Foreign policy and international relations 2 0.02298851
11: Taxation 1 0.01149425
12: Racial equity and social justice 1 0.01149425
13: Education 1 0.01149425
14: Gun policy 1 0.01149425
15: Clean energy 1 0.01149425
16: Role of federal government 1 0.01149425
# pilot2
topissue N prop
<char> <int> <num>
1: Economy 23 0.25555556
2: Democracy and election integrity 11 0.12222222
3: Abortion rights 10 0.11111111
4: Racial equity and social justice 8 0.08888889
5: Healthcare 7 0.07777778
6: Foreign policy and international relations 5 0.05555556
7: Immigration 5 0.05555556
8: Gun policy 4 0.04444444
9: Leadership style 4 0.04444444
10: Climate change 3 0.03333333
11: LGBTQ rights 2 0.02222222
12: Supreme court and judicial appointments 2 0.02222222
13: Social security and medicare 1 0.01111111
14: Role of federal government 1 0.01111111
15: Crime 1 0.01111111
16: Taxation 1 0.01111111
17: Cybersecurity and foreign interference in US elections 1 0.01111111
18: Education 1 0.01111111
```
# intercept-only models looking at outcomes (`_diff` is t1 minus t0)
- model: `outcome ~ 1`
- `vote_chance`: "I'd like you to rate your chances of voting in November's election for president on a scale of 0 to 100. If 0 represents someone who definitely will not vote and 100 represents someone who definitely will vote, where on this scale of 0 to 100 would you place yourself?"
- variables without `_diff` are measured only after treatment; they're presented at the very end (before the debrief page)
- `lean_bidentrump`: in both studies, the treatment made people lean more toward Biden (coded 0) than Trump (coded 100): "If you had to, do you lean more toward Joe Biden or more toward Donald Trump in the presidential election?"
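The per-measure intercept rows below can be produced by fitting `lm(value ~ 1)` within each measure of a long-format table; the intercept then estimates the mean change score, and its t-test asks whether that mean differs from zero. A sketch with simulated data and assumed column names (`tidy()` is from broom):

```r
library(data.table)
library(broom)

# assumed wide data: one row per participant, *_diff change scores
set.seed(1)
d <- data.table(id = 1:87,
                therm_repub_diff = rnorm(87),
                lean_bidentrump_diff = rnorm(87))

# long format: one row per participant x measure
long <- melt(d, id.vars = "id",
             variable.name = "measure", value.name = "value")

# intercept-only model per measure (outcome ~ 1)
res <- long[, tidy(lm(value ~ 1)), by = measure]
res
```

For the per-party tables, the same call just adds the party column to `by` (e.g. `by = .(measure, demrep)`).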
```r
# pilot 1 (all 87 participants)
measure term estimate std.error statistic p.value
<fctr> <char> <num> <num> <num> <num>
1: therm_repub_diff (Intercept) 1.8391 1.2448 1.4774 0.1432
2: therm_dem_diff (Intercept) 0.2414 1.2179 0.1982 0.8434
3: lean_bidentrump_diff (Intercept) -1.6552 1.0661 -1.5526 0.1242 # negative = lean more Biden
4: vote_chance_diff (Intercept) -0.4138 0.6274 -0.6595 0.5113
# pilot 1, dem/rep separately
measure demrep term estimate std.error statistic p.value
<fctr> <num> <char> <num> <num> <num> <num>
1: therm_repub_diff 1 (Intercept) 1.1667 2.3882 0.4885 0.6278
2: therm_repub_diff 2 (Intercept) 2.4667 0.9361 2.6349 0.0116
3: therm_dem_diff 1 (Intercept) 0.8571 2.1959 0.3903 0.6983
4: therm_dem_diff 2 (Intercept) -0.3333 1.1815 -0.2821 0.7792
5: lean_bidentrump_diff 1 (Intercept) -1.7619 1.4365 -1.2265 0.2270 # negative = lean more Biden
6: lean_bidentrump_diff 2 (Intercept) -1.5556 1.5810 -0.9839 0.3305 # negative = lean more Biden
7: vote_chance_diff 1 (Intercept) 0.5476 0.8109 0.6753 0.5033
8: vote_chance_diff 2 (Intercept) -1.3111 0.9368 -1.3996 0.1687
# pilot 2
measure term estimate std.error statistic p.value
<fctr> <char> <num> <num> <num> <num>
1: therm_repub_diff (Intercept) 0.3111 0.8021 0.3879 0.6990
2: therm_dem_diff (Intercept) 1.1667 1.0573 1.1034 0.2728 # lean more dem
3: lean_bidentrump_diff (Intercept) -1.9889 0.8851 -2.2471 0.0271 # again! lean more biden
4: vote_chance_diff (Intercept) 0.4333 0.7450 0.5816 0.5623
# pilot 2, dem/ind/rep separately (remember we have only 25 republicans!!)
measure demrep term estimate std.error statistic p.value
<fctr> <num> <char> <num> <num> <num> <num>
1: therm_repub_diff dem (Intercept) -0.0196 1.0326 -0.0190 0.9849
2: therm_repub_diff ind (Intercept) -1.8571 2.1505 -0.8636 0.4035
3: therm_repub_diff rep (Intercept) 2.2000 1.5449 1.4241 0.1673
4: therm_dem_diff dem (Intercept) 1.5294 1.1968 1.2779 0.2072
5: therm_dem_diff ind (Intercept) 1.9286 2.9211 0.6602 0.5206
6: therm_dem_diff rep (Intercept) 0.0000 2.4833 0.0000 1.0000
7: lean_bidentrump_diff dem (Intercept) -2.3922 0.8975 -2.6653 0.0103 # more biden
8: lean_bidentrump_diff ind (Intercept) -1.2857 0.7590 -1.6939 0.1141 # more biden
9: lean_bidentrump_diff rep (Intercept) -1.5600 2.6128 -0.5971 0.5561 # more biden
10: vote_chance_diff dem (Intercept) 0.4510 0.9050 0.4983 0.6205
11: vote_chance_diff ind (Intercept) -1.4286 2.8684 -0.4980 0.6268
12: vote_chance_diff rep (Intercept) 1.4400 1.1447 1.2580 0.2205
```
# predictors of difference/change scores
Those who trusted/learned from the AI more (measured using the three post-treatment AI questions) leaned more toward Biden post-treatment (more negative `lean_bidentrump_diff` change scores).
- `ai_trust`: How much do you trust information about politics and elections that come from AI (e.g., ChatGPT)?
- `ai_address`: How well did the AI you spoke with earlier address your questions?
- `ai_learn`: How much did you learn from talking with the AI earlier?
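Judging by the std.error/statistic columns, the "correlations" below are slopes from simple `diff ~ predictor` regressions rather than Pearson r's. A sketch with simulated data and assumed column names, including the `ai_combine` rowwise average of the three AI items:

```r
library(data.table)
library(broom)

# assumed data: change score plus the three post-treatment AI items
set.seed(2)
d <- data.table(lean_bidentrump_diff = rnorm(90),
                ai_trust   = sample(0:100, 90, replace = TRUE),
                ai_address = sample(0:100, 90, replace = TRUE),
                ai_learn   = sample(0:100, 90, replace = TRUE))

# ai_combine = participant-level mean of the three AI items
d[, ai_combine := rowMeans(.SD),
  .SDcols = c("ai_trust", "ai_address", "ai_learn")]

# slope of a change score on one AI predictor (intercept row dropped)
fit_slope <- function(outcome, predictor) {
  m <- lm(reformulate(predictor, outcome), data = d)
  tidy(m)[2, ]  # row 2 is the predictor term
}
fit_slope("lean_bidentrump_diff", "ai_combine")
```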
```r
# correlations: each measure correlated with ai_trust
measure term estimate std.error statistic p.value
<fctr> <char> <num> <num> <num> <num>
1: therm_repub_diff ai_trust 0.0221 0.0274 0.8088 0.4208
2: therm_dem_diff ai_trust 0.0375 0.0360 1.0417 0.3004
3: lean_bidentrump_diff ai_trust -0.0568 0.0297 -1.9123 0.0591 #
4: vote_chance_diff ai_trust -0.0057 0.0255 -0.2240 0.8233
measure term estimate std.error statistic p.value
<fctr> <char> <num> <num> <num> <num>
1: therm_repub_diff ai_address -0.0178 0.0256 -0.6957 0.4885
2: therm_dem_diff ai_address 0.0547 0.0333 1.6418 0.1042
3: lean_bidentrump_diff ai_address -0.0589 0.0276 -2.1352 0.0355 #
4: vote_chance_diff ai_address 0.0384 0.0235 1.6363 0.1054
measure term estimate std.error statistic p.value
<fctr> <char> <num> <num> <num> <num>
1: therm_repub_diff ai_learn -0.0092 0.0251 -0.3679 0.7138
2: therm_dem_diff ai_learn 0.0203 0.0331 0.6137 0.5410
3: lean_bidentrump_diff ai_learn -0.0574 0.0271 -2.1206 0.0368 #
4: vote_chance_diff ai_learn 0.0045 0.0233 0.1940 0.8466
# avg(ai_trust, ai_address, ai_learn)
measure term estimate std.error statistic p.value
<fctr> <char> <num> <num> <num> <num>
1: therm_repub_diff ai_combine -0.0036 0.0298 -0.1223 0.9029
2: therm_dem_diff ai_combine 0.0487 0.0389 1.2524 0.2137
3: lean_bidentrump_diff ai_combine -0.0755 0.0318 -2.3711 0.0199 #
4: vote_chance_diff ai_combine 0.0171 0.0276 0.6205 0.5365
```
![[1719889155.png]]