- [prereg](https://docs.google.com/document/d/1q2V5QZptVLRSBvrZbvd39Gks6-Or57UYLwakdUdyDLk/edit)
- [[240903_162316 effect size|effect size?]]
```r
topic condition N
<char> <char> <int>
1: personality persuadeHarris 576
2: personality persuadeTrump 557
3: policy persuadeHarris 593
4: policy persuadeTrump 569
```
# descriptive
![[1725463415.png]]
![[1725463431.png]]
![[1725463442.png]]
# main results
![[1725463469.png]]
![[1725463480.png]]
## models
## h1: lean
We will test whether treatment changes candidate preference using this OLS model (where the `Z` suffix denotes a z-scored variable):
- `post_preference ~ condition * pre_preferenceZ * topic_dummyZ`
```r
> m0 <- feols(lean_bidentrump_2 ~ condition * (lean_bidentrump_1Z * topicZ), data = d00)
> summ(m0)
1: (Intercept) b = 41.80 (0.31) [41.20, 42.41] p < .001
2: conditionpersuadeTrump b = 3.01 (0.44) [2.15, 3.87] p < .001
3: lean_bidentrump_1Z b = 40.98 (0.31) [40.37, 41.58] p < .001
4: topicZ b = -0.65 (0.31) [-1.25, -0.04] p = .035
5: lean_bidentrump_1Z × topicZ b = 0.05 (0.31) [-0.55, 0.65] p = .868
6: conditionpersuadeTrump × lean_bidentrump_1Z b = 0.77 (0.44) [-0.09, 1.63] p = .081
7: conditionpersuadeTrump × topicZ b = 0.98 (0.44) [0.12, 1.84] p = .026
8: conditionpersuadeTrump × lean_bidentrump_1Z × topicZ b = -0.17 (0.44) [-1.03, 0.69] p = .700
```
We will follow up any significant effects involving condition with one-sample t-tests comparing pre- and post-treatment values by condition (subsetting by topic or pre-preference if the regression above shows significant interactions). We will use bootstrapping to test whether the pre-to-post differences differ significantly across subgroups.
```r
# persuade Harris (personality + policy topics combined)
1: (Intercept) b = -1.72 (0.34) [-2.38, -1.05] p < .001
# personality only
1: (Intercept) b = -1.07 (0.53) [-2.11, -0.02] p = .045
# policy only
1: (Intercept) b = -2.34 (0.43) [-3.19, -1.50] p < .001
# persuade Trump (personality + policy topics combined)
1: (Intercept) b = 1.34 (0.28) [0.80, 1.89] p < .001
# personality only
1: (Intercept) b = 1.01 (0.37) [0.28, 1.73] p = .007
# policy only
1: (Intercept) b = 1.67 (0.41) [0.86, 2.48] p < .001
```
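A base-R sketch of these follow-ups. The data frame below is simulated purely for illustration (column names `condition` and `topic` follow the counts table above; `pre`/`post` stand in for the pre- and post-treatment preference items); the one-sample t-test checks whether preference moved within a condition, and the bootstrap compares the shift across topics within a condition.

```r
set.seed(1)

# Simulated stand-in for the real data: pre/post preference (0-100 scale).
n <- 400
d <- data.frame(
  condition = rep(c("persuadeHarris", "persuadeTrump"), each = n / 2),
  topic     = rep(c("personality", "policy"), times = n / 2),
  pre       = runif(n, 0, 100)
)
d$post <- d$pre + ifelse(d$condition == "persuadeHarris", -2, 2) + rnorm(n, 0, 5)
d$diff <- d$post - d$pre  # pre-to-post shift; negative = toward Harris

# One-sample t-test: did preference shift within the persuadeHarris condition?
t.test(d$diff[d$condition == "persuadeHarris"])

# Bootstrap: does the shift differ across topics within persuadeTrump?
boot_gap <- replicate(2000, {
  s <- d[d$condition == "persuadeTrump", ]
  s <- s[sample(nrow(s), replace = TRUE), ]
  mean(s$diff[s$topic == "policy"]) - mean(s$diff[s$topic == "personality"])
})
quantile(boot_gap, c(.025, .975))  # bootstrap CI excluding 0 => topics differ
```

The intercept-only `b` values in the output above correspond to the mean of `diff` within each subset, which is what the t-test estimates.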
### Split the above by initial preference: pro-Harris (lean <= 50) versus pro-Trump (lean > 50) participants
pro-Harris
- persuading pro-Harris participants with personality arguments has no effect
- effects are stronger when we persuade them to prefer Trump
```r
# persuade Harris (personality + policy topics combined)
1: (Intercept) b = -0.76 (0.39) [-1.52, 0.01] p = .052
# personality only
1: (Intercept) b = -0.19 (0.70) [-1.56, 1.18] p = .784 # didn't work at all
# policy only
1: (Intercept) b = -1.30 (0.37) [-2.02, -0.58] p < .001
# persuade Trump (personality + policy topics combined) - stronger effects than persuade Harris
1: (Intercept) b = 2.00 (0.39) [1.23, 2.76] p < .001
# personality only
1: (Intercept) b = 1.71 (0.48) [0.77, 2.65] p < .001
# policy only
1: (Intercept) b = 2.27 (0.61) [1.07, 3.48] p < .001
```
pro-Trump
- we can only move pro-Trump participants toward Harris; persuade-Trump messages show no significant effect on them
```r
# persuade Harris (personality + policy topics combined)
1: (Intercept) b = -2.88 (0.59) [-4.03, -1.73] p < .001
# personality only
1: (Intercept) b = -2.12 (0.82) [-3.72, -0.51] p = .010
# policy only
1: (Intercept) b = -3.63 (0.84) [-5.28, -1.98] p < .001
# persuade Trump (personality + policy topics combined)
1: (Intercept) b = 0.50 (0.39) [-0.26, 1.26] p = .199
# personality only
1: (Intercept) b = 0.13 (0.58) [-1.01, 1.27] p = .823
# policy only
1: (Intercept) b = 0.87 (0.51) [-0.14, 1.88] p = .092
```
## h2: vote likelihood
We will test whether treatment changes voting likelihood using two OLS models with the same specification:
- `post_vote ~ condition * pre_voteZ * topic_dummyZ`

run separately for participants with pre_preference <= 50 (lean Harris) versus pre_preference > 50 (lean Trump).
```r
# initial lean harris
term result
<char> <char>
1: (Intercept) b = 90.24 (0.36) [89.53, 90.95] p < .001
2: conditionpersuadeTrump b = -2.36 (0.51) [-3.37, -1.36] p < .001
3: scale(vote_chance_1) b = 20.29 (0.38) [19.53, 21.04] p < .001
4: scale(topicZ) b = 0.08 (0.36) [-0.63, 0.79] p = .827
5: scale(vote_chance_1) × scale(topicZ) b = -1.05 (0.38) [-1.80, -0.29] p = .007
6: conditionpersuadeTrump × scale(vote_chance_1) b = 2.17 (0.52) [1.16, 3.19] p < .001
7: conditionpersuadeTrump × scale(topicZ) b = 0.04 (0.51) [-0.96, 1.05] p = .930
8: conditionpersuadeTrump × scale(vote_chance_1) × scale(topicZ) b = 1.24 (0.52) [0.23, 2.26] p = .017
# initial lean trump
term result
<char> <char>
1: (Intercept) b = 84.52 (0.47) [83.60, 85.43] p < .001
2: conditionpersuadeTrump b = 2.46 (0.67) [1.14, 3.78] p < .001
3: scale(vote_chance_1) b = 24.89 (0.49) [23.92, 25.85] p < .001
4: scale(topicZ) b = -0.82 (0.47) [-1.74, 0.09] p = .077
5: scale(vote_chance_1) × scale(topicZ) b = 1.20 (0.49) [0.24, 2.16] p = .014
6: conditionpersuadeTrump × scale(vote_chance_1) b = -2.57 (0.69) [-3.93, -1.22] p < .001
7: conditionpersuadeTrump × scale(topicZ) b = 1.10 (0.67) [-0.22, 2.42] p = .101
8: conditionpersuadeTrump × scale(vote_chance_1) × scale(topicZ) b = -2.01 (0.69) [-3.36, -0.66] p = .004
```
followed up with one-sample t-tests as described above.
```r
# persuade Harris, initial lean harris (personality + policy topics combined)
1: (Intercept) b = 2.43 (0.36) [1.72, 3.14] p < .001
# personality only
1: (Intercept) b = 2.39 (0.47) [1.46, 3.32] p < .001
# policy only
1: (Intercept) b = 2.47 (0.55) [1.38, 3.55] p < .001
# persuade Harris, initial lean trump (personality + policy topics combined)
1: (Intercept) b = 0.16 (0.51) [-0.85, 1.17] p = .755
# personality only
1: (Intercept) b = 0.92 (0.81) [-0.68, 2.52] p = .258
# policy only
1: (Intercept) b = -0.59 (0.64) [-1.84, 0.67] p = .357
# persuade trump, initial lean trump (personality + policy topics combined)
1: (Intercept) b = 2.78 (0.47) [1.85, 3.71] p < .001
# personality only
1: (Intercept) b = 3.08 (0.71) [1.68, 4.48] p < .001
# policy only
1: (Intercept) b = 2.48 (0.63) [1.25, 3.71] p < .001
# persuade trump, initial lean harris (personality + policy topics combined)
1: (Intercept) b = 0.28 (0.42) [-0.55, 1.11] p = .503
# personality only
1: (Intercept) b = 0.14 (0.67) [-1.18, 1.46] p = .836
# policy only
1: (Intercept) b = 0.42 (0.52) [-0.60, 1.44] p = .419
```
# pre/post correlations
![[1725463540.png]]
![[1725463541.png]]
![[1725463595.png]]
![[1725463542.png]]
![[1725463543.png]]