No clear evidence that AI message accuracy moderates the condition effects.

- there are multiple AI messages for each person, so I tried both min(accuracy) and mean(accuracy) as the person-level accuracy score (rough sketch below)
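A minimal sketch of how these person-level scores could be built, assuming a message-level table `msgs` with hypothetical columns `pid` (participant id) and `pfc_score` (accuracy rating of a single message); the real object and column names may differ:

```r
library(data.table)

# msgs: one row per AI message shown to a participant (hypothetical layout)
acc <- msgs[, .(
  pfc_score_min  = min(pfc_score),   # least accurate message the person saw
  pfc_score_mean = mean(pfc_score)   # average accuracy across their messages
), by = pid]

# attach the person-level scores to the analysis data
d0 <- merge(d0, acc, by = "pid", all.x = TRUE)
```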
# models

## lean/candidate preference

### accuracy<75 vs accuracy>=75

```r
# MIN(accuracy) < 75 (1040 observations)
> summ(feols(lean_bidentrump_2 ~ conditionC * scale(lean_bidentrump_1) * topicC, data = d0[pfc_score_min < 75]))
    term  result
    <char>  <char>
 1: (Intercept)  b = 47.12 [46.22, 48.02], p < .001
 2: conditionC  b = 5.31 [3.51, 7.11], p < .001
 3: scale(lean_bidentrump_1)  b = 40.61 [39.73, 41.49], p < .001
 4: topicC  b = -0.94 [-2.74, 0.86], p = .304
 5: conditionC × scale(lean_bidentrump_1)  b = 1.69 [-0.07, 3.44], p = .060
 6: conditionC × topicC  b = 2.37 [-1.23, 5.97], p = .197
 7: scale(lean_bidentrump_1) × topicC  b = -0.38 [-2.13, 1.38], p = .675
 8: conditionC × scale(lean_bidentrump_1) × topicC  b = 0.06 [-3.45, 3.58], p = .972

# MIN(accuracy) >= 75
> summ(feols(lean_bidentrump_2 ~ conditionC * scale(lean_bidentrump_1) * topicC, data = d0[pfc_score_min >= 75]))
    term  result
    <char>  <char>
 1: (Intercept)  b = 40.70 [39.64, 41.75], p < .001
 2: conditionC  b = 2.47 [0.37, 4.58], p = .021  # smaller than effect above, but CIs overlap
 3: scale(lean_bidentrump_1)  b = 41.37 [40.22, 42.52], p < .001
 4: topicC  b = -0.12 [-2.23, 1.98], p = .910
 5: conditionC × scale(lean_bidentrump_1)  b = 0.96 [-1.34, 3.27], p = .412
 6: conditionC × topicC  b = 2.88 [-1.33, 7.09], p = .180
 7: scale(lean_bidentrump_1) × topicC  b = -0.22 [-2.52, 2.08], p = .851
 8: conditionC × scale(lean_bidentrump_1) × topicC  b = 0.10 [-4.50, 4.71], p = .964

# MEAN(accuracy) < 75 (691 observations)
> summ(feols(lean_bidentrump_2 ~ conditionC * scale(lean_bidentrump_1) * topicC, data = d0[pfc_score_mean < 75]))
    term  result
    <char>  <char>
 1: (Intercept)  b = 55.65 [54.28, 57.03], p < .001
 2: conditionC  b = 5.68 [2.93, 8.43], p < .001
 3: scale(lean_bidentrump_1)  b = 39.59 [38.40, 40.79], p < .001
 4: topicC  b = -2.30 [-5.05, 0.45], p = .102
 5: conditionC × scale(lean_bidentrump_1)  b = 1.80 [-0.60, 4.19], p = .141
 6: conditionC × topicC  b = 5.17 [-0.33, 10.68], p = .065
 7: scale(lean_bidentrump_1) × topicC  b = -1.62 [-4.01, 0.78], p = .185
 8: conditionC × scale(lean_bidentrump_1) × topicC  b = 3.06 [-1.73, 7.85], p = .211

# MEAN(accuracy) >= 75
> summ(feols(lean_bidentrump_2 ~ conditionC * scale(lean_bidentrump_1) * topicC, data = d0[pfc_score_mean >= 75]))
    term  result
    <char>  <char>
 1: (Intercept)  b = 38.48 [37.82, 39.14], p < .001
 2: conditionC  b = 2.51 [1.19, 3.82], p < .001  # smaller than effect above, but CIs overlap
 3: scale(lean_bidentrump_1)  b = 40.65 [39.95, 41.36], p < .001
 4: topicC  b = -0.44 [-1.76, 0.88], p = .515
 5: conditionC × scale(lean_bidentrump_1)  b = 0.84 [-0.57, 2.25], p = .243
 6: conditionC × topicC  b = 1.93 [-0.70, 4.57], p = .151
 7: scale(lean_bidentrump_1) × topicC  b = -0.50 [-1.91, 0.92], p = .490
 8: conditionC × scale(lean_bidentrump_1) × topicC  b = -0.83 [-3.66, 1.99], p = .563
```

### accuracy<50 vs accuracy>=50

```r
> thres <- 50

# MIN(accuracy) < 50 (710 observations)
> summ(feols(lean_bidentrump_2 ~ conditionC * scale(lean_bidentrump_1) * topicC, data = d0[pfc_score_min < thres]))
    term  result
    <char>  <char>
 1: (Intercept)  b = 53.17 [51.86, 54.49], p < .001
 2: conditionC  b = 5.87 [3.25, 8.50], p < .001
 3: scale(lean_bidentrump_1)  b = 39.61 [38.43, 40.79], p < .001
 4: topicC  b = -2.66 [-5.28, -0.03], p = .048
 5: conditionC × scale(lean_bidentrump_1)  b = 2.34 [-0.03, 4.71], p = .053
 6: conditionC × topicC  b = 6.70 [1.45, 11.96], p = .012
 7: scale(lean_bidentrump_1) × topicC  b = -2.20 [-4.57, 0.16], p = .068
 8: conditionC × scale(lean_bidentrump_1) × topicC  b = 4.20 [-0.54, 8.93], p = .082

# MIN(accuracy) >= 50
> summ(feols(lean_bidentrump_2 ~ conditionC * scale(lean_bidentrump_1) * topicC, data = d0[pfc_score_min >= thres]))
    term  result
    <char>  <char>
 1: (Intercept)  b = 39.49 [38.83, 40.16], p < .001
 2: conditionC  b = 3.02 [1.68, 4.35], p < .001  # smaller than effect above, but CIs overlap
 3: scale(lean_bidentrump_1)  b = 41.09 [40.38, 41.80], p < .001
 4: topicC  b = -0.95 [-2.28, 0.38], p = .162
 5: conditionC × scale(lean_bidentrump_1)  b = 1.10 [-0.31, 2.51], p = .126
 6: conditionC × topicC  b = 0.86 [-1.81, 3.52], p = .528
 7: scale(lean_bidentrump_1) × topicC  b = -0.88 [-2.29, 0.53], p = .222
 8: conditionC × scale(lean_bidentrump_1) × topicC  b = -1.66 [-4.48, 1.16], p = .249

# MEAN(accuracy) < 50 (ONLY 249 observations!)
> summ(feols(lean_bidentrump_2 ~ conditionC * scale(lean_bidentrump_1) * topicC, data = d0[pfc_score_mean < thres]))
    term  result
    <char>  <char>
 1: (Intercept)  b = 68.45 [64.41, 72.49], p < .001
 2: conditionC  b = 19.52 [11.43, 27.60], p < .001  # huge effect
 3: scale(lean_bidentrump_1)  b = 28.11 [25.35, 30.87], p < .001
 4: topicC  b = -20.23 [-28.32, -12.15], p < .001
 5: conditionC × scale(lean_bidentrump_1)  b = 6.25 [0.73, 11.78], p = .027
 6: conditionC × topicC  b = 38.23 [22.07, 54.39], p < .001
 7: scale(lean_bidentrump_1) × topicC  b = -8.85 [-14.37, -3.33], p = .002
 8: conditionC × scale(lean_bidentrump_1) × topicC  b = 19.39 [8.34, 30.44], p = .001

# MEAN(accuracy) >= 50
> summ(feols(lean_bidentrump_2 ~ conditionC * scale(lean_bidentrump_1) * topicC, data = d0[pfc_score_mean >= thres]))
    term  result
    <char>  <char>
 1: (Intercept)  b = 40.14 [39.66, 40.62], p < .001
 2: conditionC  b = 2.98 [2.03, 3.94], p < .001  # smaller/more reasonable estimate
 3: scale(lean_bidentrump_1)  b = 40.79 [40.31, 41.28], p < .001
 4: topicC  b = -0.38 [-1.34, 0.58], p = .435
 5: conditionC × scale(lean_bidentrump_1)  b = 1.03 [0.06, 2.01], p = .038
 6: conditionC × topicC  b = 1.81 [-0.10, 3.73], p = .063
 7: scale(lean_bidentrump_1) × topicC  b = -0.35 [-1.33, 0.63], p = .484
 8: conditionC × scale(lean_bidentrump_1) × topicC  b = -0.68 [-2.63, 1.28], p = .496
```
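The repeated "smaller than effect above, but CIs overlap" notes can be checked directly with a rough z-test on the difference between the two subset estimates of conditionC. A minimal sketch for the min(accuracy) < 75 vs >= 75 split, assuming fixest's coef() and se() accessors:

```r
# refit the two subset models and compare their conditionC coefficients
m_lo <- feols(lean_bidentrump_2 ~ conditionC * scale(lean_bidentrump_1) * topicC,
              data = d0[pfc_score_min < 75])
m_hi <- feols(lean_bidentrump_2 ~ conditionC * scale(lean_bidentrump_1) * topicC,
              data = d0[pfc_score_min >= 75])

b_lo <- unname(coef(m_lo)["conditionC"]); se_lo <- unname(se(m_lo)["conditionC"])
b_hi <- unname(coef(m_hi)["conditionC"]); se_hi <- unname(se(m_hi)["conditionC"])

# z-test for the difference between the two independent-subset coefficients
z <- (b_lo - b_hi) / sqrt(se_lo^2 + se_hi^2)
c(diff = b_lo - b_hi, z = z, p = 2 * pnorm(-abs(z)))
```

The same check applies to the 50-point split and to the mean(accuracy) versions; with the MEAN(accuracy) < 50 cell at only 249 observations, that particular difference test will be noisy regardless.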
### 4-way interaction models

```r
# min(accuracy)
m1 <- feols(lean_bidentrump_2 ~ conditionC * scale(lean_bidentrump_1) * topicC * scale(pfc_score_min), data = d0)
    term  result
    <char>  <char>
 1: (Intercept)  b = 43.84 [43.20, 44.48], p < .001
 2: conditionC  b = 3.39 [2.11, 4.67], p < .001
 3: scale(lean_bidentrump_1)  b = 41.17 [40.52, 41.82], p < .001
 4: topicC  b = -0.80 [-2.08, 0.48], p = .222
 5: scale(pfc_score_min)  b = 0.31 [-0.34, 0.95], p = .349
 6: conditionC × scale(lean_bidentrump_1)  b = 1.20 [-0.10, 2.49], p = .070
 7: conditionC × topicC  b = 2.14 [-0.42, 4.70], p = .102
 8: scale(lean_bidentrump_1) × topicC  b = -0.86 [-2.16, 0.43], p = .191
 9: conditionC × scale(pfc_score_min)  b = -1.05 [-2.34, 0.24], p = .112
10: scale(lean_bidentrump_1) × scale(pfc_score_min)  b = 0.31 [-0.33, 0.95], p = .348
11: topicC × scale(pfc_score_min)  b = 0.19 [-1.10, 1.48], p = .776
12: conditionC × scale(lean_bidentrump_1) × topicC  b = 0.05 [-2.53, 2.64], p = .967
13: conditionC × scale(lean_bidentrump_1) × scale(pfc_score_min)  b = -0.35 [-1.63, 0.93], p = .590
14: conditionC × topicC × scale(pfc_score_min)  b = -0.52 [-3.10, 2.06], p = .693
15: scale(lean_bidentrump_1) × topicC × scale(pfc_score_min)  b = 0.26 [-1.02, 1.54], p = .694
16: conditionC × scale(lean_bidentrump_1) × topicC × scale(pfc_score_min)  b = -1.60 [-4.17, 0.96], p = .220

# mean(accuracy)
m1 <- feols(lean_bidentrump_2 ~ conditionC * scale(lean_bidentrump_1) * topicC * scale(pfc_score_mean), data = d0)
    term  result
    <char>  <char>
 1: (Intercept)  b = 44.02 [43.36, 44.69], p < .001
 2: conditionC  b = 3.80 [2.47, 5.13], p < .001
 3: scale(lean_bidentrump_1)  b = 41.29 [40.62, 41.97], p < .001
 4: topicC  b = -0.85 [-2.18, 0.48], p = .212
 5: scale(pfc_score_mean)  b = 0.54 [-0.28, 1.35], p = .196
 6: conditionC × scale(lean_bidentrump_1)  b = 1.63 [0.27, 2.98], p = .019
 7: conditionC × topicC  b = 1.43 [-1.23, 4.09], p = .293
 8: scale(lean_bidentrump_1) × topicC  b = -1.03 [-2.39, 0.33], p = .137
 9: conditionC × scale(pfc_score_mean)  b = -0.88 [-2.51, 0.75], p = .289
10: scale(lean_bidentrump_1) × scale(pfc_score_mean)  b = 0.54 [-0.27, 1.36], p = .193
11: topicC × scale(pfc_score_mean)  b = -0.11 [-1.74, 1.52], p = .893
12: conditionC × scale(lean_bidentrump_1) × topicC  b = -0.27 [-2.98, 2.44], p = .845
13: conditionC × scale(lean_bidentrump_1) × scale(pfc_score_mean)  b = -0.38 [-2.01, 1.25], p = .650
14: conditionC × topicC × scale(pfc_score_mean)  b = -0.35 [-3.62, 2.91], p = .831
15: scale(lean_bidentrump_1) × topicC × scale(pfc_score_mean)  b = 0.34 [-1.29, 1.97], p = .681
16: conditionC × scale(lean_bidentrump_1) × topicC × scale(pfc_score_mean)  b = -1.63 [-4.90, 1.63], p = .326
```
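In these continuous 4-way models the conditionC row is the condition effect at average accuracy only. One way to read off the condition effect at low vs high accuracy is to refit with the standardized score recentered at ±1 SD, so the conditionC main effect becomes the conditional effect at that level; a minimal sketch (the pfc_min_z helper column is mine):

```r
# standardized min accuracy as a plain numeric column so it can be shifted inside I()
d0[, pfc_min_z := as.numeric(scale(pfc_score_min))]

# conditionC main effect = condition effect at 1 SD *below* mean accuracy
m_lo <- feols(lean_bidentrump_2 ~ conditionC * scale(lean_bidentrump_1) * topicC * I(pfc_min_z + 1),
              data = d0)

# conditionC main effect = condition effect at 1 SD *above* mean accuracy
m_hi <- feols(lean_bidentrump_2 ~ conditionC * scale(lean_bidentrump_1) * topicC * I(pfc_min_z - 1),
              data = d0)

summ(m_lo)  # read the conditionC row
summ(m_hi)  # read the conditionC row
```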
## vote intention

### 4-way interaction models

among harris supporters

some signal, but likely a fluke? more accurate messages (based on min(accuracy) but not mean(accuracy)) somewhat increased the probability of voting, and this also interacts with initial vote intention. see line 13 below; a quick multiplicity check follows the output.

```r
m1 <- feols(vote_chance_2 ~ conditionHarrisC * scale(vote_chance_1) * topicC * scale(pfc_score_min), data = d0[lean == initial_lean])
    term  result
    <char>  <char>
 1: (Intercept)  b = 89.26 [88.68, 89.84], p < .001
 2: conditionHarrisC  b = 1.63 [0.46, 2.79], p = .006
 3: scale(vote_chance_1)  b = 20.91 [20.23, 21.59], p < .001
 4: topicC  b = -0.37 [-1.54, 0.79], p = .529
 5: scale(pfc_score_min)  b = 0.27 [-0.32, 0.87], p = .371
 6: conditionHarrisC × scale(vote_chance_1)  b = -1.04 [-2.41, 0.33], p = .137
 7: conditionHarrisC × topicC  b = 2.07 [-0.26, 4.40], p = .081
 8: scale(vote_chance_1) × topicC  b = 0.03 [-1.34, 1.40], p = .968
 9: conditionHarrisC × scale(pfc_score_min)  b = -0.97 [-2.16, 0.22], p = .109
10: scale(vote_chance_1) × scale(pfc_score_min)  b = -0.18 [-0.96, 0.61], p = .661
11: topicC × scale(pfc_score_min)  b = -1.33 [-2.52, -0.14], p = .029
12: conditionHarrisC × scale(vote_chance_1) × topicC  b = -6.21 [-8.95, -3.48], p < .001
13: conditionHarrisC × scale(vote_chance_1) × scale(pfc_score_min)  b = 1.41 [-0.16, 2.98], p = .079  # something but weak
14: conditionHarrisC × topicC × scale(pfc_score_min)  b = 0.99 [-1.39, 3.37], p = .414
15: scale(vote_chance_1) × topicC × scale(pfc_score_min)  b = 2.18 [0.61, 3.75], p = .007  # no condition though
16: conditionHarrisC × scale(vote_chance_1) × topicC × scale(pfc_score_min)  b = -1.13 [-4.27, 2.02], p = .482  # nothing

m1 <- feols(vote_chance_2 ~ conditionHarrisC * scale(vote_chance_1) * topicC * scale(pfc_score_mean), data = d0[lean == initial_lean])
    term  result
    <char>  <char>
 1: (Intercept)  b = 89.04 [88.44, 89.64], p < .001
 2: conditionHarrisC  b = 2.14 [0.95, 3.34], p < .001
 3: scale(vote_chance_1)  b = 21.27 [20.60, 21.95], p < .001
 4: topicC  b = -0.32 [-1.52, 0.88], p = .602
 5: scale(pfc_score_mean)  b = -0.19 [-0.83, 0.45], p = .566
 6: conditionHarrisC × scale(vote_chance_1)  b = -1.81 [-3.16, -0.46], p = .009
 7: conditionHarrisC × topicC  b = 2.15 [-0.24, 4.54], p = .078
 8: scale(vote_chance_1) × topicC  b = -0.21 [-1.56, 1.14], p = .756
 9: conditionHarrisC × scale(pfc_score_mean)  b = -0.30 [-1.58, 0.98], p = .649
10: scale(vote_chance_1) × scale(pfc_score_mean)  b = 0.32 [-0.48, 1.12], p = .432
11: topicC × scale(pfc_score_mean)  b = -1.57 [-2.85, -0.29], p = .017
12: conditionHarrisC × scale(vote_chance_1) × topicC  b = -6.93 [-9.63, -4.22], p < .001
13: conditionHarrisC × scale(vote_chance_1) × scale(pfc_score_mean)  b = 0.50 [-1.11, 2.10], p = .542  # nothing
14: conditionHarrisC × topicC × scale(pfc_score_mean)  b = 1.02 [-1.54, 3.58], p = .434
15: scale(vote_chance_1) × topicC × scale(pfc_score_mean)  b = 3.23 [1.63, 4.84], p < .001
16: conditionHarrisC × scale(vote_chance_1) × topicC × scale(pfc_score_mean)  b = -0.58 [-3.79, 2.63], p = .725
```
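On the "likely a fluke" point: line 13 is one marginal coefficient out of 16, and it only shows up for min(accuracy). A quick multiplicity check is to Holm-adjust the model's p-values; a minimal sketch (m_min is just a local refit of the min(accuracy) model, since m1 was overwritten by the mean(accuracy) call, and I'm assuming the 4th column of the fixest coeftable holds the p-values):

```r
# refit the min(accuracy) model among supporters whose lean matches their initial lean
m_min <- feols(vote_chance_2 ~ conditionHarrisC * scale(vote_chance_1) * topicC * scale(pfc_score_min),
               data = d0[lean == initial_lean])

# Holm-adjust the p-values of all 16 terms; column 4 of the coeftable should be Pr(>|t|)
pvals <- summary(m_min)$coeftable[, 4]
round(sort(p.adjust(pvals, method = "holm")), 3)
```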
among trump supporters

```r
m1 <- feols(vote_chance_2 ~ conditionC * scale(vote_chance_1) * topicC * scale(pfc_score_min), data = d0[lean == initial_lean])
 1: (Intercept)  b = 85.50 [84.28, 86.71], p < .001
 2: conditionC  b = 0.86 [-1.57, 3.29], p = .487
 3: scale(vote_chance_1)  b = 24.26 [22.41, 26.10], p < .001
 4: topicC  b = 0.48 [-1.95, 2.91], p = .697
 5: scale(pfc_score_min)  b = -0.81 [-2.04, 0.43], p = .199
 6: conditionC × scale(vote_chance_1)  b = -0.83 [-4.51, 2.86], p = .660
 7: conditionC × topicC  b = 1.12 [-3.74, 5.98], p = .651
 8: scale(vote_chance_1) × topicC  b = -0.77 [-4.46, 2.91], p = .680
 9: conditionC × scale(pfc_score_min)  b = -0.82 [-3.29, 1.65], p = .515
10: scale(vote_chance_1) × scale(pfc_score_min)  b = 0.97 [-0.99, 2.93], p = .331
11: topicC × scale(pfc_score_min)  b = -0.62 [-3.09, 1.85], p = .623
12: conditionC × scale(vote_chance_1) × topicC  b = -3.60 [-10.97, 3.77], p = .338
13: conditionC × scale(vote_chance_1) × scale(pfc_score_min)  b = 1.93 [-1.98, 5.85], p = .332
14: conditionC × topicC × scale(pfc_score_min)  b = 1.57 [-3.37, 6.50], p = .533
15: scale(vote_chance_1) × topicC × scale(pfc_score_min)  b = 0.33 [-3.59, 4.24], p = .870
16: conditionC × scale(vote_chance_1) × topicC × scale(pfc_score_min)  b = -2.37 [-10.19, 5.46], p = .553

m1 <- feols(vote_chance_2 ~ conditionC * scale(vote_chance_1) * topicC * scale(pfc_score_mean), data = d0[lean == initial_lean])
    term  result
    <char>  <char>
 1: (Intercept)  b = 85.24 [83.92, 86.56], p < .001
 2: conditionC  b = 1.88 [-0.76, 4.52], p = .162
 3: scale(vote_chance_1)  b = 24.37 [22.53, 26.20], p < .001
 4: topicC  b = -0.45 [-3.09, 2.19], p = .738
 5: scale(pfc_score_mean)  b = -0.14 [-1.67, 1.40], p = .862
 6: conditionC × scale(vote_chance_1)  b = -3.03 [-6.71, 0.64], p = .106
 7: conditionC × topicC  b = 4.22 [-1.05, 9.49], p = .117
 8: scale(vote_chance_1) × topicC  b = 0.84 [-2.83, 4.52], p = .652
 9: conditionC × scale(pfc_score_mean)  b = -1.25 [-4.33, 1.83], p = .426
10: scale(vote_chance_1) × scale(pfc_score_mean)  b = -0.26 [-2.42, 1.89], p = .810
11: topicC × scale(pfc_score_mean)  b = 1.69 [-1.39, 4.76], p = .282
12: conditionC × scale(vote_chance_1) × topicC  b = -7.43 [-14.79, -0.08], p = .048
13: conditionC × scale(vote_chance_1) × scale(pfc_score_mean)  b = 2.13 [-2.18, 6.45], p = .332
14: conditionC × topicC × scale(pfc_score_mean)  b = -0.12 [-6.28, 6.03], p = .969
15: scale(vote_chance_1) × topicC × scale(pfc_score_mean)  b = -2.83 [-7.15, 1.48], p = .198
16: conditionC × scale(vote_chance_1) × topicC × scale(pfc_score_mean)  b = -0.25 [-8.88, 8.38], p = .954
```

## binary vote

### 4-way interaction models

harris vote

```r
> summ(feols(vote_harris_2 ~ conditionHarrisC * scale(vote_harris_1) * scale(topicZ) * scale(pfc_score_min), data = d0))
    term  result
    <char>  <char>
 1: (Intercept)  b = 49.83 [48.86, 50.80], p < .001
 2: conditionHarrisC  b = 3.59 [1.66, 5.53], p < .001
 3: scale(vote_harris_1)  b = 47.54 [46.57, 48.50], p < .001
 4: scale(topicZ)  b = 0.36 [-0.61, 1.33], p = .465
 5: scale(pfc_score_min)  b = -0.06 [-1.06, 0.94], p = .906
 6: conditionHarrisC × scale(vote_harris_1)  b = -0.32 [-2.25, 1.61], p = .743
 7: conditionHarrisC × scale(topicZ)  b = 0.38 [-1.56, 2.32], p = .700
 8: scale(vote_harris_1) × scale(topicZ)  b = -0.61 [-1.57, 0.36], p = .216
 9: conditionHarrisC × scale(pfc_score_min)  b = 0.05 [-1.95, 2.05], p = .961
10: scale(vote_harris_1) × scale(pfc_score_min)  b = 0.39 [-0.61, 1.39], p = .445
11: scale(topicZ) × scale(pfc_score_min)  b = 0.16 [-0.84, 1.16], p = .751
12: conditionHarrisC × scale(vote_harris_1) × scale(topicZ)  b = -0.11 [-2.04, 1.82], p = .913
13: conditionHarrisC × scale(vote_harris_1) × scale(pfc_score_min)  b = -0.90 [-2.89, 1.10], p = .379
14: conditionHarrisC × scale(topicZ) × scale(pfc_score_min)  b = -0.60 [-2.60, 1.40], p = .558
15: scale(vote_harris_1) × scale(topicZ) × scale(pfc_score_min)  b = 0.21 [-0.79, 1.21], p = .682
16: conditionHarrisC × scale(vote_harris_1) × scale(topicZ) × scale(pfc_score_min)  b = -0.28 [-2.28, 1.71], p = .782

> summ(feols(vote_harris_2 ~ conditionHarrisC * scale(vote_harris_1) * scale(topicZ) * scale(pfc_score_mean), data = d0))
    term  result
    <char>  <char>
 1: (Intercept)  b = 49.76 [48.74, 50.78], p < .001
 2: conditionHarrisC  b = 2.51 [0.48, 4.54], p = .016
 3: scale(vote_harris_1)  b = 47.43 [46.42, 48.45], p < .001
 4: scale(topicZ)  b = 0.01 [-1.01, 1.02], p = .990
 5: scale(pfc_score_mean)  b = 0.81 [-0.46, 2.09], p = .212
 6: conditionHarrisC × scale(vote_harris_1)  b = 0.32 [-1.71, 2.34], p = .761
 7: conditionHarrisC × scale(topicZ)  b = -0.24 [-2.27, 1.79], p = .816
 8: scale(vote_harris_1) × scale(topicZ)  b = -0.46 [-1.47, 0.56], p = .377
 9: conditionHarrisC × scale(pfc_score_mean)  b = 0.33 [-2.22, 2.88], p = .799
10: scale(vote_harris_1) × scale(pfc_score_mean)  b = 0.07 [-1.20, 1.34], p = .912
11: scale(topicZ) × scale(pfc_score_mean)  b = 0.51 [-0.76, 1.78], p = .434
12: conditionHarrisC × scale(vote_harris_1) × scale(topicZ)  b = 0.59 [-1.43, 2.62], p = .565
13: conditionHarrisC × scale(vote_harris_1) × scale(pfc_score_mean)  b = -0.90 [-3.45, 1.64], p = .488
14: conditionHarrisC × scale(topicZ) × scale(pfc_score_mean)  b = 0.84 [-1.70, 3.38], p = .518
15: scale(vote_harris_1) × scale(topicZ) × scale(pfc_score_mean)  b = -0.62 [-1.89, 0.65], p = .337
16: conditionHarrisC × scale(vote_harris_1) × scale(topicZ) × scale(pfc_score_mean)  b = -0.40 [-2.94, 2.13], p = .755
```
trump vote

```r
> summ(feols(vote_trump_2 ~ conditionC * scale(vote_trump_1) * scale(topicZ) * scale(pfc_score_min), data = d0))
    term  result
    <char>  <char>
 1: (Intercept)  b = 39.45 [38.58, 40.33], p < .001
 2: conditionC  b = 1.45 [-0.29, 3.20], p = .103
 3: scale(vote_trump_1)  b = 46.74 [45.80, 47.68], p < .001
 4: scale(topicZ)  b = 0.87 [-0.01, 1.75], p = .051
 5: scale(pfc_score_min)  b = -0.58 [-1.47, 0.30], p = .195
 6: conditionC × scale(vote_trump_1)  b = 0.79 [-1.10, 2.67], p = .413
 7: conditionC × scale(topicZ)  b = 0.74 [-1.01, 2.49], p = .407
 8: scale(vote_trump_1) × scale(topicZ)  b = 0.55 [-0.39, 1.49], p = .250
 9: conditionC × scale(pfc_score_min)  b = -1.45 [-3.22, 0.32], p = .108
10: scale(vote_trump_1) × scale(pfc_score_min)  b = 0.27 [-0.66, 1.20], p = .571
11: scale(topicZ) × scale(pfc_score_min)  b = 0.01 [-0.87, 0.90], p = .975
12: conditionC × scale(vote_trump_1) × scale(topicZ)  b = -0.20 [-2.08, 1.68], p = .834
13: conditionC × scale(vote_trump_1) × scale(pfc_score_min)  b = 0.94 [-0.92, 2.80], p = .320
14: conditionC × scale(topicZ) × scale(pfc_score_min)  b = 1.03 [-0.73, 2.80], p = .251
15: scale(vote_trump_1) × scale(topicZ) × scale(pfc_score_min)  b = 0.12 [-0.80, 1.05], p = .792
16: conditionC × scale(vote_trump_1) × scale(topicZ) × scale(pfc_score_min)  b = 0.76 [-1.09, 2.61], p = .421

> summ(feols(vote_trump_2 ~ conditionC * scale(vote_trump_1) * scale(topicZ) * scale(pfc_score_mean), data = d0))
    term  result
    <char>  <char>
 1: (Intercept)  b = 39.81 [38.88, 40.73], p < .001
 2: conditionC  b = 1.28 [-0.56, 3.13], p = .172
 3: scale(vote_trump_1)  b = 46.77 [45.80, 47.74], p < .001
 4: scale(topicZ)  b = 1.05 [0.13, 1.97], p = .026
 5: scale(pfc_score_mean)  b = -0.94 [-2.07, 0.19], p = .103
 6: conditionC × scale(vote_trump_1)  b = 0.68 [-1.26, 2.61], p = .493
 7: conditionC × scale(topicZ)  b = -0.07 [-1.91, 1.77], p = .941
 8: scale(vote_trump_1) × scale(topicZ)  b = 0.53 [-0.44, 1.49], p = .285
 9: conditionC × scale(pfc_score_mean)  b = -0.73 [-2.99, 1.53], p = .526
10: scale(vote_trump_1) × scale(pfc_score_mean)  b = 0.25 [-0.90, 1.41], p = .668
11: scale(topicZ) × scale(pfc_score_mean)  b = -0.52 [-1.65, 0.61], p = .364
12: conditionC × scale(vote_trump_1) × scale(topicZ)  b = -0.69 [-2.62, 1.25], p = .487
13: conditionC × scale(vote_trump_1) × scale(pfc_score_mean)  b = 1.27 [-1.04, 3.58], p = .281
14: conditionC × scale(topicZ) × scale(pfc_score_mean)  b = 1.96 [-0.30, 4.21], p = .089  # something?
15: scale(vote_trump_1) × scale(topicZ) × scale(pfc_score_mean)  b = -0.33 [-1.48, 0.82], p = .575
16: conditionC × scale(vote_trump_1) × scale(topicZ) × scale(pfc_score_mean)  b = 0.78 [-1.52, 3.08], p = .507
```
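Finally, instead of eyeballing individual rows model by model, the "no clear moderation" read can be backed by a joint test of all the condition × accuracy interaction terms in a given model. A minimal sketch for the lean outcome with min(accuracy), assuming fixest's wald() accepts a keep regex over coefficient names (as in recent fixest versions):

```r
m1 <- feols(lean_bidentrump_2 ~ conditionC * scale(lean_bidentrump_1) * topicC * scale(pfc_score_min),
            data = d0)

# joint Wald test that the 2-, 3-, and 4-way condition × accuracy terms are all zero
wald(m1, keep = "conditionC.*pfc_score_min")
```

Swapping in the other outcomes and formulas gives the analogous omnibus test for each model above.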