m1 <- feols(
  persuasion ~ conditionZ * topicZ * lean_bidentrump_1Z +
    emotion_motivationZ + narratives_social_influenceZ + rationality_evidenceZ +
    relationship_trustZ + tone_rhetoricsZ + accuracyZ,
  d3, se = "HC1"
)
summ(m1)

    term                                       result                               sig
    <char>                                     <char>                               <char>
 1: (Intercept)                                b = 1.49 [1.06, 1.93], p < .001      ***
 2: conditionZ                                 b = 1.03 [0.41, 1.65], p = .001      **
 3: topicZ                                     b = 0.79 [0.31, 1.27], p = .001      **
 4: lean_bidentrump_1Z                         b = 0.58 [0.17, 0.99], p = .005      **
 5: emotion_motivationZ                        b = 0.33 [-0.27, 0.94], p = .278
 6: narratives_social_influenceZ               b = 0.13 [-0.36, 0.62], p = .596
 7: rationality_evidenceZ                      b = -0.49 [-1.05, 0.07], p = .086    .
 8: relationship_trustZ                        b = 0.16 [-0.29, 0.62], p = .484
 9: tone_rhetoricsZ                            b = 1.84 [1.13, 2.55], p < .001      ***
10: accuracyZ                                  b = 0.11 [-0.52, 0.73], p = .742
11: conditionZ × topicZ                        b = -0.34 [-0.78, 0.09], p = .119
12: conditionZ × lean_bidentrump_1Z            b = -2.42 [-3.16, -1.67], p < .001   ***
13: topicZ × lean_bidentrump_1Z                b = -0.02 [-0.39, 0.36], p = .936
14: conditionZ × topicZ × lean_bidentrump_1Z   b = -0.25 [-0.64, 0.13], p = .198
```

## break down tone-rhetorics (`tr_`) into subcomponents

```r
    term                                       result                               sig
    <char>                                     <char>                               <char>
 1: (Intercept)                                b = 1.49 [1.06, 1.93], p < .001      ***
 2: conditionZ                                 b = 1.02 [0.39, 1.65], p = .001      **
 3: topicZ                                     b = 0.74 [0.26, 1.22], p = .002      **
 4: lean_bidentrump_1Z                         b = 0.53 [0.11, 0.94], p = .013      *
 5: emotion_motivationZ                        b = 0.35 [-0.27, 0.96], p = .267
 6: narratives_social_influenceZ               b = 0.16 [-0.33, 0.65], p = .526
 7: rationality_evidenceZ                      b = -0.52 [-1.08, 0.04], p = .069    .
 8: relationship_trustZ                        b = 0.17 [-0.28, 0.62], p = .465
 9: tr_contrastive_toneZ                       b = 0.90 [0.39, 1.41], p = .001      ***
10: tr_namecallingZ                            b = 0.28 [-0.27, 0.84], p = .316
11: tr_negative_toneZ                          b = 0.73 [0.15, 1.31], p = .013      *
12: tr_positive_toneZ                          b = 0.63 [0.01, 1.24], p = .045      *
13: accuracyZ                                  b = 0.13 [-0.50, 0.76], p = .691
14: conditionZ × topicZ                        b = -0.31 [-0.74, 0.12], p = .155
15: conditionZ × lean_bidentrump_1Z            b = -2.45 [-3.21, -1.69], p < .001   ***
16: topicZ × lean_bidentrump_1Z                b = 0.00 [-0.38, 0.38], p = .990
17: conditionZ × topicZ × lean_bidentrump_1Z   b = -0.28 [-0.67, 0.12], p = .168
```

use lasso regression to identify the most predictive features (out of all 27, which includes accuracy). the features aren't highly correlated (max r = .52).

distribution of correlation r values:

![[20250210214216.png]]

lasso coefficients:

- `coef_regularized`: L1-regularized coefficient (if 0, the feature is unimportant)
- `coef_debiased`: debiased version of `coef_regularized`, in which small coefficients aren't shrunk to 0; `p` is the p-value for the debiased coefficient

```r
    features                                   coef_regularized  coef_debiased       p     sig
    <char>                                                <num>          <num>   <num>  <char>
 1: conditionZ                                           0.4799         1.0138  0.0029      **
 2: topicZ                                               0.4294         0.5531  0.0357       *
 3: lean_bidentrump_1Z                                   0.3488         0.4710  0.0323       *
 4: accuracyS                                            0.0000         0.9426  0.5432
 5: active_listening_and_empathy                         0.0000        -0.2385  0.7744
 6: address_objections_and_counterarguments             -2.2914        -2.5369  0.0002     ***
 7: aggressive_and_explicit_directives                   0.0000         1.6949  0.5157
 8: audience_adaptation                                  1.7157         2.2036  0.0056      **
 9: build_rapport_and_common_ground                      0.0000         0.3234  0.7450
10: cognitive_elaboration                                0.0000        -0.1455  0.8576
11: contrastive_tone                                     2.0250         2.4688  0.0024      **
12: emotional_appeal_with_balanced_urgency               0.0000        -0.3762  0.6582
13: encourage_action_with_clear_calls                    0.6067         1.2503  0.0988       .
14: evidence_or_factbased_arguments                      0.7994         1.1127  0.1028
15: localized_focus                                      0.0000        -0.3981  0.7177
16: namecalling                                          0.0000         1.0883  0.2831
17: negative_tone                                        1.5289         1.9076  0.0279       *
18: politeness_and_civil_tone                            0.0000         2.6037  0.2678
19: positive_framing_and_value_alignment                 0.0000        -0.5469  0.4699
20: positive_tone                                        0.4756         1.1994  0.1831
21: reciprocity_and_mutual_benefit                       0.0000        -1.3077  0.2612
22: relatable_hypotheticals                              0.0000         0.7049  0.4815
23: social_proof_and_normative_influence                 0.0000         1.0702  0.4128
24: stimulate_anger                                      0.0000        -1.4596  0.2491
25: stimulate_enthusiasm                                 0.0000         0.1514  0.8650
26: storytelling_and_relatable_anecdotes                 0.0000         0.1858  0.8345
27: transfer_of_association                              0.6872         1.0620  0.2020
28: use_of_everyday_people_as_messengers                 0.0000         0.4729  0.9255
29: use_of_negative_testimonials                         0.0000        -1.0658  0.5818
30: use_of_positive_testimonials                         0.0000        -0.6640  0.7360
31: conditionZ:topicZ                                   -0.1879        -0.3355  0.1450
32: conditionZ:lean_bidentrump_1Z                       -2.4261        -3.0548  0.0000     ***
33: topicZ:lean_bidentrump_1Z                            0.0000         0.0043  0.9826
34: conditionZ:topicZ:lean_bidentrump_1Z                -0.2060        -0.2712  0.2017
    features                                   coef_regularized  coef_debiased       p     sig
```

(over)fit by running regular/unpenalized OLS with only the important features (those with nonzero `coef_regularized`)

```r
    term                                       result                                sig
    <char>                                     <char>                                <char>
 1: (Intercept)                                b = -3.58 [-5.75, -1.42], p = .001    **
 2: conditionZ                                 b = 0.94 [0.38, 1.50], p = .001       ***
 3: topicZ                                     b = 0.48 [0.04, 0.93], p = .033       *
 4: lean_bidentrump_1Z                         b = 0.39 [0.01, 0.77], p = .044       *
 5: address_objections_and_counterarguments   b = -2.86 [-4.18, -1.55], p < .001     ***
 6: audience_adaptation                        b = 2.24 [0.74, 3.74], p = .003       **
 7: contrastive_tone                           b = 2.64 [1.04, 4.23], p = .001       **
 8: encourage_action_with_clear_calls          b = 1.23 [-0.18, 2.64], p = .087      .
 9: evidence_or_factbased_arguments            b = 1.23 [-0.09, 2.56], p = .068      .
10: negative_tone                              b = 2.41 [0.75, 4.08], p = .005       **
11: positive_tone                              b = 1.35 [-0.37, 3.07], p = .124
12: transfer_of_association                    b = 1.19 [-0.36, 2.74], p = .132
13: conditionZ × topicZ                        b = -0.26 [-0.70, 0.17], p = .232
14: conditionZ × lean_bidentrump_1Z            b = -3.11 [-3.91, -2.31], p < .001    ***
15: conditionZ × topicZ × lean_bidentrump_1Z   b = -0.33 [-0.71, 0.04], p = .079     .
```

might as well do it for all 26 strategies with unpenalized OLS

```r
    term                                       result                                sig
    <char>                                     <char>                                <char>
 1: (Intercept)                                b = -7.23 [-13.15, -1.32], p = .017   *
 2: conditionZ                                 b = 1.12 [0.44, 1.79], p = .001       **
 3: topicZ                                     b = 0.56 [0.05, 1.07], p = .030       *
 4: lean_bidentrump_1Z                         b = 0.46 [0.03, 0.88], p = .037       *
 5: accuracy                                   b = 0.01 [-0.02, 0.04], p = .484
 6: active_listening_and_empathy               b = -0.28 [-1.91, 1.36], p = .739
 7: address_objections_and_counterarguments   b = -2.68 [-3.98, -1.37], p < .001     ***
 8: aggressive_and_explicit_directives         b = 1.61 [-3.59, 6.80], p = .544
 9: audience_adaptation                        b = 2.28 [0.70, 3.87], p = .005       **
10: build_rapport_and_common_ground            b = 0.28 [-1.67, 2.22], p = .781
11: cognitive_elaboration                      b = -0.13 [-1.75, 1.48], p = .872
12: contrastive_tone                           b = 2.64 [1.01, 4.26], p = .002       **
13: emotional_appeal_with_balanced_urgency     b = -0.31 [-1.98, 1.36], p = .716
14: encourage_action_with_clear_calls          b = 1.18 [-0.30, 2.66], p = .119
15: evidence_or_factbased_arguments            b = 1.22 [-0.13, 2.56], p = .076      .
16: localized_focus                            b = -0.60 [-2.86, 1.66], p = .601
17: namecalling                                b = 1.34 [-0.70, 3.37], p = .197
18: negative_tone                              b = 2.27 [0.52, 4.02], p = .011       *
19: politeness_and_civil_tone                  b = 3.15 [-1.96, 8.25], p = .227
20: positive_framing_and_value_alignment       b = -0.47 [-1.97, 1.03], p = .541
21: positive_tone                              b = 1.34 [-0.46, 3.15], p = .145
22: reciprocity_and_mutual_benefit             b = -1.37 [-3.67, 0.93], p = .242
23: relatable_hypotheticals                    b = 0.39 [-1.61, 2.39], p = .699
24: social_proof_and_normative_influence       b = 1.57 [-0.95, 4.08], p = .223
25: stimulate_anger                            b = -1.17 [-3.60, 1.26], p = .344
26: stimulate_enthusiasm                       b = 0.44 [-1.35, 2.22], p = .631
27: storytelling_and_relatable_anecdotes       b = 0.03 [-1.73, 1.79], p = .975
28: transfer_of_association                    b = 1.15 [-0.48, 2.77], p = .168
29: use_of_everyday_people_as_messengers       b = 0.20 [-9.63, 10.04], p = .968
30: use_of_negative_testimonials               b = -0.90 [-4.66, 2.86], p = .640
31: use_of_positive_testimonials               b = -0.85 [-5.25, 3.55], p = .705
32: conditionZ × topicZ                        b = -0.34 [-0.78, 0.11], p = .137
33: conditionZ × lean_bidentrump_1Z            b = -3.13 [-3.99, -2.27], p < .001    ***
34: topicZ × lean_bidentrump_1Z                b = 0.01 [-0.38, 0.39], p = .963
35: conditionZ × topicZ × lean_bidentrump_1Z   b = -0.31 [-0.72, 0.11], p = .145
    term                                       result                                sig
```

![[20250210175236.png]]
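for reference, the heteroskedasticity-robust (HC1) standard errors used in the `feols(..., se = "HC1")` calls above can be reproduced with any OLS implementation. a minimal sketch (Python/statsmodels purely as a portable illustration — this is not the note's actual pipeline, and the data here are synthetic):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 400
X = rng.normal(size=(n, 3))  # three standardized predictors (synthetic)
# outcome with heteroskedastic noise: error variance grows with |X[:, 0]|
y = 1.5 + X @ np.array([1.0, 0.8, 0.6]) + rng.normal(size=n) * (1 + np.abs(X[:, 0]))

# HC1 "sandwich" covariance, analogous to se = "HC1" in fixest's feols
fit = sm.OLS(y, sm.add_constant(X)).fit(cov_type="HC1")
print(fit.params.round(2))  # intercept + 3 slopes
print(fit.bse.round(2))     # heteroskedasticity-robust standard errors
```

unlike classical OLS SEs, the HC1 SEs stay valid when the error variance depends on the predictors, which is why they are the default safe choice for survey-style outcomes like persuasion ratings.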
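the select-then-refit workflow above (lasso zeroes out unimportant features, then unpenalized OLS is run on the survivors) can be sketched as follows. again Python/scikit-learn purely as an illustration with synthetic data; the debiasing step of the actual analysis is replaced here by a plain post-lasso OLS refit, and all coefficients are made up:

```python
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression

rng = np.random.default_rng(0)
n, p = 500, 27
X = rng.normal(size=(n, p))  # 27 standardized features (synthetic)
beta = np.zeros(p)
beta[[0, 1, 2, 6]] = [1.0, 0.5, 0.4, -2.3]  # only 4 features truly matter
y = X @ beta + rng.normal(scale=2.0, size=n)

# step 1: L1-penalized fit; cross-validation picks the penalty,
# and unimportant coefficients are shrunk exactly to 0
lasso = LassoCV(cv=5).fit(X, y)
selected = np.flatnonzero(lasso.coef_)  # features with nonzero regularized coef

# step 2: unpenalized OLS on the selected features only
# (this "(over)fits" in the sense the note flags: selection and
# estimation reuse the same data, so the refit coefs are optimistic)
ols = LinearRegression().fit(X[:, selected], y)
print(selected)
print(ols.coef_.round(2))
```

the debiased-lasso version in the note goes further than this sketch: it corrects the shrinkage bias of the regularized coefficients directly, which is what makes the reported p-values valid.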