## Exhaustive Attrition Analysis — 4 Papers
---
### Paper 1: Stagnaro & Amsalem (2025), *Nature Communications*
**"Factual knowledge can reduce attitude polarization"**
**File:** `s41467-025-58697-3 (5).pdf` (no supplement in this set)
#### Did they do attrition analysis? **Yes — minimally in main text, with key analysis deferred to unavailable supplements.**
| # | Page | Verbatim quote | Assessment |
|---|------|---------------|------------|
| 1 | p.7 | "of the 1673 participants who entered the study, 433 were filtered out for duplicate use, not wanting to do a long study or the amount of work described, or because they indicated themselves to be True Independent or Political Other. Of the remaining 1064 individuals who continued into the study, 12 were ejected, giving us a final sample of N = 1011" | **Yes** — genuine recruited→analyzed pipeline (1,673→1,064→1,011). But this is pre-randomization screening, not post-treatment attrition. |
| 2 | p.2 | "retention rate of 87% (N = 881)" | **Yes** — overall retention rate reported. |
| 3 | p.7 | "Of the initial sample, 881 (87%) participated in Wave 2. The Wave 2 participants were comparable to those from Wave 1 on political ideology, party identification, gender, and condition (see Supplementary Note 13)" | **Partial** — claims comparability on key variables including condition, but all evidence deferred to Supplementary Note 13 (unavailable). No test statistics in main text. |
| 4 | p.7 | "looking at the Wave 1 results while understanding just on those who returned for Wave 2 yields substantively identical results (see Supplementary Note 14)" | **Partial** — robustness check restricting W1 analyses to W2 completers. Deferred to Supplementary Note 14 (unavailable). |
| 5 | p.7 | "All results reported above persist when examining only those who showed no cheating behavior (Supplementary Note 7)" | Robustness check for data quality exclusion — deferred to supplement. No count of excluded cheaters given. |
| 6 | p.4 | "excluding the 8.4% of participants whose attitudes were already moderate (i.e., exactly at the midpoint) pre-treatment" | Analytical exclusion (~85 people). No check of whether this exclusion is balanced across conditions. |
**Varying N/df across analyses (never explained):**
| Analysis | df/N |
|----------|------|
| Full W1 sample | N=1,011 |
| Fig 5 | n=1,008 |
| W1 attitude (gun control) | df=922 |
| W1 attitude (other) | df=919 |
| W1 attitude (another) | df=916 |
| Subgroup analysis | df=587 |
| Non-midpoint subsample | n=926 |
| W2 analyses | df=806, 803, 796 |
None of these variations are reconciled or explained.
**Referenced but unavailable supplements:** Supplementary Notes 7, 13, 14, and the Nature Portfolio Reporting Summary. Notes 13 and 14 are **high-priority** — they likely contain the completer vs. non-completer comparison and the attrition robustness check.
**What's missing:** No CONSORT diagram. No differential attrition test in main text. No IPW/bounds/imputation. No MCAR/MAR/MNAR. No explanation of varying df. Retention rate never broken down by treatment vs. control.
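For reference, the missing differential attrition test requires nothing more than per-arm completion counts. A minimal sketch in Python using a two-proportion z-test; since the paper reports only the pooled 87% retention (881 of 1,011), the even treatment/control split below is a hypothetical illustration, not the paper's data:

```python
import math

def diff_attrition_z(ret_t, n_t, ret_c, n_c):
    """Two-proportion z-test for differential retention
    between a treatment arm and a control arm."""
    p_t, p_c = ret_t / n_t, ret_c / n_c
    p = (ret_t + ret_c) / (n_t + n_c)          # pooled retention rate
    se = math.sqrt(p * (1 - p) * (1 / n_t + 1 / n_c))
    return (p_t - p_c) / se

# Hypothetical even split of N = 1,011 with 881 total completers;
# the paper never reports retention by condition.
z = diff_attrition_z(ret_t=440, n_t=506, ret_c=441, n_c=505)
print(round(z, 3))
```

Any |z| > 1.96 would flag differential attrition at the conventional 5% level; the point is that the paper had every number needed to report this.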
---
### Paper 2: Voelkel, Stagnaro et al. (2024), *Science*
**"Megastudy testing 25 treatments to reduce antidemocratic attitudes and partisan animosity"**
**File:** `science.adh4764 (1).pdf` (no supplement in this set)
#### Did they do attrition analysis? **No.**
| # | Page | Verbatim quote | Assessment |
|---|------|---------------|------------|
| 1 | p.1 | "n = 32,059 participants" | Final N only. No upstream denominator. |
| 2 | p.3 (Fig 1) | Varying control-group Ns: partisan animosity n=5,552; biased evaluation n=5,388 | Item-level missingness (range of 168 across outcomes). Never discussed. |
| 3 | p.6 (Fig 2) | Full-sample Ns range from 31,186 to 31,856 across 8 outcomes | 670-person range. Never discussed. |
| 4 | p.10 | "efficacious treatment effects on these outcomes (n = 8644), recruiting participants from the control condition and from 10 of the 25 treatment conditions" | 2-week follow-up. ~73% raw drop from 32,059 (or ~34% from eligible conditions). **No retention rate stated. No differential attrition test.** |
| 5 | p.5 | "six treatments still significantly reduced partisan animosity 2 weeks later" | Durability framed as effect persistence, not attrition. |
| 6 | fn 40 | "We exclude 2032 because of mode changes in data collection as a result of the COVID-19 pandemic" | 2,032 observations excluded. No analysis of whether they differ from retained. |
| 7 | fn 42 | "we did not include independents who reported not being closer to one of the parties" | Design-stage exclusion. No count. |
| 8 | p.10 | "Another limitation of our study is our use of participants sampled from a nonprobability opt-in internet panel" | External validity concern only — no mention of attrition bias. |
**What's completely absent:** No CONSORT diagram. No attrition rate (overall or by condition). No differential attrition tests. No completer vs. non-completer comparison. No IPW/bounds/imputation. No MCAR/MAR/MNAR. No sensitivity analysis for attrition. No reporting checklist.
**Referenced but unavailable supplements:** Sections S0–S11, Tables S1–S24+, Figures 1–59+. **Section S8** (follow-up survey details) is the most likely location for any attrition analysis, but nothing in the main text hints it exists there. The complete absence of any attrition mention in the main text makes it unlikely (though not impossible) that the supplement contains thorough attrition analysis.
**Key implicit attrition problems:**
- The follow-up drop (32,059 → 8,644) is massive and completely unanalyzed
- Effect sizes at follow-up are only 32–42% of their original magnitudes; this could reflect genuine decay OR differential attrition of persuadable participants, and the paper never distinguishes the two
- Item-level missingness varies by outcome (31,186 to 31,856) — never discussed
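The raw follow-up drop is simple arithmetic from the two Ns the paper does report; a proper retention rate would need the eligible-condition denominator, which is unknown without Section S8:

```python
# Back-of-envelope attrition arithmetic for the 2-week follow-up.
# Only the raw drop from the full sample is computable from the
# main text; the eligible-condition denominator is not reported.
full_n, followup_n = 32_059, 8_644
raw_drop = 1 - followup_n / full_n
print(f"raw follow-up drop: {raw_drop:.1%}")
```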
---
### Paper 3: Vlasceanu et al. (2024), *Science Advances*
**"Addressing climate change with behavioral science: A global intervention tournament in 63 countries"**
**Files:** `sciadv.adj5778.pdf` + `sciadv.adj5778_sm.pdf`
#### Did they do attrition analysis? **No.**
| # | Page | Verbatim quote | Assessment |
|---|------|---------------|------------|
| 1 | Main p.3 | "Participants (N = 59,440, from 63 countries, Table 1)" | Final N only. No upstream denominator. |
| 2 | Main p.10 | "A total of 8,937 completed the demo. Of these, 59,440 participants..." | **Garbled/contradictory sentence.** 59,440 > 8,937 — internally incoherent. The actual number recruited before exclusions is never clearly stated. |
| 3 | Main p.10 | "participants who failed these preregistered attention checks" | Attention check exclusions occurred. **No count of how many were excluded.** |
| 4 | Main p.10 | Attention check text described: "please select the color 'purple'" | Describes the check but not the exclusion numbers. |
| 5 | Main p.11 | "Participants were allowed to exit the task at any point with no penalty" | WEPT behavioral task allows early exit. **Whether exit rates differ by condition is never tested.** |
| 6 | Main p.7 | "convenience samples are adequate for estimating treatment effects (49, 50)" | External validity defense only — attrition bias never discussed. |
| 7 | Main p.10 | Pilot study: "a sample of 723 participants" | Pilot N reported with no attrition info. |
| 8 | SM, Tables S9–S10 | df values range from 58,566 to 59,185 across interventions and outcomes | Variation of ~619 across models. **Never explained.** |
| 9 | SM, Table S24 (p.36) | "coded as completed numbers of pages scored at least as 80% accurate" | WEPT outcome recoding, not attrition. |
| 10 | SM, Fig S6 (p.8) | Control condition N=5,086 | Provides one condition's N, but treatment-condition Ns never reported individually. |
**What's completely absent:** No CONSORT diagram. No recruited→analyzed pipeline. No attention-check exclusion counts. No differential attrition tests across 12 conditions. No completer vs. non-completer comparison. No response/retention rate. No IPW/bounds/imputation. No MCAR/MAR/MNAR. No sensitivity analysis. No reporting checklist. The entire 37-page supplement (25 tables, 6 figures) contains zero attrition-related content.
**Key implicit attrition problems:**
- The reported denominator is internally incoherent (8,937 vs. 59,440), so the attrition rate cannot even be computed
- Attention check exclusions happened but are unquantified
- The WEPT task explicitly allows quitting, creating differential behavioral attrition that is never tested
- Country-level Ns range from 104 (Tanzania) to 5,055 (USA_2) with no country-level response rates
- df fluctuates by hundreds across supplement tables
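The df fluctuation implies a floor on per-model data loss. A quick check, assuming df = N_analyzed − p for p estimated parameters (p is not reported, so these are lower bounds only):

```python
# Implied minimum data loss per model from the supplement's df range
# (Tables S9-S10), against the headline N of 59,440. Assumes
# df = N_analyzed - p; p is unknown, so these are lower bounds.
total_n = 59_440
for df in (59_185, 58_566):
    print(f"df={df}: at least {total_n - df} observations missing")
```

Even the best case silently drops hundreds of observations per model, and the worst case drops nearly 900, with no explanation anywhere in the supplement.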
---
### Paper 4: Lee, Lelkes, Hawkins & Theodoridis (2022), *Nature Human Behaviour*
**"Negative partisanship is not more prevalent than positive partisanship"**
**Files:** `s41562-022-01348-0.pdf` + `41562_2022_1348_MOESM1_ESM.pdf`
#### Did they do attrition analysis? **No. The Reporting Summary claims are contradicted by the data.**
| # | Page | Verbatim quote | Assessment |
|---|------|---------------|------------|
| 1 | Reporting Summary p.2 | "No participants dropped out or declined participation." | **Unsupported claim, almost certainly false**, contradicted by BIAT and regression Ns below. |
| 2 | Reporting Summary p.2 | "We did not exclude any data points." | **Contradicted** by varying Ns across analyses. |
| 3 | ESM p.14 (Table 9) | "Missing values are not included in the percentage calculation." | Acknowledges missing demographic data — contradicts "we did not exclude any data points." |
| 4 | ESM p.15 (Table 10) | "Missing values are not included in the percentage calculation." | Same. |
| 5 | ESM p.16 (Table 11) | "Missing values are not included in the percentage calculation." | Same. |
| 6 | Main p.961 | "Most participants (84.1%) correctly answered both questions on the first try" | 15.9% initially failed comprehension. No statement on whether any were excluded. |
**Varying Ns — comprehensive documentation of unexplained data loss:**
| Dataset | Reported base N | Analysis N | Unexplained loss | % lost |
|---------|----------------|------------|-----------------|--------|
| **BIAT implicit** | 887 (ESM p.18) | 488 (ESM p.19, Fig 8) | **399** | **45.0%** |
| CCES: Out-party feelings | 1,228 | 949 (ESM Table 3, col 2) | **279** | **22.7%** |
| CCES: In-Out difference | 1,228 | 930 (ESM Table 3, col 3) | **298** | **24.3%** |
| CCES: In-party feelings | 1,228 | 1,189 (ESM Table 3, col 1) | 39 | 3.2% |
| SSI: Feeling thermometers | 887 | 850/848 (ESM Table 4) | 37–39 | 4.2–4.4% |
| Experiments 1 & 2 | unknown recruited | 599 / 622 (final) | unknown | unknown |
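The unexplained-loss figures in the table above can be reproduced directly from the two base Ns; a quick Python check (dataset labels are shorthand for the table rows):

```python
# Reproduce the unexplained-loss column: (base N, analysis N) pairs
# taken from the ESM pages cited in the table above.
rows = {
    "BIAT implicit": (887, 488),
    "CCES out-party feelings": (1_228, 949),
    "CCES in-out difference": (1_228, 930),
    "CCES in-party feelings": (1_228, 1_189),
}
for name, (base, analyzed) in rows.items():
    lost = base - analyzed
    print(f"{name}: {lost} lost ({lost / base:.1%})")
```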
**What's completely absent:** No CONSORT diagram for any of the 5 datasets. No recruitment-to-analysis pipeline for any dataset. No attrition tables. No differential attrition tests. No completer vs. non-completer comparisons. No response/retention rates. No IPW/bounds/imputation. No MCAR/MAR/MNAR. No sensitivity analyses. No pre-registration.
**Critical credibility problems:**
1. The BIAT study loses **45% of its sample** (887→488) with zero explanation. Common BIAT exclusion criteria (fast responses, errors, incomplete blocks) can correlate with participant characteristics, potentially biasing the implicit partisanship measure.
2. CCES regressions lose up to **24.3%** of observations (1,228→930) — likely due to missing feeling thermometer data. Respondents who refuse to rate the out-party may be systematically different in their partisanship, directly biasing the core finding.
3. The Reporting Summary's claims of zero dropout and zero exclusion are **directly contradicted** by the supplement's own tables and missing-values footnotes.
---
## Final Summary Table
| Paper | Attrition analysis? | Best evidence of attrition handling | Worst gap | Implicit attrition severity |
|---|---|---|---|---|
| **Stagnaro & Amsalem (NComms)** | **Partial** | Sample flow (1,673→1,011), 87% W2 retention, comparability claim (deferred to Supp Notes 13–14) | No differential attrition test in main text; varying df (587–1,011) unexplained; supplements unavailable | Moderate (13% W2 loss) |
| **Voelkel et al. (Science)** | **No** | Nothing beyond reporting final Ns | No follow-up retention rate; no acknowledgment of 73% raw drop to follow-up; item-level Ns vary by 670 | **Severe** (massive follow-up attrition, unanalyzed) |
| **Vlasceanu et al. (SciAdv)** | **No** | Nothing — denominator is garbled | Cannot even determine attrition rate; attention-check exclusion counts missing; WEPT exit rates untested | **Severe** (opaque; denominator incoherent) |
| **Lee et al. (NHB)** | **No** | Two template assertions in Reporting Summary, contradicted by own data | BIAT loses 45% with no explanation; CCES loses 24%; Reporting Summary claims are false | **Severe** (45% silent data loss; contradictory claims) |