
Item 80: Iterative Hardening Process Documentation

Jason Robertson · Ohio · 2026 · v1.20
Created May 6, 2026 for v2.29 (compiled at Jason's request after twelve iterations of the hardening cycle; updated for v2.30.9 to extend coverage through iteration 20)
Updated May 6, 2026 for v2.30.9 (content extended to cover iterations 13-20; language generalized)
Updated May 6, 2026 for v2.30.12 (Audit-Script Whitelist Policy section added; six initial whitelist entries documented)
Updated May 6, 2026 for v2.30.13 (whitelist extended with seventh entry covering OPEN-1 historical inconsistency documentation)
Updated May 6, 2026 for v2.30.17 (whitelist migrated to exact-text format in audit_whitelist.txt at package root; policy section updated)
Updated May 6, 2026 for v2.30.19 (The Harden Cycle Process section added codifying the standard procedure)
Updated May 6, 2026 for v2.30.22 (audit_script.py canonical implementation added at package root; item 80 updated to reference the new tool)
Updated May 6, 2026 for v2.30.35 (Standard Prompt for AI Operators section added)
Updated May 6, 2026 for v2.30.38 (Content-Level Proofreading Checks Catalog + Persona-Based Reading-Path Simulation Protocol + Standard Prompt for Persona Simulation added)
Updated May 6, 2026 for v2.30.39 (removed standing rule that allowed skipping mitigation and verification phases on clean audits; all iterations now run all four phases regardless of findings)
Updated May 6, 2026 for v2.30.40 (content-level proofreading checks integrated into Phase 1: Audit and into AI-operator prompt's Phase 1 Step 1.2; closes documentation gap from v2.30.38 where the catalog was added but not integrated into the cycle's operating instructions)
Updated May 6, 2026 for v2.30.42 (Deferred Mitigation Policy section added codifying when and how iterations may defer mitigation of large finding sets to subsequent iterations)
Updated May 6, 2026 for v2.30.45 (audit_script.py extended with automatic recursive meta-trigger detection in current iteration narrative entries; the abstracted-language discipline previously enforced manually is now programmatically verified)
Updated May 6, 2026 for v2.30.48 (Expanded-Scope Audits subsection added; cadence specified as run-first plus pre-engagement; audit_expanded_scope function integrated into audit_script.py)
Updated May 6, 2026 for v2.30.49 (Audit Angles Used Across the Hardening Cycle section updated to formally catalog expanded-scope audits as a documented angle category)
Updated May 6, 2026 for v3.1.1 (Section 47 Item Status Criterion subsection added codifying the OPEN-or-CLOSED criterion adopted in v3.1.1)
Updated May 6, 2026 for v3.1.2 (Section 47 Item Status Criterion subsection refined to allow Mitigated = Y for OPEN items with documented external-help acknowledgment)
Updated May 6, 2026 for v3.1.4 (Expanded-Scope Audits subsection updated to document the new undefined-acronym audit; ACRONYMS_REGISTRY constant added to audit_script.py)
Updated May 7, 2026 for v3.1.7 (Section 47 and OIR Reference Integrity Checks subsection added)
Updated May 7, 2026 for v3.1.9 (Manifest Version and TOC Consistency Checks subsection added)
Updated May 7, 2026 for v3.1.12 (Cross-Reference Resolution Check subsection added)

Why This Document Exists

On May 6, 2026 Jason initiated a four-step iterative hardening process on the We The People platform. Over the course of the day, the cycle ran repeatedly across multiple package states: iterations 1-12 ran on v2.26.1 through v2.28.3, and iterations 13-20 continued on v2.29 through v2.30.8 as additional analytical content was added (items 79, 80, 81, and OPEN-3 substantive enhancements). The cycle progressively improved the package's internal coherence by surfacing and resolving findings of varying severity. This document compiles the methodology, audit angles, programmatic checks, persona simulations, standing rules, finding categories, lessons learned, and meta-issues encountered. It serves three audiences: future iterations of the platform's hardening work that need a reference for what has been tried; auditors evaluating the platform's process rigor for transparency; and other AI-collaboration projects looking for a documented quality-assurance pattern that may apply to their own work.

The Four-Step Cycle

Step 1 — Audit. Run programmatic checks across all package files plus persona-based reading-path simulations plus dependency verification checks. Identify findings categorized by severity (CRIT, SIG, MIN, PROC). Document carryover findings from prior iterations. Pause for review.
Step 2 — Mitigate. For each finding, either apply the fix in this iteration or document why it's deferred (typically: requires external input the platform lacks, or is substantive new analytical work better suited to a future minor release). Pause for review.
Step 3 — Verify. Confirm all mitigations took effect. Validate document XML. Re-run targeted checks for the specific findings just mitigated. Pause for review.
Step 4 — Repeat. If the audit found findings, pause and repeat the cycle (next iteration). If the audit found zero findings, the standing rule's clean-cycle condition is satisfied and substantive new work becomes eligible.

Standing Rules

Order of operation. Jason originally stated 'order or operation', which was corrected in conversation to 'order of operation.' The corrected rule is: 'if issues are found pause and repeat all of the previous steps in the same order of operation.' This is the canonical phrasing going forward.
Pause discipline. Each step ends with an explicit pause; the next step does not begin until Jason says 'proceed' (or equivalent). This ensures Jason can review and redirect at every checkpoint rather than allowing Claude to drive a long autonomous run.
Versioning. Minor releases (v2.x) for additive analytical content; patch releases (v2.x.y) for documentation fixes. Each iteration of the hardening cycle that finds findings produces a patch release documenting the iteration's mitigations.
Canonical decisions. Once made (e.g., OPEN-1 4%+2% healthcare contribution; OPEN-2 graduated 5/10/15 income surcharge plus 0.5% above $10M wealth surcharge plus 2.5% above $50M wealth tax), canonical decisions are propagated through subsequent iterations rather than revisited; deferred secondary work is explicitly noted in the OIR.

Audit Angles Used Across the Hardening Cycle

Each iteration of the cycle tried a different audit angle, recognizing that different angles surface different findings. The angles in chronological order:
Iteration 1 (v2.26.2) ran the original 16 programmatic checks plus 6 persona simulations plus 7 dependency verifications, finding 27 new findings plus 15 carryover.
Iteration 2 (v2.26.3) made canonical decisions on OPEN-1 and OPEN-2 with narrative-document propagation.
Iteration 3 (v2.27.1) audited Calculator refactor documentation propagation (what was missing).
Iteration 4 (v2.27.2) audited stale labels in display (what was wrong but not missing).
Iteration 5 (v2.27.3) audited stale assumption and warning content inside details elements.
Iteration 6 (v2.27.4) audited rate documentation in substantive analytical documents.
Iteration 7 (v2.27.5) audited numerical consistency in worked examples plus reading-path completeness.
Iteration 8 (v2.27.6) audited tense consistency for completed work.
Iteration 9 (v2.28 transition) audited cross-doc terminology consistency, found zero issues, triggered substantive work.
Iteration 10 (v2.28.1) audited the item 79 addition for inconsistencies.
Iteration 11 (v2.28.2) audited numerical consistency between item 79 and other documents plus v2.28.1 mitigation verification.
Iteration 12 (v2.28.3 verification) audited WTMFY refactor consistency, found zero issues.

v2.30.48 (iteration 49): expanded-scope audits formalized as a documented angle category. The expanded-scope category covers full-document scans (full-OIR, full-README, full-VERSIONLOG) for meta-trigger patterns, distinct from the auto-check on latest entries introduced in v2.30.45. Expanded-scope audits produce OBS (observation) severity findings rather than MIN, since most occurrences are HISTORICAL ACCURATE by design (legitimate documentation of fixes from their own iterations). Cadence: run FIRST in audit_script.py main per the run-first rule; required before any external engagement initiation per the pre-engagement rule. Implementation: audit_expanded_scope function in audit_script.py, integrated to run automatically as the first check in every audit run. Detailed documentation: see the Expanded-Scope Audits subsection of this document. The category's value: catches incomplete past mitigations and verifies historical narrative classifications, complementing rather than replacing the narrow-scope auto-check that handles current iteration narrative verification.

Programmatic Checks Catalog

The audit script runs the following programmatic checks.
Cross-reference resolution: scan all docx for 'item N' and '(item N)' patterns; verify N is in the valid range (currently 1-81).
Document version stamp consistency: verify each docx's cover version matches the manifest entry in the Package Version doc.
Manifest filename consistency: verify all package files appear in the Package Version doc's manifest table (orphan check).
README and VERSIONLOG consistency: verify the README references the current package version and VERSIONLOG entries are reverse-chronologically ordered.
Cover page version: verify the Package Version cover shows the current package version (not stale).
Item-specific structure checks: OIR sections sequentially numbered; TOC has entries for all numbered items; Constituent Letter has committee mappings for all phased-expansion items 63-79.
Calculator HTML balance: brace, paren, and bracket counts balanced.
Calculator function presence: required functions (calcHighEarnerSurcharge, calcInfrastructureFee, compute) defined.
Constants match canonical: HIGH_EARNER_BRACKETS has correct thresholds; WEALTH_SURCHARGE_THRESHOLD_10M = $10M at 0.5%; WEALTH_TAX_THRESHOLD_50M = $50M at 2.5%; INFRA fee constants match item 78.
Worked-example arithmetic: dollar figures in narrative documents match canonical math (the 5/10/15 graduated structure produces specific values at specific income levels).
Stale phrase detection: search for known-stale phrases from prior iterations (e.g., '2% above $200K', 'wealth surcharge of 2%', 'future platform versions will provide', 'calculator does not model the surcharge').
False positive distinguishing: legitimate scope disclosures (e.g., 'calculator does not model retirement income') are distinguished from stale claims via manual review of context.
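For illustration, a minimal sketch of the cross-reference resolution check, assuming the python-docx package is available; the function name, file discovery, and range constant are illustrative, not the canonical audit_script.py implementation:

    import re
    from pathlib import Path

    from docx import Document  # python-docx

    ITEM_REF = re.compile(r'\bitem\s+(\d+)\b', re.IGNORECASE)
    VALID_ITEMS = range(1, 82)  # items 1-81 per the current package

    def check_cross_references(package_root):
        """Flag 'item N' and '(item N)' references whose N is out of range."""
        findings = []
        for path in Path(package_root).rglob('*.docx'):
            doc = Document(str(path))
            for para in doc.paragraphs:
                for match in ITEM_REF.finditer(para.text):
                    n = int(match.group(1))
                    if n not in VALID_ITEMS:
                        findings.append((path.name, n, para.text[:80]))
        return findings

Run against the package root, this returns (file, item number, context) triples for manual triage; the same walk extends naturally to HTML files, which would address the DOCX-only limitation noted under Meta-Issues Encountered.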

Persona-Based Reading-Path Simulations

Six personas were used to evaluate the platform from different reader perspectives. Each persona has a specific reading path through the package and specific concerns that the platform should address.
Persona 1 — Skeptic. A reader who doubts the platform's analytical rigor and is looking for inconsistencies. Reading path: starts with the Manifesto, then goes to Does This Raise Taxes, then to FFIA, looking for internal contradictions or unsupported claims. Concerns: rate consistency, numerical defensibility, source documentation.
Persona 2 — Policy professional. A reader with a degree in public finance or related field, evaluating the platform for analytical adequacy. Reading path: FFIA, Wage Floors as Tax Architecture, Coalition Walkthrough, Open Issues Registry. Concerns: methodology rigor, comparison to peer-nation policy, acknowledgment of limitations.
Persona 3 — Telecommunications industry professional. A reader from a telecom carrier or tower REIT evaluating the platform's infrastructure proposals. Reading path: Federal Infrastructure Fee, Federal Infrastructure Fee Transition Mechanics, Universal Broadband Access Substantiation. Concerns: regulatory treatment, transition mechanics, pass-through prevention specifics.
Persona 4 — Tribal infrastructure officer. A reader from a tribal nation government evaluating sovereignty implications. Reading path: Emergency Services Communications, Federal Infrastructure Fee, the Federal Infrastructure Fee document's tribal nation lands subsection. Concerns: sovereignty preservation, consultation requirements, public-purpose exemption applicability.
Persona 5 — Small business owner. A reader trying to understand how the platform affects their specific business. Reading path: Calculator (business-side section), Federal Infrastructure Fee (worked examples), What This Means For You (small business section). Concerns: total fee burden, exemption eligibility, transition timeline.
Persona 6 — Concerned citizen. A reader trying to understand whether the platform helps or hurts their specific household. Reading path: What This Means For You, Does This Raise Taxes, Calculator. Concerns: net household impact, healthcare transition, retirement security.

Finding Severity Categories

CRIT (Critical) — issues that block the package from being used (broken documents, missing files referenced by other items, fundamental analytical errors). Three CRIT findings were identified across iterations 1-2 (Provenance out of date through v2.26.1; Constituent Letter missing items 76/77/78 committee mappings; Slideshow severely out of date). All three were mitigated in v2.26.2.
SIG (Significant) — documentation gaps that materially affect reader understanding but don't block use. Most iteration findings have been SIG. Examples: stale labels in the Calculator display (v2.27.2); the calculator's 'does not model' warning retained after the feature was implemented (v2.27.3); rate documentation missing from FFIA (v2.27.4); worked-example dollar amounts not recalculated after a structure change (v2.27.5); tense for completed work still future-tense (v2.27.6).
MIN (Minor) — polish issues that don't materially affect reader understanding. Examples: Calculator cover version stamp out of date; manifest entry format inconsistency.
PROC (Procedural) — process improvements rather than content fixes. Example: PROC-2 Calculator missing business-side modeling (treated as substantive new work in v2.27 rather than a hardening fix).
False Positive — the audit script flagged something that manual review confirmed not to be a real issue. Example: 'calculator does not model retirement income' is a legitimate scope disclosure, not stale content.

Meta-Issues Encountered

Recursive meta-trigger issue. Iteration 11 surfaced a subtle recursion: when the v2.28.1 changelog narrative described the prior INVALID-REF fix by quoting the offending text 'item 80' and 'item 80 or 81' verbatim, the cross-reference scanner could not distinguish the meta-reference (describing what was fixed) from a live reference. Mitigating this required rewriting the changelog narrative without literal item-number references. The same pattern then appeared in OIR Section 19 documenting the v2.28.2 fix, requiring a second-level rewrite. Lesson: when documenting a fix to a 'literal text triggers audit' issue, the documentation itself can re-trigger the same audit; documenting the fix at a meta-level (what kind of issue, not the specific text) avoids this.
Audit script regex limitations. The audit script only loaded DOCX files for cross-reference resolution, missing HTML files. This produced false positives like 'item 79 references Calculator but doc not found' when item 62 is actually present as HTML. The script could be extended to load HTML files for reference resolution.
DOCX-only file scanning. Several audit checks scan all .docx files; spreadsheet (.xlsx) and presentation (.pptx) files are checked separately. This split scanning catches most issues, but occasional gaps emerge (e.g., the slideshow not getting v2.27 updates because presentation-file checks were less frequent than docx checks).
Documentation propagation lag is the most common finding type. When a canonical decision is made or new analytical content is added, the decision or content needs to propagate to multiple downstream documents. The hardening cycle's primary value is catching these propagation gaps.
Update for iterations 13-20. The recursive meta-trigger pattern recurred a total of six times across iterations 10, 11, 13, 14, 15, and 19. Each occurrence is in iteration N's documentation describing iteration N's fixes, where the natural way to describe the fix is to quote the offending text. The pattern is genuinely difficult to prevent because describing a fix to a literal-text-triggered audit finding is most natural with the literal text. The mitigation pattern that works: rewrite verification statements at a higher level of abstraction, describing what was changed without quoting the literal text being fixed. Iteration 20 was the first iteration where this lesson was successfully applied to all three narrative locations (Package Version doc changelog, VERSIONLOG entry, README entry) plus the new OIR section. Process improvement candidate: extend the audit script to flag literal-text patterns in narrative documentation blocks before shipping.
Silent code failures in version-bump operations. An additional meta-issue surfaced in iterations 14, 19, and 21: when the version-line format doesn't match the bump code's regex assumption, the bump silently fails while logging success. Mitigation: explicit verification of the bumped version after each operation, plus documentation of the embedded-version-format vs paragraph-start-version-format edge cases.
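The version-bump failure mode lends itself to a simple guard. A hedged sketch, with the helper name and format assumptions illustrative:

    import re

    def bump_version_line(text, old, new):
        """Replace a version stamp and verify the replacement actually landed.

        The silent failure mode: if the version-line format doesn't match the
        regex (e.g., embedded mid-paragraph rather than at paragraph start),
        re.sub returns the text unchanged. Verify instead of assuming success.
        """
        pattern = re.compile(r'\bv' + re.escape(old) + r'\b')
        bumped = pattern.sub('v' + new, text)
        if 'v' + new not in bumped:
            raise RuntimeError(f'version bump v{old} -> v{new} did not apply')
        return bumped

The point is the final check: the bump is confirmed against the text actually produced rather than inferred from the absence of an exception.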

Lessons Learned

Each iteration needs a different angle. Repeating the same audit checks produces diminishing returns; new angles surface previously-overlooked findings. The cycle's iterations used at least eight distinct angles initially, with several additional angles introduced in iterations 13-20 (cascading numerical updates, reverse cross-references, TOC reading-path completeness, slideshow content cross-consistency, capitalization consistency), each catching issues the prior angles missed.
Easy issues get caught first. CRIT findings came from iteration 1; SIG documentation propagation gaps from iterations 3-7; subtler MIN issues and false positives from iterations 8-11.
Subtle issues require manual inspection. Iterations 5 and 11 specifically caught issues the automated audit script flagged as zero or as false positives but where manual review of the actual content surfaced real problems. Both automated and manual inspection are necessary; either alone misses things the other catches.
Convergence is achievable but takes multiple iterations. Iteration 9 was the first clean iteration after multiple iterations of finding documentation propagation issues. Clean iterations may be transient (subsequent substantive work can introduce new issues, as item 79's addition introduced two findings caught in iteration 10). The cycle is iterative not because convergence is impossible but because new content introduces new opportunities for incoherence.
Standing rules matter. The 'order of operation' rule (audit, mitigate, verify, repeat) and the pause discipline (no autonomous long runs) together produced a process that Jason could review and redirect at every step. Without these rules, Claude would likely have produced more findings per iteration but without the same depth of analysis per finding.

Iteration-by-Iteration Summary

Iteration 1 (v2.26.2) — 27 new findings, 15 carryover, 14 mitigated.
Iteration 2 (v2.26.3) — OPEN-1 and OPEN-2 canonical decisions made, propagated to most-misaligned narrative documents, secondary work deferred.
Iteration 3 (v2.27.1) — 6 findings, all CALC-METHODOLOGY documentation gaps from the v2.27 Calculator refactor.
Iteration 4 (v2.27.2) — 2 findings, Calculator comparison table label propagation error.
Iteration 5 (v2.27.3) — 3 findings caught by manual inspection (audit script reported 0): assumptions section stale, warning element stale, cover version stale.
Iteration 6 (v2.27.4) — 3 real findings plus 2 false positives distinguished: rate selections (0.5%/2.5%) missing from FFIA and Per Citizen Benefits and Costs.
Iteration 7 (v2.27.5) — 3 findings: worked example dollar amounts in DTRT not recalculated, reading paths missing items 76/77/78 (carryover from prior iterations), OIR Section 15 typo.
Iteration 8 (v2.27.6) — 1 finding (WFA future-tense for completed work), 1 numerical observation deferred.
Iteration 9 — zero findings, clean cycle, triggered substantive work.
Iteration 10 (v2.28.1) — 2 findings introduced by the item 79 addition: forward references to item 80, manifest format inconsistency.
Iteration 11 (v2.28.2) — 2 real findings plus 1 false positive: changelog meta-reference to item 80, item 78 missing reference to item 79, item 79 references item 62 which is HTML (script limitation).
Iteration 12 (v2.28.3 verification) — zero findings, clean cycle.

Iterations 13-20 (v2.30 through v2.30.8) continued the cycle as additional analytical content was added (item 81 substantive enhancements for OPEN-3, FFIA reconciliation update, slideshow content sync, and various propagation fixes).
Iteration 13 (v2.30.1) — two findings: OIR Section 21 forward references rephrased; WFA wealth tax architecture combined figure updated from $200B to $225B per the item 81 refinement.
Iteration 14 (v2.30.2) — two real findings: item 81 version metadata silent-failure (version-bump code couldn't find the embedded version line); DTRT outdated $200B reference. Three false positives distinguished from real findings via context analysis.
Iteration 15 (v2.30.3) — two findings: recursive meta-trigger pattern recurrence in OIR Section 24; WTMFY missing reverse-reference to item 81.
Iteration 16 (v2.30.4) — one multi-part finding: TOC reading paths needed updates for items 79, 80, 81 inclusion (package size descriptor, supporter and elected-official path extensions).
Iteration 17 (v2.30.6) — one finding: TOC items 53/54 said '15 slides' but the actual deck has 16.
Iteration 18 (v2.30.7) — two findings plus a capitalization consistency sweep: item 80 TOC stale iteration count (generalized); item 81 TOC entry incomplete (added v2.30 enhancements); 11 specific Sovereign Fund capitalization fixes.
Iteration 19 (v2.30.8) — six findings: three recursive meta-trigger occurrences in v2.30.7 documentation narratives (the sixth recurrence overall); three manifest version mismatches.
Iteration 20 — clean iteration; one false positive distinguished from a real finding via manual investigation.
Iteration 21 (v2.30.9) — two real findings about this very document: content staleness (specific iteration-count language) and content incompleteness (iterations 13-20 not represented). This is the iteration that produced this summary.

Audit-Script Whitelist Policy

The hardening cycle's audit script uses pattern matching to detect findings (literal-text issues, forward references, stale phrasings, capitalization inconsistencies). Across the cycle's iterations, this pattern matching has produced two recurring categories of false positives. The first category is legitimate historical references where the package documents when something was created or what state the package was in at a prior version. For example, item 80's version line describes its own creation context as 'compiled at Jason's request after twelve iterations of the hardening cycle' — an accurate historical fact that should not be flagged as a stale claim about the cycle's current scope. The second category is previously-documented meta-trigger instances where prior iterations' fix descriptions contain the literal text being described. For example, OIR Section 29 documents the TOC-STALE-ITEM-80 finding from iteration 18 by quoting the offending text; the quote is preserved as historical record but should not be repeatedly re-flagged in subsequent iterations.

To prevent false-positive findings without losing detection of new occurrences, the audit script consults a whitelist of paragraph-level exemptions before flagging matches. Each whitelist entry identifies a specific paragraph by its distinctive content phrase (rather than paragraph index, which is fragile across edits), the audit pattern that the entry exempts, the exemption category, and the reason for the exemption. The audit script's behavior with the whitelist is: when a paragraph matches an audit pattern, the script checks whether the paragraph also contains any whitelist entry's distinctive phrase that exempts this pattern. If yes, the match is skipped (logged as 'whitelisted' for transparency) rather than flagged as a finding.
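A minimal sketch of the whitelist consultation, assuming a simplified one-field-per-line file layout; the real audit_whitelist.txt format carries richer metadata (FILE, CATEGORY, REASON, STABILITY, ADDED), so the parser here is illustrative only:

    def load_whitelist(path='audit_whitelist.txt'):
        """Collect exempted paragraph texts; assumes each entry exposes its
        exact paragraph text on a line prefixed 'TEXT: ' (illustrative)."""
        entries = []
        with open(path, encoding='utf-8') as f:
            for line in f:
                if line.startswith('TEXT: '):
                    entries.append(line[len('TEXT: '):].strip())
        return entries

    def is_whitelisted(paragraph_text, entries):
        """A matched paragraph is skipped (and logged as 'whitelisted')
        when it carries the exact text of a whitelist entry."""
        return any(entry in paragraph_text for entry in entries)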

Whitelist maintenance policy. New entries are added when iterations document false positives the whitelist should handle. Entries are reviewed in subsequent iterations to verify they remain accurate — if a paragraph identified by an entry's distinctive phrase has been modified in a way that changes the context, the entry should be updated or removed. The whitelist is part of item 80 (this document) so that it is auditable, version-controlled, and discoverable by anyone reviewing the hardening process. Entries should be conservative: if there's genuine doubt about whether a match is a false positive or a real finding, it should be flagged rather than whitelisted. The whitelist is meant to suppress noise from known-stable references, not to suppress signal from new patterns.

Whitelist Entries (current count and categorization)

As of v2.30.17, the whitelist contains seven entries stored in a separate file at package root (audit_whitelist.txt) using exact-paragraph-text format. The whitelist began with six initial entries covering false positives identified through iteration 23 and was extended in v2.30.13 with one additional entry covering a previously-documented healthcare rate inconsistency identified in iteration 25's audit. Three of the entries are legitimate historical references (item 80's version line, OIR Section 21's v2.29 expansion entry describing item 80 creation, Package Version doc v2.30 changelog entry referring to item 80 creation context). Three of the entries are previously-documented meta-trigger instances (OIR Section 29's iteration 18 documentation of TOC-STALE-ITEM-80, OIR Section 30's iteration 19 verification, Package Version doc v2.30.7 changelog documenting the TOC-STALE-ITEM-80 fix). The seventh entry (added in v2.30.13) covers OIR Section 2's OPEN-1 entry documenting the four pre-canonical healthcare rate variations that OPEN-1 v2.26.3 resolved. Each entry stores the exact paragraph text that should be excluded from audit scanning, plus metadata (file, category, reason, stability).

The entries below summarize the whitelist. The canonical source for the audit script is audit_whitelist.txt at package root, which contains the exact paragraph text plus metadata for each entry. The summary is preserved here for human review.

Entry 1. File: Iterative_Hardening_Process_Documentation. Distinctive phrase: 'compiled at Jason's request after twelve iterations'. Audit pattern: 'twelve iterations'. Category: legitimate historical reference. Reason: Item 80 v1.0 version line documents creation timing (after 12 iterations); historical fact, not a current claim about cycle scope.
Entry 2. File: Open_Issues_Registry. Distinctive phrase: 'developed across twelve iterations on May 6, 2026'. Audit pattern: 'twelve iterations'. Category: legitimate historical reference. Reason: OIR Section 21 v2.29 expansion entry documents when item 80 was created; historical fact.
Entry 3. File: Platform_Package_Version. Distinctive phrase: 'developed across twelve iterations'. Audit pattern: 'twelve iterations'. Category: legitimate historical reference. Reason: Package Version doc v2.30 changelog entry; historical reference to item 80 creation context.
Entry 4. File: Open_Issues_Registry. Distinctive phrase: 'TOC-STALE-ITEM-80 (MIN): Item 80 TOC entry mentioned'. Audit pattern: 'twelve iterations'. Category: previously-documented meta-trigger. Reason: OIR Section 29 (iter 18) documents the TOC-STALE-ITEM-80 finding fix; the literal quote was identified as a meta-trigger pattern in iter 19 and is preserved as historical record of the fix.
Entry 5. File: Open_Issues_Registry. Distinctive phrase: 'no longer 'twelve iterations' stale'. Audit pattern: 'twelve iterations'. Category: previously-documented meta-trigger. Reason: OIR Section 30 (iter 19) verification that the iter 18 mitigation held; the literal quote was identified as a meta-trigger in the iter 20 audit and preserved as historical record.
Entry 6. File: Platform_Package_Version. Distinctive phrase: 'TOC-STALE-ITEM-80 (MIN): item 80 TOC entry generalized from'. Audit pattern: 'twelve iterations'. Category: previously-documented meta-trigger. Reason: Package Version doc v2.30.7 changelog (iter 18) documents the TOC-STALE-ITEM-80 fix; the literal quote is preserved as historical record.
Entry 7. File: Open_Issues_Registry. Distinctive phrase: 'What the platform claims. The platform's Universal Healthcare contribution is variously stated as'. Audit pattern: pre-canonical healthcare rate references (e.g., '6 percent employer'). Category: previously-documented inconsistency. Reason: the OPEN-1 entry in OIR Section 2 documents the historical pre-canonical healthcare rate variations (variants a, b, c, d) that OPEN-1 v2.26.3 resolved; references to pre-canonical rates here document the prior inconsistency, not current claims.

The Harden Cycle Process — Standard Procedure

This section codifies the standard harden cycle process for use in subsequent iterations. The process has been refined across the cycle's iterations and is now documented here as the canonical procedure. Each iteration follows a four-phase process: Audit, Mitigate, Verify, Document. Per Jason's standing rules: if issues are found, the iteration pauses and repeats all previous steps in the same order of operation; pause discipline applies after each phase; canonical decisions propagate forward and are not revisited.

Phase 1: Audit

Choose audit angles for the iteration. A typical iteration uses three to five angles, mixing standard structural checks (cross-reference validity, manifest integrity, version sync, README and VERSIONLOG currency, OIR Section 47 table state) with standard content-level checks selected from the Content-Level Proofreading Checks Catalog (this document) and one or more fresh angles.
As of v2.30.48, Expanded-Scope Audits run FIRST in the harden cycle process per the cadence specified in the Expanded-Scope Audits section of this document; these are full-document scans (full-OIR, full-README, full-VERSIONLOG) for meta-trigger patterns and produce OBS-severity findings, distinct from the auto-check's narrow latest-entry-only scope, which produces MIN findings.
Pattern checks always include the iteration-count sweep with whitelist application: load audit_whitelist.txt; for each match, check the exact-text whitelist and, if matched, mark the match as whitelisted; otherwise apply the HISTORICAL heuristic with keywords like 'occurrence', 'previous', 'predates'; otherwise flag as a finding.
Compile the findings list with severity (SIG for significant, MIN for minor, OBS for observation) and type. Pause after Phase 1 before mitigating.
As of v2.30.22, the canonical implementation of the structural checks is in audit_script.py at package root; running 'python3 audit_script.py' from package root produces a structured report with findings categorized as SIG, MIN, or OBS. The script's exit code is non-zero if SIG findings are present, making it suitable for automated workflows. Content-level checks are not implemented in audit_script.py; they are run separately as documented in the Content-Level Proofreading Checks Catalog. Expanded-scope audits are implemented in the audit_expanded_scope function in audit_script.py and run automatically as the first check during every audit run.
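The sweep's decision flow reads directly as code. A sketch, with the keyword list taken from the text above and everything else illustrative:

    HISTORICAL_KEYWORDS = ('occurrence', 'previous', 'predates')

    def classify_match(paragraph_text, whitelist_entries):
        """Iteration-count sweep decision flow: exact-text whitelist first,
        then the HISTORICAL heuristic, else a real finding."""
        if any(entry in paragraph_text for entry in whitelist_entries):
            return 'WHITELISTED'   # skipped, logged for transparency
        if any(kw in paragraph_text.lower() for kw in HISTORICAL_KEYWORDS):
            return 'HISTORICAL'    # heuristic: describes a prior state
        return 'FINDING'           # flag for Phase 2 mitigation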

Phase 2: Mitigate

For each real finding, rewrite the affected content using abstracted language (describe the change without quoting the literal text that the audit detects). For false positives that the audit's heuristic doesn't catch, add to audit_whitelist.txt with FILE, CATEGORY, REASON, STABILITY, ADDED metadata plus exact paragraph text. For audit-script limitations (false positives that would best be addressed by improving the audit logic itself), document as an observation rather than mitigating content. Bump versions for all modified docs and update the manifest in the Package Version doc. Pause after Phase 2 before verifying.
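One mitigation technique worth illustrating here (it is spelled out in the standard prompt's Phase 2, below): exact-string substitution fails on wrapped multi-line text in .txt files, so substitutions should tolerate arbitrary whitespace runs. A minimal sketch:

    import re

    def flexible_sub(old_phrase, new_phrase, text):
        """Substitute a phrase even when hard line wraps split it across
        lines: each space in the target matches any whitespace run."""
        pattern = re.compile(r'\s+'.join(map(re.escape, old_phrase.split())))
        return pattern.sub(new_phrase, text)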

Phase 3: Verify

Re-read modified content to ensure mitigations applied correctly. Re-run relevant checks to confirm findings cleared. Pay particular attention to in-iteration regression: the documentation of fixes can introduce new instances of the patterns being fixed (the recursive meta-trigger pattern). Verify that the three current-iteration narrative locations (README entry, VERSIONLOG entry, Package Version doc changelog), plus any new OIR section, are clean of literal text that the audit detects. Pause after Phase 3 before documenting.

Phase 4: Document

Add a new OIR section documenting the iteration. The section should describe: audit angles taken, findings classification (severity and type), mitigations applied, audit-script observations distinguished from real findings, what the iteration confirmed working, status of any persistent patterns, open issues remaining unchanged, and possible next substantive work. Update the Package Version doc with a v2.30.X changelog entry. Update README.txt with an iteration entry. Update VERSIONLOG.txt with a structured iteration entry including FINDINGS MITIGATED, AUDIT-SCRIPT OBSERVATIONS, and PACKAGE VERSION blocks. Use abstracted language in all narratives — describe what the audit found and how it was mitigated without quoting the literal patterns the audit detects. Pause after Phase 4.

Versioning rules

Semantic versioning follows the pattern v2.X.Y where X is the minor version and Y is the patch number. Minor bumps (v2.30 to v2.31) are reserved for additive analytical content (new items, new sections, substantive structural changes). Patch bumps (v2.30.X to v2.30.X+1) cover hardening passes, documentation updates, audit findings mitigation, and process improvements. Each iteration that ships content increments the patch number. Each phase of a multi-phase substantive change can use the same minor version with increasing patch numbers (e.g., v2.30, v2.30.1, v2.30.2 for the v2.30 series). Doc-internal version numbers (v1.X for individual files like Item 80, OIR, Package Version doc) increment when their content changes; manifest entries track these per-doc versions.
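The patch-bump rule reduces to a few lines. A sketch of an illustrative helper (not part of audit_script.py):

    def bump_patch(version):
        """v2.X.Y -> v2.X.Y+1; a bare minor version starts its series at
        patch 1 (v2.30 -> v2.30.1), matching the v2.30 series example."""
        parts = version.lstrip('v').split('.')
        if len(parts) == 2:
            parts.append('0')
        parts[-1] = str(int(parts[-1]) + 1)
        return 'v' + '.'.join(parts)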

Recursive meta-trigger pattern — known persistent issue

The recursive meta-trigger pattern (where documenting a fix to literal-text issue X tends to introduce new instances of X by quoting the original text) has recurred eleven times across the cycle (iterations 10, 11, 13, 14, 15, 19, 21, 22, 23, 26, 28). The whitelist policy implemented in v2.30.12 and migrated to exact-text format in v2.30.17 successfully suppresses noise from already-documented instances. The deeper prevention mechanism is documentation discipline: write narratives using abstracted descriptions of what was changed rather than quoting the before/after literal text. Iterations 27, 30, and 31 demonstrated this is achievable; the pattern's history argues for continued vigilance in Phase 3's verification step. The candidate process improvement of audit-script extension to flag literal-text patterns in narrative blocks before shipping (originally proposed in iteration 19) remains open and would meaningfully complement the discipline-based approach.

The audit_script.py Tool

As of v2.30.22, audit_script.py at package root provides a canonical implementation of the standard audit logic. The script consolidates the ad-hoc audit code that was used in earlier iterations into a reproducible tool that anyone can run to audit the package. The script implements all standard checks (cross-reference validity within the 1 to 81 range; manifest integrity; version sync) plus the iteration-count pattern sweep with exact-text whitelist application and [HISTORICAL] heuristic. It also implements all three audit-script improvement candidates that had accumulated across the cycle: narrative-block literal-text detection (catches the recursive meta-trigger pattern at write-time, addressing iteration 19's candidate); section self-check with heuristic harmonization (avoids false positives that the full sweep would not produce, addressing iteration 26's candidate); anchored regex for whitelist entry detection (avoids documentation header false positives, addressing iteration 33's candidate).
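A hedged sketch of the narrative-block literal-text detection's core idea (the canonical implementation lives in audit_script.py; the names here are illustrative):

    def check_narrative_blocks(narrative_texts, audit_patterns):
        """Scan the current iteration's narrative entries (README entry,
        VERSIONLOG entry, changelog entry, new OIR section) for literal
        text the audit patterns would flag, catching the recursive
        meta-trigger at write-time instead of in the next audit."""
        hits = []
        for location, text in narrative_texts.items():
            for pattern in audit_patterns:
                if pattern.search(text):
                    hits.append((location, pattern.pattern))
        return hits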

Usage. Run from package root with 'python3 audit_script.py' or from anywhere with 'python3 audit_script.py /path/to/package/root'. Requirements: Python 3.8+ and the python-docx package (install with 'pip install python-docx'). Output is a structured console report categorizing findings as SIG (significant, blocks the iteration), MIN (minor, should be fixed in current iteration), or OBS (observation, audit-script note rather than content finding). Exit code is non-zero if SIG findings are present, making the script suitable for pre-commit hooks or automated workflows.
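Because the exit code is the contract, wiring the script into an automated gate needs nothing script-specific. A sketch of a pre-commit style wrapper:

    import subprocess
    import sys

    # audit_script.py exits non-zero when SIG findings are present,
    # so a wrapper only needs to propagate the result.
    result = subprocess.run(['python3', 'audit_script.py'])
    if result.returncode != 0:
        sys.exit('SIG findings present; resolve before committing.')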

Maintenance. The script is part of the package and version-controlled alongside content. Future audit logic improvements (additional patterns, new check types, refined heuristics) should be implemented as updates to audit_script.py rather than as one-off code in iteration work. The script's design separates configuration (whitelist file, historical keywords, valid ranges) from logic (sweep functions, heuristic functions) so individual concerns can be updated independently. The script does not include audit_script.py or audit_whitelist.txt themselves in its checks (these are tooling, not platform content), and the manifest's content-file extensions (.docx, .html, .pptx, .pdf, .csv, .xlsx) naturally exclude .py and .txt files at package root.

Standard Prompt for AI Operators

This section provides a standardized prompt for use when delegating the Harden Cycle to another AI profile (Claude, GPT, Gemini, or any other large language model with access to the package and standard tools). The prompt distills the four-phase process and standing rules into a self-contained instruction set. The intent is to ensure the testing pattern is consistent regardless of which AI profile executes the cycle. The full process history, lessons learned, and audit-script implementation details remain in the rest of this document; the prompt below is the operational subset needed for an AI to run the cycle correctly.

===== BEGIN STANDARD HARDEN CYCLE PROMPT =====

Role. You are running the Harden Cycle on the We The People platform (or a similar versioned multi-document analytical package). The cycle is a four-phase iteration: Audit, Mitigate, Verify, Document. Follow the phases in order. Pause for user review after each phase before proceeding to the next.

Standing rules. (1) If issues are found in any phase, pause and repeat all previous phases in order. (2) Pause discipline: do not auto-continue across phase boundaries. (3) Canonical decisions propagate forward and are not revisited. (4) Use abstracted language in narratives — describe what was changed without quoting literal text that the audit detects (this prevents the recursive meta-trigger pattern). (5) The Mitigated column in OIR Section 47 tracks documentation responsibility (Y = documentation complete to the platform's standard; N = further documentation required). The Issue Status column tracks underlying resolution. These are distinct columns and must not be conflated.

PHASE 1: AUDIT.
Step 1.1: from the package root, run python3 audit_script.py. Record findings as SIG (significant, blocks the iteration), MIN (minor, fix in current iteration), or OBS (observation, audit-script note rather than content finding). The script's exit code is non-zero if SIG findings are present. As of v2.30.48, audit_script.py runs Expanded-Scope Audits FIRST per the cadence specified in the harden cycle documentation; these are full-document scans (full-OIR, full-README, full-VERSIONLOG) for meta-trigger patterns and produce OBS findings (informational, non-blocking). Most expanded-scope OBS findings will be HISTORICAL ACCURATE; the iteration author scans them for any unexpected occurrences in Step 1.3 classification.
Step 1.2: choose three to five supplementary audit angles for this iteration, drawing from both structural and content-level check categories. Standard structural checks: cross-reference validity (item references must be in the 1 to 81 range), manifest integrity (every content file in the package must be in the Package Version doc manifest), version sync (modified docs must have version-bumped manifest entries), README and VERSIONLOG currency, OIR Section 47 table state (count of Y and N rows must match the summary statistics paragraph). Standard content-level checks: selected from the Content-Level Proofreading Checks Catalog in the harden cycle documentation, which lists fourteen checks (1 spelling and typo audit; 2 numerical figure consistency; 3 capitalization consistency for platform-defined terms; 4 TOC accuracy verification; 5 acronym and abbreviation first-use definition tracking; 6 cross-doc term definition consistency; 7 date format consistency; 8 persona-based reading-path simulation execution; 9 calculator HTML functional test; 10 slideshow currency check; 11 sentence-level grammar review; 12 readability scoring; 13 run-on sentence detection; 14 external link validation). Recommended starting sequence when not targeting a known issue: checks 1 through 5 as a coordinated content-level pass; persona simulation (check 8) only after the content baseline is clean; checks 9 and 10 before external engagement that involves the calculator or slideshow specifically. Fresh angles to consider: content audits of recently-modified docs, definitional consistency after any semantic change, recursive meta-trigger pattern check on the current iteration's narrative entries, comprehensive cross-doc sweep for stale references after any naming or count change.
Step 1.3: classify each finding as REAL FINDING (requires mitigation), META-REFERENCE (documentation of a previous fix that matches the audit pattern by design — these are intentional and stay), or HISTORICAL ACCURATE (describes prior state that was correct at the time — may need a clarifying preamble but not a rewrite). Most expanded-scope OBS findings classify as HISTORICAL ACCURATE by design; only unexpected new occurrences warrant promotion to REAL FINDING.
Step 1.4 output: a structured findings list with severity, type, location, and classification. Pause before Phase 2.

PHASE 2: MITIGATE.
Step 2.1: for each REAL finding, rewrite the affected content using abstracted language. When fixing patterns across multiple files, use regex with whitespace flexibility (\s+) to handle wrapped multi-line text in .txt files (newlines break exact-string substitution). Apply substitutions across all affected docs in one pass.
Step 2.2: for audit false positives that the heuristic does not catch but should be exempted, add to audit_whitelist.txt with FILE, CATEGORY, REASON, STABILITY, ADDED metadata plus exact paragraph text.
Step 2.3: for audit-script limitations (false positives best addressed by improving the script itself), document as an observation rather than mitigating content.
Step 2.4: bump versions for all modified docs (increment doc-internal v1.X and update the manifest entry in the Package Version doc).
Step 2.5: bump the package patch version (v2.X.Y becomes v2.X.Y+1). Patch bumps cover hardening passes, documentation updates, audit findings mitigation, and process improvements. Minor bumps (v2.X to v2.X+1) are reserved for additive analytical content.
Step 2.6 output: list of changes applied plus new package version. Pause before Phase 3.

PHASE 3: VERIFY.
Step 3.1: validate all modified .docx files using the docx validation tool (python3 /mnt/skills/public/docx/scripts/office/validate.py <path>).
Step 3.2: re-run audit_script.py from the package root. The script must complete with zero real findings; documented meta-references are acceptable but must be confirmed as such.
Step 3.3: re-run the supplementary checks from Phase 1 to confirm findings cleared.
Step 3.4 (critical regression check): the current iteration's narrative locations (the README entry, the VERSIONLOG entry, the Package Version doc changelog, the new OIR section) must be clean of literal text that the audit detects. The recursive meta-trigger pattern (where documentation of a fix introduces new instances of the pattern being fixed) is the dominant source of in-iteration regression. Verify each narrative location individually.
Step 3.5: verify the audit_script.py executable bit is preserved (mode 0755).
Step 3.6 output: validation results plus audit results plus regression check. Pause before Phase 4.

PHASE 4: DOCUMENT.
Step 4.1: add a new section to the Open Issues Registry (positioned after the highest existing Section N, before the Closing section). The new section should document: audit angles taken, findings classification, mitigations applied, meta-references confirmed (with explanation that they are intentional documentation of fixes), what the iteration confirmed working, persistent pattern status, open issues unchanged, and possible next substantive work.
Step 4.2: add a changelog entry (Heading 2 plus body paragraphs) to the Package Version doc immediately before the previous version's entry.
Step 4.3: add an iteration entry to README.txt at the top of the version history (in paragraph format).
Step 4.4: add a structured entry to VERSIONLOG.txt with FINDINGS MITIGATED, AUDIT-SCRIPT OBSERVATIONS, and PACKAGE VERSION blocks separated by horizontal rules.
Step 4.5: build the release zip from the parent directory using 'zip -qr <package>_vX_Y_Z.zip <package_dir>/' and verify the audit_script.py executable bit is preserved inside the zip with 'zipinfo <zip> | grep audit_script.py' — the entry must show '-rwxr-xr-x'.
Step 4.6 output: zip path plus iteration summary. Pause.

Output formatting. Each phase produces a structured report with phase header, key actions taken, findings or changes, verification status, and next-phase precondition status. Prefer tables for findings lists. Use abstracted language for all narratives. Stay concise. Do not reproduce the literal text being fixed inside narrative descriptions of the fix — describe the fix in terms of what category of issue was addressed and what kind of replacement was applied.

References. The full process history, iteration-by-iteration summary, lessons learned, programmatic checks catalog, audit-script whitelist policy, and recursive meta-trigger pattern history are documented in this document (Item 80, 05_Iterative_Hardening_Process_Documentation.docx). The audit_script.py at package root is the canonical audit implementation. The audit_whitelist.txt at package root contains exempted patterns. The Open Issues Registry at 05_Analytical_Framing/05_Open_Issues_Registry.docx contains the consolidated issue table (Section 47) with Mitigated and Issue Status columns per the v2.30.32 definitional framework (Section 47 intro paragraphs).

Common failure modes to avoid. (1) Quoting literal text being fixed inside the fix's own narrative — this introduces new instances of the pattern being fixed (the recursive meta-trigger). Use abstracted language. (2) Using exact-string substitution for multi-line text — newlines break the match. Use regex with \s+ for whitespace flexibility. (3) Treating partial mitigation as full mitigation — the Mitigated column tracks documentation completeness, not external resolution. Items requiring external engagement can still be Mitigated = Y if their documentation responsibilities are met. (4) Conflating Mitigated column semantics with Issue Status column semantics — they track different things. (5) Forgetting to verify audit_script.py executable bit is preserved through the zip step. (6) Skipping pause discipline — each phase requires user review before the next phase begins.

===== END STANDARD HARDEN CYCLE PROMPT =====

When using the prompt above with another AI, copy from BEGIN STANDARD HARDEN CYCLE PROMPT through END STANDARD HARDEN CYCLE PROMPT inclusive. The AI receiving the prompt should also be given access to the package contents (or at minimum, the file paths specified in the References block). The prompt is intended to produce identical testing patterns regardless of which AI profile executes it. Updates to the prompt should be made here in Item 80 and reflected in any external copies that have been shared. Version this prompt with the document's version number; if the prompt itself changes substantively, increment Item 80's version and note the change in the document version history.
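The executable-bit verification in Step 4.5 uses zipinfo; the same check can be done from Python, which may be convenient when the release build is scripted. A sketch (assumes the zip was built on a Unix host, where the file mode lives in the high 16 bits of external_attr):

    import stat
    import zipfile

    def script_is_executable(zip_path, member='audit_script.py'):
        """True if audit_script.py kept its executable bit inside the
        release zip (equivalent to checking zipinfo for -rwxr-xr-x)."""
        with zipfile.ZipFile(zip_path) as zf:
            for info in zf.infolist():
                if info.filename.endswith(member):
                    return bool((info.external_attr >> 16) & stat.S_IXUSR)
        return False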

Content-Level Proofreading Checks Catalog

The Programmatic Checks Catalog (above) covers structural integrity: cross-references, manifest, version sync, iteration-count patterns. The harden cycle's discipline on structural integrity has matured well, with audit_script.py providing canonical implementation. Content-level proofreading is a separate dimension that has historically received less systematic attention. This catalog documents the content-level proofreading checks that complement the structural audit. These checks catch findings that programmatic structural checks miss: typos, inconsistent figure formats, capitalization drift, TOC currency, acronym first-use definitions, and similar.

When to run these checks. Content-level proofreading is heavier than the structural audit (longer running time per check; some checks require substantial human or AI judgment). Recommended cadence: a comprehensive content-level proofreading pass at least once per minor version bump (v2.X to v2.X+1) and before any external engagement-readiness assessment. Individual checks can be run more frequently as appropriate (spell check after any substantive content addition; numerical consistency after any number-bearing analytical change; capitalization after any new proper-noun introduction).

Check 1: Spelling and typo audit. Run a spell-check pass across all .docx files using pyspellchecker or equivalent, with a domain-specific word list covering the platform's proper nouns (Anthropic, Robertson, Norway, ICWA, ISDA, FFIA, OEWS, etc.). Flag candidates for review. False positives are common (technical terminology, surnames, neologisms); the human or AI reviewer applies judgment to distinguish real typos from vocabulary the spell-checker doesn't recognize. Severity: typos in high-visibility docs (Manifesto, TOC, slideshow) are SIG; typos in detailed analytical docs are MIN.
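A minimal sketch of the spell-check pass using the pyspellchecker package (the domain word list shown is a small illustrative subset):

    from spellchecker import SpellChecker  # pip install pyspellchecker

    PLATFORM_TERMS = ['anthropic', 'robertson', 'icwa', 'isda', 'ffia', 'oews']

    def typo_candidates(words):
        """Words the checker doesn't recognize after loading the platform's
        proper nouns; every candidate still needs human or AI review."""
        spell = SpellChecker()
        spell.word_frequency.load_words(PLATFORM_TERMS)
        return spell.unknown(words)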

Check 2: Numerical figure consistency. Identify the platform's canonical figures (headline numbers like the Sovereign Fund corpus, mature-state spending, base-case return assumption, Year-15 healthcare per-capita target, etc.) and grep across all docs for any mention of each. Flag inconsistent expressions (for example, the same headline amount expressed in trillions versus billions in different documents, or a percentage expressed sometimes with the percent sign and sometimes spelled out). Establish canonical expression for each figure and align all instances. Severity: any numerical inconsistency in headline figures is SIG; format inconsistencies (with-symbol vs spelled-out) in less-prominent figures are MIN.

Check 3: Capitalization consistency for platform-defined terms. Build a list of platform-defined terms (Sovereign Fund, Universal Healthcare Access, Community Contribution Plan, Civic Infrastructure Pillar, Founding Stake, Federal Infrastructure Fee, Refundable Transition Bridge Credit, etc.). Grep each in capitalized and lowercase forms; flag mixed usage. Establish canonical capitalization for each (typically: capitalized when used as the proper noun for the specific platform concept; lowercase only in generic discussion of the underlying category). Severity: capitalization inconsistencies are MIN unless the inconsistency creates ambiguity about whether a generic concept or platform-specific term is being referenced.

Check 4: TOC accuracy verification. Compare the Platform Package TOC entries against current package contents. Verify every TOC entry has a corresponding file; verify every package file is in the TOC (or intentionally excluded). Verify TOC entries reference current document versions, not historical versions. Severity: TOC entries for nonexistent files are SIG; missing TOC entries for present files are MIN unless the missing file is one a new reader would expect to see in the TOC.

Check 5: Acronym and abbreviation first-use definition tracking. For each docx, scan for known acronyms (FFIA, OIR, FIF, USF, OEWS, BLS, CMS, NHE, ETI, JCT, TPC, REA, FCC, NTCA, CTIA, MVNO, ICWA, NHPA, NEPA, FEHB, TANF, LIHTC, GAO, CRS, CBO, etc.). Verify the first occurrence in each document is preceded or followed by a definition. For documents that intentionally rely on cross-doc definitions (the OIR, for example, assumes readers have encountered terms in their native documents), document the exemption. Severity: missing first-use definition in standalone-readable documents (Manifesto, Does This Raise Taxes, FFIA, etc.) is MIN.
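A sketch of the first-use scan; the nearby-parenthetical heuristic is deliberately crude and is an assumption of this sketch, not the documented protocol:

    import re

    ACRONYMS = ['FFIA', 'OIR', 'OEWS', 'ICWA']  # illustrative subset

    def first_use_undefined(doc_text, window=200):
        """Acronyms whose first occurrence has no parenthetical within
        `window` characters, flagged as possibly lacking a definition."""
        undefined = []
        for acro in ACRONYMS:
            match = re.search(r'\b' + acro + r'\b', doc_text)
            if match is None:
                continue
            nearby = doc_text[max(0, match.start() - window):match.end() + window]
            if '(' not in nearby:
                undefined.append(acro)
        return undefined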

Check 6: Cross-doc term definition consistency. For each platform-defined term, verify that the definition is consistent across all documents that define or characterize it. Does Federal Infrastructure Fee mean exactly the same thing in items 78, 79, and 81? Does Sovereign Fund have one definition or two slightly different ones across docs? This check catches semantic drift where a term acquires subtly different meanings in different docs over time. Severity: any term used with materially different definitions across docs is SIG; minor wording differences in consistent-meaning definitions are observations rather than findings.

Check 7: Date format consistency. Scan all docs for date references. Verify consistent format (typically full month name plus day plus year: May 4, 2026). Flag any short-form dates (5/4/2026), inverted-format dates (4 May 2026), or partial dates (May 2026 when the day is known). Severity: MIN.

Check 8: Persona-based reading-path simulation execution. Documented as a separate protocol (Persona-Based Reading-Path Simulation Protocol section, below). The most labor-intensive content check but also potentially the most revealing of navigational, sequencing, and audience-fit findings that programmatic checks cannot detect. Severity per finding.

Check 9: Calculator HTML functional test. Open the interactive calculator (06_We_The_People_Calculator.html) in a browser. Verify all input fields accept expected values, all calculations produce non-error output, all assumptions match current analytical-doc canonical figures (specifically: wage floor rates, contribution rates, healthcare costs, retirement projections). Verify any references in the calculator UI to platform documents are still accurate. Severity: any calculator function that errors or produces wrong output is SIG; reference inconsistencies are MIN.

Check 10: Slideshow currency check. Read the platform overview slideshow against current package state. The slideshow tends to lag the analytical docs because it's less frequently updated. Verify slide content references current items (not items that have been renamed or renumbered), current canonical figures, and current policy positions. Severity: stale slide content presented to external audiences is SIG; minor version-stamp differences are MIN.

Check 11: Sentence-level grammar review. AI-driven pass through each docx flagging grammar issues (subject-verb agreement, tense consistency, parallelism in lists, dangling modifiers). Most labor-intensive when run as a comprehensive sweep; can be run on individual docs after substantive edits. Severity: MIN unless the grammar issue creates substantive ambiguity about meaning.

Check 12: Readability scoring. Compute Flesch-Kincaid or similar readability score for each doc; verify scores match target audience (citizen-facing docs should be more accessible; technical analytical docs can be denser). Flag docs whose readability is substantially out of band for their intended audience. Severity: OBS rather than a finding; informs future editing rather than blocking iteration.

Check 13: Run-on sentence detection. Scan for sentences exceeding a threshold word count (typically 50 or more words in a single sentence). Manual review then determines whether the long sentence is a real readability issue or appropriate to its analytical content. Severity: MIN.
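The threshold scan is simple enough to sketch directly (naive sentence splitting; manual review still decides severity):

    import re

    def long_sentences(doc_text, threshold=50):
        """Sentences at or above the word-count threshold, for review."""
        sentences = re.split(r'(?<=[.!?])\s+', doc_text)
        return [s for s in sentences if len(s.split()) >= threshold]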

Check 14: External link validation. If the platform references external URLs anywhere (specific source citations, example references, peer-nation policy documents), verify the URLs are still live and reach the cited content. URLs decay (link rot) faster than expected. Severity: SIG for citations supporting platform claims (broken citation undermines credibility); MIN for incidental references.
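A sketch of the validation loop using the requests library: HEAD first, falling back to GET for servers that reject HEAD:

    import requests

    def dead_links(urls, timeout=10):
        """URLs that error out or return a 4xx/5xx status."""
        dead = []
        for url in urls:
            try:
                resp = requests.head(url, timeout=timeout, allow_redirects=True)
                if resp.status_code >= 400:
                    resp = requests.get(url, timeout=timeout)
                if resp.status_code >= 400:
                    dead.append(url)
            except requests.RequestException:
                dead.append(url)
        return dead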

Recommended priority order for first comprehensive pass: 1, 2, 3, 4, 5 as a coordinated content-level audit pass (the highest value for the lowest effort, addressing the most common first-impression credibility issues). Add 6, 7 if time permits. Run 8 (persona simulation) only after 1-7 clear, since the simulation will surface different findings against a clean content baseline. Run 9, 10 before any external engagement that involves the calculator or slideshow specifically. Run 11, 12, 13, 14 as bandwidth allows; these are lower-value relative to effort.

Persona-Based Reading-Path Simulation Protocol

The Persona-Based Reading-Path Simulations section (above) documents the six personas developed across the cycle's iterations. That section describes who the personas are and what their reading paths and concerns look like. This section documents the execution protocol: how to actually run a persona-based reading-path simulation as a harden-cycle audit angle. The simulation is not programmatic; it is human-driven or AI-driven, executed by reading the platform as the persona would and identifying friction points. The output of a successful simulation is a structured findings list comparable in form to programmatic-check findings.

When to run a persona simulation. The persona simulation is the most labor-intensive of the harden cycle's audit angles; running all six personas in a single iteration is rarely justified. Recommended cadence: one or two personas per iteration when the iteration's content changes intersect with that persona's reading path. After a substantive analytical addition, the personas whose reading paths include the new content are highest priority. After a minor version bump (v2.X to v2.X+1), running at least the Skeptic and Citizen personas is recommended because they exercise the broadest set of platform commitments. Before any external engagement-readiness assessment, all six personas should be run.

Execution steps. (1) Select the persona for this simulation. (2) Read the persona's documented reading path (from the Persona-Based Reading-Path Simulations section or from updated reading-path documentation if maintained separately). (3) Read the specified documents in the specified order, from the persona's perspective, with the persona's concerns active. (4) For each document, note where the persona would experience friction: claims not substantiated to the persona's satisfaction; navigation unclear or sending them somewhere unhelpful; concerns the persona has that the document does not address; numerical or analytical claims that don't match what the persona would expect from prior reading; and tone or framing that would alienate the persona or fail to engage them. (5) Record findings with severity and document references. (6) If the persona finishes the reading path with substantive concerns unaddressed by the docs they read, that itself is a finding (the reading path may need to extend to include additional docs).

Severity classification for persona findings. SIG: a finding that would cause the persona to dismiss the platform or substantially lose confidence in it. Examples: internal contradictions; key concerns completely unaddressed; numerical claims that don't reconcile across documents the persona reads in sequence; tone or framing that the persona finds disqualifying. MIN: a finding that would degrade the persona's experience without dismissing the platform. Examples: navigation friction; concerns addressed but in a different document than the persona's reading path naturally directs them; minor numerical or terminology inconsistencies. OBS: an observation about how the persona would interact with the platform that doesn't necessarily imply action. Examples: this persona would benefit from a forward reference to a subsequent document, but the omission isn't blocking.

Output format. A persona simulation produces a structured report with: persona name; reading path actually executed (matching the documented path or noting deviations); findings list with severity, type, location (document plus paragraph or section reference where appropriate), and persona-specific framing of why this is a finding for this persona. The report ends with overall assessment of the persona's experience: did the platform engage the persona well; were the persona's concerns adequately addressed; would the persona be likely to act on the platform (advocate, share, dismiss) after this reading path.

There is no command to run this test. The simulation is not programmatic. It is executed by a human reader or by an AI assistant given the standard prompt (next section). The audit_script.py at package root performs the structural audit; the persona simulation is run separately by reading and judgment. The Standard Prompt for Persona-Based Reading-Path Simulation (next section) provides the canonical instructions for delegating a persona simulation to an AI assistant.

Standard Prompt for Persona-Based Reading-Path Simulation

This section provides a copy-pasteable prompt for delegating a persona-based reading-path simulation to an AI assistant (Claude, GPT, Gemini, or any large language model with access to the platform package). The prompt is parameterized by persona; the AI executor is told which persona to simulate, and the prompt describes the persona, the reading path, the concerns, and the output format. Updates to the prompt should be made in this section and synchronized to any external copies.

===== BEGIN STANDARD PERSONA SIMULATION PROMPT =====

Role. You are running a Persona-Based Reading-Path Simulation on the We The People platform. You will read the platform from the perspective of an assigned persona, identify friction points the persona would experience, and produce a structured findings list. This is one audit angle within the platform's Harden Cycle (see Item 80, 05_Iterative_Hardening_Process_Documentation.docx, for full process context). The simulation is not programmatic; it is judgment-based, executed by reading and assessing.

Standing rules. (1) Read as the persona, not as yourself. The persona has specific knowledge, biases, and concerns; your assessment must use the persona's frame, not your own. (2) Friction matters. The simulation surfaces findings that programmatic structural checks cannot detect: tone mismatches, navigation friction, missing forward references, claims insufficiently substantiated for the persona's expectations. Be specific about why a friction point is a finding for THIS persona. (3) Use the platform's severity categories: SIG (would cause persona to dismiss platform), MIN (degrades experience without dismissal), OBS (observation rather than finding). (4) Output in the platform's standard format: structured findings list with persona name, reading path executed, findings (with severity, type, location, persona-specific framing), and overall assessment.

The persona to simulate. You will be told which of the six platform personas to simulate. The personas are:

(P1) Skeptic - reader who doubts the platform's analytical rigor and looks for inconsistencies. Reading path: Manifesto, Does This Raise Taxes, Federal Fiscal Impact Analysis. Concerns: rate consistency, numerical defensibility, source documentation.

(P2) Policy professional - reader with a degree in public finance evaluating analytical adequacy. Reading path: Federal Fiscal Impact Analysis, Wage Floors as Tax Architecture, Coalition Walkthrough, Open Issues Registry. Concerns: methodology rigor, peer-nation comparison, acknowledgment of limitations.

(P3) Telecommunications industry professional - reader from a telecom carrier or tower REIT evaluating infrastructure proposals. Reading path: Federal Infrastructure Fee, Federal Infrastructure Fee Transition Mechanics, Universal Broadband Access Substantiation. Concerns: regulatory treatment, transition mechanics, pass-through prevention.

(P4) Tribal infrastructure officer - reader from a tribal nation government evaluating sovereignty implications. Reading path: Emergency Services Communications, Federal Infrastructure Fee, the Federal Infrastructure Fee document's tribal nation lands subsection. Concerns: sovereignty preservation, consultation requirements, public-purpose exemption applicability.

(P5) Small business owner - reader trying to understand platform impact on their business. Reading path: Calculator (business-side), Federal Infrastructure Fee (worked examples), What This Means For You (small business section). Concerns: total fee burden, exemption eligibility, transition timeline.

(P6) Concerned citizen - reader trying to understand household impact. Reading path: What This Means For You, Does This Raise Taxes, Calculator. Concerns: net household impact, healthcare transition, retirement security.

Execution protocol. Step 1: confirm which persona you are simulating. Step 2: read the assigned persona's reading path documents in the specified order, from the persona's perspective. The persona's concerns are active throughout. Step 3: for each document, note where the persona would experience friction. Be specific: claims not substantiated to the persona's satisfaction; navigation unclear or unhelpful; concerns the persona has that the document does not address; numerical or analytical claims that don't match the persona's expectations from prior reading; tone or framing that would alienate or disengage the persona. Step 4: record findings with severity (SIG, MIN, OBS), type (the finding category), location (document plus paragraph or section reference), and persona-specific framing (why this is a finding for THIS persona, not just any reader). Step 5: at the end, write an overall assessment: did the platform engage the persona well; were the persona's concerns adequately addressed; would the persona be likely to act on the platform (advocate, share, dismiss) after this reading path; what is the highest-priority finding from this persona's perspective.

Output format. Begin with: Persona simulated: [P1-P6 name]. Reading path executed: [list of documents in order, with deviations noted if any]. Then a structured findings list, one finding per entry: severity (SIG, MIN, OBS); type (short categorical label, examples: NUMERICAL-DRIFT, NAVIGATION-FRICTION, CONCERN-UNADDRESSED, TONE-MISMATCH, MISSING-FORWARD-REFERENCE); location (document name plus section or paragraph reference); description (one to three sentences describing the finding from the persona's perspective). End with overall assessment as described above.

References. The persona descriptions and reading paths are documented in Item 80 (05_Iterative_Hardening_Process_Documentation.docx) under Persona-Based Reading-Path Simulations. The execution protocol is documented in the same file under Persona-Based Reading-Path Simulation Protocol. The harden cycle's overall process (in which this simulation is one audit angle) is documented in the same file's other sections. The Open Issues Registry (05_Open_Issues_Registry.docx) tracks all known platform issues; persona findings should not duplicate already-documented OIR entries unless the persona simulation reveals new dimensions of those issues.

Common pitfalls to avoid. (1) Reading as yourself rather than as the persona - the persona has specific concerns and frame; use them. (2) Surfacing programmatic-check findings (typos, broken references) as persona findings - those belong in the structural audit, not the persona simulation; persona findings are about reading experience, narrative coherence, and concern-fit. (3) Generic findings that any reader would surface - persona findings should be specific to that persona; if a finding applies equally to all six personas, it is a structural finding, not a persona finding. (4) Not finishing the reading path - some personas have frustrating reading paths by design; that is a finding, not a reason to abandon the simulation. (5) Excessive deference to the platform - the persona's frame is skeptical or evaluative by construction; assess honestly from that frame, do not soften findings to be polite to the platform.

===== END STANDARD PERSONA SIMULATION PROMPT =====

When using the prompt above with another AI, copy from BEGIN through END inclusive, and append a single line specifying which persona to simulate (for example: 'Simulate persona P1 (Skeptic)'). The AI receiving the prompt should also be given access to the platform package (or at minimum, the documents in the assigned persona's reading path). The prompt is intended to produce consistent simulations across different AI systems; where AIs disagree in their assessment of friction, the disagreement is itself informative about which findings are robust versus AI-dependent.

Deferred Mitigation Policy for Large Finding Sets

This section codifies a policy for handling iterations where the Phase 1 audit produces a finding set too large to mitigate fully within the same iteration without distorting the iteration's primary deliverables. The policy emerged from v2.30.41, which executed content-level proofreading checks and a persona simulation, surfacing over 100 distinct findings. Mitigating all of them within v2.30.41 would have made the iteration unmanageable: the mitigation volume, combined with the iteration's other substantive work (audit-script improvement, Reader's Path Scoping Specification creation), was too large to deliver well in a single pass. v2.30.41 deferred the bulk of the mitigation work to subsequent iterations, which deviated from Standing Rule 1 ("if issues are found in any phase, pause and repeat all previous phases in order") without explicit codification. v2.30.42 codifies the deferral policy here so future iterations can apply it consistently.

When deferred mitigation is appropriate. The policy applies when all three of the following are true: (1) the finding set is large enough that mitigating fully within the iteration would consume substantial portions of the iteration's available work — typically twenty or more distinct findings, where each requires its own correction; (2) the iteration has other substantive deliverables (new analytical content, code improvements, or process changes) whose quality would be compromised by spreading iteration attention across mitigation work; (3) the findings can be reasonably tracked and scheduled for subsequent iterations without losing fidelity. If any of the three conditions fails, immediate mitigation is required (per Standing Rule 1).

Deferral protocol. When deferring mitigation, the iteration: (a) documents all findings in the OIR section for that iteration with full specificity (severity, type, location, description) so subsequent iterations can find and address them; (b) proposes an explicit subsequent-iteration schedule naming target iteration versions and what each addresses; (c) does not claim to have mitigated the findings, only to have documented and scheduled them; (d) in the iteration's narrative entries (README, VERSIONLOG, Package Version doc changelog), distinguishes "addressed" items from "documented but pending" items — readers should not infer that all findings were resolved within the iteration when they were not.

Subsequent-iteration commitment. The deferral creates a commitment that the proposed schedule will actually be executed. If the platform reaches a state where the proposed mitigation schedule has slipped substantially (more than three iterations past the initially proposed target without execution), the deferred findings should be promoted to formal Open Issues Registry Section 47 entries for systematic tracking, separate from the iteration narrative that originally documented them. This prevents deferred findings from quietly accumulating in iteration narratives and becoming difficult to recover.

What the deferral policy does not allow. The policy does not allow iterations to skip Phase 1 audit work to avoid surfacing findings the iteration is not prepared to mitigate. The policy does not allow iterations to silently drop findings; all findings must be documented even when mitigation is deferred. The policy does not extend to structural-integrity findings from audit_script.py (cross-reference validity, manifest integrity, version sync, iteration-count pattern regressions); these are smaller in volume by construction and should be mitigated in-iteration. The deferral applies primarily to content-level findings (numerical inconsistencies, capitalization consistency, acronym definitions, rate format consistency) and persona simulation findings, where finding sets routinely run to dozens or hundreds of items each.

Relationship to Standing Rule 1. Standing Rule 1 ("if issues are found in any phase, pause and repeat all previous phases in order") remains in force for in-iteration regression patterns and structural-integrity findings. The deferred mitigation policy is a calibrated exception for specific finding categories where strict in-iteration mitigation would be counterproductive; it is not a general relaxation of Standing Rule 1's discipline. When in doubt, in-iteration mitigation is preferred.

Expanded-Scope Audits

Expanded-scope audits are full-document scans for meta-trigger patterns across ALL narrative entries (not just the most recent), distinct from the automatic recursive meta-trigger detection introduced in v2.30.45, which scans only the latest entry in each location. Expanded-scope audits exist to catch incomplete past mitigations and to verify the ongoing classification accuracy of historical narratives. The auto-check's narrow boundary is the right default for routine audit runs (it avoids persistent false-positive flags on legitimate historical documentation); expanded-scope audits provide the complementary periodic broader sweep.

Documented expanded-scope audits (as of v3.1.4). Three are formalized: (1) Full-OIR scan, which checks every section of the Open Issues Registry against META_TRIGGER_PATTERNS in audit_script.py and reports occurrences with section number; (2) Full-README and full-VERSIONLOG scan, which checks every entry in README.txt and VERSIONLOG.txt and reports occurrences with version reference; (3) Undefined-acronym scan (added v3.1.4), which checks every docx file in the package against ACRONYMS_REGISTRY in audit_script.py and reports any acronym used multiple times without its full-form definition present elsewhere in the file. All three are implemented in audit_script.py (audit_expanded_scope and audit_undefined_acronyms functions) and produce OBS (observation) severity findings. New expanded-scope audits added in future iterations should follow this pattern: full-document coverage, OBS severity, occurrence-listing output rather than blocking failures.
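
A minimal sketch of the undefined-acronym scan's core logic (the registry entries shown are illustrative; the canonical ACRONYMS_REGISTRY and audit_undefined_acronyms implementation live in audit_script.py):

    # Illustrative entries only; the canonical registry is in audit_script.py.
    ACRONYMS_REGISTRY = {
        "OIR": "Open Issues Registry",
        "TOC": "table of contents",
    }

    def audit_undefined_acronyms(doc_name, text):
        """OBS findings: acronym used more than once, full form absent from the file."""
        findings = []
        for acronym, full_form in ACRONYMS_REGISTRY.items():
            uses = text.count(acronym)
            if uses > 1 and full_form.lower() not in text.lower():
                findings.append(("OBS", "UNDEFINED-ACRONYM", doc_name,
                                 acronym + " used " + str(uses) + "x without full form"))
        return findings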

Cadence. Two cadence rules apply to expanded-scope audits. (1) Run-first cadence: expanded-scope audits run FIRST in the harden cycle process, before audit_script.py's other checks. The run-first ordering ensures the broad pass happens regardless of whether narrow-scope checks pass or fail. (2) Pre-engagement cadence: expanded-scope audits are required before any external engagement initiation. This includes (but is not limited to) outreach to potential funders, formal submission to policy venues, and any context where external reviewers may inspect the platform's audit history. The pre-engagement cadence ensures the platform's external-facing state is audited at expanded scope before reviewers can encounter it.

Why OBS (observation) severity rather than MIN. Expanded-scope audits produce OBS findings that the iteration author classifies in Phase 1.3. For the meta-trigger expanded-scope audits (full-OIR, full-README, full-VERSIONLOG), most occurrences are HISTORICAL ACCURATE — legitimate documentation of fixes from their own iterations — so OBS severity prevents inflating the audit's apparent failure rate. For the undefined-acronym expanded-scope audit (added v3.1.4), occurrences are real findings rather than HISTORICAL ACCURATE; the OBS severity preserves consistency with the other expanded-scope audits while leaving classification to the iteration author. If real findings are surfaced, they are mitigated in Phase 2 per Standing Rule 1.

What expanded-scope audits do not replace. Expanded-scope audits are complementary to, not a substitute for, the standard audit_script.py checks. The narrow-scope auto-check on latest narratives still applies (catches recursive meta-triggers introduced in current iteration). The structural integrity checks still apply (manifest, version sync, cross-references). Expanded-scope audits add a third layer: periodic broad verification that the historical record's classifications remain accurate and that no incomplete past mitigations have been left undetected.

Adding new expanded-scope audits. As future iterations identify new categories of audit checks that warrant full-document coverage, those checks are added to audit_script.py either as extensions to audit_expanded_scope (for meta-trigger-style scans across narrative files) or as new functions wired into the same expanded-scope main() block (for new audit categories like the undefined-acronym scan added in v3.1.4). Each new expanded-scope audit should: (a) cover the full scope of its target document type (all entries, all sections, all files); (b) produce OBS severity findings; (c) report occurrences as a list rather than a blocking failure; (d) be documented here in the harden cycle documentation with its scope, output format, and use case. The v3.1.4 addition of the undefined-acronym scan is the first example of this extension pattern beyond the original two meta-trigger scans, and serves as the reference for future additions.
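
A sketch of the extension pattern this subsection describes, with placeholder bodies (the registry-list wiring and the second function name are hypothetical, not audit_script.py's actual structure):

    # Expanded-scope checks are plain functions returning (severity, type,
    # location, description) tuples, registered in one list so that new audits
    # added in future iterations plug into the same main() block.

    def audit_expanded_scope():
        return []  # placeholder for the meta-trigger scans described above

    def audit_undefined_acronyms_all():
        return []  # placeholder for the v3.1.4 acronym scan across all docx files

    EXPANDED_SCOPE_CHECKS = [audit_expanded_scope, audit_undefined_acronyms_all]

    def main():
        # Run-first cadence: expanded-scope audits execute before all other checks.
        for check in EXPANDED_SCOPE_CHECKS:
            for severity, kind, location, detail in check():
                print(severity, kind, location, detail, sep=" | ")
        # ... narrow-scope auto-check and structural integrity checks follow ...

    if __name__ == "__main__":
        main()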

Section 47 Item Status Criterion (v3.1.1, refined v3.1.2)

As of v3.1.2, Section 47 uses a two-column framework where the columns track distinct things: Item Status reflects content completeness, Mitigated reflects author responsibility. The v3.1.2 framework refines the v3.1.1 criterion (which had tied Mitigated directly to Item Status, with CLOSED implying Y and OPEN implying N). The refinement preserves Item Status as a strict reading of content completeness while allowing Mitigated = Y for OPEN items where the author has done what is within their capacity.

Item Status criterion (binary OPEN or CLOSED). Item Status is OPEN if any exploration or analysis task remains unfinished in any section of the platform for the item. Item Status is CLOSED if all such tasks are complete. The criterion is universal across sections and does not depend on author responsibility — it tracks content completeness alone.

Mitigated criterion (binary Y or N). Mitigated is Y if either of two conditions is satisfied. (1) Item Status is CLOSED. (2) Item Status is OPEN AND the platform documents that external help is required to complete the remaining work AND the documentation of what kind of external help is required is itself present in the platform; in this case the author has done what is within their capacity by acknowledging the gap and naming the kind of expertise needed. Mitigated is N if Item Status is OPEN and the conditions for Y are not met — this indicates a discipline failure rather than a legitimate state.

How to apply the refined criterion. For each item, first determine Item Status by asking whether any exploration or analysis task in any section of the platform remains unfinished for the item. Then determine Mitigated by asking, if Item Status is OPEN, whether the platform documents the kind of external help required to close the item. The two questions are independent: an item can be OPEN with Mitigated = Y (content incomplete but author has done what they can), CLOSED with Mitigated = Y (content complete), or OPEN with Mitigated = N (discipline failure). The combination CLOSED with Mitigated = N is logically impossible.
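
The decision logic is small enough to state as code; a sketch (parameter names are illustrative, and external_help_documented abstracts the documented external-help acknowledgment described above):

    def mitigated(item_status, external_help_documented):
        """Apply the v3.1.2 two-column criterion.

        item_status: "OPEN" or "CLOSED" (content completeness only).
        external_help_documented: True if the platform names the kind of
        external expertise required to finish the remaining work.
        """
        if item_status == "CLOSED":
            return "Y"
        # OPEN with no documented acknowledgment is a discipline failure.
        return "Y" if external_help_documented else "N"

For example, mitigated("OPEN", True) returns "Y" (content incomplete, but the author has done what is within their capacity), while mitigated("OPEN", False) returns "N".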

Section 47 final state under v3.1.2. The table contains 33 tracked issues. All 33 are Mitigated = Y under the refined criterion. The Item Status distribution is 25 CLOSED and 8 OPEN. The 8 OPEN items each have documented external-help acknowledgment in the platform: RESEARCH-1 through RESEARCH-6 in OIR Section 52 (each naming the specific external research expertise required); PROCESS-3 in OIR Section 54 (independent audit by credentialed professionals); and ITEM79-Q3 in OIR Section 53 plus the Tribal Sovereignty section in the Federal Infrastructure Fee document (formal government-to-government tribal consultation).

Section 47 and OIR Reference Integrity Checks

The audit infrastructure was extended in v3.1.7 with three new checks covering Section 47 internal consistency and OIR cross-reference integrity. The first check verifies that references to Section 47 IDs in narrative documents reference IDs that exist in the current Section 47. The second check verifies that Section 47 table rows are well-formed (non-empty fields, valid Status values, no duplicate IDs, no CLOSED-with-N inconsistency). The third check verifies that OIR Section number references (e.g., 'OIR Section 81') reference sections within the valid range (one through current maximum). All three checks follow the structured-tuple return convention and produce MIN findings when violated. The codification follows the v3.1.4 pattern: the underlying invariants held in earlier iterations through manual discipline, but codification provides continuous protection rather than protection only when an iteration actively looks for issues.
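
A sketch of the third check's core logic, using the structured-tuple return convention described above (the actual implementation is in audit_script.py):

    import re

    def check_oir_section_references(doc_name, text, max_section):
        """MIN findings for 'OIR Section N' references outside the range 1..max_section."""
        findings = []
        for match in re.finditer(r"OIR Section (\d+)", text):
            number = int(match.group(1))
            if not 1 <= number <= max_section:
                findings.append(("MIN", "OIR-SECTION-OUT-OF-RANGE", doc_name,
                                 match.group(0) + " exceeds maximum " + str(max_section)))
        return findings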

Manifest Version and TOC Consistency Checks

The audit infrastructure was extended in v3.1.9 with two new completeness checks. The manifest version consistency check verifies that each manifest entry's version field matches the doc's actual metadata version line, using a refined regex that matches only version-line metadata patterns (avoiding false matches on inline version mentions in narrative text). The TOC consistency check verifies that the table of contents matches the actual file list in the package, with sequential numbering, no duplicates, all package files represented (excluding auxiliary docs by convention via TOC_EXCLUDE_FILES), and no references to non-existent files. The codification follows the v3.1.4 and v3.1.7 pattern: continuous protection against future regressions rather than only catching current-iteration issues.
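
A sketch of the manifest-version comparison; the delimiter-anchored pattern shown is an assumption based on the docs' metadata-line style, not the actual refined regex in audit_script.py:

    import re

    # Matches versions inside metadata separators (e.g. "· v3.1.9 ·"), which
    # inline narrative mentions like "introduced in v2.30.45" lack.
    VERSION_METADATA = re.compile(r"·\s*v(\d+\.\d+(?:\.\d+)?)\s*·")

    def check_manifest_version(doc_name, doc_text, manifest_version):
        """MIN finding if the doc's metadata version disagrees with the manifest."""
        match = VERSION_METADATA.search(doc_text)
        if match is None:
            return [("MIN", "VERSION-LINE-MISSING", doc_name, "no metadata version found")]
        if match.group(1) != manifest_version:
            return [("MIN", "MANIFEST-VERSION-MISMATCH", doc_name,
                     "doc says v" + match.group(1) + ", manifest says v" + manifest_version)]
        return []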

Cross-Reference Resolution Check

The audit infrastructure was extended in v3.1.12 with a cross-reference resolution check. The check scans all docx files for filename references matching the platform's standard XX prefix pattern and verifies each reference resolves to an actual file in the package. It catches stale references after file renames or removals, a discipline failure mode that produced two real findings on v3.1.11 before codification. The codification follows the v3.1.4 / v3.1.7 / v3.1.9 pattern: continuous protection against future regressions rather than only catching current-iteration issues.
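
A sketch of the resolution logic (the prefix pattern and the extension list are assumptions inferred from the package's naming convention, not audit_script.py's actual constants):

    import re
    from pathlib import Path

    # Two-digit-prefix file references, e.g. 05_Open_Issues_Registry.docx.
    FILE_REF = re.compile(r"\b\d{2}_[A-Za-z0-9_]+\.(?:docx|html|xlsx|txt)\b")

    def check_cross_references(doc_name, text, package_root="."):
        """MIN findings for filename references that resolve to no package file."""
        package_files = {p.name for p in Path(package_root).iterdir() if p.is_file()}
        findings = []
        for ref in sorted(set(FILE_REF.findall(text))):
            if ref not in package_files:
                findings.append(("MIN", "STALE-CROSS-REFERENCE", doc_name,
                                 ref + " does not resolve to a package file"))
        return findings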

Closing

The iterative hardening cycle is one of several quality-assurance patterns available. Its strengths are exhaustiveness (covers many angles), discipline (pause-and-redirect at every step), and transparency (every finding documented with severity and resolution). Its weaknesses are time investment (many iterations within a single working day) and potential diminishing returns (later iterations find subtler issues whose marginal value is lower than the iteration cost). For platforms at this scale (eighty items, hundreds of cross-references, multiple spreadsheet models, an interactive calculator), the pattern is appropriate. For smaller platforms, a single thorough audit may suffice. For larger platforms, additional automation (full microsimulation modeling, comprehensive type-checking of cross-references, automated regression testing of calculator math) would be warranted before this kind of iterative hardening becomes the bottleneck. This document is an artifact of the process; the process itself is the important thing to internalize.