Request for Proposal / Scope of Work for Independent Audit (PROCESS-3)
v1.0 · Created May 7, 2026 for v3.2.2 (audit scope document supporting actual audit commissioning per PROCESS-3 in the Open Issues Registry; intended for direct use as RFP / SOW with prospective auditors) · Jason Robertson · Ohio · 2026
Purpose
This document specifies the scope of work for an independent audit of the platform's quantitative models. PROCESS-3 in the Open Issues Registry identifies the absence of independent model audit as a closure-gating finding for the platform's analytical credibility. This document is the basis for engaging a qualified auditor: an auditor receiving this document has sufficient information to provide a fee proposal and timeline; an auditor accepting engagement has clear deliverables and standards to work against.
The audit's purpose is to surface substantive analytical concerns so they can be accepted or rebutted. The platform welcomes adverse findings; an audit that produces no findings may indicate insufficient depth of review rather than model correctness. The lead author commits to documenting all findings and to either incorporating them, rebutting them on the record, or marking them as known-unresolved with the auditor's framing intact.
Background
The We The People Platform is an integrated federal policy proposal covering retirement architecture (Sovereign Fund accumulation), wage-floor architecture (which replaces the minimum wage and serves as the federal income tax structure), a universal healthcare contribution mechanism, a universal childcare contribution, universal mental health access, civic infrastructure (including a federal infrastructure fee for telecommunications), and, most recently, universal paid family time. Platform documentation runs to approximately ninety documents.
The platform's two principal quantitative models are: (one) the Combined Reform Model, an integrated fiscal-and-distributional model projecting federal revenue and expenditure trajectories across the eight pillars over a sixty-year horizon; (two) the Federal Fiscal Impact Analysis (FFIA), a steady-state and transition-window fiscal analysis providing per-pillar contribution streams, per-program cost projections, and net federal fiscal impact estimates. The Sovereign Fund accumulation projection (base case six percent real return producing approximately one hundred twenty-two trillion dollars at year sixty; four percent conservative scenario producing approximately sixty-two and a half trillion dollars) is documented in both models.
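The sensitivity of a sixty-year accumulation to the real-return assumption can be illustrated with a deliberately simplified sketch. The constant annual contribution below is a hypothetical placeholder, not the platform's actual contribution schedule, so the outputs will not reproduce the documented projections; the point is only the compounding mechanism by which a two-percentage-point return difference roughly halves a sixty-year terminal value.

```python
# Simplified compound-accumulation sketch. The constant annual
# contribution is a HYPOTHETICAL placeholder, not the platform's
# actual contribution schedule, so outputs will not match the
# documented Sovereign Fund projections.

def terminal_value(annual_contribution, real_return, years):
    """Future value of a level annual contribution, compounded annually."""
    balance = 0.0
    for _ in range(years):
        balance = balance * (1 + real_return) + annual_contribution
    return balance

contribution = 0.25e12  # $250 billion per year (illustrative only)
for r in (0.04, 0.06):
    tv = terminal_value(contribution, r, 60)
    print(f"real return {r:.0%}: terminal value ~${tv / 1e12:.1f} trillion")
```

Under these assumed inputs the six-percent scenario ends more than twice as large as the four-percent scenario, which is the qualitative pattern the platform's two documented scenarios also show.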
Audit Objective
The audit's objective is an independent assessment of the Combined Reform Model and the FFIA against established model risk management standards. Specifically: are the input data appropriate; is the model architecture suitable for its intended use; is the calibration defensible; is the sensitivity analysis adequate; are the outputs validated against known-correct benchmarks? The audit produces a written findings memo with specific recommendations the platform can act on.
Scope Of Work
Models in scope
The following are in scope, as integrated artifacts:
- Combined Reform Model (CRM): integrated fiscal-and-distributional model documented in 04_Combined_Reform_Model.xlsx (the spreadsheet implementation) and supporting analytical framing documents.
- Federal Fiscal Impact Analysis (FFIA): steady-state and transition-window analysis documented in 05_Federal_Fiscal_Impact_Analysis.docx.
- Sovereign Fund accumulation projection: documented across the Community Contribution Plan WhitePaper and supporting documents; the projection is implemented in computational code accompanying the CRM.
- Pillar-specific cost models: per-pillar contribution and cost projections documented in the respective pillar documents.
Models out of scope
Out of scope for this audit:
- The calculator user interface (06_We_The_People_Calculator.html): an end-user-facing tool that presents model outputs but does not itself perform model calculations beyond simple display arithmetic.
- Persona simulation outputs: qualitative analyses, not quantitative models.
- Cost-reduction-decomposition bounds: documented as RESEARCH-4 and requiring health economics expertise; not appropriate for a general model audit.
Audit categories per SR 11-7
The audit follows the SR 11-7 Model Risk Management framework (Federal Reserve / OCC supervisory guidance, 2011), which establishes the standard categories for model audits. Five categories apply.
Category one: input data quality. Are the input data sources appropriate for the model's intended use? Are vintages current enough (the platform's IRS Statistics of Income baseline is 2021; subsequent years available)? Are data quality issues acknowledged in the model documentation? Specific assessment: data sourcing for tax modeling, employment modeling, healthcare cost modeling, demographic modeling.
Category two: model architecture appropriateness. Does the model structure suit its intended use? Are simplifying assumptions documented and defensible? Are interactions between model components handled correctly? Specific assessment: tax-and-transfer interaction handling; behavioral response treatment across components; transition-window dynamics; steady-state assumptions and the path to steady state.
Category three: calibration and parameterization. Are the model parameters calibrated against defensible empirical sources? Are uncertainty ranges around parameters documented and propagated? Specific assessment: ETI (elasticity of taxable income) calibration at the 0.2 / 0.4 / 0.6 / 0.8 sensitivity points; wage-elasticity calibration for disemployment estimates; healthcare cost reduction trajectory pacing; Sovereign Fund return assumptions.
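The ETI sensitivity points can be read as a one-line behavioral formula: the percent change in taxable income is approximately the ETI times the percent change in the net-of-tax rate (one minus the marginal rate). A minimal sketch, with hypothetical rates and a hypothetical baseline income chosen only to show how the four sensitivity points spread a revenue estimate:

```python
# ETI sensitivity sketch. The marginal rates and baseline taxable
# income are HYPOTHETICAL illustration inputs, not the platform's
# calibrated values.

def taxable_income_response(baseline_income, old_rate, new_rate, eti):
    """Percent change in taxable income ~= ETI x percent change
    in the net-of-tax rate (1 - marginal rate)."""
    pct_change_net_of_tax = ((1 - new_rate) - (1 - old_rate)) / (1 - old_rate)
    return baseline_income * (1 + eti * pct_change_net_of_tax)

baseline = 100_000.0             # hypothetical taxable income
old_rate, new_rate = 0.30, 0.35  # hypothetical rate increase

for eti in (0.2, 0.4, 0.6, 0.8):
    income = taxable_income_response(baseline, old_rate, new_rate, eti)
    revenue = new_rate * income
    print(f"ETI {eti}: taxable income {income:,.0f}, revenue {revenue:,.0f}")
```

Higher ETI values shrink the post-change tax base, so the spread across the four points is a direct measure of how much the revenue estimate depends on the calibration choice.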
Category four: sensitivity analysis adequacy. Is the sensitivity analysis sufficient to characterize result robustness? Are key parameters tested? Are joint-parameter sensitivities considered where component independence is questionable? Specific assessment: sensitivity coverage for the headline projections; identification of sensitivity gaps.
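Joint-parameter sensitivity, the concern named above where component independence is questionable, can be organized as a grid evaluation rather than one-at-a-time variation. The toy output surface below is hypothetical; only the grid pattern, including the worst-case corner, is the point:

```python
# Joint-sensitivity grid sketch. The output function and parameter
# grids are HYPOTHETICAL stand-ins; a real audit would evaluate the
# actual model at each grid point.
from itertools import product

def toy_net_impact(eti, wage_elasticity):
    """Hypothetical net-impact surface with an interaction term."""
    return 100.0 - 40.0 * eti - 25.0 * wage_elasticity - 10.0 * eti * wage_elasticity

eti_grid = (0.2, 0.4, 0.6, 0.8)
wage_grid = (0.1, 0.3, 0.5)

results = {(e, w): toy_net_impact(e, w) for e, w in product(eti_grid, wage_grid)}
worst = min(results, key=results.get)
print(f"worst-case corner: ETI={worst[0]}, wage elasticity={worst[1]}, "
      f"net impact={results[worst]:.1f}")
```

One-at-a-time variation would miss the interaction term entirely; the grid exposes it, which is why the scope asks whether joint sensitivities were considered.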
Category five: output validation. Are model outputs validated against external benchmarks where available? Are out-of-sample tests passed where applicable? Are model outputs internally consistent across scenarios? Specific assessment: comparison of model projections against existing federal projections (CBO, Treasury, JCT (Joint Committee on Taxation), GAO) for overlapping quantities; comparison of Sovereign Fund accumulation projection against Norway Government Pension Fund Global historical accumulation; comparison of healthcare cost reduction trajectory against peer-nation single-payer transition experience.
Audit Standards Reference
Beyond the SR 11-7 framework as the principal organizational structure, the audit may incorporate standards from the CFA Institute on financial model audit practice, from ISDA (International Swaps and Derivatives Association) on quantitative model documentation, and from the ASOPs (Actuarial Standards of Practice) where actuarial-style projection methodologies apply (Sovereign Fund accumulation in particular). The auditor selects which subset of secondary standards applies to which audit categories; the audit memo documents the auditor's standards-application choices.
Deliverables
Deliverable one: findings memo
Written memo, approximately twenty to forty pages, organized by audit category. Per category: findings, recommendations, severity assessment (substantial / material / minor / observation, parallel to SR 11-7 severity taxonomy). Findings memo is the principal deliverable.
Deliverable two: replication test results
Where the auditor has performed replication tests on model outputs, the results are documented in tabular form: model output, replication output, difference, materiality assessment. Replication tests are not required for every model output but should cover at least the headline figures (Sovereign Fund accumulation, FFIA net federal fiscal impact, modified income tax architecture revenue impact).
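The tabular form described above can be produced mechanically. A minimal sketch: the replication values and the one-percent materiality threshold are assumptions for illustration, not figures or standards the platform prescribes.

```python
# Replication-comparison sketch. The replication values and the 1%
# materiality threshold are HYPOTHETICAL; real entries would come
# from the model and the auditor's independent replication.

MATERIALITY_THRESHOLD = 0.01  # 1% relative difference (assumed)

rows = [
    # (label, model output, replication output) -- illustrative values
    ("Sovereign Fund accumulation ($T)", 122.0, 121.4),
    ("FFIA net federal fiscal impact ($B)", 310.0, 322.0),
]

print(f"{'Output':<38}{'Model':>10}{'Replica':>10}{'Diff':>9}{'Material':>10}")
for label, model, replica in rows:
    diff = replica - model
    material = abs(diff) / abs(model) > MATERIALITY_THRESHOLD
    print(f"{label:<38}{model:>10.1f}{replica:>10.1f}{diff:>9.1f}"
          f"{'yes' if material else 'no':>10}")
```

The materiality column is the auditor's judgment in the actual deliverable; a fixed relative-difference threshold is only a starting heuristic.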
Deliverable three: sensitivity analysis assessment
Written assessment of the platform's existing sensitivity analyses, including identification of sensitivity gaps and recommendations for additional sensitivity testing the platform should undertake.
Deliverable four: recommendations summary
Concise summary (under five pages) of the auditor's specific recommendations: required model changes, suggested model changes, documentation improvements, additional analyses the platform should undertake. The recommendations summary is the input the platform's lead author works from to determine which findings to incorporate, which to rebut, and which to mark as known-unresolved.
Auditor Qualifications
Required expertise
Auditor must possess: documented experience auditing fiscal or actuarial projection models (specific examples in fee proposal); familiarity with SR 11-7 Model Risk Management framework; quantitative analytical capacity sufficient to perform replication tests on model outputs. Sovereign Fund accumulation audit benefits from sovereign-wealth-fund or institutional-investor model experience; tax-architecture audit benefits from microsimulation experience (analogous to JCT, TPC, or Penn Wharton modeling experience).
Required independence
Auditor must have no pre-existing interest in the platform's adoption or rejection. Auditor must not be a co-author or contributor to any platform document; must not have provided substantive feedback that the platform has incorporated; must not have a financial relationship with the lead author. The auditor's independence is documented at engagement start; any subsequent change in independence status (e.g., the auditor's firm acquires an interest in a platform-adjacent matter) requires disclosure and may require audit termination.
Acceptable forms
Acceptable engagement forms: independent consulting firm with model audit practice (FTI Consulting, Charles River Associates, NERA, Cornerstone Research, or comparable); academic researcher with model-audit experience operating as private consultant; individual practitioner with documented model-audit credentials. Group engagements involving multiple auditors with different specialties are acceptable when the auditors document coordination.
Timeline
Estimated four-to-six-month engagement window from contract signing through final deliverables. Phasing:
- Week zero: contract signing and engagement start.
- Weeks one to three: auditor receives platform materials and begins document review.
- Weeks four to ten: auditor performs replication tests and analytical assessment.
- Weeks eleven to fourteen: auditor produces draft findings memo.
- Weeks fourteen to sixteen: lead author reviews draft findings and provides clarification responses.
- Weeks sixteen to twenty: auditor produces final deliverables.
The specific timeline depends on auditor capacity and may be compressed or extended by mutual agreement.
Budget Range
Estimated audit cost: twenty thousand to fifty thousand dollars depending on auditor selection and depth of replication testing. Lower end (~twenty thousand): individual practitioner engagement, primarily document review with limited replication testing. Mid-range (~thirty to forty thousand): consulting firm engagement with standard replication testing and SR 11-7-aligned findings memo. Upper end (~fifty thousand): consulting firm engagement with extensive replication testing including independent rebuilding of key model components for comparison. Lead author preference is mid-range engagement; lower-end engagement is acceptable if budget-constrained; upper-end engagement is acceptable if substantial findings are expected and rebuttal capacity is needed.
Evaluation Criteria
Auditor selection criteria: (one) demonstrated relevant expertise per qualifications above; (two) clarity of methodology in fee proposal; (three) availability within timeline; (four) fee within budget range; (five) quality of references from prior comparable engagements. The lead author will select the auditor whose proposal best balances these criteria; lowest-fee proposal is not necessarily selected.
Response Requirements
Prospective auditors should respond with: (one) acknowledgment of the SR 11-7 framework as the audit's organizational structure and any modifications proposed; (two) fee proposal with breakdown by audit phase or by deliverable; (three) timeline proposal with key milestones; (four) team composition (which individual auditors will perform which audit categories); (five) two prior comparable engagements as references with permission to contact; (six) confirmation of independence status. Response length: under fifteen pages. Response format: PDF or Word document submitted via email to the lead author's contact address.
Response window: thirty days from receipt of this scope document. Lead author will acknowledge receipt of all responses within five business days, will request clarification from shortlisted respondents within fifteen days of response receipt, and will award the engagement within thirty days of response receipt. Unsuccessful respondents will be notified.
Post-Audit Process
After audit completion, the lead author commits to: documenting all substantive findings in a versioned response document; incorporating findings the auditor identifies as required; either incorporating findings the auditor identifies as suggested, or documenting on the record why the platform does not incorporate them; marking findings the auditor identifies as observations as known and not requiring action; acknowledging the auditor by name in subsequent platform iterations unless the auditor requests otherwise. The audit findings memo and the platform's response document together become the public record of the audit's outcome.
Cross-References
This scope document operationalizes PROCESS-3 in the Open Issues Registry. Companion documents in this iteration: 05_Academic_Outreach_Letter_Templates.docx (Tier 2 #7 from the actionable items list) and 05_Tribal_Consultation_Framework.docx (Tier 2 #10 from the actionable items list). After audit completion, the audit findings and the platform's response document are themselves new platform deliverables that close PROCESS-3 in Section 47 (CLOSED status with audit-completion documentation as the closure record).