
Add long-run calibration contracts #669

Draft

MaxGhenis wants to merge 6 commits into main from codex/us-data-calibration-contract

Conversation

@MaxGhenis
Contributor

Summary

  • add explicit long-run calibration profiles, quality tiers, and audit metadata
  • record named target-source provenance in year sidecars and dataset manifests
  • add nonnegative feasibility/frontier tooling plus LP-backed fallbacks for entropy calibration

What changed

  • adds CalibrationProfile contracts for long-run age/SS/payroll/TOB calibration, including year-bounded approximate windows
  • stamps each generated artifact with calibration_quality, max_constraint_pct_error, and target-source metadata
  • adds assess_calibration_frontier.py for checking where exact nonnegative calibration remains feasible
  • adds rebuild_calibration_manifest.py to backfill manifests/sidecars with the new contract data
  • introduces an explicit trustees_2025_current_law long-run target-source package instead of relying on an implicit legacy file path
  • updates the long-run README and storage docs to describe the contract-driven flow

Why

The old long-run workflow depended on implicit flag combinations, silent fallback behavior, and ambiguous target-source provenance. This PR makes the calibration contract explicit and inspectable so downstream consumers can reject mismatched artifacts instead of trusting them implicitly.

Validation

  • uv run pytest policyengine_us_data/tests/test_long_term_calibration_contract.py -q
  • python3 -m py_compile policyengine_us_data/datasets/cps/long_term/calibration.py policyengine_us_data/datasets/cps/long_term/calibration_profiles.py policyengine_us_data/datasets/cps/long_term/calibration_artifacts.py policyengine_us_data/datasets/cps/long_term/run_household_projection.py policyengine_us_data/datasets/cps/long_term/ssa_data.py policyengine_us_data/datasets/cps/long_term/rebuild_calibration_manifest.py policyengine_us_data/datasets/cps/long_term/assess_calibration_frontier.py

Follow-up

A stacked follow-up PR will add the provisional OACT target-source package and builder script on top of this contract work.

Contributor Author

Split this work into two draft PRs so the general calibration-contract changes can be reviewed independently from the provisional OACT source package. The stacked follow-up is #670.

Contributor Author

Follow-up from the late-tail investigation:

  • I pushed 6bc34e02 onto this PR with two stable follow-ups:
    • support-quality metrics in the calibration audit (positive_weight_count, positive_weight_pct, effective_sample_size, top_10_weight_share_pct, top_100_weight_share_pct)
    • metadata normalization for historical LP fallback labels plus the widened 2079-2085 approximate window (10% instead of 5%)
  • Focused verification still passes: uv run pytest policyengine_us_data/tests/test_long_term_calibration_contract.py -q
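The support-quality metrics listed above are standard weight diagnostics; a sketch of how they could be computed from a household weight vector (using the Kish effective sample size), without claiming this is the audit's exact implementation:

```python
import numpy as np

def support_quality(weights: np.ndarray) -> dict:
    """Illustrative versions of the audit's support-quality metrics."""
    w = np.asarray(weights, dtype=float)
    total = w.sum()
    sorted_desc = np.sort(w)[::-1]
    return {
        "positive_weight_count": int((w > 0).sum()),
        "positive_weight_pct": 100.0 * (w > 0).sum() / w.size,
        # Kish effective sample size: (sum w)^2 / sum w^2
        "effective_sample_size": float(total**2 / (w**2).sum()),
        "top_10_weight_share_pct": 100.0 * sorted_desc[:10].sum() / total,
        "top_100_weight_share_pct": 100.0 * sorted_desc[:100].sum() / total,
    }
```

For uniform weights the effective sample size equals the household count; concentration pushes it toward the number of dominant households.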

Substantively, the new diagnostics clarify the late-year problem:

  • 2091 in the validated Trustees build has 88 positive households, ESS 41.4, top 10 households holding 30.9% of total weight.
  • A tiny linear blend back toward baseline weights immediately restores thousands of positive-weight households, so the 88-household count is partly an LP extreme-point artifact.
  • But ESS barely improves under those blends, which means the deeper issue is not just zeros; it is true late-year concentration under the current target bundle.

So the current read is:

  • the tail pathology is not evidence that Trustees necessarily imply many more very old workers
  • the LP fallback is exaggerating the support collapse
  • but the repeated-cross-section support is still genuinely too concentrated by the early 2090s under age + SS + payroll + OASDI TOB

I have not pushed the experimental dense approximate entropy fallback yet. The first prototype failed numerically on 2091, so I kept that local until it actually outperforms the LP fallback. Next step is still microsim-only: prototype a denser late-year calibrator and/or support expansion without falling back to an aggregate tail.

Contributor Author

Late-tail update from the microsim-only investigation:

  • Pushed 4dfa5397 (Add late-year age aggregation for calibration) to this branch.
  • The current late-tail cliff is still primarily payroll-driven, but one-year age constraints were making the nonnegative frontier worse than necessary.
  • At 2091, the nonnegative best-case error for ss-payroll drops from 18.29% with single-year age bins to 16.89% with 5-year bins.
  • I wired that into the approximate calibration windows so late years can aggregate age targets / age matrix into 5-year buckets while preserving the open-ended 85+ bin.
  • Focused verification still passes: uv run pytest policyengine_us_data/tests/test_long_term_calibration_contract.py -q
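The age-target aggregation described above can be sketched as follows; bin labels and the helper name are illustrative, and the real change also rebins the age constraint matrix, not just the targets:

```python
import numpy as np

def bin_age_targets(age_targets: dict[int, float],
                    width: int = 5,
                    open_end: int = 85) -> dict[str, float]:
    """Aggregate single-year age targets into width-year bins,
    preserving the open-ended 85+ bin."""
    binned: dict[str, float] = {}
    for age, total in age_targets.items():
        if age >= open_end:
            key = f"{open_end}+"
        else:
            lo = (age // width) * width
            key = f"{lo}-{lo + width - 1}"
        binned[key] = binned.get(key, 0.0) + total
    return binned
```

Coarser bins relax the constraint system, which is why the nonnegative best-case error can only fall (here, from 18.29% to 16.89% at 2091).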

Important caveat: this is not the whole tail fix by itself. The LP approximate fallback is still overly sparse, and the deeper ESS/concentration problem remains. But this change improves the late-year feasible set with a defensible repeated-cross-section adjustment rather than another hidden tolerance bump.

I have not included the standalone support-profiling script in this commit yet; it is still local-only while I decide whether it belongs in the repo.

Contributor Author

Follow-up pushed as 047545b0 (Add support concentration gates to calibration).

This extends the calibration contract beyond target error / negative weights and adds explicit late-tail support-quality gates:

  • min_positive_household_count = 1000
  • min_effective_sample_size = 75
  • max_top_10_weight_share_pct = 25
  • max_top_100_weight_share_pct = 95

Those thresholds are applied in both classification and validation. On the sampled years:

  • existing validated run stays healthy through 2073
  • old 2074+ concentration now gets flagged
  • the new age-binned 2075/2076 outputs would also be rejected despite exact target matching, because support is still too concentrated

That is intentional: the runner should now fail fast on a microsim support collapse instead of saving a misleading late-year artifact.
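A sketch of how those four gates could be checked against the support-quality metrics; the threshold values come from this comment, but the function and metric key names are assumptions:

```python
# Thresholds from the support-concentration gates described above.
GATES = {
    "min_positive_household_count": 1000,
    "min_effective_sample_size": 75,
    "max_top_10_weight_share_pct": 25,
    "max_top_100_weight_share_pct": 95,
}

def gate_failures(metrics: dict) -> list[str]:
    """Return the names of any support-quality gates the metrics violate."""
    failures = []
    if metrics["positive_household_count"] < GATES["min_positive_household_count"]:
        failures.append("min_positive_household_count")
    if metrics["effective_sample_size"] < GATES["min_effective_sample_size"]:
        failures.append("min_effective_sample_size")
    if metrics["top_10_weight_share_pct"] > GATES["max_top_10_weight_share_pct"]:
        failures.append("max_top_10_weight_share_pct")
    if metrics["top_100_weight_share_pct"] > GATES["max_top_100_weight_share_pct"]:
        failures.append("max_top_100_weight_share_pct")
    return failures
```

On the 2091 diagnostics reported earlier (88 positive households, ESS 41.4, top-10 share 30.9%), the first three gates would trip.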

Focused verification still passes:
uv run pytest policyengine_us_data/tests/test_long_term_calibration_contract.py -q

I also started a one-year 2075 smoke rerun after this change so we can confirm the new validation trips where expected.

Contributor Author

Added a support-augmentation diagnostic pass in 5525b3d2.

What landed:

  • support_augmentation.py with two experimental clone profiles:
    • late-clone-v1: older SS-only, older SS+pay, and payroll-only donor clones
    • late-clone-v2: a more aggressive version that also pushes payroll-only donors much further up the age distribution
  • evaluate_support_augmentation.py to compare the nonnegative feasibility frontier before and after augmentation for a single year/profile
  • focused tests for donor selection and clone ID remapping in test_long_term_calibration_contract.py

Key result at 2091:

  • ss: exact already, augmentation has no effect
  • ss-payroll: base best-case max error 16.88648481756073%; late-clone-v1 and late-clone-v2 both leave it unchanged up to numerical noise
  • ss-payroll-tob: same story; no material improvement from either clone profile

Interpretation:

  • the late-tail infeasibility is not solved by adding more whole-household age-shifted copies of existing support
  • age + SS is already feasible, so the hard tradeoff is on the payroll side (and TOB does not appear to be the distinctive blocker at 2091)
  • if we want a microsim tail, the next support-expansion step needs to change payroll-per-older-household composition, not just create more older versions of existing households
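The nonnegative feasibility frontier these diagnostics measure can be posed as a minimax LP: minimize the worst relative constraint error achievable with nonnegative weights. A sketch under that formulation (not necessarily the exact one in assess_calibration_frontier.py):

```python
import numpy as np
from scipy.optimize import linprog

def nonneg_frontier_max_error(A: np.ndarray, b: np.ndarray) -> float:
    """Best-case max relative constraint error with nonnegative weights:
        minimize t  s.t.  |A @ w - b| <= t * |b|,  w >= 0.
    Returns t as a percentage."""
    m, n = A.shape
    scale = np.abs(b)
    c = np.zeros(n + 1)
    c[-1] = 1.0                            # minimize the slack variable t
    A_ub = np.vstack([
        np.hstack([A, -scale[:, None]]),   #  A w - t|b| <= b
        np.hstack([-A, -scale[:, None]]),  # -A w - t|b| <= -b
    ])
    b_ub = np.concatenate([b, -b])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (n + 1), method="highs")
    return 100.0 * res.x[-1]
```

When the target system is exactly satisfiable with nonnegative weights the frontier is 0%; the 16.886…% figures above say no nonnegative weighting of the (augmented) support can do better than that at 2091.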

Contributor Author

Follow-up support-expansion result in e3d99121:

I added a composite-household diagnostic path in support_augmentation.py and tested two profiles:

  • late-composite-v1: clone older beneficiary households, then graft payroll from younger payroll-only donors
  • late-composite-v2: same idea, but with much more aggressive payroll transfer scales to test whether the frontier is simply missing older payroll intensity

I also added a focused composite-augmentation test in test_long_term_calibration_contract.py.

What the 2091 diagnostics show:

  • ss is still exact with or without augmentation
  • ss-payroll base best-case max error is 16.88648481756073%
  • late-composite-v1 changes that to 16.88648477158211%
  • late-composite-v2 lands at the same value to numerical precision

Interpretation:

  • simple age-shift clones were already insufficient
  • composite older-beneficiary-plus-payroll synthetic households are also insufficient
  • even forcing substantially higher payroll into older synthetic households does not materially expand the nonnegative feasible set at 2091

So the next support-expansion step has to be more structural than whole-household cloning or payroll grafting. The late frontier does not appear to be missing only “older payroll intensity” in a way that can be fixed by splicing current-household components together.

Contributor Author

Added appended synthetic-sample diagnostics in 5b91f1e5.

What changed:

  • support_augmentation.py now has explicit appended synthetic single-person older-household grid profiles:
    • late-synthetic-grid-v1
    • late-synthetic-grid-v2 (same idea but with much higher payroll levels)
  • these profiles preserve the base CPS support untouched and append tagged synthetic older households on an age/SS/payroll grid
  • focused tests now cover the synthetic-grid path in test_long_term_calibration_contract.py

Key result at 2091 for ss-payroll:

  • base best-case max error: 16.88648481756073%
  • late-synthetic-grid-v1: 16.88648475766269%
  • late-synthetic-grid-v2: same to numerical precision

Interpretation:

  • even appended synthetic older-worker support, including a deliberately extreme payroll grid, does not materially improve the late-year age + SS + payroll nonnegative frontier
  • so the problem is not just “we need more older households with higher payroll”
  • the next step has to be more structural than cloning, grafting, or appended payroll-heavy older-household grids

Contributor Author

Pushed c99ccbac to this PR.

This change moves long-run TOB out of the hard calibration target bundle and into post-calibration benchmarking:

  • ss-payroll-tob and ss-payroll-tob-h6 now calibrate on age + OASDI benefits + taxable payroll only.
  • Those profiles still compute OASDI/HI TOB, but they write it under calibration_audit.benchmarks instead of constraints, so TOB no longer affects the solver or quality classification.
  • Added ASSUMPTION_COMPARISON.md documenting how our calibration assumptions differ from Trustees/OACT, especially on TOB.
  • Switched frontier / support-augmentation tooling defaults to ss-payroll so late-tail diagnostics stop conflating support feasibility with TOB.

Validation:

  • uv run pytest policyengine_us_data/tests/test_long_term_calibration_contract.py -q
  • python3 -m py_compile on the touched long-term modules

Contributor Author

Pushed 642752cb with two late-tail fallback changes:

  • added a bounded-entropy approximate fallback before raw LP minimax
  • if bounded entropy still fails, densify the LP solution by blending back toward baseline inside the allowed error band (lp_blend) instead of saving the raw basic-feasible-point weights
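The lp_blend idea is a one-dimensional search: blend the sparse LP weights toward the dense baseline weights and keep the largest blend fraction that still satisfies the allowed error band. A bisection sketch (the real densify_lp_solution() may differ in details):

```python
import numpy as np

def densify_lp_solution(w_lp, w_base, A, b, max_pct_error):
    """Blend sparse LP weights toward baseline weights, returning the blend
    with the largest lambda whose max relative error stays within the band.
    The error is convex in lambda, so the feasible set is an interval at 0."""
    def max_error(w):
        return 100.0 * np.max(np.abs(A @ w - b) / np.abs(b))
    if max_error(w_base) <= max_pct_error:
        return w_base, 1.0             # baseline already inside the band
    lo, hi = 0.0, 1.0                  # lo feasible, hi infeasible
    for _ in range(60):                # bisect on the blend fraction lambda
        mid = 0.5 * (lo + hi)
        if max_error((1 - mid) * w_lp + mid * w_base) <= max_pct_error:
            lo = mid
        else:
            hi = mid
    return (1 - lo) * w_lp + lo * w_base, lo
```

Because the blend is linear, every household with positive baseline weight regains positive weight for any lambda > 0, which is exactly why this removes the basic-feasible-point sparsity without violating the error band.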

Focused verification still passes:

  • uv run pytest policyengine_us_data/tests/test_long_term_calibration_contract.py -q
  • python3 -m py_compile policyengine_us_data/datasets/cps/long_term/calibration.py

Empirical result: this is better than raw LP, but still not enough for publishable late-tail microsim support.

Current diagnostics under the no-TOB-hard-target profile:

  • 2083: lp_blend, max error 10.000%, ESS 12.25, positive households 6859, top-10 share 81.66%, top-100 share 94.14%
  • 2091: lp_blend, max error 20.000%, ESS 13.08, positive households 6856, top-10 share 76.73%, top-100 share 97.56%

So this removes the pathological 19-household raw-LP support collapse, but it still fails the ESS / concentration gates by a wide margin. This now looks like a good checkpoint for an external code review, because we have a real alternative implementation and a concrete before/after result.

Contributor Author

Pushed e00f6e59 addressing the most actionable review findings from the external Claude review.

What changed:

  • fixed approximate_window_for_year(profile, None) to prefer the open-ended tail window instead of the 2086-2095 window
  • made the legacy flag builder auto-upgrade use_tob=True into a GREG-derived profile instead of creating an impossible IPF+TOB contract
  • removed the bare except: cases in projection_utils.py; uprating failures now warn explicitly and no longer swallow KeyboardInterrupt / SystemExit
  • removed the duplicate bounded-entropy objective evaluation in the L-BFGS-B path (jac=True with a shared callable)
  • changed negative_weight_pct to measure negative weight mass, and added negative_weight_household_pct separately
  • added nonfatal validation support in run_household_projection.py via --allow-validation-failures / PEUD_ALLOW_INVALID_ARTIFACTS=1; validation issues are now recorded in metadata instead of necessarily crashing the run
  • manifest entries now include validation status / issue count and the household-count negative-weight metric
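The negative-weight metric split can be illustrated as follows; names mirror the bullet above but the implementation is a sketch, not the repo's code:

```python
import numpy as np

def negative_weight_metrics(weights: np.ndarray) -> dict:
    """Distinguish negative weight *mass* from the *count* of households
    with negative weight; a few barely-negative households can dominate
    the count metric while carrying almost no mass."""
    w = np.asarray(weights, dtype=float)
    neg = w[w < 0]
    return {
        # share of total absolute weight mass that is negative
        "negative_weight_pct": 100.0 * (-neg.sum()) / np.abs(w).sum(),
        # share of households whose weight is negative
        "negative_weight_household_pct": 100.0 * neg.size / w.size,
    }
```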

Verification:

  • uv run pytest policyengine_us_data/tests/test_long_term_calibration_contract.py -q
  • python3 -m py_compile policyengine_us_data/datasets/cps/long_term/calibration.py policyengine_us_data/datasets/cps/long_term/calibration_profiles.py policyengine_us_data/datasets/cps/long_term/projection_utils.py policyengine_us_data/datasets/cps/long_term/run_household_projection.py policyengine_us_data/datasets/cps/long_term/calibration_artifacts.py policyengine_us_data/tests/test_long_term_calibration_contract.py

I also started a one-year late-tail smoke run with --allow-validation-failures to confirm artifact writing on a failing late year; that was still in compute when I pushed this comment, so I’m not claiming a completed runner smoke yet.

Contributor Author

Pushed 90361837 with the follow-up fixes from the second-pass Claude review.

Included in this commit:

  • cached objective_gradient_hessian() inside solve_with_root() so fun(z) / jac(z) at the same point do not recompute the expensive state
  • moved objective_with_gradient() outside the bounded-entropy start loop
  • normalize_metadata() now backfills validation_passed / validation_issues for older sidecars by re-running validation against the named profile
  • densify_lp_solution() now reports whether densification actually changed the LP point, and the audit uses lp_minimax instead of lp_blend when lambda stayed at zero
  • manifest now carries a top-level contains_invalid_artifacts flag
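The fun/jac caching pattern in the first bullet is a common SciPy idiom: memoize the last evaluation point so separate fun(z) and jac(z) calls at the same z compute the expensive state once. A generic sketch (the actual solve_with_root() caching may be structured differently):

```python
import numpy as np
from scipy.optimize import minimize

def cached_value_and_grad(fun_and_grad):
    """Wrap a callable returning (value, gradient) so that fun(z) and jac(z)
    at the same point share a single evaluation."""
    cache = {"z": None, "val": None, "grad": None}
    def ensure(z):
        if cache["z"] is None or not np.array_equal(z, cache["z"]):
            cache["z"] = np.array(z, copy=True)
            cache["val"], cache["grad"] = fun_and_grad(z)
    def fun(z):
        ensure(z)
        return cache["val"]
    def jac(z):
        ensure(z)
        return cache["grad"]
    return fun, jac

# Usage with a toy quadratic objective:
fun, jac = cached_value_and_grad(lambda z: (float(z @ z), 2 * z))
result = minimize(fun, np.array([3.0, -2.0]), jac=jac, method="L-BFGS-B")
```

An alternative is passing a single callable with jac=True, which SciPy also supports; the cache approach is useful when the two entry points must stay separate.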

Verification:

  • uv run pytest policyengine_us_data/tests/test_long_term_calibration_contract.py -q
  • python3 -m py_compile policyengine_us_data/datasets/cps/long_term/calibration.py policyengine_us_data/datasets/cps/long_term/calibration_artifacts.py policyengine_us_data/tests/test_long_term_calibration_contract.py

The branch is now in a good state for another external review if we want one; the remaining risk looks primarily methodological rather than hidden harness bugs.

Contributor Author

Pushed 79858d6c adding policyengine_us_data/datasets/cps/long_term/assess_publishable_horizon.py, a one-off diagnostic that runs the current calibration contract on selected milestone years and emits the same quality/support metrics the runner uses.

I used it to check the publishable cutoff under the current ss-payroll-tob profile and trustees_2025_current_law source.

Boundary result:

  • 2073: exact, validation passes, ESS 90.35, top-10 share 22.48%
  • 2074: exact, validation passes, ESS 80.62, top-10 share 23.86%
  • 2075: first failing year; non-TOB targets are still essentially exact, but support gates fail (aggregate, ESS 33.23, top-10 share 47.69%)

Milestone diagnostics from the same tool:

  • 2080: aggregate, lp_minimax_exact, ESS 13.49, top-10 76.41%
  • 2085: aggregate, lp_blend, max constraint error 10.00%, ESS 13.65
  • 2090: aggregate, lp_blend, max constraint error 20.00%, ESS 13.80
  • 2095: hard failure under current window (23.72% > 20.00%)
  • 2100: aggregate, lp_blend, max constraint error 35.00%, ESS 11.41

So the current evidence points to a publishable microsim horizon through 2074, with 2075+ diagnostic-only under the current fixed-support repeated-cross-section methodology.

Contributor Author

Pushed ff099fd5 adding a more structural support-augmentation diagnostic: late-mixed-household-v1 in support_augmentation.py.

This profile appends synthetic mixed-age households by taking an older beneficiary household and adding a younger payroll-rich donor person as a separate subunit in the same household. That changes the household age/payroll direction, unlike the earlier age-shift and payroll-graft rules.

I ran:

uv run python policyengine_us_data/datasets/cps/long_term/evaluate_support_augmentation.py 2091 --profile ss-payroll --target-source trustees_2025_current_law --support-augmentation late-mixed-household-v1

Result at 2091:

  • base best-case nonnegative max error: 16.88648481756073%
  • mixed-household augmented best-case nonnegative max error: 16.886484695406168%
  • delta: -0.000000122%

So even a genuinely mixed-age household augmentation barely moves the frontier. That makes the current conclusion stronger: the late-tail issue is not just “missing older workers” or “missing older + younger co-resident households” in a simple sense. If 2100 microsim is a hard requirement, we likely need a much more radical synthetic-support generation path than support grafting onto the 2024 CPS donor geometry.

Contributor Author

Pushed 7a03b8b1 adding prototype_synthetic_2100_support.py, a standalone diagnostic that:

  1. builds a coarse actual 2024 tax-unit summary (head age, spouse age, dependents, payroll, SS, pension, dividends),
  2. generates a fully synthetic minimal-support candidate set from archetypes,
  3. scales the income grids into 2100 nominal space using the macro SS/payroll growth factors, and
  4. solves for the best nonnegative 2100 composition against age + SS + payroll.
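Step 4 can be sketched as a small nonnegative least-squares problem over archetype counts. Every archetype row and target number below is made up for illustration; the real script solves against the Trustees target bundle:

```python
import numpy as np
from scipy.optimize import nnls

# Toy nonnegative composition solve: per-household target contributions
# (columns = archetypes, rows = targets) against aggregate targets.
archetypes = ["prime_worker_single", "older_worker_single",
              "mixed_retiree_worker_couple"]
A = np.array([
    [0.0,   1.0,   1.0],    # persons aged 65+
    [0.0,   0.0,   25e3],   # OASDI benefits ($ per household)
    [60e3,  30e3,  45e3],   # taxable payroll ($ per household)
])
b = np.array([40e6, 500e9, 9e12])   # population-level targets (illustrative)

# Scale each row by its target so the residual is in relative-error units.
counts, residual = nnls(A / b[:, None], np.ones(len(b)))
shares = 100 * counts / counts.sum()
```

With the toy numbers above the system is exactly satisfiable with positive counts, so the residual is near zero; on the real 2100 targets the analogous solve bottoms out at the 7.66% best-case error reported below.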

Key result at 2100 with trustees_2025_current_law:

  • best-case max error on the minimal synthetic support: 7.66%
  • so this minimal archetype support is dramatically more feasible than the fixed CPS support, though still not exact

The synthetic composition it wants is informative:

  • prime_worker_single: 25.7%
  • prime_worker_family: 25.0%
  • mixed_retiree_worker_couple: 24.0%
  • older_worker_couple: 10.2%
  • older_worker_single: 7.8%
  • prime_worker_couple: 5.9%

Compared with the actual 2024 support count mix, the largest gaps are:

  • mixed_retiree_worker_couple: 24.0% synthetic vs 2.37% actual support count
  • older_worker_couple: 10.2% vs 2.54%
  • older_worker_single: 7.8% vs 1.93%

Notably, once we drop TOB from the hard target set and just target age + SS + payroll, this minimal synthetic solution uses zero pension/dividend income and still wants an average taxable-benefits proxy share of 85%. That reinforces the current view that the hard late-tail support need is concentrated in older-worker / mixed retiree-worker composition, not generic older households or asset-income-heavy retirees.

So this doesn’t solve 2100 microsim, but it does sharpen what the synthetic support generator would need to add.
