PRAMS Data Weighting — National Delivery and Automation
Oct 2024 – Mar 2025
I co-led a nationwide effort to turn raw files from many jurisdictions into clear, consistent reports over a demanding six-month season. I organized the work, kept the schedule on track, resolved issues with partners, and built repeatable steps so results were reliable and delivered on time.
- National scope: Supported a large network of sites across the country — states, major cities, and territories — representing the great majority of U.S. births. Coordinated intake, reviews, and releases so each site moved smoothly from start to finish.
- Season planning and coordination: Set the calendar, intake checklist, and checkpoints. Ran standing touchpoints, unblocked issues quickly, and kept everyone aligned on what was needed and when.
- Data intake and validation: Converted plain-text “birth” and “frame” files into structured datasets using a standard template (a minimal intake sketch follows this list). Verified counts and identifiers, checked required fields, and documented any quirks before analysis so there were no surprises later.
- Sampling verification: Reviewed how each site drew its sample, compared expected and observed patterns, and flagged anything unusual for follow-up. Kept a clear record of findings and fixes so the next run started cleaner.
- Fair results when some people do not respond: Compared who answered with everyone who was eligible, grouped people with similar chances of responding, and adjusted their influence so results reflected the whole population, not just those who replied (a weighting-class nonresponse adjustment, sketched after this list).
- Coverage checks: Compared important totals to official counts and looked for signs that certain groups might be missing. Built in stop-points so a run would pause if something needed attention (see the stop-point sketch after this list).
- One-click packaging: Automated the creation of consistent tables, figures, and notes for reviewers. Produced tidy, share-ready documents and a short summary of what was checked and what changed.
- Lookup-driven, repeatable runs: Used a single “lookup” table holding one row of settings per site. The system read the site’s row and applied the right rules automatically, which reduced manual edits and kept results consistent across sites and seasons (see the lookup sketch after this list).
- Partner support and problem solving: When a file did not match the plan, worked directly with the site to identify the cause, agree on a correction, and record the exact steps taken so the fix did not have to be rediscovered.
- Training and handoff: Led short, practical sessions on the process; shared simple checklists and examples; and kept a clear trail of decisions so teammates could reproduce the full run.
- Outcomes: On-time releases across many jurisdictions, fewer back-and-forth cycles, faster reviews, and clearer paperwork for later audits — with a season playbook that the team can reuse and improve.
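
The intake step above can be sketched roughly as follows, assuming a hypothetical fixed-width layout; the column positions, field names, and file name are illustrative stand-ins, not the actual PRAMS file specification:

```python
import pandas as pd

# Hypothetical layout for a plain-text "frame" file; the real
# layouts come from the standard template and differ by file.
COLSPECS = [(0, 9), (9, 17), (17, 19)]       # byte positions per field
NAMES = ["record_id", "birth_date", "site_code"]

frame = pd.read_fwf("frame_file.txt", colspecs=COLSPECS, names=NAMES,
                    dtype={"record_id": str, "site_code": str})

# Basic validation before analysis: counts, identifiers, required fields.
assert frame["record_id"].is_unique, "duplicate identifiers"
assert frame["site_code"].notna().all(), "missing required field"
print(f"{len(frame)} records read")          # compare to the expected count
```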
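The nonresponse bullet describes a weighting-class adjustment. A minimal sketch under assumed column names (base_weight, responded, and resp_class are hypothetical, not the production schema):

```python
import pandas as pd

def nonresponse_adjust(frame: pd.DataFrame) -> pd.DataFrame:
    """Within each class of people with similar response propensity,
    inflate respondent weights by (total weight) / (respondent weight)
    so respondents stand in for the whole class. Assumes classes are
    formed so that every class contains at least one respondent."""
    out = frame.copy()
    total_w = out.groupby("resp_class")["base_weight"].transform("sum")
    resp_w = (out["base_weight"].where(out["responded"], 0.0)
                 .groupby(out["resp_class"]).transform("sum"))
    out["nr_weight"] = out["base_weight"] * (total_w / resp_w)
    out.loc[~out["responded"], "nr_weight"] = 0.0  # nonrespondents drop out
    return out
```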
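For the coverage checks, the stop-point idea can be sketched as a simple tolerance test; the 5% tolerance and the group labels carried on the inputs are illustrative assumptions:

```python
import sys
import pandas as pd

def coverage_stop_point(weighted_totals: pd.Series,
                        official_counts: pd.Series,
                        tol: float = 0.05) -> None:
    """Pause the run if any group's weighted total drifts more than
    `tol` from the official count, so a person can take a look."""
    rel_diff = (weighted_totals - official_counts).abs() / official_counts
    flagged = rel_diff[rel_diff > tol]
    if not flagged.empty:
        sys.exit(f"Coverage check needs attention: {flagged.to_dict()}")
```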
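And the lookup-driven runs work roughly as sketched below; the file name and the settings columns (strata_var, oversample) are hypothetical stand-ins for the real site settings:

```python
import pandas as pd

# One row of settings per site; the single source of truth.
lookup = pd.read_csv("site_lookup.csv").set_index("site_id")

def run_site(site_id: str, data: pd.DataFrame) -> pd.DataFrame:
    cfg = lookup.loc[site_id]
    # Rules come from the lookup row, not hand-edited code, so the
    # same pipeline runs every site the same way.
    data = data[data[cfg["strata_var"]].notna()]
    if bool(cfg["oversample"]):
        pass  # site-specific oversample handling would branch here
    return data
```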
Statistical Advisory Group (SAG) — Mentorship Program
Oct 2023 – Jun 2024 · Research Mentor
I mentored Centers for Disease Control and Prevention researchers through the Statistical Advisory Group’s mentorship program—helping participants refine study questions, prepare clean data, and apply appropriate statistical methods. Across several projects, I provided one-on-one guidance, clear examples, and step-by-step reasoning so mentees could move their analyses forward with confidence.
- Regular mentorship: Met twice per month over the six-to-nine-month program, setting goals, reviewing progress, and helping mentees plan analysis steps that matched their study design.
- Regression & modeling guidance: Supported logistic and generalized linear models; explained model selection (AIC, BIC, ROC/AUC), interaction terms, confounding control, and interpretation of coefficients and effect sizes.
- Advanced methods: Taught when and how to use Lasso and Ridge regression, cross-validation, decision trees, and random forests—highlighting tradeoffs between prediction, interpretability, and sample size (a brief cross-validated comparison is sketched after this list).
- Latent & structural models: Guided structural equation modeling and latent-construct approaches for projects involving behavioral, mental-health, and disclosure outcomes.
- Exploratory analysis: Helped researchers evaluate missingness, build correlation matrices, define variables, and create reproducible, well-documented data pipelines.
- Interpretation & communication: Worked with mentees to explain results clearly—risk ratios, odds ratios, post-hoc tests, diagnostics—and craft manuscripts and presentations suitable for scientific review and clearance.
- Impact: Mentees advanced their studies with stronger analysis plans, clearer statistical reasoning, cleaner pipelines, and ready-to-share results; several projects moved from early concept to polished products.
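
As one flavor of the tradeoff discussions above, a cross-validated comparison of Lasso and Ridge on synthetic data; no mentee data is shown, and the penalty strengths are illustrative:

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
beta = np.array([2.0, -1.5] + [0.0] * 8)   # sparse truth: 2 real signals
y = X @ beta + rng.normal(scale=1.0, size=200)

for name, model in [("Lasso", Lasso(alpha=0.1)), ("Ridge", Ridge(alpha=1.0))]:
    mse = -cross_val_score(model, X, y, cv=5,
                           scoring="neg_mean_squared_error").mean()
    print(f"{name}: 5-fold CV MSE = {mse:.2f}")
```

When the true signal is sparse, Lasso's variable selection usually wins on cross-validated error; with many small effects, Ridge tends to do better. Building that intuition was the point of these sessions.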
PRAMS Training and Templates — Consistent Analyses at Scale
Jun 2023 – Apr 2025
Built simple tools and taught teams how to produce consistent tables faster. Created reusable templates, kept format libraries current, and ran a short training series so analysts could deliver clean outputs without reinventing the steps.
- Reusable tools: Built templates for common tables, with options for handling missing values and direct export to spreadsheets (a minimal template sketch follows this list).
- Season enablement: Trainings on pre-weighting checks and the weighting workflow, plus step-by-step notes and checklists.
- Maintenance: Kept site-specific variable lists and format catalogs up to date so runs were smooth and consistent.
- Impact: Faster reviews, fewer errors, and easier handoffs across sites.
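
A minimal sketch of what such a template can look like; the variable names and output file are hypothetical, and the real templates cover more table types:

```python
import pandas as pd

def freq_table(df: pd.DataFrame, var: str,
               include_missing: bool = True) -> pd.DataFrame:
    """Counts and percents for one variable, with an explicit
    option for how missing values are handled."""
    counts = df[var].value_counts(dropna=not include_missing)
    table = counts.rename("n").to_frame()
    table["pct"] = (100 * table["n"] / table["n"].sum()).round(1)
    return table

# Direct export to a spreadsheet, one variable per sheet.
df = pd.DataFrame({"var_a": ["y", "n", None, "y"],
                   "var_b": ["a", "a", "b", None]})
with pd.ExcelWriter("tables.xlsx") as xl:
    for v in ["var_a", "var_b"]:
        freq_table(df, v).to_excel(xl, sheet_name=v)
```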
Partner Enablement and Quality Operations — Reviews, Fixes, and Field Support
Jun 2023 – Apr 2025
Hands-on problem solving for jurisdictions and internal teams: sampling-plan reviews, data-quality triage, codebook and format clean-ups, and on-site capacity building—so deliveries stayed reliable and repeatable.
- Sampling change reviews: Verified proposed changes and documented decisions so the next season started cleaner.
- Data-quality triage: Investigated miscoding or missing fields and wrote clear next steps for sites.
- Accessible documentation: Modernized codebooks to be easier to read and use; standardized filenames and formats.
- Team operations: Helped plan working sessions and retreats; kept cross-team priorities moving.
- Safeguards in analysis: Protected small cells, labeled outputs clearly, and exported “ready-to-share” tables (a small-cell suppression sketch follows this list).
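
A minimal sketch of the small-cell safeguard; the threshold of 5 is illustrative, since actual reporting rules vary by program:

```python
import pandas as pd

MIN_CELL = 5  # illustrative threshold; real reporting rules vary

def suppress_small_cells(table: pd.DataFrame, n_col: str = "n") -> pd.DataFrame:
    """Blank every value in rows whose count falls below the threshold,
    so small groups cannot be identified in shared tables."""
    keep = table[n_col] >= MIN_CELL
    out = table.copy()
    for col in out.columns:
        out[col] = out[col].where(keep)  # suppressed cells export as blanks
    return out
```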
PRAMS Nonresponse Bias Review — Evaluation of Response-Rate Threshold
Jan 2024 – Apr 2025
I contributed to PRAMS’ review of whether a fixed survey response-rate threshold was still necessary. My role focused on reviewing analyses conducted by colleagues, participating in technical discussions, and providing feedback on how the evidence was interpreted.
- Analytical review: Read and discussed the team’s nonresponse-bias analyses (led by colleagues), offering technical feedback and clarifying where conclusions were well-supported.
- Interpretation support: Helped frame findings in clear, plain-language terms so program leadership could weigh risks and understand practical implications.
- Internal guidance shift: Contributed feedback as the team moved toward focusing on transparent evaluation of potential bias rather than relying on a single response-rate cutoff (a minimal example of such a check is sketched after this list).
- Outcome: The revised approach was accepted through federal review, supporting a transition away from a fixed threshold and toward routine bias assessment.
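
A minimal sketch of the kind of check that sits behind routine bias assessment; the column names here are hypothetical, and the actual analyses, led by colleagues, were considerably richer:

```python
import pandas as pd

def respondent_vs_frame(frame: pd.DataFrame, char: str) -> pd.DataFrame:
    """Compare the distribution of one characteristic among
    respondents with its distribution in the full eligible frame;
    large gaps, not the response rate itself, signal potential bias."""
    everyone = frame[char].value_counts(normalize=True)
    resp = frame.loc[frame["responded"], char].value_counts(normalize=True)
    return (100 * pd.DataFrame({"frame_pct": everyone,
                                "respondent_pct": resp})).round(1)
```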