Portfolio

A selection of my most meaningful work—national public-health delivery at CDC, program building at LSU, independent client projects, and a few technical demos. Each card includes a short summary, and the larger ones expand to show how the work actually got done.

Centers for Disease Control and Prevention

CDC

PRAMS Data Weighting — National Delivery and Automation

Oct 2024 – Mar 2025

I co-led a nationwide effort to turn raw files from many jurisdictions into clear, consistent reports over a demanding six-month season. I organized the work, kept the schedule on track, resolved issues with partners, and built repeatable steps so results were reliable and delivered on time.

  • National scope: Supported a large network of sites across the country — states, major cities, and territories — representing the great majority of U.S. births. Coordinated intake, reviews, and releases so each site moved smoothly from start to finish.
  • Season planning and coordination: Set the calendar, intake checklist, and checkpoints. Ran standing touchpoints, unblocked issues quickly, and kept everyone aligned on what was needed and when.
  • Data intake and validation: Converted plain-text “birth” and “frame” files into structured datasets using a standard template. Verified counts and identifiers, checked required fields, and documented any quirks before analysis so there were no surprises later.
  • Sampling verification: Reviewed how each site drew its sample, compared expected and observed patterns, and flagged anything unusual for follow-up. Kept a clear record of findings and fixes so the next run started cleaner.
  • Fair results when some people do not respond: Compared who answered with everyone who was eligible, grouped people with similar chances of responding, and adjusted their influence so results reflected the whole population, not just those who replied.
  • Coverage checks: Compared important totals to official counts and looked for signs that certain groups might be missing. Built in stop-points so a run would pause if something needed attention.
  • One-click packaging: Automated the creation of consistent tables, figures, and notes for reviewers. Produced tidy, share-ready documents and a short summary of what was checked and what changed.
  • Lookup-driven, repeatable runs: Used a single “lookup” table to hold the site-specific settings. The system read that row and applied the right rules automatically, which reduced manual edits and made results consistent across sites and seasons.
  • Partner support and problem solving: When a file did not match the plan, worked directly with the site to identify the cause, agree on a correction, and record the exact steps taken so the fix did not have to be rediscovered.
  • Training and handoff: Led short, practical sessions on the process; shared simple checklists and examples; and kept a clear trail of decisions so teammates could reproduce the full run.
  • Outcomes: On-time releases across many jurisdictions, fewer back-and-forth cycles, faster reviews, and clearer paperwork for later audits — with a season playbook that the team can reuse and improve.
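The nonresponse adjustment described above (grouping people by response propensity and scaling respondent weights) can be sketched in a few lines. This is an illustrative Python sketch, not the actual PRAMS code; the class labels, field names, and weights are hypothetical.

```python
# Illustrative class-based nonresponse adjustment: within each adjustment
# class, inflate respondent weights so respondents represent all eligible
# cases in that class. All values below are made up for demonstration.
from collections import defaultdict

def adjust_weights(cases):
    """cases: dicts with 'class', 'responded', 'base_weight' keys."""
    eligible = defaultdict(float)
    responded = defaultdict(float)
    for c in cases:
        eligible[c["class"]] += c["base_weight"]
        if c["responded"]:
            responded[c["class"]] += c["base_weight"]
    adjusted = []
    for c in cases:
        if c["responded"]:
            factor = eligible[c["class"]] / responded[c["class"]]
            adjusted.append({**c, "final_weight": c["base_weight"] * factor})
    return adjusted

sample = [
    {"class": "A", "responded": True,  "base_weight": 10.0},
    {"class": "A", "responded": False, "base_weight": 10.0},
    {"class": "B", "responded": True,  "base_weight": 5.0},
]
adjusted = adjust_weights(sample)
# In class A the one respondent now carries the nonrespondent's share,
# so the total weight still reflects everyone who was eligible.
```

The key property is that total weight is preserved: the sum of final weights equals the sum of base weights, so estimates reflect the whole eligible population, not just respondents.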

Statistical Advisory Group (SAG) — Mentorship Program

Oct 2023 – Jun 2024 · Research Mentor

I mentored Centers for Disease Control and Prevention researchers through the Statistical Advisory Group’s mentorship program—helping participants refine study questions, prepare clean data, and apply appropriate statistical methods. Across several projects, I provided one-on-one guidance, clear examples, and step-by-step reasoning so mentees could move their analyses forward with confidence.

  • Regular mentorship: Met twice per month over the six-to-nine-month program, setting goals, reviewing progress, and helping mentees plan analysis steps that matched their study design.
  • Regression & modeling guidance: Supported logistic and generalized linear models; explained model selection (AIC, BIC, ROC/AUC), interaction terms, confounding control, and interpretation of coefficients and effect sizes.
  • Advanced methods: Taught when and how to use Lasso and Ridge regression, cross-validation, decision trees, and random forests—highlighting tradeoffs between prediction, interpretability, and sample size.
  • Latent & structural models: Guided structural equation modeling and latent-construct approaches for projects involving behavioral, mental-health, and disclosure outcomes.
  • Exploratory analysis: Helped researchers evaluate missingness, build correlation matrices, define variables, and create reproducible, well-documented data pipelines.
  • Interpretation & communication: Worked with mentees to explain results clearly—risk ratios, odds ratios, post-hoc tests, diagnostics—and craft manuscripts and presentations suitable for scientific review and clearance.
  • Impact: Mentees advanced their studies with stronger analysis plans, clearer statistical reasoning, cleaner pipelines, and ready-to-share results; several projects moved from early concept to polished products.
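As one concrete instance of the model-selection criteria mentioned above, AIC and BIC can be computed directly from a fitted model's log-likelihood. This is a generic sketch, not code from any mentee's project; the log-likelihood values are hypothetical.

```python
import math

def aic(log_lik: float, k: int) -> float:
    """Akaike information criterion: 2k - 2 ln(L)."""
    return 2 * k - 2 * log_lik

def bic(log_lik: float, k: int, n: int) -> float:
    """Bayesian information criterion: k ln(n) - 2 ln(L)."""
    return k * math.log(n) - 2 * log_lik

# Hypothetical fits: model B adds one parameter for a small likelihood gain.
model_a = aic(-120.0, k=3)  # 246.0
model_b = aic(-119.5, k=4)  # 247.0 -> the extra parameter is not worth it
```

Lower is better for both criteria; BIC's ln(n) penalty grows with sample size, so it favors smaller models than AIC on large datasets.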

PRAMS Training and Templates — Consistent Analyses at Scale

Jun 2023 – Apr 2025

Built simple tools and taught teams how to produce consistent tables faster. Created reusable templates, kept format libraries current, and ran a short training series so analysts could deliver clean outputs without reinventing the steps.

  • Reusable tools: Templates for common tables with options for how to handle missing values and direct export to spreadsheets.
  • Season enablement: Trainings on pre-weighting checks and the weighting workflow, plus step-by-step notes and checklists.
  • Maintenance: Kept site-specific variable lists and format catalogs up to date so runs were smooth and consistent.
  • Impact: Faster reviews, fewer errors, and easier handoffs across sites.

Partner Enablement and Quality Operations — Reviews, Fixes, and Field Support

Jun 2023 – Apr 2025

Hands-on problem solving for jurisdictions and internal teams: sampling-plan reviews, data-quality triage, codebook and format clean-ups, and on-site capacity building—so deliveries stayed reliable and repeatable.

  • Sampling change reviews: Verified proposed changes and documented decisions so the next season started cleaner.
  • Data-quality triage: Investigated miscoding or missing fields and wrote clear next steps for sites.
  • Accessible documentation: Modernized codebooks to be easier to read and use; standardized filenames and formats.
  • Team operations: Helped plan working sessions and retreats; kept cross-team priorities moving.
  • Safeguards in analysis: Protected small cells, labeled outputs clearly, and exported “ready-to-share” tables.

PRAMS Nonresponse Bias Review — Evaluation of Response-Rate Threshold

Jan 2024 – Apr 2025

I contributed to PRAMS’ review of whether a fixed survey response-rate threshold was still necessary. My role focused on reviewing analyses conducted by colleagues, participating in technical discussions, and providing feedback on how the evidence was interpreted.

  • Analytical review: Read and discussed the team’s nonresponse-bias analyses (led by colleagues), offering technical feedback and clarifying where conclusions were well-supported.
  • Interpretation support: Helped frame findings in clear, plain-language terms so program leadership could weigh risks and understand practical implications.
  • Internal guidance shift: Contributed feedback as the team moved toward focusing on transparent evaluation of potential bias rather than relying on a single response-rate cutoff.
  • Outcome: The revised approach was accepted through federal review, supporting a transition away from a fixed threshold and toward routine bias assessment.

Louisiana State University — Program Leadership

LSU

Virtual Math Research Circle — Year-Round, International Expansion

Oct 2025 – Present

Grew the program from a summer offering into a year-round effort with international partners. I set up the structure so we can add sessions without losing quality, including a memorandum of understanding with Zhejiang and smoother day-to-day operations. See an example session: 2026 projects.

  • What changed: partnerships and scheduling across time zones that work for families, schools, and mentors; cleaner operations (simple sign-ups, dependable communications, better instructor onboarding); clear roles, timelines, and checklists so the program scales without confusion.

National Outreach Engine — Lists, Campaigns, and Results

2025 – Present

Built and ran a steady outreach engine to find mentors and students. I grew a national contact list, wrote the emails, and tracked what worked. I also negotiated a bulk list of high-school contacts so we can reach the right people quickly. For program context, visit the VMRC home or FAQ & Contact.

  • How I run it: audience of more than 3,000 contacts across universities and schools (kept fresh and organized); plain-spoken messages tailored for graduate departments, faculty, and educators; measured each send (opens, clicks, replies) and adjusted timing and subject lines; negotiated a list of 100,000 high-school contacts with strong deliverability terms.
  • Sample results: recent campaigns show strong engagement (high opens and meaningful clicks); faster mentor recruiting and a larger, more diverse applicant pool.

VMRC Website Overhaul — Design & Code

Nov 2025

Rebuilt the Virtual Math Research Circle website by hand in clean, easy-to-read HTML. The new pages are faster, simpler to navigate, and written in plain language so families, mentors, and search engines can quickly understand what we do. Explore: VMRC home · Current projects · Past Sessions · Public Archives · FAQ & Contact.

  • Highlights: One clear “front door” with a single row of buttons; reusable pieces (hero, buttons, stat chips, cards, accordions) so updates are quick and consistent; mobile-friendly with strong keyboard focus and reduced-motion support.
  • Impact: Fewer “where do I find…?” emails and faster onboarding for new mentors and families; content is easier to maintain—no heavy framework or plugins to fight.

Applicant Data — Clear, Actionable Insights

Oct 2025

Turned years of scattered spreadsheets into a single, clean picture of who applies and who enrolls. I matched applicants to school and community information and built easy-to-read visuals so we can target outreach and hiring where it matters.

  • What I built: one combined data set with duplicates removed and common fields; simple charts that answer practical questions—where interest is growing, what support is needed, and which messages work.
  • Why it matters: better decisions about scholarships, mentor staffing, and where to advertise; less time cleaning data and more time supporting students and mentors.
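The merge step above hinges on deduplicating applicants across years of exports. A minimal Python sketch of the idea; the field names and records are hypothetical, and the real pipeline handles more matching rules than this.

```python
def dedupe(records, keys=("name", "email")):
    """Keep the first record seen for each normalized (name, email) pair."""
    seen = set()
    unique = []
    for rec in records:
        # Normalize case and whitespace so trivial variants match.
        fingerprint = tuple(str(rec.get(k, "")).strip().lower() for k in keys)
        if fingerprint not in seen:
            seen.add(fingerprint)
            unique.append(rec)
    return unique

applicants = [
    {"name": "Ada Lovelace", "email": "ada@example.org",  "year": 2023},
    {"name": "ada lovelace", "email": "ADA@example.org ", "year": 2024},
    {"name": "Alan Turing",  "email": "alan@example.org", "year": 2024},
]
unique = dedupe(applicants)
# The case/whitespace variants collapse into one applicant.
```

Keeping the first occurrence preserves the earliest application year, which matters when charting how interest grows over time.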

Mentor Application — Reusable, Fast, and Friendly

2025 – Present

Reworked our mentor application so we don’t rebuild it every year. It now has clear choices for term, year, and session; smart follow-ups only when needed; and a layout that works well on phones. View the live form: VMRC mentor application.

  • What changed: reusable sections for common items (consent, disclosures) that we edit once and reuse; guided questions to reduce back-and-forth and cut down on typos; cleaner exports for reviewers—names line up, fields match, and sorting is straightforward.
  • Impact: publish a new cycle in minutes, not hours; less manual cleanup and quicker hiring decisions.

Client Work

Independent Consulting

Automated Analyst Prototype — Data & Forecasting Architecture (Confidential Client)

2025 – Present · Co-Developer

I am co-developing the analytical foundation for an “automated analyst” tool that helps business leaders interpret trends, flag risks, and forecast outcomes with minimal effort. My role centers on the data model, metric logic, and early forecasting framework that will power the product’s first prototype.

  • Data & workflow design: Shaping the process for ingesting user uploads, merging external economic indicators, and creating a clean, analysis-ready dataset.
  • Indicator & metric framework: Defining the core signals (trend, risk, volatility, comparisons) and establishing consistent, reproducible logic behind each metric.
  • Forecasting architecture: Evaluating and selecting simple, interpretable forecasting approaches suitable for a first-generation prototype.
  • Power BI model groundwork: Outlining the schema, relationships, calendar structure, and measure logic that will support automated visuals and summaries.
  • Product scope & tiers: Working with the founder to shape the initial offering, use cases, and roadmap without over-promising beyond what early tooling can reliably support.
  • Documentation & clarity: Maintaining clear notes, plain-language definitions, and rationale behind design choices so future team members can extend the system safely.
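One example of the simple, interpretable forecasting approaches being evaluated is simple exponential smoothing. This sketch is purely illustrative and is not the client's actual model; the series values and smoothing parameter are made up.

```python
def smooth_forecast(series, alpha=0.3):
    """Simple exponential smoothing: the one-step-ahead forecast is a
    running blend of each new observation with the previous level."""
    level = series[0]
    for value in series[1:]:
        level = alpha * value + (1 - alpha) * level
    return level

# Hypothetical monthly metric; higher alpha reacts faster to recent data.
history = [100, 104, 101, 107, 110]
forecast = smooth_forecast(history, alpha=0.5)  # 107.125
```

The appeal for a first-generation prototype is that the forecast is easy to explain: it is just a weighted average that discounts older observations geometrically.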

Virginia Department of Health – Comprehensive Harm Reduction (CHR)

Oct 2024 · Statistical Consultant

Led end-to-end survey analysis for Virginia’s CHR program. I started with intensive data preparation (cleaning, recoding, and formatting REDCap/Excel exports), then built SAS pipelines that produced two deliverables: an overall totals report (counts/key performance indicators) and a demographic-subgroups report with clear charts and tables. The code has been modernized into a single, public program and run on a privacy-safe synthetic dataset.

  • Data prep & cleaning: standardized fields, converted checkbox/“checked vs. unchecked” values into analysis categories, and aligned site/region mappings.
  • Reporting in SAS: reliable routines for totals and subgroup breakouts; SAS/GRAPH visuals for the survey’s core questions and demographics; exports suitable for sharing.
  • Deliverables: two reports (overall totals + demographic subgroups), plus CSV/Excel files for handoff and archival.
  • Public, privacy-safe example: one modernized SAS program, a synthetic dataset, and a sample PDF so others can see the process without any PHI/PII.
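The checkbox recode in the first bullet, shown here as a language-neutral sketch in Python (the delivered code is SAS, and the exact field values and categories here are hypothetical):

```python
def recode_checkbox(value):
    """Map checkbox-style export values onto analysis categories."""
    mapping = {"checked": "Yes", "1": "Yes", "unchecked": "No", "0": "No"}
    return mapping.get(str(value).strip().lower(), "Missing")

# Label exports and raw exports both resolve to the same category.
row = {"naloxone_received": recode_checkbox("Checked")}
```

Routing unknown or blank values to an explicit "Missing" category keeps them visible in the totals instead of silently dropping them.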

Links: 🔗 View the repository · 📄 See the sample PDF · 📊 Download the synthetic CSV · 🧩 Open the SAS program

Licensing: MIT for code; CC BY 4.0 for reports/docs. Privacy: synthetic data only (no PHI/PII).

Selected Demos & Templates

Technical Demos

ATS-Ready AltaCV Résumé Template

Oct 2025 · LaTeX · GitHub Actions

Two-column look with single-column readability, plus a portal-friendly version and an automated build for both.

Academic Program Site Template

Oct 2025 · HTML/CSS · Vanilla JS

Accessible, lightweight microsite with responsive proposal cards, accordions, badges, and print-friendly styles.

Capital Allocation Monte Carlo

Oct 2025 · Python

Risk/return simulation using market data with clear charts and a board-ready summary for choosing allocations.
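The demo's core idea fits in a few lines: draw many random return scenarios for a candidate allocation and summarize the spread. A stdlib-only sketch with made-up return assumptions (the demo itself uses market data); the independence and normality assumptions are simplifications.

```python
import random
import statistics

def simulate_portfolio(weights, means, vols, n=10_000, seed=7):
    """Monte Carlo of one-period portfolio returns, assuming independent,
    normally distributed asset returns (a deliberate simplification)."""
    rng = random.Random(seed)
    draws = [
        sum(w * rng.gauss(m, v) for w, m, v in zip(weights, means, vols))
        for _ in range(n)
    ]
    return statistics.mean(draws), statistics.stdev(draws)

# Hypothetical two-asset mix: 60% equities, 40% bonds.
mean_ret, risk = simulate_portfolio(
    weights=[0.6, 0.4], means=[0.07, 0.03], vols=[0.15, 0.05]
)
```

Running the simulation over a grid of weights, then plotting mean against spread, is what turns the raw draws into a board-ready risk/return comparison.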

Exploratory Applicant Data Report

Sep 2025 · R · RMarkdown

Reproducible analysis with maps and trend lines, built from a cleaned, harmonized dataset.

What would you like to explore?

Email · Calendly · LinkedIn · GitHub