Tech Lead Data Scientist, AI Evaluation & Monitoring
Job Summary
The Tech Lead Data Scientist, AI Evaluation & Monitoring is the principal technical expert for how Geisinger evaluates, monitors, and optimizes AI systems in production. This is a hands-on technical leadership role. The Tech Lead sets the technical direction for AI evaluation across a large and growing portfolio, provides technical leadership to a team of data analysts who execute evaluation work, and partners directly with AI program teams to raise the quality of how AI is validated, monitored, and improved over time. The role exists because AI at Geisinger has scaled past the point where oversight can be a document-review exercise. We need a technical leader who can guide program teams toward better-designed evaluations up front, instrument meaningful production monitoring, and continually advance the methods we use, from LLM-as-Judge frameworks to simulation-based testing to pragmatic experiment design that actually scales in healthcare.
Job Duties
What You Will Own:
- The technical evaluation methodology applied to AI programs across the enterprise, spanning pre-production validation, production monitoring, and ongoing optimization
- Hands-on guidance to program teams as they design validation studies, equity audits, monitoring plans, and escalation playbooks for their AI systems
- Instrumentation of production monitoring: translating program-specific failure modes into concrete, measurable metrics
- The evaluation toolkit: LLM-as-Judge frameworks, golden sets, simulation harnesses, experimental study designs, drift detection, subgroup fairness analysis
- Reusable evaluation playbooks and templates that let each new program move faster than the last
- Technical direction, design review, and mentorship for a team of data analysts supporting the evaluation function
What You Will Not Own:
- People management, HR administration, or formal performance evaluations for the analyst team (those sit with the analysts' line manager; the Tech Lead provides technical input)
- Program-level product strategy or go/no-go decisions
- Final clinical validation judgment on whether a given AI is safe for a given clinical use
- The software infrastructure behind the evaluation and monitoring tooling (built by the AI Platform team — the Tech Lead defines what's measured and how; Platform builds the backend)
Shape of the Work:
This is a role that lives at three altitudes at once:
With program teams (hands-on advisory). Partner with program owners early, before evaluations are designed, to shape study approach, sample size, stratification, gold-standard definition, and decision thresholds. Translate ambiguous failure modes into concrete, defensible evaluation designs. Coach teams through the technical work so that what arrives at governance review is rigorous, not performative.
With the evaluation toolkit (hands-on build). Design and operate the reusable assets that let evaluation scale: LLM-as-Judge rubrics and calibration methods, golden sets, simulation harnesses, A/B and shadow-mode study templates, subgroup fairness analyses, and drift monitors. Keep a pragmatic eye on what actually works in a clinical environment versus what works in a paper.
With the analyst team (technical leadership). Set technical direction, assign work across active evaluations, review analysis code and study designs, and raise the technical bar. Mentor analysts on methodology, statistical rigor, and the domain knowledge that makes evaluation credible. Grow them from execution into independent evaluation design.
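One concrete flavor of the calibration work described above: before trusting an LLM-as-Judge rubric, teams typically check how well judge verdicts agree with human labels on a golden set. Below is a minimal illustrative sketch using chance-corrected agreement (Cohen's kappa); the function name and label encoding are assumptions for illustration, not Geisinger tooling:

```python
from collections import Counter

def cohens_kappa(judge_labels, human_labels):
    """Chance-corrected agreement between an LLM judge and human raters.

    Labels can be any hashable values (e.g., 0/1 or "pass"/"fail").
    """
    assert len(judge_labels) == len(human_labels) and judge_labels
    n = len(judge_labels)
    # Observed agreement: fraction of items where judge and human match.
    p_o = sum(j == h for j, h in zip(judge_labels, human_labels)) / n
    # Expected agreement by chance, from each rater's label frequencies.
    jc, hc = Counter(judge_labels), Counter(human_labels)
    p_e = sum((jc[lab] / n) * (hc[lab] / n) for lab in set(jc) | set(hc))
    if p_e == 1:  # both raters degenerate on one label; agreement is trivial
        return 1.0
    return (p_o - p_e) / (1 - p_e)
```

A kappa near 1.0 means the judge tracks human raters well beyond chance; a value near 0 means the judge adds little over guessing from label frequencies, which is the signal to recalibrate the rubric before using it at scale.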
Methods You'll Use:
- Experimental and quasi-experimental design for production AI systems
- LLM and generative AI evaluation: golden sets, judge-based evaluation, hallucination and grounding checks
- Fairness and equity evaluation across patient and stakeholder subgroups
- Production monitoring design: drift detection, performance decay, adoption, and outcome metrics
- Causal inference methods appropriate to healthcare settings where full RCTs are often impractical
- Simulation and adversarial testing for pre-production stress testing
- Python, SQL, modern ML and evaluation tooling, cloud-native data platforms
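To give the drift-detection bullet above a concrete shape: one widely used production monitor is the Population Stability Index (PSI), which compares the current distribution of a model input or score against a baseline snapshot. This is an illustrative sketch only; the function name, binning scheme, and the commonly cited 0.2 alert threshold are assumptions, not a prescribed implementation:

```python
import math

def population_stability_index(expected, actual, n_bins=10, eps=1e-6):
    """PSI between a baseline sample ('expected') and a production sample
    ('actual'). Values above roughly 0.2 are often read as significant drift.
    """
    # Equal-width bins derived from the baseline's observed range.
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / n_bins for i in range(1, n_bins)]

    def bin_fractions(sample):
        counts = [0] * n_bins
        for x in sample:
            # Number of edges below x gives the bin index; values beyond the
            # baseline range fall into the first or last bin.
            counts[min(sum(e < x for e in edges), n_bins - 1)] += 1
        # Floor at eps so empty bins don't blow up the log term.
        return [max(c / len(sample), eps) for c in counts]

    e, a = bin_fractions(expected), bin_fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

In practice a monitor like this would run on a schedule against versioned baseline snapshots, with thresholds tuned per metric and per program rather than taken off the shelf.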
Work is typically performed in an office or remote environment. Accountable for satisfying all job specific obligations and complying with all organization policies and procedures. The specific statements in this profile are not intended to be all-inclusive. They represent typical elements considered necessary to successfully perform the job.
*Relevant experience may be a combination of related work experience and degree obtained (Master's Degree = 2 years; PhD = 4 years).
Position Details
Required Skills & Qualifications:
- 6+ years in data science, statistics, ML engineering, or applied quantitative research, with demonstrated experience as the senior technical voice on cross-functional projects
- Strong foundation in experimental design and causal inference — and judgment about which method fits which situation
- Hands-on experience designing and running model evaluation studies in real production settings
- Experience evaluating LLM or generative AI systems, or comparable experience evaluating complex ML systems where ground truth is messy
- Proven ability to translate ambiguous failure modes into concrete, defensible evaluation designs and monitoring metrics
- Strong fluency in Python and SQL; working comfort with modern ML tooling and cloud-native data environments
- Experience with fairness and equity evaluation for ML systems
- Track record of providing technical leadership and mentorship without formal people-management authority
- Clear written communication — the role produces evaluation memos and specifications that non-technical decision-makers rely on
- Healthcare, clinical, or regulated-industry experience strongly preferred
- MS or PhD in a quantitative field preferred; equivalent experience accepted
Education
Bachelor's Degree - Related Field of Study (Required)
Experience
Minimum of 6 years - Relevant experience* (Required)
Skills
Group Collaboration; Critical Thinking; Programming Languages; Data Analysis; Machine Learning Methods; Leadership; Clinical Databases; Communication; Data Presentations; Structured Query Language (SQL); Analyzing, processing and building AI/ML solutions from Clinical and Operational data sources, such as Epic Clarity, HL7, DICOM, or ECG data
About Geisinger
Founded more than 100 years ago by Abigail Geisinger, the system now includes ten hospital campuses, a 550,000-member health plan, two research centers and the Geisinger Commonwealth School of Medicine. With nearly 24,000 employees and more than 1,700 employed physicians, Geisinger boosts its hometown economies in Pennsylvania by billions of dollars annually. Learn more at geisinger.org or connect with us on Facebook, Instagram, LinkedIn and Twitter.
Equal Opportunity Employer
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, pregnancy, genetic information, disability, status as a protected veteran, or any other protected category under applicable federal, state, and local laws.
Our Vision & Values
Everything we do is about making better health easier for our patients, our members, our students, our Geisinger family and our communities.
KINDNESS: We strive to treat everyone as we would hope to be treated ourselves.
EXCELLENCE: We treasure colleagues who humbly strive for excellence.
LEARNING: We share our knowledge with the best and brightest to better prepare the caregivers for tomorrow.
INNOVATION: We constantly seek new and better ways to care for our patients, our members, our community, and the nation.
SAFETY: We provide a safe environment for our patients and members and the Geisinger family.
Our Benefits
We offer healthcare benefits for full time and part time positions from day one, including vision, dental and prescription coverage.
A place where you can lead a healthy lifestyle and follow your dreams.
Only at Geisinger.
- Best employer for healthy lifestyles – National Business Group
- Access to 121 state parks
