Hippocratic Oath in an Age of AI
Written by a seasoned, board-certified clinician and an experienced healthcare AI expert.
Believe it or not, Dr. Eduardo Vadia and I have been working on this modern version of the Hippocratic Oath for months, painstakingly debating every statement and every word, because we felt it was important to get it right and to offer something the medical community could use as a real standard. We wanted to keep it true to the original Oath, keep politics out of it, and bring it into the new reality of AI and other technological innovations in medicine.
Whether this proposed Oath becomes the new standard for doctors is up to the medical community to decide. But we tried our absolute best.
Here are our brief bios.
Eduardo Vadia
Dr. Eduardo Vadia is a physician who is, or has been, board certified in Internal Medicine, Pulmonary Medicine, and Critical Care Medicine. He completed his residency, chief residency, and pulmonary and critical care fellowship at UT Southwestern Medical Center in Dallas, TX, where he later served as a faculty physician with a focus on medical education in the Parkland Hospital ICU. He is the co-founder of Access Physicians, a multi-specialty inpatient telemedicine physician group that scaled to over 200 hospital deployments across 30 states before being acquired by SOC Telemed in 2021. Dr. Vadia is also the co-founder of ArithmoAI, a healthcare AI company that secured seed funding in January 2026. He also serves as Vice Chairman of the Industry Advisory Board for the University of Miami Institute for Data Science and Computing.
Sergei Polevikov
Sergei Polevikov is a Ph.D.-trained data scientist, AI entrepreneur, economist, and quant investor with 30+ academic manuscripts. As the author of the popular Substack newsletter AI Health Uncut, with nearly 10,000 subscribers, he delivers brutally honest insights drawn from his experience as a healthcare AI startup founder at WellAI and Chart2Chart, and as a private equity investor. His work is driven by a relentless pursuit of truth in healthcare. That mission has pushed him beyond building products and into investigating healthcare fraud. Today, Sergei focuses on health AI education and truth-telling. He also co-hosts the podcast Digital Health Inside Out with Alex Koshykov, Ben Schwartz, and Rik Renard.
Keeping AI Human. A Modern Hippocratic Oath.
As physicians and scientists, we stand at a transformative moment in healthcare. Artificial intelligence promises to enhance diagnosis, personalize treatment, and improve clinical outcomes. We should embrace this opportunity for our patients. However, adoption must come with rigorous caution, explicit responsibility, and an unwavering commitment to patient welfare. This is our moment to shape the future of medicine or risk ceding it to external forces.
The oath that follows sets forth a framework to ensure AI is safe, effective, and worthy of clinical trust. AI is not a panacea. As systems evolve toward greater autonomy, they must be governed with the same scientific rigor required of any medical intervention. There is no hall pass for AI. It must earn its place in clinical practice through transparent methods, reproducible evidence, and independent real-world validation. Physicians must lead this process of scrutiny. To reject AI outright risks obsolescence. To embrace it blindly risks patient harm. The responsible path is to integrate AI carefully, critically, and always under clear human authority.
The digitization of healthcare has already strained the patient-physician relationship. AI could fracture this sacred bond if it sterilizes our encounters or reduces patients to data points. Used wisely, it can do the opposite. It can return time to the bedside, deepen trust, and reinforce the compassionate ethos at the heart of medicine.
This moment offers an opportunity to reset the institutional compass, ensuring that clinical excellence, not administrative priorities, is the guiding principle of healthcare leadership. Administrative forces have sidelined physicians in decisions that shape care delivery. The result has been systems optimized for higher fee-for-service volume and institutional protection, not for outcomes and quality. The emergence of AI allows us to re-center care around the standards and judgment that define our profession. By participating directly in the development and validation of healthcare AI, we can build systems that reflect clinical expertise, patient-centered values, and ethical integrity. This will ensure that technology serves medicine, not the other way around. To remain passive is to surrender to actors who do not carry our duty to care.
We must resist two errors. The first is rejecting AI entirely and forfeiting its potential. The second is adopting it uncritically. Both abdicate responsibility and risk permanent erosion of the physician’s role to the detriment of patients. Patient care demands accountability, and our legal and ethical codes place that accountability on physicians. If we cede it, we invite external control over the delivery of care. By embracing responsibility and the accountability that comes with it, we keep AI a tool under human control and in the service of patients.
We must also guard against de-skilling. Over-reliance on AI can dull clinical judgment and reasoning. Physicians must keep their skills sharp so that AI augments rather than replaces expertise. Medicine demands lifelong learning. So does the safe and ethical use of AI.
Let this oath be a rallying call. Govern AI with wisdom. Lead its development with purpose. Uphold our sacred duty to patients with unflinching responsibility. The future of medicine is ours to shape. Let us seize it.
The Modern Hippocratic Oath for Artificial Intelligence in Medicine
AS A CLINICIAN USING AI IN PATIENT CARE:
I solemnly pledge to use artificial intelligence (AI) as a tool under my authority, ensuring that my clinical decision making is autonomous and distinct from the AI systems being employed.
I will only use AI in patient care if its methods are fully understood. I take full responsibility for understanding the inputs, limitations, and risks of the AI systems used in my clinical practice.
I will only use AI that has been validated through reproducible, independent, real-world evidence and reject tools that are untested, faulty, or premature.
I will not let opaque systems or “black box” outputs override my independent appraisal of data, my clinical acumen, or my experience.
I will address biases in AI systems without partisan framing, prioritizing truth, accuracy, and trust.
I will respect my patients’ autonomy, privacy, and dignity in all my uses of AI.
I will safeguard personal health information in the development and use of AI, including sensitive genetic or genomic data.
I will ensure appropriate transparency and informed consent for AI use when indicated.
I will actively contribute to improving AI systems used in the delivery of patient care.
I will demand transparency and accountability from those who build AI.
I will work with AI developers while insisting their work aligns with my duty to patients and does not obscure how outputs are generated.
I will share clinical expertise to design, test, and validate AI tools for safety and effectiveness.
I will seek and incorporate diverse input from engineers, ethicists, patients, and other stakeholders to help shape AI systems that prioritize patient needs and values.
I will report failures and near-misses to improve AI safety, just as I do for other clinical technologies.
I will openly disclose and manage conflicts of interest in my use of AI.
I will oppose commercial, financial, political, or other external influences that compromise the integrity of AI systems, even under pressure.
I will reject AI tools that risk harm, erode dignity, or shift care away from being patient-centered.
I will not use AI in ways that violate human rights, civil liberties, or basic ethical principles.
I will ensure that AI enhances and does not degrade the patient-physician relationship.
I will not rely on AI blindly or adopt it uncritically.
I will maintain and grow my clinical skills so that AI augments rather than replaces my expertise.
I will not let the existence of AI excuse any decline in my preparedness for patient care.
I MAKE THESE PROMISES AS A CLINICIAN WHO HOLDS ULTIMATE AUTHORITY OVER AI.
I DO SO FREELY, SOLEMNLY, AND UPON MY HONOR.
© 2026. All rights reserved. This document contains proprietary and confidential information that is the intellectual property of Sergei Polevikov and Eduardo Vadia. Unauthorized copying, disclosure, or distribution of any portion of this document, in any form or by any means, without the prior written consent of the owners is strictly prohibited.

Excellent idea!
Most of what you have is strong, but a few lines are problematic or unrealistic in practice and could be improved.
--“I will only use AI in patient care if its methods are fully understood.” We don't fully understand the methods of ML/LLM systems in the way we understand a rule‑based system. Instead, WHO and AMA emphasize understanding purpose, data, limitations, and performance characteristics, not inner mechanics.
Consider: “if its purpose, performance, limitations, and risks are sufficiently characterized for my clinical use case.”
--“I take full responsibility for understanding the inputs, limitations, and risks of the AI systems used in my clinical practice.” This over‑individualizes responsibility and ignores shared accountability across vendors, institutions, regulators, and payors, which WHO and AMA explicitly flag. (https://www.who.int/publications/i/item/9789240029200).
Consider: “I will not use AI unless I have access to, and have reviewed, adequate information on inputs, limitations, and risks, and I will advocate for clear accountability among all stakeholders.”
-- “I will only use AI that has been validated through reproducible, independent, real-world evidence…” As an aspiration this is excellent, but in many domains such evidence is not yet available and sometimes cannot be generated before limited deployment. WHO and AMA instead require clear evidence of benefit proportionate to risk and appropriate post‑deployment monitoring. (https://www.clinicaladvisor.com/news/ama-principles-for-ai-in-medicine/)
Consider adding: “...and I will ensure ongoing monitoring and re‑evaluation in my setting, withdrawing or limiting use when performance is inadequate or inequitable.”
--Missing: equity, access, and global justice
Current international guidance treats equity, inclusiveness, and avoidance of discrimination as central ethical pillars for AI in health, not just secondary “bias” tuning. (https://pmc.ncbi.nlm.nih.gov/articles/PMC11426405/)
Consider adding explicit commitments such as:
- To evaluate differential performance across populations and not deploy AI that worsens inequities or discriminates against protected groups. (https://www.who.int/publications/i/item/9789240029200)
- To consider access: not endorsing AI tools that effectively exclude or disadvantage patients by cost, language, disability, or digital literacy barriers. (https://pmc.ncbi.nlm.nih.gov/articles/PMC11426405/)
Right now you touch on bias, but not structural inequity or distributive justice.
--Missing: system‑level governance and monitoring
The oath is heavily clinician‑centric and underplays the organizational and regulatory context which AMA and WHO now see as essential: governance frameworks, monitoring, and clear accountability. (https://www.who.int/news/item/16-05-2023-who-calls-for-safe-and-ethical-ai-for-health)
Consider adding points about:
- Participating in institutional oversight (AI committees, safety boards, quality improvement) and insisting on lifecycle monitoring, auditing, and sunset or rollback mechanisms. (https://www.clinicaladvisor.com/news/ama-principles-for-ai-in-medicine/)
- Supporting mandatory reporting channels and learning systems for AI‑related harm, not only personal reporting. This shifts from “I will report failures” to “I will help build and use robust reporting and governance structures.”
--Missing: regulation, standards, and liability
There is no explicit commitment to:
- Use AI that complies with relevant regulations and standards. (https://www.who.int/news/item/16-05-2023-who-calls-for-safe-and-ethical-ai-for-health)
- Recognize how AI interacts with medico‑legal liability and documentation standards (the AMA explicitly addresses liability and warns that voluntary compliance is insufficient). (https://www.ama-assn.org/system/files/ama-ai-principles.pdf)
You could add “I will use only AI systems whose deployment complies with applicable laws, regulations, and professional standards, and I will document my use transparently in the clinical record.” (https://www.who.int/publications/i/item/9789240029200)
And “I will not defer legal or ethical responsibility to vendors, employers, or payors when AI influences my decisions.”
--Missing: documentation and explainability in context
You call for transparency and for not letting black‑box outputs override judgment, but there is nothing about **documenting** how AI influenced care, or how to act when explainability is limited. International guidance emphasizes explainability “appropriate to context” rather than full transparency. (https://pmc.ncbi.nlm.nih.gov/articles/PMC11426405/)
Possible additions:
- “When AI meaningfully influences clinical decisions, I will document its role, including relevant outputs, limitations, and my independent reasoning.”
- “I will avoid using AI in high‑stakes decisions if its behavior cannot be made sufficiently intelligible to me or to the patient for informed consent.” (https://www.who.int/publications/i/item/9789240029200)
--Missing: patient participation and co‑design
You do include “seek and incorporate diverse input…patients,” but patients are treated as stakeholders in design, not as **partners in decisions about using AI in their own care**.
Recent “digital Hippocratic oath” proposals emphasize validating the patient as an equal‑level partner and focusing on the person, not just their data. (https://www.jmir.org/2022/9/e39177/)
So you might add:
- “I will treat patients as partners in decisions about whether and how AI is used in their care, respecting their preferences and right to decline certain tools when feasible.” (https://www.aha.org/aha-center-health-innovation-market-scan/2022-02-22-it-time-develop-digital-hippocratic-oath)
- “I will remember I treat a human being, not their data or an algorithm’s suggestion.” (https://www.aha.org/aha-center-health-innovation-market-scan/2022-02-22-it-time-develop-digital-hippocratic-oath)
--Missing: data provenance, secondary use, and model training
You commit to safeguarding PHI, but say nothing about:
- How patient data may be used to **train, fine‑tune, or improve** models.
- Ensuring consent and governance over secondary use, and avoiding non‑consensual harvesting of data for AI development, which WHO explicitly warns about. (https://www.who.int/news/item/16-05-2023-who-calls-for-safe-and-ethical-ai-for-health)
Consider adding:
- “I will not permit patient data to be used to develop or train AI systems without appropriate legal basis, safeguards, and, where required, informed consent.” (https://www.who.int/news/item/16-05-2023-who-calls-for-safe-and-ethical-ai-for-health)
- “I will advocate for data minimization, secure storage, and clear limits on data sharing with third parties, including technology vendors.” (https://pmc.ncbi.nlm.nih.gov/articles/PMC11426405/)
--Missing or thin: misinformation, misuse, and non‑clinical applications
WHO flags the risk of **health misinformation, manipulation, and non‑clinical misuse** (e.g., payor algorithms, administrative triage) that indirectly affect patients. (https://www.who.int/publications/i/item/9789240029200)
So you might add:
- “I will challenge the use of AI by insurers, employers, and institutions when it restricts access, discriminates, or conflicts with patients’ best interests.” (https://www.clinicaladvisor.com/news/ama-principles-for-ai-in-medicine/)
- “I will help patients critically interpret AI‑generated health content and will counter disinformation that could harm them.” (https://www.who.int/news/item/16-05-2023-who-calls-for-safe-and-ethical-ai-for-health)
Right now the focus is purely on clinician‑facing tools, not the wider AI ecosystem shaping care.
--Clarify “actively contribute to improving AI systems”
“I will actively contribute to improving AI systems used in the delivery of patient care” is laudable but vague and potentially burdensome.
You might clarify:
- Participation should be within competence and capacity (e.g., reporting issues, engaging in evaluation studies, providing feedback), not a duty to do technical co‑development. (https://pmc.ncbi.nlm.nih.gov/articles/PMC8826344/)
Otherwise, clinicians could feel obligated to engage in activities beyond their role or expertise.
This is great, Sergei, and so much more meaningful to a practicing clinician than the battery of working groups within CHAI and the policy wonk gatherings across the country. Help us work out for ourselves how we can adopt and adapt our practice without a 150-page thesis.