Excellent idea!
Most of what you have is strong, but I wonder about improving a few lines that are problematic or unrealistic in practice.
--“I will only use AI in patient care if its methods are fully understood.” We don't fully understand the methods of ML/LLM systems in the way we understand a rule‑based system. Guidance from the WHO and AMA instead emphasizes understanding purpose, data, limitations, and performance characteristics, not inner mechanics.
Consider: “if its purpose, performance, limitations, and risks are sufficiently characterized for my clinical use case.”
--“I take full responsibility for understanding the inputs, limitations, and risks of the AI systems used in my clinical practice.” This over‑individualizes responsibility and ignores shared accountability across vendors, institutions, regulators, and payors, which WHO and AMA explicitly flag. (https://www.who.int/publications/i/item/9789240029200)
Consider: “I will not use AI unless I have access to, and have reviewed, adequate information on inputs, limitations, and risks, and I will advocate for clear accountability among all stakeholders.”
--“I will only use AI that has been validated through reproducible, independent, real-world evidence…” As an aspiration this is excellent, but in many domains such evidence is not yet available and sometimes cannot be generated before limited deployment. WHO and AMA instead require clear evidence of benefit proportionate to risk and appropriate post‑deployment monitoring. (https://www.clinicaladvisor.com/news/ama-principles-for-ai-in-medicine/)
Consider adding: “...and I will ensure ongoing monitoring and re‑evaluation in my setting, withdrawing or limiting use when performance is inadequate or inequitable.”
--Missing: equity, access, and global justice
Current international guidance treats equity, inclusiveness, and avoidance of discrimination as central ethical pillars for AI in health, not just secondary “bias” tuning. (https://pmc.ncbi.nlm.nih.gov/articles/PMC11426405/)
Consider adding explicit commitments such as:
- To evaluate differential performance across populations and not deploy AI that worsens inequities or discriminates against protected groups. (https://www.who.int/publications/i/item/9789240029200)
- To consider access: not endorsing AI tools that effectively exclude or disadvantage patients by cost, language, disability, or digital literacy barriers (https://pmc.ncbi.nlm.nih.gov/articles/PMC11426405/)
Right now you touch on bias, but not structural inequity or distributive justice.
--Missing: system‑level governance and monitoring
The oath is heavily clinician‑centric and underplays the organizational and regulatory context that the AMA and WHO now see as essential: governance frameworks, monitoring, and clear accountability. (https://www.who.int/news/item/16-05-2023-who-calls-for-safe-and-ethical-ai-for-health)
Consider adding points about:
- Participating in institutional oversight (AI committees, safety boards, quality improvement) and insisting on lifecycle monitoring, auditing, and sunset or rollback mechanisms. (https://www.clinicaladvisor.com/news/ama-principles-for-ai-in-medicine/)
- Supporting mandatory reporting channels and learning systems for AI‑related harm, not only personal reporting. This shifts from “I will report failures” to “I will help build and use robust reporting and governance structures.”
--Missing: regulation, standards, and liability
There is no explicit commitment to:
- Use AI that complies with relevant regulations and standards. (https://www.who.int/news/item/16-05-2023-who-calls-for-safe-and-ethical-ai-for-health)
- Recognize how AI interacts with medico‑legal liability and documentation standards (the AMA explicitly addresses liability and warns that voluntary compliance is insufficient). (https://www.ama-assn.org/system/files/ama-ai-principles.pdf)
You could add “I will use only AI systems whose deployment complies with applicable laws, regulations, and professional standards, and I will document my use transparently in the clinical record.” (https://www.who.int/publications/i/item/9789240029200)
And “I will not defer legal or ethical responsibility to vendors, employers, or payors when AI influences my decisions.”
--Missing: documentation and explainability in context
You call for transparency and for not letting black‑box outputs override judgment, but there is nothing about **documenting** how AI influenced care, or how to act when explainability is limited. International guidance emphasizes explainability “appropriate to context” rather than full transparency. (https://pmc.ncbi.nlm.nih.gov/articles/PMC11426405/)
Possible additions:
- “When AI meaningfully influences clinical decisions, I will document its role, including relevant outputs, limitations, and my independent reasoning.”
- “I will avoid using AI in high‑stakes decisions if its behavior cannot be made sufficiently intelligible to me or to the patient for informed consent.” (https://www.who.int/publications/i/item/9789240029200)
--Missing: patient participation and co‑design
You do include “seek and incorporate diverse input…patients,” but patients are treated as stakeholders in design, not as **partners in decisions about using AI in their own care**.
Recent “digital Hippocratic oath” proposals emphasize validating the patient as an equal‑level partner and focusing on the person, not just their data. (https://www.jmir.org/2022/9/e39177/)
So you might add:
- “I will treat patients as partners in decisions about whether and how AI is used in their care, respecting their preferences and right to decline certain tools when feasible.” (https://www.aha.org/aha-center-health-innovation-market-scan/2022-02-22-it-time-develop-digital-hippocratic-oath)
- “I will remember I treat a human being, not their data or an algorithm’s suggestion.” (https://www.aha.org/aha-center-health-innovation-market-scan/2022-02-22-it-time-develop-digital-hippocratic-oath)
--Missing: data provenance, secondary use, and model training
You commit to safeguarding PHI, but not to:
- Addressing how patient data may be used to **train, fine‑tune, or improve** models.
- Ensuring consent and governance over secondary use, and avoiding non‑consensual harvesting of data for AI development, which WHO explicitly warns about. (https://www.who.int/news/item/16-05-2023-who-calls-for-safe-and-ethical-ai-for-health)
Consider adding:
- “I will not permit patient data to be used to develop or train AI systems without appropriate legal basis, safeguards, and, where required, informed consent.” (https://www.who.int/news/item/16-05-2023-who-calls-for-safe-and-ethical-ai-for-health)
- “I will advocate for data minimization, secure storage, and clear limits on data sharing with third parties, including technology vendors.” (https://pmc.ncbi.nlm.nih.gov/articles/PMC11426405/)
--Missing or thin: misinformation, misuse, and non‑clinical applications
WHO flags the risk of **health misinformation, manipulation, and non‑clinical misuse** (e.g., payor algorithms, administrative triage) that indirectly affect patients. (https://www.who.int/publications/i/item/9789240029200)
So you might add:
- “I will challenge the use of AI by insurers, employers, and institutions when it restricts access, discriminates, or conflicts with patients’ best interests.” (https://www.clinicaladvisor.com/news/ama-principles-for-ai-in-medicine/)
- “I will help patients critically interpret AI‑generated health content and will counter disinformation that could harm them.” (https://www.who.int/news/item/16-05-2023-who-calls-for-safe-and-ethical-ai-for-health)
Right now the focus is purely on clinician‑facing tools, not the wider AI ecosystem shaping care.
--Clarify “actively contribute to improving AI systems”
“I will actively contribute to improving AI systems used in the delivery of patient care” is laudable but vague and potentially burdensome.
You might clarify:
- Participation should be within competence and capacity (e.g., reporting issues, engaging in evaluation studies, providing feedback), not implying a duty to do technical co‑development. (https://pmc.ncbi.nlm.nih.gov/articles/PMC8826344/)
Otherwise, clinicians could feel obligated to engage in activities beyond their role or expertise.
This is great, Sergei, and so much more meaningful to a practicing clinician than the battery of working groups within CHAI and the policy-wonk gatherings across the country. Help us work out for ourselves how we can adopt and adapt our practice without a 150-page thesis.
Thank you, Stuart! 🙏
yes!!!! I agree 100%🔥🔥🔥🔥
Very interesting and a great, great idea. Will have some more detailed thoughts at some point, but it's interesting to replace “AI” with “medicine” and see how many of these tenets are true of medicine in general.
My other gut response is on “solemnly”. We have had enough pain with technology in the EHR that I'm personally trying to move to “enthusiastically” as a replacement for that word. We can start to see the light at the end of the tunnel, and it looks like a better way to practice medicine, not a train coming to run us over.
🔥
Thank you! 🙏