Presently, there is no specific legal regulation in place in India that governs or guides the development and implementation of AI in the healthcare sector.
The Indian Council of Medical Research (ICMR) has released the country’s first Ethical Guidelines for the Application of Artificial Intelligence in Biomedical Research and Healthcare[1] [“the AI Guidelines 2023”] to provide an ethical framework for the development of AI-based tools that will benefit all stakeholders. These guidelines apply to AI-based tools created for all biomedical and health research and applications involving human participants and/or their biological data.
Purpose[2]
- To ensure ethical conduct and address emerging ethical challenges in the field of Artificial Intelligence (AI) in biomedical research and healthcare.
- To provide a framework for ethical decision-making in medical AI during the development, deployment, and adoption of AI-based solutions.
- The guidelines are intended for all stakeholders involved in research on AI in biomedical research and healthcare, including creators, developers, researchers, clinicians, ethics committees, institutions, sponsors, and funding organizations.
- The guidelines include sections on ethical principles, guiding principles for stakeholders, an ethics review process, governance of AI use, and informed consent. They have been developed through extensive discussions with experts and ethicists. The guidelines are a living document and will be updated as ethics in AI evolve.
ETHICAL PRINCIPLES OF MEDICAL AI
The AI Guidelines 2023 describe 10 general principles that need to be applied to all biomedical and healthcare research involving human participants, their biological material, and data. These general ethical principles address most of the ethical aspects of any biomedical and health research.
Following are the Guiding Ethical Principles:
- Autonomy
- Data Privacy
- Accountability and Liability
- Trustworthiness
- Validity
- Non-Discrimination and Fairness
- Optimization and Data Quality
- Accessibility, Equity and Inclusiveness
- Risk Minimization and Safety
- Collaboration
WHO SHOULD BE RESPONSIBLE FOR ERRORS MADE BY AI?
Currently, there are no regulations in place to address the legal and ethical challenges involved in the implementation of AI in the healthcare sector, although such regulation is needed.
Guiding Principle on Accountability and Liability
AI technologies intended to be deployed in the health sector must be ready to undergo scrutiny by concerned authorities at any point in time. AI technologies must undergo regular internal and external audits to ensure their optimum functioning. These audit reports must be made publicly available.
Who is Responsible?
Based on the AI Guidelines, in case of AI malfunction or misapplication of AI in healthcare, responsibility is determined in the following manner:
- On the Healthcare Professional using the AI tool: The health professional who uses the technology will be assigned responsibility. Like other diagnostic and decision-making tools used in clinical practice, the responsibility for optimal utilization of the technology rests with the health professional using AI-based solutions to deliver healthcare.
- On the developer of the AI tool: If the harm is caused by a malfunction arising primarily from flaws in the tool’s functionality, then the designer, developer, or manufacturer may be held responsible.
- On the End-user or Organization: If the harm is caused by defective implementation of the technology, then the end-user or organization may be held accountable.
The guidelines set out the following points on liability in case of AI errors:
- Innovators in the field of AI may be unfamiliar with medical ethics, research regulations, and regulatory guidelines applicable to this area. It is therefore important to have representatives from the health sector at all stages of development and deployment of AI-based tools and technologies.
- The implementation and functioning of AI must be supervised at all times. The concept of ‘Human In The Loop’ (HITL) places human beings in a supervisory role and is especially relevant for healthcare purposes. This ensures individualized decision-making by health professionals, keeping the interest of the patient at the centre. It also helps in the optimal sharing of accountability within the team involved in the development and deployment of AI-based algorithms.
- It is critical to ensure that the entity(ies) seeking such responsibility have proper legal and technical credentials in the area of AI technologies for health.
- During the deployment of AI technology-based tools, the legal responsibility for their usage needs to be defined before they are adopted into clinical or public use.
- There should be an appropriate mechanism to identify the relative roles of stakeholders in any damage caused, extending from the manufacturer to the user, along with their legal liability. All stakeholders in the chain from conceptualization to implementation must collaborate and work together to minimize harm.
[1] Ethical Guidelines for Application of Artificial Intelligence in Biomedical Research and Healthcare 2023, https://main.icmr.nic.in/content/ethical-guidelines-application-artificial-intelligence-biomedical-research-and-healthcare.
[2] Id.