On February 27, 2026, Federal Council of Medicine (CFM) Resolution No. 2,454/2026 was published in the Official Gazette of the Union (DOU), establishing rules on the medical use of Artificial Intelligence (AI).

Notwithstanding the questionable authority of the CFM to address matters involving the development, governance, and oversight of AI systems, the regulation sets forth guidelines and obligations for the responsible use of AI models, systems, and applications in the medical field, including tools already in use. The key points are outlined below.

Physicians' Rights and Duties in the Use of AI


CFM Resolution No. 2,454/2026 grants physicians the right to use AI tools as instruments to support medical practice, clinical decision-making, healthcare management, scientific research, and medical education, subject to the ethical and legal boundaries of the profession. These professionals must have access to clear and comprehensible information regarding the operation, purposes, limitations, risks, and degree of scientific evidence of the AI systems used.

Conversely, the regulation provides that a physician must:

(i) employ AI solely as a support tool, remaining ultimately responsible for clinical, diagnostic, therapeutic, and prognostic decisions;

(ii) exercise critical judgment over the information and recommendations provided by AI, assessing their consistency with the clinical picture and the available scientific evidence;

(iii) remain up to date on the capabilities, limitations, risks, and known biases of the AI systems used;

(iv) use only AI systems that comply with the ethical, technical, legal, and regulatory standards in force within the national territory;

(v) record the use of AI systems as support for medical decision-making in the patient's medical record;

(vi) ensure that the use of AI models, systems, and applications does not compromise the physician-patient relationship, attentive listening, empathy, confidentiality, or respect for human dignity.

Physicians are expressly prohibited from:

(i) delegating to AI the communication of diagnoses, prognoses, or therapeutic decisions without proper human mediation; physicians must also respect the patient's autonomy, including the informed refusal of the use of AI; and

(ii) using AI systems that do not guarantee minimum information security standards compatible with sensitive personal data.

The regulation reinforces that AI systems used in a care setting shall be designed and deployed so as to safeguard physician autonomy, and may not limit or replace the decision of the professional, who may accept or reject AI suggestions according to their own judgment.

Physician-Patient Relationship and Data Protection


In the clinical context, the regulation provides that any AI tool used as relevant support in care, diagnostic, or treatment activities shall be communicated and explained to patients in a clear and accessible manner, underscoring that such systems serve as support to the physician but do not replace human authority and final decision-making over care. Additionally, the physician must respect patient autonomy, including their right to refuse the use of AI tools.

With regard to data protection, the physician must safeguard the confidentiality, integrity, and security of health data used by AI systems, in compliance with applicable legislation, including but not limited to the General Personal Data Protection Law (LGPD). The sharing of patients' personal data with AI systems must be appropriate to the purposes disclosed to data subjects and occur only when strictly necessary.

Furthermore, the use of personal data for training, validation, or enhancement of AI algorithms must observe ethical principles and fundamental rights, including:

(i) the principles of beneficence, non-maleficence, autonomy, justice, and the centrality of human care;

(ii) the right to clear information about one's health condition and treatment options;

(iii) the right to obtain a second opinion;

(iv) the right not to be subjected to experimental interventions without specific consent; and

(v) the right to privacy and confidentiality of one's personal data.

AI Risk Classification


CFM Resolution No. 2,454/2026 provides that the use or development of AI models, systems, and applications by medical institutions (whether public or private) shall be subject to a preliminary assessment to identify the degree of algorithmic risk. This approach is similar to the proposal under discussion in the National Congress, within the scope of Bill No. 2,338/2023.

The classification defined by CFM is divided as follows:

Low Risk

Description: The potential for negative consequences is minimal; these systems typically operate in administrative or low-impact support functions.

Measures to be adopted: They shall be monitored and reviewed periodically (with no specific timeframe) to ensure they remain within the parameters of this category and that technological or contextual changes do not alter their risk classification.

Examples: appointment scheduling; hospital supply logistics management; health chatbots (provided they do not personalize clinical advice); tools for translating medical records or summarizing medical literature for internal use.

Medium Risk

Description: These systems involve some potential for adverse impact, which is, however, mitigated through supervision and intervention by the responsible physician, as well as control mechanisms that prevent a harmful outcome from materializing.

Measures to be adopted: They require regular monitoring (with no specific timeframe) and performance assessments at appropriate intervals.

Example: clinical decision support systems.

High Risk

Description: These systems entail a high potential for physical, psychological, or moral harm, including those that directly influence critical medical decisions or perform automated actions with significant clinical consequences.

Measures to be adopted: They require validation processes, regular audits (with no specific timeframe), and continuous monitoring, given the severity of potential consequences for fundamental rights, health, and life.

Example: systems that directly influence critical medical decisions or perform automated actions with clinical consequences involving patients in a vulnerable state or life-or-death situations.


Although the published text mentions "unacceptable" risk AI solutions, no definition is provided for this category.

Governance and Oversight


At the institutional level, the regulation prohibits any punishment of professionals who do not follow the guidance of an AI solution, provided they act in accordance with applicable technical and ethical standards. Healthcare facilities are likewise prohibited from imposing targets or policies that subordinate physicians' professional conduct to the outputs of AI systems.

Healthcare institutions that adopt proprietary AI systems shall establish an AI and Telemedicine Committee, under medical coordination and reporting to the medical director, whose function is to ensure the ethical use of such systems through internal governance processes capable of guaranteeing safety, quality, and ethics. The main mandatory measures include:

  • Transparency regarding the use and governance of AI, through disclosure of basic information about how AI solutions operate, their purposes, the types of data used, and the oversight mechanisms adopted;
  • Implementation of procedures for monitoring, preventing, and mitigating unlawful or unethical discriminatory biases. Upon detection of improper bias, the institution shall adopt corrective measures (e.g., algorithm adjustments, retraining with more balanced data, or use restriction) or discontinue use of the tool;
  • Prioritization of the development or procurement of interoperable AI solutions that can be integrated with different health information systems and potentially shared with other institutions, avoiding duplication of efforts and increasing collective efficiency. Integration with open APIs and industry protocols is recommended whenever possible;
  • Establishment of periodic review routines for the system throughout the product's lifecycle, encompassing model updates (retraining with new data, if applicable), bug fixes, and the controlled introduction of new functionalities, so as to ensure the safe evolution of these solutions and the mitigation of associated risks;
  • Access by oversight bodies and external entities to audit and monitoring reports and to configuration information of AI systems.

CFM Resolution No. 2,454/2026 also recommends that healthcare facilities prioritize tools that offer access to parameter settings, the possibility of additional training with local data, and auditable interfaces, over fully closed systems that do not allow adjustments necessary to the specificities of the clinical environment.

Finally, with respect to the conduct of research, studies, or pilot projects involving AI in medicine, such activities must be aligned with the ethical principles of research and care, including the Medical Code of Ethics in force and the standards of the National Research Ethics Authority (INAEP).

Our Life Sciences and Healthcare practice can provide further information on this topic.