The spread of artificial intelligence (AI) in the workplace has brought undeniable efficiency gains. At the same time, it has expanded regulatory, reputational, and operational risks, especially when personal data and strategic company information circulate outside environments approved by the organization. Bill No. 2,338/23 (PL 2,338/23), the legislative initiative to regulate these technologies, currently under review in the Chamber of Deputies, further heightens this risk.
In this context, the use of AI must follow certain parameters that ensure both operational efficiency and corporate security.
How do risks materialize and what are their legal consequences?
The improper use of AI tools outside the corporate ecosystem can create a range of risks. These risks begin with how data is entered into the platforms and unfold into potential contractual, legal, and regulatory violations, as shown below:
- Confidentiality, secrecy, and security incidents. Inserting customer data, strategic documents, statements, or internal information into unapproved AI tools exposes trade secrets and, in some cases, personal data. Many of these platforms operate under terms of use that do not reflect the company's security standards and may allow retention, model training, or other secondary uses.
Such behavior may result in breaches of contractual confidentiality obligations, triggering notifications to clients or suppliers, and even liability for damages.
In addition, the submission of sensitive information may constitute a security incident, exposing the company to the obligation to notify the National Data Protection Authority (ANPD). Since some AI tools store prompts, inserting CPF numbers (Brazilian individual taxpayer IDs), contracts, or credentials creates a risk of retention and undisclosed data processing, in violation of internal policies, contractual confidentiality, and the transparency principle established under the General Data Protection Law (LGPD).
- International Data Transfer. Outside the corporate environment, data entered into AI tools may be processed in foreign data centers. This scenario may constitute an international data transfer, subject to the specific requirements of the LGPD and Resolution CD/ANPD 19/24. When employees use tools without prior assessment, the company loses visibility over international data flows, potentially resulting in regulatory non-compliance.
- Metadata, inferences, and exposure of the corporate domain. Registering corporate email addresses in external services expands the organization's attack surface. These services may collect metadata and infer information such as name, job title, department, and usage patterns, which, when combined, facilitate targeted phishing or social engineering campaigns. Seemingly harmless data empowers attackers to create convincing messages, increasing the risk of credential compromise and subsequent leaks.
- Intellectual property and compliance. Sending source code, templates, spreadsheets, or material expressing the company’s know-how to public AI tools compromises competitive advantage and may lead to intellectual property disputes. Depending on the platform’s terms, the submitted content can be used to train models or be reproduced unpredictably, resulting in loss of ownership or breach of confidentiality.
- Client relationship and reputation. The use of unauthorized tools undermines the trust of clients and partners, especially in data-sensitive industries. Incidents arising from this use can lead to extraordinary audits, contract terminations, and commercial barriers. Public disclosure of incidents directly affects the company’s reputation and perceived level of data protection maturity.
From a legal standpoint, the misuse of AI has a direct impact on internal and external obligations. Failure to comply with corporate policies, confidentiality agreements, and security standards may justify disciplinary measures and employee liability.
From a regulatory perspective, the company must demonstrate ongoing compliance with the LGPD, including effective technical and organizational measures. When employees use unauthorized tools, the organization has greater difficulty demonstrating governance and good practices, and may face inquiries from the ANPD or other authorities.
Finally, trust and corporate reputation are relevant legal assets: leaks or incidents have rapid repercussions, affecting commercial relationships and institutional credibility.
Bill No. 2,338/23 and the regulation of AI agents
In addition to the risks already present under current legislation, Bill No. 2,338/23, which addresses AI regulation, is moving toward approval.
According to the bill, already approved by the Federal Senate and under discussion in the Chamber of Deputies, AI agents are defined as the developers, distributors, and deployers of this technology. As a result, companies that use AI in their daily activities will need to comply with certain legal requirements, such as:[1]
- Refraining from excessive-risk uses: once the bill is approved, certain uses of AI will be prohibited, such as those that induce behavior harmful to health or safety, analyze personality traits to assess the risk of criminal behavior, or perform remote biometric identification, among others.
- Ensuring the safety of AI systems: the bill imposes a cross-cutting duty to provide safety and support to individuals or groups affected by the use of AI. In other words, the responsibility for the use of AI does not lie solely with its developer, but rather with the entire chain involved.
- Conducting a preliminary risk assessment: although not mandatory, a preliminary risk assessment may be essential to demonstrate due diligence and may serve as evidence of compliance. Sectoral authorities may request access to this assessment.
- Adhering to best-practice programs and codes: while also not mandatory, it is highly advisable to adopt best-practice programs and codes to demonstrate good faith or even mitigate potential sanctions. This may include the adoption of internal policies, supervision, auditing, implementation of a whistleblowing channel, incident-response plans, and procedures for responding to requests from affected individuals or groups.
Practical and legal recommendations for the safe use of AI tools
- Policies and governance. It is recommended to adopt corporate policies with guidelines for the use of AI tools, defining permitted use cases, prohibited data, approved tools, risk assessment criteria, approval process, and roles and responsibilities. Deliberations regarding sensitive cases should be documented. This type of documentation can be crucial when demonstrating the company's due diligence to regulatory authorities.
- Approved tools. Working within corporate environments (with VPN), using only approved AI systems, blocking the submission of sensitive data to public AI tools, and maintaining active password vaults and multi-factor authentication (MFA) are strongly recommended measures. Additionally, AI-related risks can be mitigated by anonymizing personal data or confidential company information before submission, as illustrated in the sketch after this list.
- Contracts and third-party chain. Contracts with providers and partners should be adjusted to permit or restrict the use of AI, as well as establish rules for confidentiality, retention limits, incident notification, and due diligence. As for contracts with AI platforms themselves, close attention should be paid to provisions on data retention, secondary uses, storage location, and security safeguards.
- Training and culture. It is essential to train employees on AI-related risks, the use of personal accounts within the corporate digital environment, information classification, official channels, and incident response. Employees are responsible for applying the rules and, in doing so, for building a culture of privacy and data protection.
- Incident Response Plan. Updating or developing an incident response plan for AI-related scenarios, covering identification, containment, preservation of evidence, assessment of risks to data subjects, and notification criteria, is a good strategy to standardize the company's response practices.
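To make the anonymization recommendation above more concrete, the sketch below illustrates one way a pre-submission filter might work. It is a minimal Python example, assuming simple regular-expression matching; the pattern set and the redact function are hypothetical, and a production control would rely on vetted data loss prevention (DLP) tooling with patterns reviewed by the security and legal teams.

```python
import re

# Illustrative patterns only; a real deployment needs vetted DLP rules.
PATTERNS = {
    # Brazilian CPF in the formatted (000.000.000-00) or 11-digit form.
    "CPF": re.compile(r"\b\d{3}\.\d{3}\.\d{3}-\d{2}\b|\b\d{11}\b"),
    # Simplified email-address pattern.
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    # Naive credential detector: "password=...", "senha: ...", "token=...".
    "CREDENTIAL": re.compile(r"(?i)\b(?:password|senha|token)\s*[:=]\s*\S+"),
}

def redact(text: str) -> tuple[str, list[str]]:
    """Mask sensitive matches and report which categories were found."""
    findings = []
    for label, pattern in PATTERNS.items():
        if pattern.search(text):
            findings.append(label)
            text = pattern.sub(f"[{label} REDACTED]", text)
    return text, findings

prompt = "Cliente João, CPF 123.456.789-09, e-mail joao@empresa.com.br, senha: x9k2"
clean, found = redact(prompt)
if found:
    print("Blocked categories:", found)  # log as evidence of governance controls
print(clean)
```

In practice, a filter of this kind would sit between users and any external AI endpoint, and its log of blocked categories could help document the due diligence efforts discussed in this article.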
Conclusion
The use of AI can accelerate and improve results, but it requires legal and organizational discipline through clear policies, approved tools, control mechanisms, adjusted contracts, continuous training, and readiness to deal with potential incidents.
Everyday choices, such as reusing passwords, using unauthorized AI tools, or carelessly submitting corporate documents to these systems, directly affect a company's AI governance. The best defense is to combine responsible behavior, governance, and documented compliance efforts.
Beyond the risks that already exist under current legislation, the topic should be on companies' radar, given the clear legislative intent to regulate. Organizations should therefore pursue compliance now, so as to avoid hastily implemented projects designed solely to meet regulatory demands. Machado Meyer is available to advise and support organizations throughout this process.
[1] Requirements currently set forth in article 13, article 17, article 12, paragraph 6, and article 40 of Bill No. 2,338/23.
