If 2021 was a year of challenges for businesses, 2022 brings a number of excellent business prospects, many of them related to the use of mechanisms based on artificial intelligence (AI).

The possibilities are many: systems that perform predictive analysis over large masses of data; facial recognition mechanisms used for various purposes (such as unlocking access for a user, controlling entry to physical spaces, or searching for people in public areas); automated decision-making tools; autonomous vehicles; programs capable of analyzing large volumes of text; systems that optimize the prices of products and services for consumers; and systems dedicated to resolving customer issues.

The massive presence of AI tools in our daily lives has an impact on legal regulation, leading to the creation of standards that establish rules for the use of this technology. The big challenge is how to use these tools without them creating more risks than benefits for the business. From a legal point of view, it is important to assess the form and purpose of using the technology, so that a powerful and promising resource does not become a threat. Companies should be aware of compliance issues.

The use and processing of personal data, for example, requires attention to the Brazilian General Personal Data Protection Law (LGPD) – Law 13,709/18. When using AI-based tools that handle personal data, it is necessary, above all, to define the precise purpose of the operation, to ensure that only the data necessary for that purpose is used, and to understand which legal basis supports the use of the tool.

The preparation of a specific Personal Data Protection Impact Assessment, documenting the risks and the safeguards adopted, can be very useful, as it demonstrates the company's concern and diligence regarding the subject. In automated decision-making processes that use personal data, it is important to provide mechanisms that allow the decision to be reviewed, as this is a right of the data subject (art. 20 of the LGPD).

It is also important to rely on ethical-legal frameworks for the development and use of artificial intelligence tools, which allow greater transparency, ethical application, and the elimination of any discriminatory practice in these processes.

Some questions need to be answered. For example: how does the tool receive the data it will use (automatically, by scraping the open web and public databases, through inputs from company employees, etc.)? Is there a risk that the inputs are contaminated by a high degree of discrimination? Is the application of the tool supervised? Is there transparency in the processes developed? Have deviations (often unlawful) been detected in the results that call for a better design of the mechanism?

The preparation of an Algorithmic Impact Assessment (AIA) can be useful in these cases, as it helps in analyzing the consequences of using artificial intelligence technology, from the development phase through to effective use.

It is also worth mentioning the recent ISO/IEC TR 24027:2021 standard, from November 2021, which provides guidelines regarding bias in artificial intelligence technologies, especially in tools used for decision making.

These biases can be understood as structural deficiencies in the design of the tool, in the inputs provided by the humans who feed it, or in the data collection ecosystem itself.
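One way such biases surface in practice is through unequal outcomes across groups in automated decisions. As a minimal, purely illustrative sketch (not the method prescribed by ISO/IEC TR 24027:2021, and using hypothetical data), one can compare approval rates of a decision system across a sensitive attribute and compute the ratio between the lowest and highest rates:

```python
# Illustrative bias check: compare approval rates of an automated
# decision across groups. All data here is hypothetical.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals = defaultdict(int)
    approved = defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest approval rate (1.0 = parity)."""
    return min(rates.values()) / max(rates.values())

# Hypothetical sample: group A approved 80% of the time, group B 50%.
decisions = ([("A", True)] * 80 + [("A", False)] * 20
             + [("B", True)] * 50 + [("B", False)] * 50)
rates = selection_rates(decisions)
ratio = disparate_impact_ratio(rates)  # 0.5 / 0.8 = 0.625
```

A ratio well below 1.0, as in this example, would be a signal to investigate the tool's design and inputs more closely, not a legal conclusion in itself.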

Regulation of the subject is advancing in the country, and 2022 should prove to be a decisive year, as implementation of the Brazilian Artificial Intelligence Strategy (EBIA) will continue. In the National Congress, the expectation is that the debates on regulatory frameworks for artificial intelligence will continue and define the standards to be applied in Brazil.