The legal market, like so many other professional services sectors, is experiencing a period of intense – and perhaps somewhat unbridled – acceleration around generative artificial intelligence. Tools aimed at legal practice are multiplying rapidly, the discourse on AI agents as a revolution is consolidating, and the pressure for adoption comes from multiple fronts. In law firms, this pressure stems from the pursuit of productivity and competitive differentiation. In legal departments, it manifests in the expectation of doing more with less, meeting rising demand without increasing headcount, and demonstrating operational efficiency to the company's leadership.

This pressure, combined with the pervasive fear of being left behind, has produced a worrying dynamic: the adoption of AI as something that "everyone needs to have," without reflection on how, where, and why to use it. It is no exaggeration to say that the enthusiasm for acquiring the most recommended tools on the market is rarely accompanied by structured reasoning about their alignment with daily practice and about the legacy they build – or fail to build – in reality. Nor is this momentum preceded by a careful mapping of challenges, bottlenecks, and processes to be improved: a study that should precede any AI strategy.

Recent data reinforce this diagnosis. A study published in March 2026 by Anthropic, the developer of Claude, introduced an "observed exposure" metric that measures not what AI could do in theory, but what it is actually doing in professional practice. The conclusion is revealing: in virtually all sectors, actual coverage remains a fraction of theoretical potential[i]. In the business legal sector, effective adoption reaches a modest percentage of estimated capacity – although multiple factors explain this gap, the absence of institutional strategy is one of the most neglected. The tool exists; what is lacking, in many cases, is the structure to use it.

We do not intend to offer universal recipes, but it is possible and necessary to draw guidelines that reduce the most common risks. In the face of this accelerated race, a warning is warranted: it is essential not to move forward without understanding the impact of the decisions being made now. Getting ahead does not necessarily mean achieving the best results in the future.

The risks that speed hides


The question that should guide any adoption decision — "how is work organized today, and where exactly can AI add value?" — is often replaced by another, more dangerous one: "what tool are others using?"

This risk compounds others already known. Concerns about information security and privacy, for example, are fundamental, but they are also visible and, today, manageable, provided they are evaluated by teams competent in those areas. The most insidious risk, however, is of another kind: less obvious, and still little understood by organizations.

The erosion of legal reasoning. This is a central risk. Lawyers need to understand why they are using AI and to know its limits. The lawyer should not feel replaceable; on the contrary, it is essential to reinforce critical judgment, recognizing how decisive legal expertise is in extracting the best possible result from AI-assisted work. As several experts warn, we operate today on a "jagged frontier"[ii] of technological capacity: AI can summarize complex contracts brilliantly, yet hallucinate on simple case law. The data are telling: in the field study conducted by Dell'Acqua, Mollick, and colleagues with BCG consultants, professionals who used GPT-4 within the tool's capability frontier were 12.2% faster and produced significantly higher-quality results. Those who used AI outside this frontier, however – in tasks that required contextual judgment – performed 19 percentage points below the control group. The "jagged frontier" is not just a metaphor: it is empirical evidence that reinforces the need to know where AI works and where it fails.

It is up to those who operate AI to identify errors. Without this preparation and dialogue, the result is not efficiency: it is rework. And the risk is not only internal. In 2023, in the case Mata v. Avianca before the federal courts in New York, lawyers filed entirely fictitious case law citations in court, generated by ChatGPT, without any human verification.[iii] The episode has become emblematic: when supervision fails, those who suffer the consequences are the client and the credibility of the lawyer.

And here a concept deserves attention: "AI Slop"[iv]. The term refers to mediocre, verbose, superficial machine-generated content that, left unfiltered, pollutes decision-making and generates a growing volume of rework. In the legal environment, slop takes a particularly dangerous form: an apparently coherent draft, in appropriately technical language, that does not reflect the thesis of the case or ignores relevant factual nuances. The tool that should save time ends up demanding more revision, more correction, more supervision. The short-term benefit ("it was ready faster") hides a real and growing cost, with unequivocal financial impact: the increase in non-billable hours dedicated to correcting bad drafts, and the opportunity cost of the senior team, which stops working on strategy to review AI output.

The hidden cost of undirected adoption. Good AI tools are not cheap. Adopted without context or a clear plan for use, they risk becoming underutilized investments, or being deployed so diffusely that they build no lasting legacy. In law firms, the risk is wasted investment that erodes margins. In legal departments, there is the additional risk of a tool imposed without the legal team's participation in the selection process – or, conversely, of a legal team that adopts a tool without being able to demonstrate the return on investment to corporate leadership.

The tool itself, when not inserted into a clear plan, can amplify precisely the problems it was meant to solve: operational inefficiency (now digitized and at scale) and the fragmentation of knowledge, creating information silos that are never shared. Investments without return are a symptom of a deeper problem: the confusion between adopting technologies and transforming the work model. As Mark A. Cohen, a reference in Legal Operations, warns, legal innovation fails when treated as a technology project rather than as a business model transformation: "technology is an enabler, not a strategy".[v]

Adoption must be structured and driven by leadership. As David De Cremer points out[vi], to succeed with AI, leaders need to engage the entire organization (bring everyone on board), defining not only the technologies but also the culture of use. If leadership does not know how to direct the use of probabilistic technology, it will not be able to control results and will increase exposure to risk. That is why the adoption of AI in the legal profession cannot be spontaneous and fragmented.

In practice, this requires facing a preliminary question, apparently simple, but that few ask before acquiring any tool: how is work effectively organized today, and where, concretely, is there room for AI to add value?

The three degrees of use: where AI is an agent, co-pilot, or supporting actor


Not every task benefits from AI in the same way. Structured adoption requires distinguishing at least three levels:

Where AI can be an agent. Tasks such as cataloguing documents, structuring data analytics, extracting and organizing large volumes of information, screening low-risk contracts, or preliminary compliance analysis. Even here, the direction and context given by the human are fundamental – without them, mistakes appear. But the efficiency gain is real and measurable.

Where AI is a co-pilot. Unlike the agent level, in which the human defines the parameters and validates the output, at the co-pilot level the professional actively participates in the construction of the result. Here lies the bulk of medium-complexity legal work. In its current iterations, AI does not demonstrate contextual analysis comparable to that of experienced professionals in highly complex scenarios – although this may evolve. Today, AI works alongside the professional in document review, in the systematization and analysis of complaints, in conducting internal due diligence, or in analyzing clauses in supplier contracts; it can suggest first drafts or prepare specific excerpts, but it depends on the professional to contextualize, validate results, and direct further analysis.

With a well-constructed prompt validated by the team, AI processes documents properly and directs the lawyer's attention to where it needs to be. It can help to build parts of a document, to cross-reference information between processes, to identify connections between cases. But the professional remains at the center of the work, elaborating with the care that the case requires. AI, at this level, does not replace analysis, but accelerates it and expands its comprehensiveness, provided that the professional maintains control over the premises and direction of reasoning.

Where the human is still irreplaceable. Lawyers remain relevant – and increasingly so – in highly complex cases: in law firm litigation, the human remains essential in arbitrations, disputes that involve multiple layers of context, and procedural strategy that hinges on nuances that only experience and legal reasoning can grasp; in legal departments, in strategic counseling to the board, in the management of regulatory crises, and in decisions involving the company's reputational exposure. AI does not have the capacity for contextual analysis at this level, and although the speed of this evolution cannot be predicted, in its current iterations it remains below what highly complex scenarios require. In these scenarios, it can support the systematization and organization of information, but the construction of the thesis, the strategic positioning, and the reading of the case remain the exclusive domain of the professional.

This hierarchy protects the quality of the work and preserves legal reasoning as a central asset of practice. The differentiation between high- and low-complexity tasks is supported by the analysis of McGinnis and Pearce, who in 2014 predicted the polarization of the legal market[vii] – although, for the authors, this polarization represented not an end point, but the beginning of a deeper structural transformation of the profession. They predicted that machine intelligence would profoundly transform legal practice, not just peripheral tasks. Recent data and studies suggest that the disruption is real, but that its effective speed of penetration is still below theoretical potential. This lag does not invalidate the forecast; on the contrary, it reinforces that there is a window – perhaps a brief one – for organizations to structure themselves before the disruption accelerates.

From individual use to institutional method


The distinction between value-generating and frustration-generating AI adoption ultimately lies in one question: Does knowledge stay with the individual or is it shared with the organization?

Professor Michele DeStefano often warns about the skills gap in legal innovation: it is not enough to have the tool; it is essential to have the collaborative competence to use it.[viii] When each professional uses the tool they want, the way they want, the result is individual experimentation that is not scalable. When the organization maps the flows, creates documented workflows, validates prompts, and establishes clear guidelines on where and how the human needs to intervene, what is built is method. Knowledge ceases to be tacit and becomes part of the methodology of the law firm or legal department. New professionals who arrive already find a path mapped out, and team changes do not mean starting from scratch. The legacy is institutional.

This transition, however, is not without friction. In many legal organizations, resistance to AI does not manifest itself in open opposition, but in quiet indifference: the tool is formally made available, but in practice remains underutilized. Professionals with a consolidated trajectory and methods that have been working for years see no reason to change and, without clear guidance from leadership, they do not change. Ignoring this resistance does not eliminate it; it only converts it into a façade of adoption. Addressing it requires a combination of legitimization by leadership (which needs not only to authorize, but to use and value the use) and structured training, which demonstrates, in practice, how AI integrates into work without replacing professional judgment. The tool is not imposed by decree; it is adopted through the experience of perceived value, conducted in a targeted format.

A practical example of this method is the creation of dedicated teams (squads) by type of delivery or thematic area – disputes, appeals, supplier contracts, labor compliance, mass litigation. This model, inspired by agile development practices adapted to the legal context, proposes that each squad be composed of professionals of different seniority levels: whoever leads the team defines the scope and quality criteria; mid-level professionals drive AI-supported execution; junior members operate the tools and document the flows. Together, in short validation cycles – ideally per delivery or in weekly reviews – they identify where AI adds value and where its limits appear. Prompts that work are logged in shared libraries; those that fail are discarded with documented reasons. The result is a validated and documented workflow: here AI enters, here the human enters, and why. This division of roles, when formalized, transforms the tool into an institutional method – replicable, auditable, and independent of who makes up the team at a given time. For legal departments, there is an additional layer: integrating the internal method of using AI with that of the contracted firms. Setting clear expectations on how AI may or may not be used in external deliveries is an essential part of governance.

The innovation lies precisely in this: in recognizing the preponderance of the human factor and the importance of structuring the work. AI is one piece of the puzzle, not the whole puzzle.

A checklist before adopting


Based on these reflections, we propose a minimum checklist for legal organizations that want to turn diagnosis into action:

1. Map before automating. Theorist Richard Susskind calls this "decomposition": breaking legal work down into distinct tasks to identify which are amenable to automation and which require expert judgment.[ix] This means documenting workflows by area of activity, identifying where there is repetition that consumes time without adding value, and where the bottlenecks that compromise deadlines or quality lie. The product of this exercise is a process map; not a generic spreadsheet, but a faithful representation of how work happens today. Without it, any tool is a shot in the dark. As we have argued in previous work on Legal Operations: without redesigning processes, technologies only create a superficial layer of innovation[x]. With AI, this logic applies all the more.

2. Define the three degrees of use. Identify where AI can be an agent, where it is a co-pilot, and where the human is irreplaceable. This clarity protects the quality of the work and the integrity of legal reasoning. In practice, this means creating a matrix of tasks per area of activity, indicating the appropriate degree of autonomy for each one, and periodically reviewing it as technologies evolve.

3. Leadership must lead. Leadership needs to define which tools will be used, for what purposes, and with which protocols. This includes establishing supplier selection criteria, safety and governance parameters, and clear usage expectations by area and seniority. Without this direction, the result is fragmentation, not efficiency – each professional using a different tool, in a different format, without the knowledge being consolidated. In law firms, legal leadership typically holds the power to decide on tools. In legal departments, this power is often shared with areas such as IT, procurement, and innovation, which makes it even more critical for legal leadership to actively participate in the selection process and define the criteria for use, rather than fully delegating the technology decision.

4. Train for dialogue, not just for use. Training should not only teach how to operate the tool, but how to engage critically with it: how to formulate good questions, how to identify inaccurate answers, how to confront results with primary sources. Daniel Martin Katz, a leading researcher in computational law, argues that the effective adoption of AI in law requires the development of computational thinking as an essential skill – not for professionals to learn to program, but for them to understand the logic underlying the tools they use.[xi] This implies creating continuous training programs, with practical validation exercises and critical analysis of outputs, segmented by profile: junior professionals need to learn how to operate and how to question; mid-level professionals, to validate and integrate; leaders, to direct the use and evaluate results. The goal is not to train tool operators, but professionals who can identify when AI output is useful, when it is insufficient, and when it is dangerous.

5. Build legacy, not dependency. The goal is to create a methodology that stays within the organization – validated prompts, documented workflows, quality checklists, formalized review criteria. These artifacts should be stored in accessible repositories, updated periodically, and integrated into the onboarding of new professionals. If the knowledge is only in the heads of those who use the tool, any team or technology change starts from scratch.

6. Measure the real impact and respect the maturation curve. "It was ready faster" is not synonymous with efficiency if the result requires three rounds of review. Establish indicators that capture the complete cycle: total work time (including revisions), rework rate, internal and external client satisfaction, and the evolution of delivery quality over time. For legal departments, indicators should include the perception of the business areas served: has AI accelerated response times? Reduced the need for follow-ups? Improved the clarity of opinions? Periodic reviews of these indicators allow course corrections and show where AI effectively adds value and where it does not. It is also important to recognize that results are not immediate. The first few months of adoption tend to generate more friction than gain: the team is still learning, prompts have not been validated, flows are not stabilized. Recognizing this curve and communicating it honestly is a condition for the initiative to survive its first cycle of results.

Structure guided by leadership, before speed


The available data and recent studies confirm that AI produces real gains when inserted into well-designed structures. Outside of them, it enhances the problems it should solve.

The way forward is not to halt adoption. It is to direct it and create the conditions for AI to adapt to the structure of the law firm or legal department, not the other way around. It is recognizing that the competitive advantage is not in who adopts AI first, but in who builds, around it, the structure that transforms technological capacity into quality of delivery.

May this reflection serve a single purpose: not to fear technology, but to adopt it with the same seriousness that we apply to the legal reasoning and governance of the organizations we serve. Because, in the end, that is exactly what this is all about.

 


[i] MASSENKOFF, Maxim; McCRORY, Peter. "Labor market impacts of AI: A new measure and early evidence". Anthropic Research, 2026. Available at: https://www.anthropic.com/research/labor-market-impacts. The study introduced the "observed exposure" metric, which combines the theoretical capability of language models with actual usage data, giving more weight to automated and task-related uses.

[ii] DELL'ACQUA, Fabrizio; MOLLICK, Ethan et al. "Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality". Harvard Business School Working Paper, 2023.

[iii] Mata v. Avianca, Inc., Case No. 1:22-cv-01461 (S.D.N.Y. 2023). Judge P. Kevin Castel sanctioned lawyers for submitting fictitious AI-generated case law citations without verification.

[iv] The term was popularized by developer and technology analyst Simon Willison in 2024 and adopted by major outlets (The Atlantic, New York Magazine) to describe mass-generated, low-quality content. Source: WILLISON, Simon. "The expansive, expensive era of AI Slop". 2024.

[v] COHEN, Mark A. "Legal Operations: The Key to 'Better, Faster, Cheaper'". Forbes, 2019. Available at: forbes.com. Cohen argues that legal transformation requires rethinking the business model, not just adopting tools.

[vi] DE CREMER, David. "For Success with AI, Bring Everyone On Board". In: HBR's 10 Must Reads 2026. Harvard Business Review Press, 2025.

[vii] McGINNIS, John O.; PEARCE, Russell G. "The Great Disruption: How Machine Intelligence Will Transform the Role of Lawyers in the Delivery of Legal Services". Fordham Law Review, v. 82, n. 6, p. 3041–3066, 2014.

[viii] The skills gap is not technical (knowing how to program), but collaborative and service-oriented. Lawyers need to learn to work jointly with technology and with other professionals (the "T-Shaped Lawyer" concept). Source: DESTEFANO, Michele. Legal Upheaval: A Guide to Creativity, Collaboration, and Innovation in Law. American Bar Association, 2018.

[ix] Decomposition: the idea that legal work is not an indivisible block, but can be broken down into tasks, only some of which require a senior professional (the expert trusted adviser), while others can be standardized or automated. Source: SUSSKIND, Richard. Tomorrow's Lawyers: An Introduction to Your Future. 3rd ed. Oxford University Press, 2023.

[x] PERAZZA, Eduardo; CARDOSO, Juliana. "Legal operations: from theory to practice and the changing mentality". Available at: https://lexlegal.com.br/legal-operations-da-teoria-a-pratica-e-a-mudanca-de-mentalidade/

[xi] KATZ, Daniel Martin. "Quantitative Legal Prediction — or — How I Learned to Stop Worrying and Start Preparing for the Data-Driven Future of the Legal Services Industry". Emory Law Journal, v. 62, p. 909, 2013.