Artificial intelligence in the law firm

Summary

This article offers an in-depth analysis of the new European and Italian regulatory framework on lawyers’ liability in the use of new exponential technologies. The text examines the impact of the European AI Act (EU Regulation 2024/1689), the Italian Law 132/2025 (fully operational as of October 10, 2025), and the recent FBE 2.0 guidelines. Practical, operational guidance is provided on the key transparency obligations toward clients, the civil and criminal liability of the practitioner, the concrete risks of algorithmic “hallucinations,” and the IT governance required to manage artificial intelligence in the law firm in compliance with the new security standards. Through the analysis of a recent and paradigmatic ruling (Lombardy Regional Administrative Court, October 2025), it is clarified that liability remains entirely with the lawyer. Finally, the article illustrates how a structured approach to digital transformation, supported by specialized partners such as Lanpartners through their IT Consulting, IT Concierge and IT Training services, is the only way to ensure security, privacy and regulatory compliance in an increasingly complex market.

The new regulatory landscape: from the generic to the specific

Until recently, Italian lawyers moved through a rather tricky regulatory “gray zone.” Although automation and early machine learning systems were already an operational reality in many large firms, the legal framework of reference was fragmented and inadequate to the speed of technological innovation: the GDPR covered privacy and data protection, and the Code of Ethics for Lawyers dictated general principles of diligence and competence, but there was no organic discipline regulating the adoption of artificial intelligence in the law firm and defining, in a timely manner, the specific roles, limits and responsibilities of the professional.

With the enactment of the EU Regulation 2024/1689 (the well-known AI Act) in August 2024 and, even more incisively for our legal system, with Law 132/2025 enacted on September 23, 2025, and entered into force on October 10, 2025, this regulatory gap has been definitively filled. For the first time, the Italian legal system has a specific national framework regulating how intellectual professionals-lawyers, accountants, engineers, doctors-can and must use these advanced technologies.

The message sent by the legislature is unequivocal and marks a watershed: AI is legally qualified as a supporting tool, never a replacement. Technology must be an “amplifier” of cognitive capabilities, not a surrogate for human decision-making. As a result, liability, both civil and criminal, remains inextricably with the human professional, who must be able to rely on a secure, monitored and resilient IT infrastructure to avoid career-threatening penalties.

Article 13 of Law 132/2025: the four pillars of accountability

Article 13 of Law 132/2025 constitutes the core of the new discipline for the legal profession and deserves detailed analysis. The provision does not merely suggest ethical behavior: it introduces four fundamental legal obligations that redefine the operations of the modern law firm.

1. Prevalence of human input: the prohibition of total delegation

AI can only be employed in an ancillary and instrumental function. It is explicitly forbidden for the practitioner to fully delegate to the system the complex legal evaluation, the definition of trial strategy or the final decision on the line of defense to be taken. The lawyer's intellectual, critical and creative input must always remain prevalent and demonstrable at every stage of the mandate. Where technology begins to replace, not assist, the practitioner's discretionary judgment, there is a direct violation of the rule, with immediate disciplinary consequences.

2. Personal and non-delegable responsibility: the end of the “technological excuse”

Contrary to the hopes of some practitioners, the use of automated tools in no way mitigates the civil and criminal liability of the lawyer; on the contrary, it broadens the scope of risk by introducing “culpa in eligendo” (in the choice of the tool) and “culpa in vigilando” (in controlling the result). If the system errs, the lawyer who signed the act is liable in full. The technical excuse of the “bug” or of the algorithmic error is not admissible: the professional is liable for fault, whether slight or serious, exactly as if the error were the result of his own manual work.

3. Transparency to the client: a new subjective right

The Law introduces a new subjective right in the hands of the client: the client has the right to be informed whether, how, and to what extent artificial intelligence was used in the law firm in the performance of the assignment given. Such disclosure must be made in a clear, simple and comprehensive manner, typically included as a specific clause in the engagement letter or power of attorney. This is not a discretionary power, but a legal obligation that complements the duties of loyalty, fairness and transparency of the professional mandate, allowing the client to give informed consent to the use of his or her data.

4. Human control and supervision (“human in the loop”)

Every output generated, be it case law research or a draft contract, must be subjected to timely critical review. Both the AI Act and the FBE 2.0 guidelines insist on the concept of the human in the loop: the human must remain actively within the decision-making control loop. Blind, uncritical reliance on algorithmic results is forbidden and constitutes negligence. Here the strategic importance of IT systems comes into play: systems that are not bureaucratic obstacles, but ergonomic facilitators of this continuous control.

The AI hallucination: the Lombardy TAR case (October 2025)

To understand the concrete severity with which Italian courts apply these theoretical principles, it is instructive to analyze the recent and controversial ruling of the Regional Administrative Court of Lombardy (Sec. V, Oct. 21, 2025, No. 3348).

In the case at hand, a lawyer had signed and filed an administrative appeal citing a number of specific case law precedents in support of his defense argument. During the oral argument hearing, after timely reporting by the adjudicating panel and the opposing party, it emerged that these judgments did not appear in any official database. Pressed on the point, the lawyer admitted that he had used generative search tools (LLMs) to speed up his work, and that these had produced erroneous results: genuine “hallucinations,” that is, information invented out of thin air yet statistically plausible and formally correct.

The TAR's position was crystal clear and strict: the use of the technological tool in no way exempts the lawyer from responsibility for error. The lawyer has a precise, non-delegable duty to verify and critically check every source cited. Blindly relying on the algorithm, without screening against official sources, constitutes gross professional negligence, detrimental to the decorum of the profession. The TAR forwarded a copy of the ruling to the Milan Bar Association to open disciplinary proceedings. The warning to the entire profession is clear: “algorithmic misconduct” is, legally, human misconduct.

Civil and criminal liability: expanding the scope

The introduction of AI does not narrow the scope of professional liability; it expands it to new cases.
In civil law (Civil Code Articles 1176 and 1218), the standard of professional diligence is enriched with specific new burdens: control of output, appropriate choice of the technological tool (which must be professional-grade, not consumer-grade), and verification of primary sources.

In the criminal field, the situation is even more rigorous and complex. Law 132/2025 made substantial changes to the Criminal Code to counter new technological risks. Most prominent is Art. 612-quater of the Criminal Code, which criminalizes the unlawful dissemination of generated or altered content (so-called deepfakes) designed to mislead: a provision that can involve the lawyer in cases of false or altered documentary production, even unintentional, where no control has been exercised. In addition, specific aggravating circumstances have been introduced (Art. 61 no. 11-decies of the Criminal Code) for common crimes committed through the use of intelligent systems, where these constitute an insidious means or hinder public or private defense. This is not academic theory, but an operational reality in courtrooms as of October 10, 2025.

The crucial role of privacy and data protection

One of the most underestimated risks in implementing artificial intelligence in the law firm concerns the processing of clients' personal and sensitive data. Uploading confidential information, names, case details, defense strategies, or health data to unsupervised cloud services (such as free, consumer versions of ChatGPT or the like) configures personal data processing subject to the GDPR and a potential, very serious breach of attorney-client privilege and duty of confidentiality.

The GDPR imposes stringent requirements that cannot be ignored: a legal basis for processing, a mandatory Data Protection Impact Assessment (DPIA) for high-risk processing, adequate technical and organizational security measures, and transparency toward the data subject. In this context, security becomes a top priority that cannot be delegated. Many practitioners are unaware that data entered on public platforms can be incorporated into the training of the global model, potentially becoming accessible to third parties or re-emerging in other contexts. The December 2025 Ministerial Guidelines reiterate the principle of minimization: input only strictly necessary information, anonymize data at source, and use only “Enterprise” platforms that ensure data segregation.
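The minimization principle (“anonymize data at source”) can be illustrated with a minimal sketch: before any text leaves the firm, obvious personal identifiers are replaced locally with placeholder tags. The patterns and function names below are illustrative assumptions, not taken from any guideline; real pseudonymization requires far more robust tooling (named-entity recognition, client-name dictionaries, human review).

```python
import re

# Illustrative patterns only (an assumption for this sketch): a production
# system would need far broader coverage and human review.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ITALIAN_FISCAL_CODE": re.compile(r"\b[A-Z]{6}\d{2}[A-Z]\d{2}[A-Z]\d{3}[A-Z]\b"),
    "PHONE": re.compile(r"\b\+?\d[\d .-]{7,}\d\b"),
}

def redact(text: str) -> str:
    """Replace identifiers with placeholder tags before the text is sent
    to any external AI service (minimization at the input stage)."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Summarize the dispute: client Mario Rossi, RSSMRA80A01F205X, mario.rossi@example.com."
# The fiscal code and the e-mail address are replaced by placeholder tags;
# the plain name is NOT caught by these simple patterns, which is exactly
# why human review remains indispensable.
print(redact(prompt))
```

The point of the sketch is the order of operations: redaction happens on the lawyer's own machine, so the external platform never receives the raw identifiers at all.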

This is where the choice of technology partner becomes decisive for the firm's protection: relying on a partner like Lanpartners, an ISO 27001-certified company, ensures that all processes and infrastructure meet the highest international IT security standards, safeguarding the firm's sensitive data and its regulatory compliance.

IT governance and access control: strategic support from Lanpartners

To operate safely in this complex and changing scenario, the law firm cannot improvise do-it-yourself solutions. Properly integrating artificial intelligence into the law firm requires a structured IT governance that transforms technology from a potential obstacle or source of risk into an amplifier of results and productivity. Lanpartners supports law firms, SMEs and corporations on a daily basis by implementing a people-friendly IT management framework that covers all critical dimensions for compliance and efficiency.

Proactive monitoring via IT Concierge

The key questions are: who is accessing which tools? Are the systems up to date, patched, and secure? Robust governance requires constant, 24/7 vigilance. Thanks to its distinctive IT Concierge service, Lanpartners offers more than a traditional help desk. In the spirit of a luxury hotel concierge who anticipates a guest's wishes, the team of specialists actively and remotely monitors the firm's systems.
This proactive approach makes it possible to intercept possible vulnerabilities in platforms, intrusion attempts or network malfunctions before they generate blocking disruptions, allowing the lawyer to stay focused on his or her business and court deadlines. The IT Concierge Specialist becomes the single point of contact to resolve requests from professionals and staff in a timely manner, ensuring business continuity even under Smart Working, which is essential today for a firm's flexibility.

Strategy and Auditing with IT Consulting

The adoption of new technologies cannot be random or dictated by the fashion of the moment. Through its IT Consulting service, Lanpartners acts not as a mere hardware supplier, but as a true strategic business partner. The consultancy begins with an in-depth auditing phase to understand the current state of the existing IT systems, map data flows, and verify whether the artificial intelligence introduced (or to be introduced) in the law firm complies with the GDPR, the AI Act, and internal policies.
Next comes strategic investment planning to choose the technologies best suited to the structure and size of the firm. The goal is to turn technological innovation into a tangible competitive advantage, ensuring that any software introduced increases document and data security, a sensitive issue for professional firms, SMEs and corporations.

Staff Training with IT Training

“Power is nothing without control.” This maxim applies doubly to the new exponential technologies. Any tool, no matter how powerful, loses effectiveness or becomes dangerous if used improperly or naively. Lanpartners firmly believes in the importance of IT Training: targeted training courses aimed at professionals, practitioners and staff (secretaries, paralegals) on the proper use of IT tools and on cybersecurity awareness.
In the current context, this means training staff to recognize AI hallucinations, to protect data at the prompt (input) stage, to recognize evolved phishing attempts, and to keep the “human in the loop,” ensuring that artificial intelligence in the law firm is used to its fullest productive potential while zeroing in on legal and reputational risks.

Concrete risks and mitigation strategies

In addition to purely regulatory risks, the day-to-day use of artificial intelligence in the law firm carries concrete operational risks that require flawless technical management and clear procedures:

  • System hallucinations: software can invent facts, laws or legal precedents with great apparent confidence.
    • Mitigation: timely verification of primary sources on official databases and ongoing staff training through IT Training sessions to recognize suspicious outputs.
  • Breach of confidentiality: careless uploading of confidential data to unsecured third-party servers.
    • Mitigation: implementation of strict security policies, automatic anonymization systems, and constant monitoring by the IT Concierge to prevent data leaks (Data Loss Prevention).
  • Violation of customer privacy: failure to provide information, invalid consent, or inadequate security in data storage.
    • Mitigation: adoption of certified solutions verified during the auditing phase, relying on the technology partner's ISO 27001 certification, which guarantees hardened processes.
  • Reduction of human control: the drift toward uncritical automation to save time and effort.
    • Mitigation: creation of a firm culture supported by an IT strategy that places humans at the center of decision-making, using technology to eliminate repetitive work rather than intellectual work.
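The first mitigation above, verification of primary sources, can be sketched as a simple pre-filing check: every citation extracted from a draft is matched against a register of authorities already confirmed on official databases, and anything unmatched is flagged for human review. The citation pattern and the register below are illustrative assumptions; in practice the check would run against the official databases themselves.

```python
import re

# Hypothetical register of authorities already verified on official databases
# (the TAR Lombardia ruling is the one discussed in this article).
VERIFIED_CITATIONS = {
    "TAR Lombardia, Sez. V, n. 3348/2025",
}

# Illustrative pattern for Italian-style administrative citations
# ("TAR <region>, Sez. <roman numeral>, n. <number>/<year>").
CITATION_RE = re.compile(r"TAR [A-Za-z]+, Sez\. [IVX]+, n\. \d+/\d{4}")

def unverified_citations(draft: str) -> list[str]:
    """Return every citation found in the draft that is absent from the
    verified register, so a human can check it before the act is signed."""
    return [c for c in CITATION_RE.findall(draft) if c not in VERIFIED_CITATIONS]

draft = ("As held by TAR Lombardia, Sez. V, n. 3348/2025 and by "
         "TAR Lazio, Sez. II, n. 9999/2025, ...")
print(unverified_citations(draft))  # flags only the second, unverified citation
```

A filter like this does not replace the lawyer's verification duty; it merely guarantees that no citation reaches the signed act without having passed through a human checkpoint.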

Appropriate and controlled use cases: the opportunity

AI should not be demonized, but governed through informed and guided digital transformation. When used correctly and with the right safeguards, it offers undeniable and immediate competitive advantages in areas such as:

  • Support in targeted case law research (always with subsequent human verification).
  • Preliminary analysis of large volumes of contracts and document due diligence to highlight critical clauses.
  • Summary of voluminous documents to expedite file study and hearing preparation.
  • Drafting first drafts of opinions, articles or routine correspondence to speed up secretarial work.
  • Automated categorization and document management to instantly find every file.

In all these scenarios, the golden rule remains unchanged: the AI proposes and assists, the advocate disposes, decides and takes responsibility, supported by a robust IT infrastructure that never leaves the advocate alone.

The risk-aware professional and the value of the technology partner

Law 132/2025, the AI Act, and recent case law delineate a clear and definitive perimeter: artificial intelligence in the law firm is a legitimate, powerful, and necessary tool to compete, but its use requires maturity, control, and ironclad governance.
The “digital lawyer” of 2026 is not the one who delegates to the machine hoping it will not make mistakes, but the one who uses the machine as expertly, prudently, and competently as he uses the legal codes: conscientiously, with documentation, and in strict accordance with professional standards.

To best deal with an ever-changing world of buzzwords such as Privacy, Security and Smart Working, the law firm cannot act alone or rely on chance. Lanpartners stands as the ideal partner to manage technological and regulatory complexity, offering people-friendly IT management services that enable professionals to be more productive, serene and secure, turning new regulatory obligations into an extraordinary opportunity for structured and sustainable growth over time. Contact us For any information, we will respond to you promptly.