1 Understanding Generative AI and its Role in Law
Here Come the Robots 🤖
Avoiding artificial intelligence (AI) in the legal profession is now nearly impossible; it is everywhere. AI represents a transformative technology that extends beyond the practice of law. As David Mellinkoff stated, ‘The law is a profession of words.’[1] This insight takes on new relevance today, as large language models (LLMs) generate and interpret text in ways that parallel how lawyers use, interpret, and craft language. The legal profession and AI systems depend on a mastery of language, making AI a natural, if unexpected, companion to legal practice.
At its core, AI refers to computer systems designed to perform tasks that typically require human intelligence. Rather than following explicitly programmed rules, these systems learn statistical patterns from data in order to perform specific tasks. AI has created opportunities for the legal profession, opening new avenues for increased technology adoption and productivity. However, this technology has also introduced new risks. Attention to AI has surged since the launch of OpenAI’s ChatGPT in November 2022. Most of us have likely tried ChatGPT or a similar conversational tool built on an LLM, and it is now hard to avoid hearing about machine learning, generative AI or LLMs.
Generative AI can produce new content, including text, code, music, and video. This technology represents a significant departure from traditional automation tools within the legal profession, providing unprecedented capabilities in document creation, legal analysis, and client service delivery. Since 2022, many legal technology tools have introduced built-in generative AI features.[2] The technology has also spawned new tech start-ups that offer practitioners specific AI tools (for example, Habeas and Graceview). The technology is now advancing at a rapid rate and there are many commercial language models to choose from (see Choosing a Model to Adopt for more information). Most models released in 2025 handle basic tasks competently. However, it is important for legal practitioners to understand the underlying technology and how it works in order to navigate the growing market for LLMs and generative AI tools.
Behind generative AI is its training on extensive datasets of existing content. This process allows these systems to learn patterns, structures, and relationships within written texts. The machine learning aspect then enables the technology to create new content. Unlike traditional legal software, which relies on templates or predefined rules, generative AI promises to produce contextually relevant and nuanced content that adapts to specified scenarios.
The legal profession has always been characterised by its dedication to precision, analytical rigour, and ethical conduct. These same principles must guide our approach to using generative AI. Prompt design or engineering (the art and science of effectively communicating with AI systems) represents a new core competency for legal practitioners. It bridges the gap between traditional legal expertise and technological innovation, enabling lawyers to leverage generative AI tools while maintaining the high standards our profession demands.
A Profession of Words
If the law is a profession of words, then LLMs trained on textual data hold the potential to transform legal service delivery. With a focus on content generation, generative AI can reimagine how legal practitioners approach their daily tasks, offering new capabilities in legal analysis, document creation, and research assistance.
To effectively harness these tools, it is crucial to understand their fundamental nature to appreciate their impact on legal practice. Generative AI systems produce new content, analyse complex information, and engage in nuanced text-based interactions. These systems are trained on vast amounts of data, enabling them to process and generate human-like text responses. However, in the legal context, their ability to comprehend and work with complex legal concepts, precedents, and documentation can depend on whether the models are trained on legal data (e.g., case law and legislation) or have been fine-tuned for application in a legal context.
Fine-tuning is a specialised adaptation process in which an LLM is further trained on specific data to enhance its capabilities for targeted tasks. In a legal context, this process involves training a model on carefully curated legal documents, case law, statutes, contracts, and other relevant legal texts to enhance its understanding of legal terminology, reasoning, and conventions. For instance, while a general-purpose LLM might grasp the word ‘consideration’ in its everyday sense, a model can learn to recognise and apply its specific meaning within contract law through fine-tuning. However, it is crucial to note that while fine-tuned models can be more accurate for legal applications, they still require careful professional oversight. The quality and comprehensiveness of the fine-tuning dataset, along with the careful validation of the model’s outputs, remain critical factors in ensuring reliable performance in legal contexts.
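The effect of fine-tuning described above can be pictured with a deliberately simplified sketch. The ‘model’ here is just a dictionary of word senses, and the corpus entries are invented for illustration; real fine-tuning adjusts the weights of a neural network on thousands of documents, not a lookup table.

```python
# Toy illustration (not a real training pipeline): a base 'model' maps words
# to their everyday senses; 'fine-tuning' on a small hypothetical legal corpus
# layers domain-specific senses on top, leaving uncovered words unchanged.

base_senses = {
    "consideration": "careful thought",
    "party": "a social gathering",
    "damages": "physical harm",
}

legal_corpus = {
    "consideration": "something of value exchanged to form a binding contract",
    "party": "a person or entity involved in an agreement or dispute",
}

def fine_tune(model, domain_data):
    """Return a copy of the model with domain-specific senses overriding the base."""
    tuned = dict(model)
    tuned.update(domain_data)
    return tuned

legal_model = fine_tune(base_senses, legal_corpus)

print(base_senses["consideration"])   # the everyday sense
print(legal_model["consideration"])   # the legal sense after 'fine-tuning'
print(legal_model["damages"])         # unchanged: not covered by the legal data
```

The last line illustrates the point about dataset comprehensiveness: a word the fine-tuning data never covers keeps its general-purpose sense, which is why validation of outputs remains essential.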
Retrieval-augmented generation (RAG) offers an alternative method for a generative AI system to access and utilise information when generating responses. RAG combines the generative capabilities of LLMs with the ability to retrieve and reference specific information from a curated knowledge base in real time. Think of it as giving an AI model access to a specialised reference library (e.g., cases, statutes, or regulations) to consult while formulating responses. When presented with a query, a RAG system searches its knowledge base for relevant information. It then employs this retrieved information, along with its general knowledge, to produce more accurate and contextually appropriate responses. For more information, see Retrieval-Augmented Generation.
For example, in a legal context, RAG could allow an AI system to reference specific cases, statutes, or regulations while generating legal analysis, rather than relying solely on its general training. This helps address one of the key limitations of traditional LLMs: their tendency to ‘hallucinate’, generating plausible-sounding but incorrect information.
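The retrieve-then-generate pattern can be sketched in a few lines. Everything here is invented for illustration: the three case summaries, the keyword-overlap scoring, and the prompt format. Production RAG systems use vector embeddings and semantic search rather than simple word matching, and the assembled prompt would be sent to an LLM rather than printed.

```python
import re

# Minimal RAG-style sketch: rank documents in a small hypothetical knowledge
# base by word overlap with the query, then assemble the best matches into a
# prompt that grounds the model's answer in retrieved sources.

knowledge_base = [
    "Carlill v Carbolic Smoke Ball Co [1893]: a unilateral offer can be accepted by performance.",
    "Donoghue v Stevenson [1932]: manufacturers owe a duty of care to consumers.",
    "Partridge v Crittenden [1968]: advertisements are generally invitations to treat.",
]

def tokenize(text):
    """Lower-case the text and keep only alphabetic words."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(query, documents, top_k=2):
    """Return up to top_k documents sharing the most words with the query."""
    query_words = tokenize(query)
    ranked = sorted(documents, key=lambda doc: len(query_words & tokenize(doc)), reverse=True)
    return [doc for doc in ranked if query_words & tokenize(doc)][:top_k]

def build_prompt(query, documents):
    """Combine retrieved sources with the query; an LLM would answer from these."""
    sources = "\n".join(f"- {doc}" for doc in retrieve(query, documents))
    return f"Answer using only these sources:\n{sources}\n\nQuestion: {query}"

print(build_prompt("is an advertisement an offer or an invitation to treat", knowledge_base))
```

Because the model is instructed to answer from the retrieved sources, its output can be checked against them, which is precisely how RAG mitigates hallucination.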
The potential impact of generative AI on legal practice is broad and profound. As in many professions, its most significant effect lies in areas where vast amounts of information must be sifted and synthesised with precision. For instance, a lawyer analysing a complex contract might use generative AI to quickly identify all clauses related to indemnification across multiple agreements, extract key terms, and draft a comparison matrix, a task that might otherwise take hours of manual review.
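The mechanical core of that indemnification example can be sketched as follows. The three agreements and their wording are hypothetical, and a simple keyword scan stands in for the AI step; in practice, a generative AI tool would identify the clauses, and a lawyer would review every result.

```python
import re

# Hedged sketch of the task described above: scan several hypothetical
# agreements for indemnification clauses and lay the results out as a
# simple comparison matrix.

agreements = {
    "Supply Agreement": "The Supplier shall indemnify the Buyer against third-party claims. Payment is due within 30 days.",
    "Services Agreement": "Each party shall bear its own losses. Fees are payable monthly.",
    "Licence Agreement": "The Licensor shall indemnify the Licensee for IP infringement claims.",
}

def find_indemnity_clauses(text):
    """Return the sentences that mention indemnification."""
    sentences = re.split(r"(?<=\.)\s+", text)
    return [s for s in sentences if re.search(r"indemnif", s, re.IGNORECASE)]

# Comparison matrix: agreement name -> indemnity clauses (or a 'none found' marker)
matrix = {name: find_indemnity_clauses(text) or ["(no indemnity clause found)"]
          for name, text in agreements.items()}

for name, clauses in matrix.items():
    print(f"{name}: {clauses[0]}")
```

Even this crude scan shows the shape of the time saving: the cross-agreement comparison is assembled in seconds, leaving the lawyer's effort for interpreting what each clause actually means.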
However, with these powerful capabilities come significant responsibilities. Legal practitioners must strike a balance between technological efficiency and ethical considerations, maintaining appropriate human oversight and setting realistic expectations about what these tools can and cannot accomplish.
Ethical Considerations and Professional Responsibilities
Integrating generative AI into legal practice brings significant ethical considerations that must be carefully balanced against its potential benefits. As officers of the court and trusted advisers, practitioners must maintain the highest professional standards while leveraging these new tools.
The duty to ‘deliver legal services competently, diligently and as promptly as reasonably possible,’[3] takes on new dimensions in the AI era. Legal practitioners should thoroughly understand the capabilities and limitations of AI and exercise appropriate oversight of AI-generated content. This understanding should extend beyond technical proficiency to encompass the strategic and ethical implications of AI use.
Confidentiality is crucial in the digital age. The use of AI tools necessitates careful consideration of data security measures, the protection of client information, and the management of third-party access controls. Legal practitioners must ensure that their use of AI systems does not undermine their duty to maintain client confidentiality.
Setting Realistic Expectations
Understanding the boundaries of AI capabilities is crucial for the effective and ethical implementation of AI in legal practice. While the technology offers impressive capabilities, it is essential to maintain a balanced perspective on its limitations and the need for human oversight. Current AI capabilities are substantial: these systems excel at rapidly analysing large volumes of text, identifying patterns, and efficiently generating material, and their content generation abilities allow routine tasks to be automated.

However, the limitations must be acknowledged. AI models cannot replace legal judgment; they are constrained by their training data and may not reflect recent legal changes or nuanced interpretations. Context presents another challenge, as generative AI systems may struggle to comprehend nuanced legal interpretations and client-specific circumstances. Reliability concerns include the possibility of hallucinations or incorrect information, underscoring the need for human verification of AI outputs.
Implementation of best practices focuses on three key areas. First, appropriate boundaries should be set, including clear use cases, review protocols, and mechanisms for human oversight. Second, quality control measures should include regular output verification, cross-referencing with authoritative sources, and detailed documentation. Finally, ongoing training and development should ensure education on capabilities, regular updates on best practices, and systematic feedback.
The key to successful AI implementation lies in understanding that these tools augment rather than replace legal expertise. Legal practitioners should approach AI as a powerful assistant that can enhance their capabilities while maintaining their essential role in providing legal judgment and ensuring professional standards. An action item list for legal practitioners would include:
- Develop clear protocols for use.
- Establish training programs.
- Create documentation systems.
- Regularly review and update implementation strategies.
- Monitor ethical guidelines and regulatory requirements.
- David Mellinkoff, The Language of the Law (Little, Brown & Co., 1963). ↵
- LTH Research Team, ‘A Look Inside the February 2025 LTH GenAI Legal Tech Map’, Legal Technology Hub (Web Page, 10 March 2025) <https://www.legaltechnologyhub.com/contents/a-look-inside-the-february-2025-lth-genai-legal-tech-map/> (‘LTH Gen AI Legal Tech Map’). ↵
- Legal Profession Uniform Law Australian Solicitors Conduct Rules, r 4.1.3. ↵