7 An AI Fluency Framework for Legal Practice
The Five Competencies
Building on Dakan & Feller’s work on AI fluency,[1] the AI Fluency Framework below comprises five interconnected competencies that work together to ensure responsible and effective AI use in legal practice. Think of these competencies as a compass guiding your AI interactions.
1. Purpose – Knowing when and why to use AI
2. Prompting – Speaking AI’s language effectively
3. Evaluation – Assessing AI outputs critically
4. Responsibility – Maintaining professional standards
5. Context – Understanding the broader implications
🎯 Purpose
What it means: Understanding when and why to use AI in legal work, making informed decisions about task suitability, and aligning AI use with professional goals.
Purpose is the foundational competency that drives all AI interactions in legal practice. It encompasses your ability to make strategic decisions about when, why, and how to incorporate AI into your legal work. This competency requires you to develop a nuanced understanding of both AI capabilities and legal task requirements, enabling you to match the right tool to the right job at the right time.
At its core, knowing when and why to use AI involves three critical dimensions. First, it requires task analysis: deconstruct legal work into its components and assess which elements genuinely benefit from AI assistance versus those that require human expertise. Second, it requires risk assessment: evaluate the potential benefits against possible harms of using an AI system for a task, considering factors like accuracy requirements, confidentiality concerns, and professional liability. Third, it requires outcome orientation: any AI use should directly serve client interests and professional objectives rather than deploying technology for its own sake.
This competency also involves understanding the temporal aspects of AI adoption. Some tasks may not be suitable for AI today, but could become appropriate as technology evolves or as you develop greater proficiency. Purpose thus requires both current situational awareness and forward-thinking strategy about your practice and technological advancement.
Key Skills:
- Identifying appropriate AI use cases in legal work
- Recognising when not to use AI (cultural matters, sensitive client situations)
- Balancing efficiency gains with professional obligations
- Understanding the business case for AI in different practice areas
Key Questions:
- Is AI appropriate for this task?
- What are my specific objectives?
- What are the risks versus benefits?
- How does AI serve my client’s interests?
In Practice:
Before using generative AI to review a licence agreement, a lawyer considers: Can the model identify standard warranty provisions? If the answer is ‘yes’, they then consider: Should the model determine the commercial reasonableness of the warranty provision? The answer to this question is likely ‘no’. Such a determination requires human judgment and client-specific knowledge, both of which may sit outside a language model.
💬 Prompting
What it means: Crafting clear, precise instructions that help generative AI systems understand requirements while providing necessary context and constraints.
Prompting represents not only your ability to engage with generative AI systems but also your understanding of the capabilities and limitations of these models. This competency extends far beyond simple prompt writing; it encompasses how you frame queries, provide context, guide responses, and iterate based on the outputs. The chapters that follow in Prompting start you on this journey.
Effective prompting requires understanding that generative AI has specific strengths and limitations. Just as you would when briefing a new colleague, you must provide sufficient background information, clear objectives, and appropriate constraints. However, unlike human communication, every assumption must be made explicit, every requirement clearly stated, and every constraint properly defined.
To develop this competency, start with Fundamentals to Prompting, where you will begin by designing prompts. Prompt design is the process of carefully crafting the phrasing, structure, and content of prompts to elicit high-quality, relevant, and coherent responses from a language model. You will then advance to Prompt Engineering, where you will take a more technical and systematic approach to prompting that utilises principles researched and tested by computer scientists. It goes beyond basic prompt formulation by employing advanced techniques to optimise a language model’s performance.
This competency also involves mastering the iterative nature of prompting. Rarely does a single prompt produce perfect results. Instead, effective practitioners engage in a structured review and dialogue. Start by analysing initial outputs, identifying any gaps or misunderstandings, and refining the prompt design to achieve better results (for more information, see Evaluating and Refining Outputs). This iterative process can mirror the legal drafting process, where multiple revisions refine and clarify meaning.
Furthermore, prompting includes understanding how to leverage generative AI’s capabilities optimally. This involves knowing when to request step-by-step reasoning, when to provide examples, when to ask for alternative perspectives, and how to structure complex requests to achieve the best results. It is about creating an interaction that maximises AI’s strengths while compensating for its weaknesses.
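The iterative loop described above can be sketched in a few lines of code. This is a minimal, self-contained illustration only: the `ask_model` function is a stub standing in for whatever AI platform you use, and the gap-checking rule is a hypothetical example of the human review step.

```python
# Sketch of the iterative prompting loop: generate, review against
# requirements, refine the prompt, and repeat. ask_model is a stub so the
# example runs without any external AI service.

def ask_model(prompt):
    # Stub: a real implementation would call your firm-approved AI platform.
    if "Corporations Act 2001 (Cth)" in prompt:
        return "Summary citing ss 180-184 of the Corporations Act 2001 (Cth)."
    return "General summary of directors' duties (no jurisdiction given)."

def refine(prompt, gaps):
    # Append the missing requirements identified during review.
    return prompt + " " + " ".join(gaps)

prompt = "Summarise the duties of directors."
for attempt in range(3):
    output = ask_model(prompt)
    # Review the output against your requirements (a human step in practice).
    gaps = []
    if "Corporations Act 2001 (Cth)" not in output:
        gaps.append("Answer under the Corporations Act 2001 (Cth), ss 180-184.")
    if not gaps:
        break
    prompt = refine(prompt, gaps)

print(output)
```

Notice that the first attempt fails the jurisdiction check, the prompt is refined, and the second attempt passes, mirroring the legal drafting cycle of review and revision.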
Key Skills:
- Designing effective, efficient, ethical and safe prompts
- Providing sufficient context
- Specifying output formats and requirements
- Iterating based on responses
- Building your practice on previous interactions
In Practice: Instead of asking “What are the director’s duties?”, an effective prompt would be: “Under the Corporations Act 2001 (Cth), summarise the duties of directors for a proprietary limited company, particularly focusing on sections 180-184.”
Common Mistakes:
- Vague requests lacking context
- Assuming AI knows your jurisdiction
- Single-attempt prompting without iterative refinement
- Overlooking the need for Australian spelling and terminology
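One way to avoid these mistakes is to assemble prompts from explicit components rather than writing them ad hoc. The sketch below is illustrative only (the function and argument names are this book's suggestion, not a standard API): it forces you to state the jurisdiction, context, and output format every time.

```python
# A minimal prompt-builder sketch that makes context, jurisdiction, and
# output format mandatory, addressing the common mistakes listed above.

def build_legal_prompt(task, jurisdiction, context, output_format):
    """Assemble a structured prompt for a generative AI system.

    All argument names are illustrative; adapt them to your own workflow.
    """
    return (
        f"Jurisdiction: {jurisdiction}\n"
        f"Context: {context}\n"
        f"Task: {task}\n"
        f"Output format: {output_format}\n"
        "Use Australian spelling and terminology throughout."
    )

prompt = build_legal_prompt(
    task="Summarise the duties of directors, focusing on sections 180-184.",
    jurisdiction="Australia - Corporations Act 2001 (Cth)",
    context="Advising a proprietary limited company.",
    output_format="A numbered summary with section references.",
)
print(prompt)
```

Because each field is a required parameter, a vague, jurisdiction-free request simply cannot be assembled.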
🔍 Evaluation
What it means: Reviewing model outputs with professional scepticism, verifying accuracy, identifying gaps, and ensuring completeness for legal purposes.
Evaluation embodies the critical thinking and professional scepticism essential to responsible AI use in legal practice. This competency transforms you from a passive consumer of AI outputs into an active quality controller who can distinguish between reliable assistance and potential professional hazard. It requires developing a systematic approach to assessing AI-generated content across multiple dimensions of legal quality. Following the exploration of prompting in this text, the chapter Evaluating and Refining Outputs explains how to evaluate an output from a language model and improve it through prompt refinement.
The competency operates at three interconnected levels. At the factual level, you verify the accuracy of specific claims, references, and legal propositions. This involves checking whether cases exist, statutes are correctly quoted, and legal principles are accurately stated. At the analytical level, you assess whether the model’s reasoning follows sound legal logic, whether conclusions flow from premises, and whether the analysis considers all relevant factors. At the contextual level, you evaluate whether the output appropriately addresses your specific legal context, client needs, and jurisdictional requirements.
Evaluation also requires understanding the various ways language models can err. Beyond apparent hallucinations, AI might blend jurisdictions, conflate similar but distinct legal concepts, oversimplify complex doctrines, be biased, or miss recent developments (due to a lack of data). It might produce plausible-sounding but fundamentally flawed analysis, or accurate but incomplete advice that misses crucial considerations. Developing an evaluation competency means training yourself to spot these subtle errors that could slip past casual review.
This competency further involves creating systematic verification processes tailored to different use cases. A quick legal definition may require only basic fact-checking, while AI-assisted legal advice demands a comprehensive review across all relevant dimensions. Check out the techniques for assessing AI-generated legal content in Evaluating and Refining Outputs.
In Practice: When AI generates a summary of unfair dismissal law, a lawyer:
- Verifies all Fair Work Act section numbers
- Checks case citations in AustLII
- Confirms the currency of Fair Work Commission decisions
- Identifies any missing elements (e.g., small business provisions)
- Assesses whether the reasoning follows Australian authorities
🛡️ Responsibility
What it means: Maintaining ethical obligations, protecting confidentiality, ensuring appropriate disclosure, and taking accountability for AI-assisted work.
Responsibility encompasses your ethical and professional obligations when integrating AI into legal practice. This competency ensures that technological innovation enhances rather than compromises professional standards. It requires navigating the complex intersection of traditional legal ethics and emerging technology while maintaining your fundamental duties to clients, courts, and the public.
At its foundation, this competency involves understanding that AI assistance never diminishes your professional accountability. Every AI-generated document, analysis, or recommendation ultimately bears your professional stamp of approval. This means maintaining the same standards of accuracy, diligence, and care regardless of whether work is self-generated or AI-assisted. The chapter Ethical and Professional Responsibility Considerations discusses the professional obligations related to using AI in legal practice.
The competency also addresses the unique challenges AI presents to traditional ethical frameworks. Client confidentiality takes on new dimensions when considering which information can be processed by AI systems. The duty of competence expands to include not just legal knowledge but also a sufficient understanding of AI tools to use them appropriately. The obligation to supervise extends to overseeing AI outputs with the same rigour applied to junior staff work. Billing practices must fairly reflect AI’s contribution without overcharging for efficiency gains or undervaluing human expertise.
Furthermore, Responsibility involves transparency and informed consent. This means developing clear communication about AI use, understanding when disclosure is necessary, and ensuring clients can make informed decisions about AI involvement in their matters. It also requires staying current with evolving regulatory guidance, professional conduct rules, and insurance requirements related to the use of AI. See Ethical and Professional Responsibility Considerations for more information.
Core Obligations:
- Maintaining competence and diligence (ASCR r 4.1.3)
- Protecting client confidentiality (ASCR r 9)
- Avoiding any compromise to integrity and professional independence (ASCR r 4.1.4)
- Transparent billing practices (ASCR r 12.2)
- Appropriate supervision of AI outputs
In Practice: A solicitor using AI for research:
- Never inputs client names or identifying details into public AI systems
- Reviews all AI-generated content before relying on it
- Documents AI use in file notes
- Discloses AI assistance in client cost agreements
- Takes full responsibility for the final work product
Essential Safeguards:
- Use firm-approved AI tools with appropriate data protection
- Anonymise information before AI processing
- Maintain human oversight at all decision points
- Document verification processes
- Ensure compliance with professional indemnity requirements
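The anonymisation safeguard above can be made systematic rather than relying on manual redaction. The following is a minimal sketch under stated assumptions: you maintain your own list of identifiers, and real matter files would need far more robust de-identification governed by firm policy.

```python
# Sketch: replace known client identifiers with neutral placeholders before
# sending text to an AI system, keeping a mapping so the placeholders can be
# restored in the output. Illustrative only; not production de-identification.

def anonymise(text, identifiers):
    """Swap known identifiers for placeholders; return text and the mapping."""
    mapping = {}
    for i, name in enumerate(identifiers, start=1):
        placeholder = f"[PARTY_{i}]"
        mapping[placeholder] = name
        text = text.replace(name, placeholder)
    return text, mapping

def restore(text, mapping):
    """Re-insert the original identifiers into AI-generated text."""
    for placeholder, name in mapping.items():
        text = text.replace(placeholder, name)
    return text

note = "Acme Pty Ltd seeks advice on terminating its contract with Jane Citizen."
safe_text, mapping = anonymise(note, ["Acme Pty Ltd", "Jane Citizen"])
print(safe_text)  # [PARTY_1] seeks advice on terminating its contract with [PARTY_2].
```

Keeping the mapping on your own systems means the AI platform never sees the identifiers, yet the final work product can still name the parties.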
Explore Crafting an AI Policy to help create an AI policy for your organisation, one that supports technological advancement while maintaining legal and ethical standards.
🌏 Context
What it means: Understanding how AI impacts access to justice, cultural considerations, the legal profession, and society while ensuring inclusive and fair implementation.
Context represents your awareness of AI’s broader impact on the legal ecosystem, society, and justice itself. This competency lifts your perspective beyond individual AI interactions to consider systemic implications, ensuring that your AI adoption contributes positively to the legal profession and the communities it serves. It requires developing a nuanced understanding of how technological choices ripple outward, affecting access to justice, professional development, market dynamics, and social equity.
This competency operates across multiple spheres. In the professional sphere, it involves understanding how AI adoption affects the legal workforce, from junior lawyer training pathways to the future of specialised expertise. It requires considering whether the use of AI enhances or diminishes the profession’s collective capability and how it shapes career trajectories. In the client sphere, Context means recognising how AI tools might differently impact various client populations (from sophisticated commercial clients who expect technological efficiency to vulnerable individuals whom digital barriers may further marginalise).
Context also encompasses cultural sensitivity, particularly crucial in Australian practice. This includes respecting Indigenous data sovereignty principles, understanding how AI systems might perpetuate or challenge colonial legal frameworks, and ensuring that efficiency gains do not override necessary cultural protocols. It involves recognising that some knowledge systems and legal traditions may be incompatible with AI processing, requiring human-centred approaches.
In Practice: A community legal centre considering AI adoption evaluates:
- Will AI tools help serve more clients effectively?
- Do the tools work well for culturally and linguistically diverse (CALD) communities?
- How will this affect volunteer lawyer training?
- What happens to clients without digital access?
- Are we perpetuating or addressing systemic biases?
Practical Implementation
The AI Fluency Workflow
Every AI interaction in legal practice should follow this structured approach:
1. ASSESS (Purpose) “Is AI suitable for this task?”
↓
2. PREPARE (Prompting) “How do I convey my requirements?”
↓
3. GENERATE – AI produces output
↓
4. VERIFY (Evaluation) “Does this meet legal standards?”
↓
5. COMPLY (Responsibility) “Have I met professional obligations?”
↓
6. CONSIDER (Context) “What are the broader implications?”
Example
Task: Reviewing a software licensing agreement, looking for unusual terms.
Purpose: An AI system could make a first pass at evaluating the licence agreement. However, the lawyer should choose an AI system suited to distinguishing standard from non-standard clauses.
Prompting: Instead of prompting “review this licence agreement”, prompt with more context, for example, “Review the attached software licence agreement. Identify any clauses that deviate from standard Australian software licensing terms. Flag any terms that might disadvantage the licensee. Format as a table with clause number, issue, and risk level.”
Evaluation: Verify all clause references, check risk assessments against the client’s business model.
Responsibility: Remove client identifiers, utilise the organisation’s approved AI platform with appropriate security, and document review process.
Context: Consider whether standardisation helps or hinders innovation in contracting.
Quick Reference Guide
The table below considers which competency is relevant at various stages and contexts of using an AI system or platform.
| Situation | Primary Competency | Secondary Competency |
|---|---|---|
| Deciding whether to use AI for a task | Purpose | Context |
| Writing prompts for legal research | Prompting | Purpose |
| Reviewing AI-drafted documents | Evaluation | Responsibility |
| Setting up firm AI policies | Responsibility | Context |
| Training junior lawyers on AI | All five | — |
Practical Exercise
Competency Mapping
Take a recent legal task you completed. Map which AI fluency competencies were (or should have been) applied:
- What was the task?
- Could AI have assisted? (Purpose)
- How would you have instructed the AI tool? (Prompting)
- What would you verify? (Evaluation)
- What safeguards were needed? (Responsibility)
- What were the broader implications? (Context)
Pause and Reflect
Before moving forward:
- Which competency do you feel most confident in? Least confident?
- What specific AI fluency skills would most benefit your current role?
- What barriers might prevent the development of AI fluency in your workplace?
- How would mastering AI fluency change your practice in 12 months?
- What ethical concerns about AI in law matter most to you?
Now that you understand how to think about AI collaboration through the lens of AI Fluency, the next chapter will help you make informed decisions about which AI tools to adopt for your legal practice. You will apply the Purpose competency to evaluate different AI platforms, then you will utilise your Prompting skills to assess their capabilities, and leverage all five competencies to make informed strategic adoption decisions.
Remember, AI fluency is not about becoming a technologist; it is about enhancing your legal expertise with powerful new tools while maintaining the highest professional standards. The competencies you have learned in this chapter will guide every AI interaction throughout your career, ensuring you work with AI in ways that are effective, efficient, ethical, and safe.
- Rick Dakan and Joseph Feller, 'Framework for AI Fluency', Artificial Intelligence at Ringling (Webpage, 13 January 2025) <https://ringling.libguides.com/ai/framework>.