"

22 Build a Personal Prompt & Pattern Library

Throughout this text, you have explored the fundamentals of prompt design, advanced engineering techniques, and legal-specific applications of generative AI. You have learned about the AI Fluency Framework for Legal Practice's five core competencies (Purpose, Prompting, Evaluation, Responsibility, and Context) and how they work together to ensure responsible and effective AI use in legal practice. Now comes the essential step of transforming this knowledge into a practical, personalised toolkit that embodies the framework in your daily practice. Building a prompt library is more than collecting useful templates; it is about creating a systematic approach to AI integration that maintains the standards of legal practice while maximising efficiency gains.

Building a Library

The activity below bridges the gap between understanding prompting principles and implementing them consistently in your work. By developing your own curated collection of prompts and patterns, you create a foundation for responsible AI use that evolves with your practice and demonstrates mastery of the five competencies. However, this is not a one-time exercise but the beginning of an ongoing process of refinement and improvement that will become integral to your professional development.

Objective

Create a personalised collection of prompt templates and patterns that embody the AI Fluency Framework, ensuring each template supports effective, efficient, ethical, and safe AI collaboration in your legal practice.

Phase 1: Legal Task Inventory

Step 1: Identify Your Tasks

List 5–10 recurring tasks in your practice where you think AI assistance could be valuable.

Step 2: AI Task Assessment

Categorise each task by AI interaction mode:

  • AI as Assistant (Automation): Defined tasks with clear parameters
  • AI as Collaborator (Augmentation): Partnership in thinking and execution
  • AI as Representative (Agency): Independent work with defined parameters

Step 3: Apply the Purpose Competency

For each task, apply the Purpose evaluation questions:

  • Is AI appropriate for this legal task?
  • What are my specific objectives?
  • What are the risks versus benefits?
  • How does AI serve my client’s interests?

Step 4: Apply Ethical Filters

For each identified task, complete this checklist:

  • Does this task involve confidential client information?
  • What level of human oversight is required?
  • Are there professional conduct rule implications?
  • What verification steps are necessary?
  • Could AI errors in this task cause client harm?

Step 5: Assign a Risk Level

Categorise each task by risk level:

  • Low Risk
  • Medium Risk
  • High Risk

Example Task Analysis

Task: Contract clause analysis

AI Interaction Mode: AI as Collaborator (Augmentation)

Purpose Assessment:
– AI Appropriate: Yes – can identify patterns and standard provisions
– Objectives: Speed initial review, ensure comprehensive coverage
– Risk vs Benefit: Medium risk of missing nuances vs significant time savings
– Client Benefit: Faster, more thorough initial analysis

Ethical Considerations:
– Requires verification against current law
– Must maintain independent professional judgment
– Need human assessment of commercial reasonableness
– Oversight Required: Senior lawyer review before client advice

Risk Level: Medium

Phase 2: Template Development

Step 1: Creation

For each task, develop prompt templates that consistently produce accurate, useful results. Each template should include placeholders or notes for the variable information you will need to add each time. These patterns should demonstrate the Prompting competency through legally precise instructions, a clear statement of jurisdictional context, and well-defined output specifications. Explore some of the examples in Practical Applications and Patterns.
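
For instance, a clause-review template might be no more than a short block of text with clearly marked placeholders. The sketch below uses Python purely for illustration; the wording, placeholder names and jurisdiction are examples only, not a recommended form of words.

# Illustrative prompt template with placeholders for the details that change each time.
# The wording, placeholder names and jurisdiction below are examples only.
CLAUSE_REVIEW_TEMPLATE = """You are assisting an Australian lawyer with a first-pass contract review.
Jurisdiction: {jurisdiction}
Document type: {document_type}
Task: Identify and summarise every clause dealing with {topic}, quoting the clause number and text.
Output: A numbered list with one entry per clause, followed by any apparent gaps or ambiguities.
Do not advise on commercial reasonableness; flag those questions for human review."""

# Fill the placeholders for a specific matter before using the prompt.
prompt = CLAUSE_REVIEW_TEMPLATE.format(
    jurisdiction="Victoria, Australia",
    document_type="software licensing agreement",
    topic="termination and renewal",
)
print(prompt)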

Step 2: Documentation

Document each template, noting:

  • Prompt used
  • Quality of output (1-5 scale)
  • Specific prompting techniques that worked well
  • Verification time required
  • Any refinements needed
  • The context that made it successful
  • The AI interaction modes for which the approach works best

Example Success Documentation:

Date: [DATE]

Prompt: [TEMPLATE]

Task: Contract term extraction

Input: 45-page software licensing agreement

Output Quality: 4/5 (missed one minor term)

Verification Time: 15 minutes

Refinements: Added specific instructions about termination clauses

Success Factors: Clear jurisdiction specification, structured output format
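
If you prefer a structured, searchable record rather than free text, the same fields can be kept in a simple machine-readable log. The Python sketch below is one illustrative way to do this; the file name and field names simply mirror the documentation list above and can be adapted to your practice.

import json
from datetime import date
from pathlib import Path

# Illustrative log file name; one JSON record is appended per line.
LOG_FILE = Path("prompt_library_log.jsonl")

def log_template_use(task, prompt_template, output_quality, verification_minutes,
                     refinements, success_factors):
    """Append one documentation record, mirroring the fields in the example above."""
    record = {
        "date": date.today().isoformat(),
        "task": task,
        "prompt_template": prompt_template,
        "output_quality_1_to_5": output_quality,
        "verification_minutes": verification_minutes,
        "refinements": refinements,
        "success_factors": success_factors,
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_template_use(
    task="Contract term extraction",
    prompt_template="[TEMPLATE]",
    output_quality=4,
    verification_minutes=15,
    refinements="Added specific instructions about termination clauses",
    success_factors="Clear jurisdiction specification, structured output format",
)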

Phase 3: Evaluation and Quality Control Systems

Step 1: Create Evaluation Criteria

For each template, establish specific quality metrics. Avoid vague descriptions such as “good performance”; instead, name the outcome you need, for example “accurate sentiment classification”.

The following questions, adapted from Anthropic, can help you develop these criteria.[1]

  • Task Fidelity: How well does the model need to perform on the task? You may also need to consider edge case handling, such as how well the model performs on rare or challenging inputs.
  • Consistency: How similar do the model’s responses need to be for similar types of input? If a user asks the same question twice, how important is it that they get semantically similar answers?
  • Relevance and Coherence: How well does the model directly address the question or instructions? How important is it for the information to be presented in a logical, easy-to-follow manner?
  • Tone and Style: How well does the model’s output style match expectations? How appropriate is its language for the target audience?
  • Privacy Preservation: How well does the model handle personal or sensitive information? Can it follow instructions not to use or share specific details?
  • Context Utilisation: How effectively does the model use the provided context? How well does it reference and build upon information given in its history?
  • Latency: Is the model’s response time acceptable for your workflow?

Applying the AI Fluency Framework, consider the following metrics; a simple way to record scores against criteria like these is sketched after the lists below.

Critical Assessment Framework:

  • Factual Level: Accuracy of claims, citations, legal propositions
  • Analytical Level: Sound legal logic, proper reasoning flow
  • Contextual Level: Appropriate addressing of specific legal context and client needs

Quality Metrics by AI Fluency Competency:

  • Communication Quality: Clarity of instructions, appropriate context provided
  • Evaluation Effectiveness: Accuracy rate, completeness of verification
  • Responsibility Compliance: Ethical safeguards maintained, accountability clear
  • Purpose Alignment: Client interests served, appropriate AI deployment
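
One simple way to apply these criteria is to score each one for a given template run and keep a weighted average alongside your documentation. The Python sketch below is illustrative only; the criteria names and weights are assumptions to adapt to your own evaluation needs.

# Illustrative rubric: score each criterion from 1 to 5 for a given template run.
# The criteria and weights are assumptions; adjust them to your own priorities.
CRITERIA_WEIGHTS = {
    "task_fidelity": 0.3,
    "consistency": 0.1,
    "relevance_and_coherence": 0.2,
    "tone_and_style": 0.1,
    "privacy_preservation": 0.2,
    "context_utilisation": 0.1,
}

def weighted_score(scores):
    """Return a weighted average (out of 5) over the criteria actually scored."""
    total_weight = sum(CRITERIA_WEIGHTS[name] for name in scores)
    return sum(value * CRITERIA_WEIGHTS[name] for name, value in scores.items()) / total_weight

example_scores = {
    "task_fidelity": 4,
    "consistency": 5,
    "relevance_and_coherence": 4,
    "privacy_preservation": 5,
}
print(f"Weighted score: {weighted_score(example_scores):.2f} / 5")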

Step 2: Create a Reference System

Create a structured library with these components:

LIBRARY STRUCTURE:
├── Templates/
│   ├── Research/
│   ├── Drafting/
│   ├── Analysis/
│   └── Communication/
├── Successful Examples/
│   ├── Input-Output Pairs/
│   ├── Before-After Comparisons/
│   └── Best Practice Cases/
├── Refinement Log/
│   ├── Common Issues/
│   ├── Improvement Strategies/
│   └── Version History/
└── Compliance Documentation/
    ├── Ethics Checkpoints/
    ├── Verification Procedures/
    └── Risk Assessments/
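
If you keep the library as plain folders on a drive, the short Python sketch below is one way to create the structure above in a single step. The root folder name is an assumption; a document management system or shared workspace would serve equally well.

from pathlib import Path

# Mirror of the library structure above; the root folder name is illustrative.
LIBRARY_ROOT = Path("prompt_library")
SUBFOLDERS = [
    "Templates/Research", "Templates/Drafting", "Templates/Analysis", "Templates/Communication",
    "Successful Examples/Input-Output Pairs", "Successful Examples/Before-After Comparisons",
    "Successful Examples/Best Practice Cases",
    "Refinement Log/Common Issues", "Refinement Log/Improvement Strategies", "Refinement Log/Version History",
    "Compliance Documentation/Ethics Checkpoints", "Compliance Documentation/Verification Procedures",
    "Compliance Documentation/Risk Assessments",
]

for folder in SUBFOLDERS:
    (LIBRARY_ROOT / folder).mkdir(parents=True, exist_ok=True)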

 

This structured approach ensures that your prompt library not only improves efficiency but also maintains the highest standards of legal practice and professional responsibility. Remember: your prompt library is a living document that should evolve in tandem with your practice, the law, and advancements in AI capabilities.


  1. Anthropic, 'Define your success criteria', Developer Guide (Webpage, 2025) <https://docs.anthropic.com/en/docs/test-and-evaluate/define-success>.

License


GenAI for Legal Practice Copyright © 2025 by Swinburne University of Technology is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.