25 Crafting an AI Policy
As generative artificial intelligence (AI) becomes increasingly integrated into legal practice, developing a comprehensive AI policy is essential. An effective policy balances innovation with risk management, allowing your organisation to harness the transformative potential of AI technologies while maintaining standards of professional responsibility, client confidentiality, and ethical practice. Rather than offering a one-size-fits-all solution, this section provides a systematic framework for developing an AI policy that reflects your organisation’s unique practice areas, risk tolerance, and strategic objectives, built through structured reflection, stakeholder engagement, and evidence-based decision-making. Although having a policy in place is a valuable component, it is rarely sufficient on its own: any organisation should strive for a broader governance framework.[1]
Creating a Policy
This practical guide helps your organisation develop an AI policy tailored to your specific practice needs and risk profile. Rather than providing a generic adoption policy, this template offers structured questions, action items, and development tools that foster meaningful discussions among stakeholders and result in a customised governance framework.
Working through this AI policy development activity will take approximately five to seven hours and is best completed collaboratively with key stakeholders over multiple sessions.
Do
The activity below can be completed over three phases.
Phase 1: Preparation (30-45 minutes) Start by reviewing your organisation’s current AI usage, existing policies, and regulatory obligations. Document any AI tools currently in use and identify immediate gaps or concerns.
Phase 2: Section-by-Section Development (3-4 hours) Work systematically through each of the 12 policy sections, using the provided action items and discussion questions as a guide. Assign specific sections to team members based on their expertise, while ensuring that all key decisions are reviewed collectively. Take detailed notes on your discussions and decisions, as these will inform your policy documentation and future reviews.
Phase 3: Integration and Review (1-2 hours) Consolidate your work into a coherent policy document, ensuring consistency across sections and alignment with your organisation’s broader governance framework. Schedule a final review session to address any gaps or conflicts identified during the integration process.
Remember that this is an iterative process. The initial policy should be viewed as a foundation that will require regular updates as AI technologies, regulatory requirements, and your organisation’s needs evolve. Plan to review and revise your policy at least annually, or more frequently during periods of rapid technological or regulatory change.
1. PURPOSE AND OBJECTIVES
Action Items:
▢ Define 2-3 primary objectives for your AI policy (e.g., client protection, efficiency, risk management).
▢ Identify specific challenges your organisation faces regarding AI adoption.
▢ Link AI policy objectives to your organisation’s existing values and strategic goals.
Questions to Address:
- What specific benefits do you aim to achieve through AI adoption?
- What risks are you most concerned about mitigating?
- How will this policy support your organisation’s overall strategic direction?
2. SCOPE AND COVERAGE
Action Items:
▢ List all categories of AI tools currently used or being considered.
▢ Define which personnel the policy will apply to.
▢ Determine whether/how the policy will address personal AI tools used for work.
Questions to Address:
- Will your policy cover only firm-approved tools or also personal AI accounts?
- Which practice areas have the greatest need for AI guidance?
- What types of data and matters will be subject to special restrictions?
3. GOVERNANCE STRUCTURE
Action Items:
▢ Identify who will be responsible for AI oversight.
▢ Design an approval process for new AI tools.
▢ Establish reporting mechanisms for AI-related concerns.
Questions to Address:
- Will you create a dedicated AI committee or assign responsibility to existing leadership?
- Who will evaluate and approve new AI tools?
- How will you ensure input from across the organisation in AI governance?
- What authority will these individuals or groups have?
4. TOOL SELECTION AND APPROVAL
Action Items:
▢ Create criteria for evaluating AI tools (security, privacy, effectiveness).
▢ Develop a process for testing and approving new AI tools.
▢ Establish a method for documenting approved tools and their permitted uses.
Questions to Address:
- What security and privacy standards must AI vendors meet?
- What testing process will new tools undergo before they are approved?
- How will you communicate which tools are approved and for what purposes?
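One way to make the third action item concrete is a structured register of approved tools and their permitted uses. The sketch below is purely illustrative: the tool name, vendor, and every field are hypothetical assumptions to adapt to your own approval process, not a prescribed standard.

```python
from dataclasses import dataclass

# A minimal sketch of an approved-tools register.
# All names, fields, and values are illustrative assumptions.
@dataclass
class ApprovedTool:
    name: str
    vendor: str
    approved_uses: list[str]      # e.g. "first drafts of internal memos"
    prohibited_uses: list[str]    # e.g. "uploading confidential client documents"
    data_handling_reviewed: bool  # vendor contract checked for training/retention terms
    approval_date: str            # ISO date of sign-off
    review_due: str               # next scheduled re-evaluation

REGISTER: list[ApprovedTool] = [
    ApprovedTool(
        name="ExampleDraftAssist",  # hypothetical tool
        vendor="Example Vendor Pty Ltd",
        approved_uses=["summarising public case law", "first drafts of internal memos"],
        prohibited_uses=["uploading confidential client documents"],
        data_handling_reviewed=True,
        approval_date="2025-07-01",
        review_due="2026-07-01",
    ),
]

def is_use_approved(tool_name: str, use: str) -> bool:
    """Look up whether a named tool is approved for a given use."""
    for tool in REGISTER:
        if tool.name == tool_name:
            return use in tool.approved_uses
    return False  # unlisted tools are not approved
```

A register like this doubles as the communication mechanism for the last question above: staff can consult a single authoritative list rather than relying on informal knowledge of what is permitted.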
5. DATA PROTECTION AND CONFIDENTIALITY
Action Items:
▢ Draft specific requirements for AI vendors regarding data handling and management.
▢ Create protocols for sanitising confidential information before AI processing (if applicable).
▢ Develop monitoring procedures for data protection compliance.
Questions to Address:
- What contractual terms will you require from AI vendors regarding data use?
- How will you handle especially sensitive client information?
- What safeguards will prevent confidential information from being used for model training?
6. CLIENT DISCLOSURE AND CONSENT
Action Items:
▢ Draft language for client engagement letters regarding the use of AI.
▢ Develop a tiered approach for disclosing different levels of AI use.
▢ Design a process for documenting client AI preferences.
Questions to Address:
- At what level of AI use should explicit notification be provided to the client?
- How will you explain AI benefits and limitations to clients?
- How will you handle clients who restrict AI use on their matters?
7. ETHICAL USE GUIDELINES
Action Items:
▢ Identify specific ethical risks.
▢ Develop protocols for addressing bias or inaccuracy in AI outputs.
▢ Define boundaries for appropriate vs. inappropriate AI applications.
Questions to Address:
- What ethical rules are most relevant to AI use?
- What types of matters are too sensitive for AI assistance?
- How will you ensure AI use aligns with lawyers’ ethical obligations?
8. VERIFICATION AND QUALITY CONTROL
Action Items:
▢ Create verification protocols for different types of AI outputs.
▢ Develop documentation requirements for AI verification.
▢ Design guidance for creating and refining effective prompts.
Questions to Address:
- What verification steps are required for different risk levels of AI use?
- How will you document that AI outputs have been adequately verified?
- What prompt engineering techniques will you standardise?
9. TRAINING REQUIREMENTS
Action Items:
▢ Define minimum AI competency standards for different roles.
▢ Design basic and advanced training programs.
▢ Create a schedule for ongoing AI education and training.
Questions to Address:
- What skills must all staff develop, and what additional skills are required of specialised AI users?
- How will you assess AI competency?
- How frequently will training updates be required?
10. DOCUMENTATION AND RECORD-KEEPING
Action Items:
▢ Create templates for documenting significant AI use.
▢ Establish retention policies for prompts and outputs.
▢ Develop audit procedures for AI use compliance.
Questions to Address:
- What AI interactions need to be preserved in the client file?
- How will you track AI contributions to a work product?
- What documentation will demonstrate appropriate oversight?
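The template action item above could be answered with a structured record for each significant AI use, rendered as a plain-text note for the client file. Every field name below is an assumption to adapt to your own file-management practices, not a prescribed format.

```python
from dataclasses import dataclass, asdict

# Illustrative record template for significant AI use on a matter.
# Field names are assumptions; adapt them to your own systems.
@dataclass
class AIUseRecord:
    matter_id: str           # internal matter/file reference
    tool: str                # which approved tool was used
    purpose: str             # what the AI was asked to do
    prompt_summary: str      # short description (or reference) of the prompts used
    output_disposition: str  # e.g. "verified against primary sources and incorporated"
    verified_by: str         # person responsible for checking the output
    date: str                # ISO date of use

def to_file_note(record: AIUseRecord) -> str:
    """Render the record as a plain-text note suitable for the client file."""
    return "\n".join(f"{k}: {v}" for k, v in asdict(record).items())
```

Because each entry names the verifier and the output's disposition, a set of such notes also provides the evidence of appropriate oversight that the last question above asks about.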
11. INCIDENT RESPONSE
Action Items:
▢ Define what constitutes an AI-related ‘incident’.
▢ Create a step-by-step response protocol.
▢ Assign responsibility for incident management.
Questions to Address:
- What AI errors would require client notification?
- What immediate steps should be taken when AI produces harmful content?
- How will incidents be used to improve AI practices?
12. IMPLEMENTATION AND COMPLIANCE
Action Items:
▢ Create a phased implementation timeline.
▢ Develop compliance monitoring procedures.
▢ Establish consequences for policy violations.
Questions to Address:
- How will you roll out the policy?
- How will you ensure ongoing compliance?
- How will the policy adapt to the rapidly evolving capabilities of AI?
Next Steps
- Form an AI policy working group.
- Complete stakeholder interviews.
- Draft initial policy sections.
- Conduct a legal ethics review.
- Create an implementation timeline.
- Develop training materials.
- Establish a review schedule.
- See MinterEllison's guide to creating an AI Governance Framework: MinterEllison, 'An AI policy is not AI governance' (Webpage, 25 July 2025) <https://www.minterellison.com/ai-illuminate>.