"

17 Effective Legal Prompting

To leverage generative AI tools effectively in legal practice, it is crucial to understand and apply domain-specific knowledge. Effective legal prompting bridges the gap between general prompt-design principles and the specific demands of legal practice, ensuring that AI-generated outputs meet professional standards and provide practical value in legal contexts. By applying the guidelines below, you can craft prompts that guide a model toward producing high-quality, relevant, and trustworthy output across a range of tasks and scenarios. Effective prompting is a skill that requires ongoing practice, reflection, and adaptation as both the technology and the legal landscape continue to evolve.

Guidelines

The following guidelines can assist legal practitioners in optimising interactions with generative AI systems.

  1. Clarity and Specificity

Create clear, specific, and well-defined prompts to minimise ambiguity and guide the model toward producing targeted and relevant output. This may involve providing detailed background information, defining key terms and concepts, and specifying desired formats or structures.

  2. Legal Framing and Terminology

Incorporate appropriate legal terminology and frameworks (e.g., IRAC) into your prompts. This ensures that the output is grounded in applicable legal standards and reasoning. Additionally, specify the relevant jurisdiction, authority, or period to direct the output toward the appropriate legal context.

  3. Goal-Oriented and Actionable

Clearly articulate the intended goal or purpose in your prompts (e.g., drafting a specific clause, identifying relevant cases, or explaining a legal concept). Use action-oriented language and specific instructions to guide the AI in generating practical and usable output. For complex tasks, consider breaking the work into smaller, more manageable prompts that can be tackled sequentially or iteratively (a brief sketch illustrating guidelines 1 to 3 follows this list).

  4. Ethical and Professional Boundaries

Prompting should be conducted with ethical and professional obligations in mind, including the protection of client confidentiality and adherence to standards of competence and diligence. Be aware of the limitations of AI in providing legal advice and maintain appropriate human oversight to ensure accuracy and reliability.

  5. Quality Control and Human Oversight

Establish mechanisms for review, feedback, and iteration within your prompting process to ensure the quality and accuracy of the model’s output. Prompting is typically iterative: test, evaluate, and refine your prompts based on the quality and relevance of what the model returns. Critically assess each response and be ready to refine or clarify your prompts as needed to achieve the desired results.

  6. Continuous Learning and Adaptation

Stay informed about emerging generative AI models, as well as best practices and techniques for prompting them effectively. Experiment with different prompting methods and learn from the outcomes to continually hone your skills and workflows. Share knowledge and insights with colleagues to help establish shared standards and guidelines for the responsible use of artificial intelligence in legal practice.
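
To make the first three guidelines concrete, the sketch below shows one way a well-structured legal prompt might be assembled, with the role, jurisdiction, task, background, and output format each stated explicitly. It is a minimal, hypothetical illustration only: the helper function, field names, and example matter are not drawn from any particular tool.

```python
# Minimal sketch: assembling a clear, well-framed, goal-oriented legal prompt.
# The helper function, field names, and example content are illustrative only.

def build_prompt(role: str, jurisdiction: str, task: str,
                 context: str, output_format: str) -> str:
    """Combine the elements of an effective legal prompt into a single request."""
    return (
        f"You are {role}.\n"
        f"Jurisdiction: {jurisdiction}.\n"
        f"Task: {task}\n"
        f"Background: {context}\n"
        f"Present the output as {output_format}."
    )

prompt = build_prompt(
    role="an Australian commercial law solicitor",
    jurisdiction="New South Wales, current law",
    task="Assess the enforceability of the attached non-compete clause using the IRAC framework.",
    context="The clause binds a senior executive in the technology industry for 24 months.",
    output_format="a short memo with headings for Issue, Rule, Application and Conclusion",
)
print(prompt)
```

The same elements can, of course, be typed directly into a chat interface; the point is that each element is stated explicitly rather than left for the model to infer.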

Pitfalls of Prompt Creation

Even with these guidelines in mind, be aware of the following common pitfalls when creating prompts:

  1. Ambiguity: Vague or imprecise language can lead to responses that are irrelevant or incorrect. Always be specific in your requests.

✴️ Poor: “What does this judgment say?”

❇️ Improved: “Summarise the key reasoning of the judge in this contract law judgment.”

  2. Overloading: Asking too many questions or requesting excessive information in a single prompt can result in incomplete or confusing responses.

✴️ Poor: “Analyse this contract for legal issues, draft a response to the other party, and prepare a risk assessment report.”

❇️ Improved: Split the request into three separate prompts, one for each task (a brief sketch of this sequential approach appears after this list).

  3. Lack of Context: Failing to provide necessary context can result in generic or irrelevant responses.

✴️ Poor: “Is this clause enforceable?”

❇️ Improved: “Considering recent Australian case law on non-compete agreements, assess the enforceability of this clause for a senior executive in the tech industry.”

  4. Leading Questions: Phrasing that suggests a desired answer can skew the model’s response.

✴️ Poor: “Don’t you think this argument is flawed?”

❇️ Improved: “Evaluate the strengths and weaknesses of the argument regarding mens rea in this criminal case.”

  5. Failing to Specify Output Format: Without guidance, the model may provide information in a format that is not useful for your needs.

✴️ Poor: “Explain the immigration options for my client.”

❇️ Improved: “Create a table comparing three potential visa pathways for a skilled worker from India, highlighting eligibility requirements, processing times, and cost.”

  6. Assuming Legal Expertise: While models can offer general legal information, it is essential to remember that their responses must always be verified by a qualified practitioner.

✴️ Poor: “Provide legal advice on how to proceed with this case.”

❇️ Improved: “Based on general legal principles, what are potential strategies to consider in this type of case? Note that this will require further analysis and confirmation by a solicitor.”

  7. Neglecting Ethical Considerations: Failing to consider ethical guidelines when prompting may result in responses that do not align with the legal professional’s responsibilities. For example, refer to Australian Solicitors’ Conduct Rules 4, 5, 9, 17, 19, and 37.
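
The overloading pitfall (and guideline 3) can also be managed programmatically by chaining narrowly scoped prompts, each feeding its result into the next. The following is a minimal sketch only: it assumes the OpenAI Python SDK (openai version 1 or later) and an API key in the OPENAI_API_KEY environment variable, and the model name, helper function, and prompt wording are illustrative rather than recommendations of any particular tool. As the note that follows emphasises, confidential information should not be included in anything sent to a public generative AI tool.

```python
# Minimal sketch: splitting an overloaded request into sequential prompts.
# Assumes the OpenAI Python SDK (openai >= 1.0) and an OPENAI_API_KEY
# environment variable; model name and prompt wording are illustrative only.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Send a single, narrowly scoped prompt and return the model's reply."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

contract_text = "..."  # the contract under review, with confidential details removed

# One task per prompt, each step building on the previous answer.
issues = ask(f"Identify the key legal issues in this contract:\n{contract_text}")
risks = ask(f"Prepare a brief risk assessment of these issues:\n{issues}")
letter = ask(f"Draft a measured response to the other party addressing these risks:\n{risks}")

print(letter)
```

Because each call handles a single task, an incomplete or confused answer can be corrected before the next step, mirroring the review-and-refine loop described in guideline 5.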

Confidential Information

Be sure not to add confidential information to public generative AI tools (including identifying information, medical results, financial accounts, proprietary information and login details). For more information, see the Wall Street Journal article ‘The Five Things You Shouldn’t Tell ChatGPT’. Understanding the privacy policies and data handling practices of the AI tools you are using is also crucial. Always err on the side of caution and consult your organisation’s IT and compliance departments when in doubt.

License


GenAI for Legal Practice Copyright © 2025 by Swinburne University of Technology is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.