24 Ethical and Professional Responsibility Considerations
Ethical Considerations
Integrating generative artificial intelligence (AI) into legal practice raises significant ethical considerations that must be carefully weighed against its potential benefits. Justice Needham of the Federal Court of Australia has summarised these issues:[1]
There are a number of ethical issues with the way in which open datasets scrape information from a wide range of sources. These include plagiarism, copyright breach, and confidentiality. A document sought to be summarised or analysed by an AI programme can become part of the dataset for that AI tool, raising very real privacy and privilege concerns at the very least. ChatGPT’s (and I am sure others) terms of use include ownership of any questions and documents fed into it. That does not sit well with legal professional obligations, and raises queries as to whether privilege may thereby be waived.
Justice Needham argues that the risk of hallucinations (the fabrication of information) is mitigated by ‘closed data-set programs’ like Lexis Plus AI and Westlaw Precision.[2] However, even those platforms have themselves been shown to hallucinate.[3]
What went wrong?
Since the introduction of ChatGPT in 2022, many cases have emerged that demonstrate the pitfalls of relying on generative AI in an ‘unthinking’ way.[4]
In Luck v Secretary, Services Australia [2025] FCAFC 26, the litigant made an application to the court that contained a fictitious case. The Full Court redacted the name of the alleged case, saying:[5]
We apprehend that the reference may be a product of hallucination by a large language model. We have therefore redacted the case name and citation so that the false information is not propagated further by artificial intelligence systems having access to these reasons.
In LJY v Occupational Therapy Board of Australia [2025] QCAT 96, the applicant cited a case that did not exist, under the medium neutral citation of a different case. The Tribunal noted the effect of doing so:[6]
…including non-existent information in submissions or other material filed in the Tribunal weakens their arguments. It raises issues about whether their submission can be considered as accurate and reliable. It may cause the Tribunal to be less trusting of other submissions which they make. It wastes the time for Tribunal members in checking and addressing these hallucinations. It causes a significant waste of public resources.
By contrast, in Valu v Minister for Immigration and Multicultural Affairs (No 2) [2025] FedCFamC2G 95, it was a legal representative for the applicant who made submissions containing unverifiable cases, including fabricated quotes attributed to the Administrative Appeals Tribunal (AAT). The practitioner was referred to the Legal Services Commissioner.
In Murray on behalf of the Wamba Wemba Native Title Claim Group v State of Victoria [2025] FCA 731, documents submitted to the court contained footnotes referencing anthropological and historical reports and papers that did not exist. The junior solicitor deposed that she had prepared the footnotes using Google Scholar.[7] The principal solicitor of the firm deposed as to the supervision provided to the junior solicitor and the checking of her work:[8]
He described that work as having been performed collaboratively between team members, but the substance of his evidence was that he was not aware that anyone had checked the junior solicitor’s work. Mr Briggs accepted that it was an error on his part to allow collaborative work to be performed remotely and he described the failure to ensure that anyone checked the junior solicitor’s work as “an oversight error”. He expressed his regret for the inconvenience to the parties and to the Court.
The result of this case was that the solicitors for the applicants personally paid the respondents’ costs on an indemnity basis.
Tracking Imaginary Case Law
Damien Charlotin maintains a database tracking legal decisions in cases where generative AI produced hallucinated content. At the time of writing, it identifies 173 such cases, 17 of them from Australia. The list is available at: https://www.damiencharlotin.com/hallucinations/
The Fallout
Each jurisdiction has taken a different approach to generative AI. The NSW courts have taken the strongest position, prohibiting its use in the preparation of evidence and expert reports.[9] However, leave to use generative AI may be sought in “exceptional circumstances”.[10] The NSW Guidelines for Judges also state that “Gen AI should not be used for editing or proofing draft judgments, and no part of a draft judgment should be submitted to a Gen AI program.”[11] Victoria issued Guidelines rather than a Practice Note, urging “particular caution” when using generative AI tools.[12] Queensland likewise issued guidelines, The Use of Generative Artificial Intelligence (AI) Guidelines for Responsible Use by Non-Lawyers.
Justice Needham prepared a useful table summarising the procedures currently in force (including Practice Notes and Guidelines).[13]
Area of focus | Federal Court of Australia: Notice to the Profession – 28 March 2025 | NSW: Supreme Court PN SC Gen 23 – Use of Generative Artificial Intelligence (Gen AI) (NSWLEC and NSWDC equivalents) | Victoria: VSC Guidelines for Litigants – responsible use of AI in litigation (Victoria County Court equivalent) | Queensland: The Use of Generative Artificial Intelligence (AI) Guidelines for Responsible Use by Non-Lawyers |
---|---|---|---|---|
Risks and limitations | The Federal Court is keen to ensure that any Guideline or Practice Note appropriately balances the interests of the administration of justice with the responsible use of emergent technologies in a way that fairly and efficiently contributes to the work of the Court. | Hallucinations; misinformation; biased or inaccurate output; search requests may be automatically added to the database; confidentiality / privacy / LPP; copyright | Be aware of how the tools work; privacy and confidentiality may not be guaranteed / secure: par 1–2. Use of AI programs must not indirectly mislead other participants in the proceedings (incl the Court); use of AI subject to obligation of candour to the Court and those of the CPA: par 4. Generative AI more likely to produce inaccurate information; it does not relieve the responsible legal practitioner of the need to exercise judgment and professional skill in reviewing the final product to be provided to the Court: par 8. Check that info is not out of date; incomplete; inaccurate or incorrect; inapplicable to the jurisdiction; biased: par 8 | Gen AI are not actually intelligent in the ordinary human sense and are unable to reliably answer questions that require a nuanced understanding of language content: par 1. Gen AI chatbots predict the most likely combination of words, not necessarily the most correct or accurate answer; limited training on Australian law and currency; Gen AI responses may contain incorrect, opinionated, misleading or biased statements presented as fact; confidentiality, suppression, privacy: par 2. Ethical issues, including biases in training data, copyright and plagiarism, acknowledgment of sources: par 4 |
Disclosure | In the meantime, the Court expects that if legal practitioners and litigants conducting their own proceedings make use of Generative Artificial Intelligence, they do so in a responsible way consistent with their existing obligations to the Court and to other parties, and that they make disclosure of such use if required to do so by a Judge or Registrar of the Court. | Mandatory for use in preparatory steps for evidence and expert reports | No mandatory disclosure, but parties should disclose the use of AI to each other and the Court if necessary (e.g. where it is necessary to enable a proper understanding of the provenance of a document or the weight that can be placed upon its contents): par 3. Self-represented litigants and witnesses are encouraged to identify the use of generative AI by including a statement in the document to be filed: par 5 | N/A |
Evidence and expert reports | Parties will continue to be responsible for material that is tendered to the Court | Must not be used to generate the contents of affidavits, witness statements, character references etc; preparatory steps are OK: par 10. These documents must contain a disclosure: par 13. Leave may be sought in “exceptional circumstances”: par 15. Must not be used to draft or prepare contents of an expert report without prior leave of the Court: par 20. Leave must be sought to use Gen AI to prepare expert reports in professional negligence claims at the first directions: par 23 | Particular caution if using generative AI to prepare affidavits / evidence and expert reports; the witness / expert should ensure documents are finalised in a manner that reflects that person’s own knowledge and words: par 10 | N/A |
Confidentiality and LPP | Parties will continue to be responsible for material that is tendered to the Court | Information subject to NPP/suppression, Harman undertaking, subpoena material etc must not be entered into any Gen AI program unless satisfied that the information will remain within the controlled environment of the technological platform and is confidential, used only in connection with the proceeding, and not used to train any program: par 9A | Be aware of how the tools work; privacy and confidentiality may not be guaranteed / secure: par 1–2 | Do not enter any private, confidential, suppressed or legally privileged information into a Generative AI chatbot: par 2 |
Permitted uses | Parties will continue to be responsible for material that is tendered to the Court | Generate chronologies, indexes and witness lists; preparation of briefs or draft Crown Case Statements; summarise or review documents and transcripts; prepare written submissions or summaries of arguments: par 9B. Where Gen AI has been used in written submissions or summaries, the author must verify all citations and authorities: par 16 | AI that can search and identify relevant matters in a closed category of information is helpful, e.g. Technology Assisted Review which uses machine learning for large-scale doc review: par 6. Specialised legally focused AI tools more useful and reliable: par 7 | They may help you by identifying and explaining laws and legal principles that might be relevant to your situation; prepare basic legal documents, e.g. organise the facts into a clearer structure or suggest suitable headings; help with formatting and suggestions on grammar, tone, vocabulary and writing style |
Professional Obligations Related to Using AI in Legal Practice
In Australian legal practice, integrating generative AI necessitates strict adherence to established ethical and professional standards. The Victorian Legal Services Board + Commissioner (VLSB+C), the Law Society of NSW, and the Legal Practice Board of Western Australia jointly issued the following statement outlining the professional obligations that apply when using AI in legal practice.[14]
When using AI and other legal technology, lawyers must continue to maintain high ethical standards and fulfil their professional obligations under the Legal Profession Uniform Law (Uniform Law), and either the Legal Profession Uniform Law Australian Solicitors’ Conduct Rules 2015 (ASCR), or the Legal Profession Uniform Conduct (Barristers) Rules 2015 (BR), including:
Maintaining client confidentiality (ASCR r 9.1; BR r 114). Lawyers cannot safely enter confidential, sensitive or privileged client information into public AI chatbots/copilots (like ChatGPT), or any other public tools. If lawyers use commercial AI tools with any client information, they need to carefully review contractual terms to ensure the information will be kept secure.
Providing independent advice (ASCR r 4.1.4; BR rr 3(b), 4(e) and 42). AI chatbots/copilots and other LLM-based tools cannot reason, understand, or advise. Lawyers are responsible for exercising their own forensic judgement when advising clients, and cannot rely on the output of an AI tool as a substitute for their own assessment and analysis of a client’s needs and circumstances.
Being honest and delivering legal services competently and diligently (ASCR rr 4.1.2 and 4.1.3; BR rr 4(c)–(d), 8(a) and 35). AI chatbots/copilots, research assistants, and summarisers cannot be relied on as a substitute for legal knowledge, experience or expertise. No tool based on current LLMs can be free of ‘hallucinations’ (i.e. responses which are fluent and convincing, but inaccurate), and lawyers using AI to prepare documents must be able and qualified to personally verify the information they contain, and must actually ensure that their contents are accurate, and not likely to mislead their client, the court, or another party.
Charging costs that are fair, reasonable and proportionate (Uniform Law ss 172–173; ASCR r 12.2). Lawyers using AI to support their work should ensure that the time and work items they bill clients for accurately represent the legal work done by law practice staff for their client. Lawyers who use AI should ensure that it does not unnecessarily increase costs for their client above traditional methods (e.g. because of additional time spent verifying or correcting its output).
The regulators went on to say that lawyers who are using AI in their practice should also consider the following (a sketch illustrating the first recommendation appears after this list):
Implementing clear, risk-based policies to minimise data and security breaches, and set out what AI tools they have decided to use in their practice, who can use those tools, for what purposes, and with what information. These policies should also set out how they will continuously and actively supervise the use of AI tools by junior and support staff, and how documents containing AI-generated content will be reviewed for accuracy, and verified before they are settled. We recommend that lawyers make these policies available to clients upon request to increase transparency (see below).
Limiting the use of AI tools in their practice to tasks which are lower-risk and easier to verify (e.g. drafting a polite email or suggesting how to structure an argument) and prohibiting its use for tasks which are higher-risk and which should be independently verified (e.g. translating advice into another language, analysing an unfamiliar legal concept or executive decision-making). Generative AI tools can be biased, and cannot understand human psychology or other external complicating factors that may be relevant. There may also be ethical concerns with the training data used for an AI model, including intellectual property and ownership rights, or the inclusion of highly offensive or unlawful text or imagery (including child exploitation material) that may make them inappropriate to use in your work.
Being transparent about their use of AI, and properly recording and disclosing to their clients (and where necessary or appropriate, the court and fellow practitioners) when and how they have used AI in a matter and how the use of AI is reflected in costs, if requested by the client (see above, Charging costs that are fair, reasonable and proportionate). Lawyers should carefully consider any ethical concerns about the use of AI raised by their clients, and address them proactively.
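The first recommendation above asks practices to record which AI tools are approved, who may use them, for what purposes, and with what information. Purely as a sketch of what such a register could look like in practice, the short Python example below models it as data plus a single check before use. Every tool name, role, and data category in it is a hypothetical illustration, not a regulator requirement.

```python
# Illustrative sketch only: a hypothetical approved-tool register recording
# which AI tools a practice permits, for whom, for what, and with what data.
from dataclasses import dataclass

@dataclass
class ApprovedTool:
    name: str                      # invented tool name, not a real product
    approved_users: list[str]      # roles permitted to use the tool
    permitted_purposes: list[str]  # tasks the tool may be used for
    permitted_data: list[str]      # information categories allowed as input

REGISTER = [
    ApprovedTool(
        name="ExampleResearchAssistant",
        approved_users=["solicitor", "paralegal"],
        permitted_purposes=["case law research", "drafting chronologies"],
        permitted_data=["public information only"],  # never client data
    ),
]

def is_use_permitted(tool: str, role: str, purpose: str, data: str) -> bool:
    """Check a proposed use against the register before the tool is used."""
    return any(
        t.name == tool
        and role in t.approved_users
        and purpose in t.permitted_purposes
        and data in t.permitted_data
        for t in REGISTER
    )

# Example: a paralegal proposing public-information case law research.
print(is_use_permitted("ExampleResearchAssistant", "paralegal",
                       "case law research", "public information only"))  # True
```

Keeping the register as explicit data makes the policy auditable: any proposed use that is not expressly listed is refused by default.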
Framework for Australian Legal Practice
Generative AI is no longer a peripheral tool. It is reshaping how we research, draft, negotiate, and advise. Tasks that once required billable hours can now be accomplished in minutes, offering lawyers the promise of faster and more cost-effective services. Yet each new capability raises equally real risks: the disclosure of privileged material, bias, unverified citations, and breaches of emerging court directives that now mandate transparency regarding the use of generative AI.
The Responsible AI Capability Framework presented below provides a structured path for harnessing these technologies effectively, efficiently and ethically. Grounded in the Legal Profession Uniform Law, the Australian Solicitors’ Conduct Rules and current practice notes from federal and state courts, the Framework translates broad regulatory principles into practical steps that any practitioner can adopt, supporting the systematic, ethical and compliant adoption of AI in Australian legal practice.
Successful integration of generative AI requires the confidence and competence to understand, evaluate, apply, and continually govern generative AI tools. The Framework breaks AI competence into five clear pillars: Knowledge, Ethics, Governance, Application and Innovation (Table 1). The pillars are coupled with a maturity matrix (Table 2) and a set of guardrails on AI use drawn from practice notes and guidelines (Table 3), enabling individuals and organisations to pinpoint their current position and chart a realistic upgrade path without being overwhelmed by hype.
The Pillars
The framework operates across five interconnected pillars that address the core questions every legal practice faces when integrating AI:
- Knowledge: Building the technical understanding necessary for competent AI use
- Ethics: Ensuring compliance with professional obligations and client protection
- Governance: Establishing the policies, procedures, and oversight mechanisms required by regulators
- Application: Identifying appropriate use cases while maintaining professional standards
- Innovation: Leading responsible advancement in legal service delivery
How to Use It
The Maturity Matrix helps you assess your current capability level and plan systematic advancement across each pillar. Most practices will find themselves at different levels across different pillars.
The Mandatory Guardrails table consolidates the most current regulatory requirements that Australian legal practices must follow immediately. These represent non-negotiable compliance obligations that have emerged from recent court directions and regulatory guidance.
Rather than attempting to achieve the highest maturity level across all pillars simultaneously, consider this framework as a roadmap for sustainable, responsible AI adoption that protects both your clients and your practice while enabling you to leverage AI’s benefits effectively.
Important: This framework should be read in conjunction with your professional indemnity insurance requirements and any specific guidance from your jurisdiction’s legal professional body. Regular review and updates will be necessary as both AI technology and regulatory guidance continue to evolve rapidly.
1. Benchmark your current state against the maturity matrix to assess your progress.
2. Prioritise the gaps that pose the most significant regulatory or commercial risk (e.g., lack of prompt logging, no policy on privileged data).
3. Implement guardrails (a minimal prompt-logging sketch follows this list).
4. Invest in capability-building: targeted CPD, secure AI platforms hosted in Australia, and independent audits.
5. Review progress annually, updating policies and tooling as the technology and the law develop.
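Step 2 flags missing prompt logging as a typical high-risk gap. As an illustration only, the following Python sketch shows one way a practice might keep a minimal append-only prompt log. Everything in it (the file name, the field names, the choice to store only a SHA-256 hash of each prompt) is a hypothetical design assumption, not a requirement drawn from any practice note or regulator statement.

```python
# Illustrative sketch only: a minimal append-only prompt log for generative
# AI use in a practice. File name and fields are hypothetical assumptions.
import csv
import hashlib
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("ai_prompt_log.csv")  # hypothetical location

def log_prompt(user: str, tool: str, matter_id: str, prompt: str,
               contains_client_data: bool) -> None:
    """Append one audit record. Only a SHA-256 hash of the prompt is stored,
    so the log itself never reproduces confidential or privileged content."""
    write_header = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        if write_header:
            writer.writerow(["timestamp_utc", "user", "tool", "matter_id",
                             "prompt_sha256", "contains_client_data"])
        writer.writerow([
            datetime.now(timezone.utc).isoformat(),
            user,
            tool,
            matter_id,
            hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
            contains_client_data,
        ])

# Example: record a research query before it is sent to an approved tool.
log_prompt(user="jsmith", tool="ExampleResearchTool", matter_id="2025-0042",
           prompt="Summarise the leave requirements in PN SC Gen 23.",
           contains_client_data=False)
```

Storing a hash rather than the prompt text means the log can confirm that a specific prompt was recorded without copying client material into yet another system.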
Table 1: The five pillars of the Responsible AI Capability Framework

Pillar | Core Questions | Key LPUL / Ethical Touch-points |
---|---|---|
Knowledge | What is AI? How does it work? Concepts, limits, hallucination, bias | Duty of competence (ASCR r 4.1.1) |
Ethics | Should we use it? Confidentiality, IP ownership, privacy, fairness, and bias mitigation | Confidentiality (r 9); honesty (r 4.1.2) |
Governance | How do we control risk? Policies, vendor due diligence, audit trails, and incident response | LPUL ss 34–36 (practice management); VLSB+C, LSNSW & LPBWA regulator statements[15] |
Application | Where does it help? Research, disclosure review, drafting, client portals, predictive analytics | Uniform Civil Procedure Rules; court AI practice notes (see below) |
Innovation | How do we lead? New legal products, alternative fee models, access-to-justice tools | Futures Committee agenda (Law Council) (lawcouncil.au) |
Table 2: Maturity matrix

Pillar | Level 1 Aware | Level 2 Capable | Level 3 Managed | Level 4 Strategic |
---|---|---|---|---|
Knowledge | CPD primers; read Law Society “Responsible AI” guide | Staff understand prompt design and its limits | Firm-wide AI professional development; certify skills | Internal AI academy; creation of shared resources |
Ethics | Spot hallucinations/model sycophancy; conduct manual checks | Standard checklist before use | Automated red-flags | Publish transparency & impact reports |
Governance | Ad-hoc policy or guidance | Policy and approved tool register | ISO 42001-aligned AI risk framework | Continuous monitoring dashboard; third-party audits |
Application | Pilot research | Integration of tools into existing processes | AI-assisted workflows across matters | AI-powered client-facing solutions |
Innovation | Observe the market | Run sandbox projects or experiments | Dedicated AI product team | Commercialise AI-enabled services |
Table 3: Mandatory guardrails

Source | Practical Obligation |
---|---|
NSW Supreme Court Practice Note SC Gen 23 (in force 3 Feb 2025) | No Gen AI may draft the content of affidavits, witness statements or character references; each such document must affirm that Gen AI was not used. (supremecourt.nsw.gov.au) |
Uniform-law regulators’ joint “Statement on the Use of AI in Australian Legal Practice” (Law Society of NSW, VLSB+C, LPBWA, 2024) | Core principles: transparency / accuracy, human oversight, client consent, data security, continuing competence. (lawsociety.com.au) |
Federal Court Notice to the Profession (28 March 2025) | Parties must disclose Gen AI use if required to do so by a Judge or Registrar, and verify citations. (fedcourt.gov.au) |
Law Society of NSW “Solicitor’s guide to responsible use of AI” (Oct 2024) | Prompts must exclude confidential data; outputs require human review; maintain privilege logs. |
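Several of these guardrails turn on verifying citations before documents are filed. As an illustration only, the sketch below pulls candidate Australian medium neutral citations out of a draft so that each one can be checked by hand against an authoritative source. The regular expression is a deliberate simplification (for instance, it will miss court identifiers containing digits, such as FedCFamC2G), and extraction is no substitute for the human verification the practice notes require.

```python
# Illustrative sketch only: extract citation-like strings (medium neutral
# citations such as "[2025] FCA 731") from a draft so that a practitioner
# can verify each one by hand. The pattern is a simplification and will not
# match every real citation format.
import re

MNC_PATTERN = re.compile(r"\[(?:19|20)\d{2}\]\s+[A-Z][A-Za-z]*\s+\d+")

def candidate_citations(draft: str) -> list[str]:
    """Return unique citation-like strings in order of first appearance."""
    seen: dict[str, None] = {}
    for match in MNC_PATTERN.finditer(draft):
        seen.setdefault(match.group(0), None)
    return list(seen)

draft = ("As held in Murray [2025] FCA 731 and applied in "
         "LJY [2025] QCAT 96, submissions must be accurate.")
for citation in candidate_citations(draft):
    print(citation)  # each of these must still be verified by a human
```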
Additional Resources
For more information, see the following resources:
- Victorian Legal Services Board & Commissioner – Statement on the use of artificial intelligence in Australian legal practice
- The Law Society of NSW – Solicitor’s guide to responsible use of AI [PDF]
- The Law Society of NSW – Statement on the Use of AI in Australian Legal Practice [PDF]
- Federal Court of Australia – Notice to the Profession
- Federal Court of Australia – AI and the Courts in 2025
- Supreme Court of NSW – Practice Note SC Gen 23 – Use of Generative Artificial Intelligence (amended 28 Jan 2025)
- Supreme Court of NSW – AI Practice Note Amendment Notice [PDF]
- Supreme Court of Victoria – Guidelines for litigants: responsible use of artificial intelligence in litigation
- Victorian County Court – Guidelines for litigants: responsible use of artificial intelligence in litigation [DOCX]
- Queensland Courts – The Use of Generative Artificial Intelligence (AI) Guidelines for Responsible Use by Non-Lawyers
- Queensland Law Society – Guidance Statement on Artificial Intelligence in Legal Practice
- District Court of NSW – District Court General Practice Note 2 – Generative AI Practice Note and Judicial Guidelines [PDF]
- Land and Environment Court of NSW – Use of Generative Artificial Intelligence
- Justice Needham, 'AI and the Courts in 2025' (Speech, Federal Court of Australia, 27 June 2025) [28] ('AI and the Courts in 2025'). ↵
- Ibid [29]. ↵
- See Varun Magesh et al, 'Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools' (2025) 22(2) Journal of Empirical Legal Studies 216 <https://doi.org/10.1111/jels.12413>. ↵
- 'AI and the Courts in 2025' (n 1). ↵
- Luck v Secretary, Services Australia [2025] FCAFC 26, [14]. ↵
- LJY v Occupational Therapy Board of Australia [2025] QCAT 96, [26]. ↵
- Murray on behalf of the Wamba Wemba Native Title Claim Group v State of Victoria [2025] FCA 731, [6]. ↵
- Ibid [9]. ↵
- Supreme Court of NSW, Practice Note SC GEN 23: Use of Generative Artificial Intelligence (Gen AI), 28 January 2025. ↵
- Ibid [15]. ↵
- Supreme Court of NSW, Guidelines For New South Wales Judges in Respect of Use of Generative AI. ↵
- Supreme Court of Victoria, Guidelines for litigants: responsible use of artificial intelligence in litigation, 3 July 2024. ↵
- 'AI and the Courts in 2025' (n 1) Appendix 1. ↵
- See Victorian Legal Services Board + Commissioner, ‘Statement on the use of artificial intelligence in Australian legal practice’, News (Web Page, 6 December 2024) <https://lsbc.vic.gov.au/news-updates/news/statement-use-artificial-intelligence-australian-legal-practice>. ↵
- Ibid. ↵