- Generative AI and Technological Competence: Quick Tips for Alberta Lawyers
Last updated July 2025
As technology continues to evolve, lawyers in Alberta must stay up to date to maintain their technological competence. The Code of Conduct for lawyers in Alberta emphasizes the importance of maintaining competence in all areas of practice, including the use of technology.
Understanding Technological Competence
Technological competence means having the knowledge and skills needed to use technology effectively in legal practice. It includes understanding the capabilities and limitations of various technologies, including generative AI, and ensuring that their use does not compromise the quality of legal services.
Maintaining Technological Competence
To comply with their duty of technological competence when using generative AI, lawyers should consider the following:
- Stay informed — continuously update your knowledge about the latest technological developments in the legal field. Attend seminars, workshops and training sessions to stay current with new tools and practices.
- Evaluate AI tools — carefully assess the generative AI tools you plan to use. Ensure they meet the necessary standards for security, accuracy and reliability. Choose tools that are reputable and have a proven track record. If you are unsure, speak to an IT professional.
- Implement security measures — establish protocols to protect solicitor-client privilege, client confidentiality and data security. Use encryption and other security measures to safeguard sensitive information. Consult an IT professional if you need support in implementing and monitoring security.
- Monitor AI output — regularly review and verify the content generated by AI systems. Ensure that it aligns with legal standards and is free from errors and biases.
- Maintain transparency — inform clients about the use of generative AI in their cases. Provide clear explanations of how AI is being used and the benefits it offers.
- Seek guidance — consult with colleagues, mentors and experts in the field of AI and law. Seek advice on best practices and ethical considerations.
Ethical Considerations
In addition to technological competence, lawyers must be mindful of the ethical implications of using generative AI. The Code of Conduct outlines several principles that are particularly relevant:
- Confidentiality — ensure that any information shared with generative AI systems is kept confidential and secure. This includes any privileged material, client data and sensitive legal information. Typically, this means that you will have to pay for an AI product, as “free” AI products are unlikely to protect the confidentiality of any materials you provide them with. If you’re not paying for the product, you (and your information) may be the price.
- Competence — maintain a high level of competence when using generative AI. Understand its capabilities and limitations and ensure that its use does not compromise the quality of legal services.
- Integrity — act with integrity when using generative AI. Be transparent about the use of AI in legal processes and ensure that AI-generated content is accurate and reliable.
Lawyers Being Held Accountable
There is growing judicial emphasis on lawyers’ responsibility to ensure that legal submissions are authentic and free from AI “hallucinations”: content generated by AI that is plausible but in fact incorrect or nonsensical.
In the early days of generative AI, courts seemed to accept that submissions containing hallucinations were inevitable, while emphasizing the seriousness of filing such inaccuracies.
As the risks of using AI incompetently have become better publicized, that approach is giving way to increasing references to contempt and to cost awards against the responsible counsel.
For example, in Zhang v. Chen, 2024 BCSC 285, the court emphasized that citing fake cases in court filings is an abuse of process and is tantamount to making a false statement to the court. It ordered counsel to pay costs personally.
In Ko v. Li, 2025 ONSC 2766 and 2025 ONSC 2965, a lawyer in Ontario was ordered to show cause why they should not be cited for contempt after including fake cases in their submissions.
A self-represented litigant in British Columbia was ordered to pay costs for wasting the opposing party’s time with AI-generated hallucinations (Simpson v. Hung Long Enterprises Inc., 2025 BCCRT 525).
In Hussein v. Canada (Immigration, Refugees and Citizenship), 2025 FC 1138, a lawyer in Federal Court included hallucinated citations in his submissions and tried to mislead the court about what he had done. In addition to failing to verify his work, the Court found that ‘the use of generative artificial intelligence [was] not only undeclared but, frankly, concealed from the Court.’ The lawyer was ordered to pay costs personally.
Conclusion
Generative AI has the potential to revolutionize the legal profession, offering efficiency and innovation. However, it is essential for lawyers to navigate this new terrain with technological competence and ethical diligence. By adhering to the Code of Conduct and following the guidance above, lawyers in Alberta can harness the power of generative AI while upholding the highest standards of their profession.
For more information on generative AI, you can refer to the following Law Society resources:
- The Generative AI Playbook;
- Why a Generative AI Use Policy?; and
- Gen AI Rules of Engagement for Canadian Lawyers.
Copilot was used to assist in drafting the content of this document, July 2025.