Since ChatGPT exploded onto the market late in 2022, interest in artificial intelligence (AI) has been at an all-time high. Generative AI (Gen AI) has become the thing that everyone in the legal community is talking about. Words like ‘seismic’ and ‘revolutionary’ are routinely used to describe the potential impact this technology will have on the practice of law.
The Code of Conduct calls on all lawyers to “develop an understanding of, and ability to use, technology relevant to the nature and area of the lawyer’s practice and responsibilities” (Rule 3.1-2, Commentary). When it comes to Gen AI, this means understanding both its potential benefits and its risks.
This resource should serve as a starting point for Alberta lawyers seeking to harness the benefits of disruptive technologies like Gen AI while safeguarding their clients’ interests and maintaining their professional competence.
Since firms may develop internal tools that address many of the concerns raised below, the focus of this document is publicly available AI tools.
AI v. LLMs v. Gen AI v. ChatGPT v. OpenAI
There is a lot of confusing terminology in this rapidly developing field.
AI is a broad term referring to any machine-based system that can make predictions, recommendations or decisions influencing real or virtual environments for a given set of human-defined objectives.
It refers to all manner of techniques used to search and analyze large volumes of data; robotics dealing with the conception, design, manufacture and operation of programmable machines; and algorithms and automated decision-making systems able to predict human and machine behaviour.
Lawyers have been using AI for decades: spam filters, spellchecks, search terms in electronic research and automated document generation are but a few examples.
Large language models (LLMs) are types of computer programs that focus on language processing to produce text that resembles what humans would produce. They are trained on vast amounts of text data, such as books, articles and material available on the internet, to understand human language patterns, grammar and semantics.
LLMs generate answers based on the probability of a word or combination of words that is most likely to come next. By predicting what comes next, the platform can generate human-like content that is statistically likely in response to a prompt. It can be told to mimic specific writing styles or tones, present the information in various formats and shorten or lengthen the response as directed.
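The next-word mechanism described above can be illustrated with a deliberately tiny sketch: a toy ‘bigram’ counter in Python that counts which word follows which in its training text, then predicts the statistically most likely continuation. This is an illustration of the concept only, not how any production LLM is actually built; real models consider far more context and billions of learned parameters.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for the vast text an LLM is trained on.
corpus = "the court held that the court may order costs".split()

# Count which word follows which -- a 'bigram' model, a vastly
# simplified stand-in for an LLM's learned probabilities.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the statistically most likely next word seen in training."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))    # "court" follows "the" most often in this corpus
print(predict_next("court"))  # "held" and "may" are tied; ties resolve by insertion order
```

As the sketch shows, the model has no concept of truth: it simply emits whatever continuation was most common in its training data, which is why verification of LLM output remains essential.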
Gen AI refers to AI systems, including LLMs, that do more than simply reproduce material they find. They generate new material based on ‘prompts’ that the user inputs, which typically consist of short instructional texts. Their answers are fluent and appear to be human-generated.
Lawyers can use Gen AI to create human-like text in tasks such as client onboarding, scheduling appointments, billing, document review, legal research and drafting. This powerful tool can be used to automate repetitive and time-consuming tasks, improve the quality of written material, and free up valuable time for higher-level analysis and strategic decision-making.
The risks and benefits can vary significantly between public tools like ChatGPT and private AI solutions. For example, Lexum is using AI across their legal research services in ‘private’ mode. A number of larger law firms have already developed in-house AI tools for research use by their lawyers and staff. Whatever the solution, all of these tools require diligence and staff training prior to use.
ChatGPT is the name of a particular Gen AI product. There are free and paid versions (GPT-3.5 and GPT-4). ChatGPT is not alone in the field, with a number of well- and lesser-known competitors including Microsoft’s Copilot, Google’s Bard and PaLM2. Companies like LawDroid, Rally, Harvey.AI and LexisNexis are also building legal profession-specific models.
This is just a small sampling of the Gen AI products currently available. More are being released daily.
OpenAI is the name of the company that created ChatGPT.
The Courts and Gen AI
In 2023, the Alberta Courts and the Federal Court of Canada issued notices about the use of Gen AI in court proceedings. Although the Code of Conduct applies only to lawyers, these notices apply to all litigants, including those without legal representation.
In their Notice to the Public and Legal Profession: Ensuring the Integrity of Court Submissions When Using Large Language Models, the Alberta Courts acknowledged that emerging technologies like Gen AI bring both opportunities and challenges, and that the legal community must adapt accordingly.
Their Notice urged litigants to exercise caution when citing legal authorities or analysis derived from LLMs and emphasized that “it is essential that parties rely exclusively on authoritative sources such as official court websites, commonly referenced commercial publishers, or well-established public services such as CanLII” for any references to case law, statutes or commentary in representations to the courts.
The Courts called for “humans in the loop” and stipulated that all AI-generated submissions must be verified with “meaningful human control” that cross-references reliable legal databases to ensure that citations and their content hold up to scrutiny.
This obligation accords with the longstanding practice of lawyers to verify their sources.
The Federal Court “recognize[d] that emerging technologies often bring both opportunities and challenges” but went further than the Alberta courts and required litigants to inform the court and other parties “if they have used artificial intelligence to create or generate new content in preparing a document filed with the Court.” This must be done in writing in the first paragraph of any such document.
As with the courts of Alberta, the Federal Court’s Notice to the Parties and the Profession: The Use of Artificial Intelligence in Court Proceedings “urges caution when using legal references or analysis created or generated by AI, in documents submitted to the Court. When referring to jurisprudence, statutes, policies, or commentaries in documents … it is crucial to use only well-recognized and reliable sources. These include official court websites, commonly referenced commercial publishers, or trusted public services such as CanLII.”
Humans must once again be “in the loop.” The Federal Court notice states that it is “essential to check documents and material generated by AI. The Court urges verification of any AI-created content in these documents.”
Duty of Technological Competency
As the Ontario Superior Court of Justice noted in Worsoff v. MTCC 1168, 2021 ONSC 6493,
Efficiency, affordability, and enhanced access to justice trump counsels’ comfort and presumptions every time. With the current pace of change, everyone has to keep learning technology. Counsel and the court alike have a duty of technological competency in my respectful view. Older judges and counsel may be behind younger counsel and the rest of society who use computers with greater regularity and sophistication than we do … Once upon a time, I had to learn how to use a Gestetner (Google it) and then a fax machine. I do not accept that in person is just “better”. It can be in some cases. But if counsel just prefers it because he or she is more comfortable with it, ought we to reject the printer because I liked my Gestetner (and Word Perfect for that matter)?
Competent practice requires an understanding of the risks as well as the benefits associated with technology. As an example, in People v. Zachariah C. Crabill 23PDJ067, a Colorado lawyer used ChatGPT to draft a motion to set aside a judgment. The lawyer cited case law that he found through ChatGPT. He did not read the cases or attempt to verify that the citations were accurate. After filing the motion but before the matter was heard, he discovered that the cases were either incorrect or fictitious yet failed to alert the Court. Nor did he withdraw the motion. When the judge expressed concerns about the accuracy of the cases, the lawyer falsely attributed the mistakes to a legal intern. Six days after the hearing, he filed an affidavit explaining what had really happened.
The Court ruled that he had failed to competently represent his client; had not acted with reasonable diligence and promptness when representing the client; had knowingly made a false statement of material fact or law to a tribunal; and had engaged in conduct involving dishonesty, fraud, deceit or misrepresentation. The lawyer was suspended for a year and a day, with ninety days to be served and the remainder to be stayed upon his successful completion of a two-year period of probation, with conditions.
- Before proceeding, determine how the Gen AI tool was trained.
- Keep current on the status of Canada’s proposed Artificial Intelligence and Data Act (AIDA). AIDA would set the foundation for the responsible design, development and deployment of AI systems that impact Canadians. It would establish national requirements for the design, development, use, and provision of AI systems and prohibit various AI-related conduct that could result in harm to individuals or create biased outputs.
Key Risks of Using Gen AI
While the opportunities are numerous, Gen AI comes with a variety of risks to the confidentiality of your input, reliability of the output, security, privacy, copyright infringement and more. Diligence and proper training for staff are needed in all cases prior to use.
The risk of inadvertent disclosure of confidential client or proprietary information cannot be overstated. Gen AI platforms like ChatGPT were not built with confidentiality in mind, so information contained in prompts may easily find its way into the public domain if care is not taken at all times. Our ethical rule of confidentiality, Rule 3.3 of the Code, is broad and requires continuing due diligence to hold in strict confidence all information concerning the business and affairs of a client throughout all aspects of a client relationship and beyond.
For example, in one reported incident involving Samsung, engineers used ChatGPT to help fix problems with their source code, entering the source code itself into their prompts. They also asked ChatGPT to convert internal meeting notes into a presentation. Once entered, there was no way to retrieve or delete the compromised data. Samsung’s trade secrets were then available to OpenAI, the company behind ChatGPT, for purposes completely unrelated to Samsung.
Carefully consider any content you upload to a Gen AI engine, whether public or private. Uploading a memo for refinement or a document for proofreading makes its contents available to the engine, where they could be used to train the model or for other purposes. Uploading that content risks privacy, and you may not be able to retrieve your information once it has been uploaded.
This issue may be addressed through the use of internal or proprietary AI platforms, a number of which are currently in development, but best practice is to exercise extreme caution throughout.
- Never include confidential or potentially identifying information in prompts.
- Use only non-identifiable information in prompts.
- Use caution at all times.
Gen AI tools can also be used for criminal purposes, particularly impersonation and fraud. For example, using an AI speech generator, a sound bite can be taken from a voicemail greeting to impersonate that person. As with any powerful tool, it can become a weapon, so caution is needed.
The free version of ChatGPT-3.5 has a knowledge cutoff date of September 2021. This means that events after this time will not be reflected in its responses.
ChatGPT-4’s cutoff has been updated to November 2023, so it is more recent, but it will still not be aware of current events in all cases. This too may affect the reliability of its responses.
As a ‘generative’ tool, Gen AI is designed to generate words or images based on the information it encounters in its training. This is not a flaw. By design, it fabricates information when it doesn’t have sufficient data to answer a prompt. These events are known as ‘hallucinations.’
In the legal context, this is highly problematic. Gen AI tools are not tied to a foundation of truth or reality and are designed to provide creative responses to queries.
Even though these products can assist in retrieving relevant case law, statutes and legal articles from vast databases, there are notorious examples of how they have filled gaps in data by making up names, dates, historical events and even legal cases. If pressed, they may even write the cases themselves.
Recognizing that this happens, any lawyer using Gen AI for substantive legal work must proceed with caution and ensure that they independently verify all information generated by the platform. Lawyers should never rely on Gen AI to judge its own accuracy.
Gen AI models are trained on data from the internet. If the data is unreliable, so too will be the resulting output.
As a result, Gen AI tools can produce content that is discriminatory, biased or that reinforces stereotypes.
For example, a Bloomberg study evaluating a Gen AI image generator found that images created of high-paying jobs were dominated by subjects with lighter skin tones, while subjects with darker skin tones were more commonly generated by prompts like “fast-food worker” and “social worker.”
Similar results were found for prompts based on gender. Most occupations in the dataset were dominated by men, except for lower-paying jobs like housekeeper and cashier. Men with lighter skin tones represented the majority of subjects in every high-paying job, including “politician,” “lawyer,” “judge” and “CEO.”
Part of the problem is the opacity of Gen AI models. Because Gen AI operates as a black box, it is difficult to assess the validity of the inputs it relies on or trace how the system produces its outputs.
If lawyers cannot assess Gen AI systems for bias, they may inadvertently perpetuate stereotypes, violate human rights legislation and damage public trust in the justice system.
- Review generated content to ensure that it aligns with your ethical and legal obligations. This includes assessing for biases or stereotypical associations.
- Use prompts that minimize biases.
While it may seem like Gen AI tools create new material from independent thought processes, that is not how they function. The platforms are trained on data scraped from the internet — billions of data points that can include copyright-protected materials like articles, books, code, paintings or music — which the platforms use to identify patterns and respond to prompts.
If the AI-generated outputs contain material that is identical or substantially similar to copyright-protected work, there is a risk that they infringe the original copyrights. At this juncture, it is still unclear who owns content created by Gen AI platforms. Lawyers need to understand the risks this represents and how to protect themselves. They need to ask their providers whether their models were trained with any restricted content.
Going forward, new Gen AI platforms may be developed that train themselves on internal or proprietary datasets rather than the internet generally. This would avoid the risk of copyright infringement, let the platforms produce content in the same style as their users’ previous work, and create a more transparent audit trail.
- Disclose the use of Gen AI any time it is relied upon.
Opportunities for Using Gen AI
What follows are some examples of how lawyers can use Gen AI in their day-to-day practices. As this technology is rapidly evolving and new opportunities appear, lawyers taking advantage of its potential should always be mindful of new risks and their professional and ethical obligations.
Client Satisfaction: Some consumers of legal services will be uncomfortable with the idea that their legal work is performed by a machine. Others will appreciate the speed and convenience that AI platforms offer. Firms that have determined they can safely use Gen AI can, with properly configured tools, respond to client comments, questions and concerns immediately, at all hours.
Automated Client Intake and other Administrative Tasks: AI chatbots can automate client onboarding. Routine documents can be turned into smart forms that clients complete with AI assistance. Chatbots can be used to help clients navigate the law firm’s website and fill out questionnaires, which are then automatically entered into the firm’s client management software. Meetings can be scheduled quickly and easily, and calendars can be automatically updated.
Live Chat: For lawyers wanting to add chat features to their websites, Gen AI can help them accomplish that goal. While basic chatbots can answer yes/no questions and provide FAQs, Gen AI can be used to respond to more complex questions and develop much more friendly and seamless client interactions. As a precaution, the chatbot should disclose that it is not providing legal advice generated by a lawyer.
Brainstorming: Some lawyers are using Gen AI programs to prepare for trial. This can include everything from developing questions for potential jurors during jury selection to explaining expert reports to developing arguments and determining the order in which arguments should be made.
Preparing Cross-Examination: By listing the facts and asking for suggestions on how to approach them, Gen AI can help structure a questioning or cross-examination.
Argument Validation and Counter Argument Consideration: Lawyers can ask Gen AI to assess the strengths and weaknesses of their arguments. It can be asked to anticipate opposing counsel’s counter arguments.
Create Agendas, Memos and Contracts: Gen AI can be used to automate the process of generating documents. By giving the tool specific input and requirements, lawyers can use Gen AI to create meeting agendas and agreements. It can be told to prepare a memo instructing an associate to research a point or draft a document. It can take the resulting research memo and turn it into a brief. Note, however, that free versions of some tools struggle to produce quality material. More advanced, paid models may be more reliable, and some firms are developing tools that may be even more reliable because they are trained on legal data rather than what may be generally available on the internet.
Drafting Correspondence, Proposals and Outlines: Lawyers can use Gen AI to improve their correspondence, project proposals, cover letters, resumes, profiles and other written communications. By analyzing the context and purpose of the communication, these tools can serve as ‘creative collaboration’ aids that generate quality professional text. They can rewrite your letters to fit your client’s style, suggest alternate phrasing, provide examples of more persuasive arguments, and ensure that documents conform to drafting conventions and requirements. Gen AI is particularly useful for generating ideas and assisting with a first draft or outline.
Identifying Risks and Inconsistencies in Contracts: Gen AI can analyze documents to identify potential risks, ambiguities and inconsistencies. By automating repetitive tasks, such as identifying specific clauses, checking cross-references or flagging anomalies, the tool can accelerate the review process, saving time and resources. Lawyers can streamline their document review and focus instead on critical issues, compliance and risk mitigation.
Creating Presentations: Because Gen AI offers excellent summarizing skills and can incorporate written words and images, it can be used to prepare a PowerPoint presentation based on memos or articles the lawyer has already written.
Summarizing Complex Material: Gen AI can be a valuable assistant that summarizes lengthy legal documents, precedents, legal briefs, expert reports or medical records. It can break down legal jargon and complicated issues. It can sort through large volumes of data and identify key documents for a case. It can distill complex information into concise and understandable summaries.
Potential Time and Cost Savings: One of the major benefits of Gen AI is rapid idea brainstorming and drafting.
Lawyers who learn to use Gen AI safely and responsibly will likely see a competitive advantage. Time currently spent filling out forms, drafting emails and advertising can be reduced. Increasing the efficiency of firm operations can lower costs to the client and make access to professional legal advice more affordable, thus increasing access to justice and legal services generally.
Reduce Procrastination, Eliminate Mental Blocks: Gen AI can help reduce procrastination by providing the initial spark needed to get work done. It can overcome mental blocks interfering with a writer’s creativity. In the context of performing due diligence, case preparation and legal opinion drafting, it can free the lawyer to realize significant savings and efficiencies and work on higher-value tasks.
Proofreading: Gen AI can improve the wording and structure of any document and fix spelling and grammatical errors. It can recommend language improvements and ways to improve readability to make the result clearer and more convincing.
Website Content, Blog Posts, Marketing Emails, Tweets, LinkedIn Posts and Newsletter Articles: Gen AI can summarize and edit memos, reports and other material. It can translate resources to and from other languages. Since initial drafts can be created in a matter of minutes, the firm or administrative staff stand to realize significant time and cost savings. Gen AI’s uses are limited only by the lawyer’s imagination and their ethical obligations under the Code of Conduct, the Rules of the Law Society of Alberta and the Legal Profession Act.
As more advanced models are developed, and legal industry-specific tools are built on general-purpose programs like GPT-4, the capacity and accuracy of Gen AI and LLMs will continue to evolve.
Currently, there are several legal Gen AI programs available, but they may not be ready for full-scale adoption. In the near future, however, these tools will exit beta testing and become mature products available to law firms of all sizes.
Detecting When Gen AI is Being Used
It is extremely difficult to tell when a document has been created by Gen AI rather than a human.
Chatbots that use Gen AI can produce responses that are remarkably human-like. Image generators can produce fascinating results, both in terms of style and quality. Text generators can mimic a variety of styles and forms.
As a result, it is very easy for clients to mistakenly believe they are interacting with a human rather than a machine. They may also think they are interacting with an AI tool when there is a real person at the other end of the line. Transparency is essential to ensure that clients are not misled.
The risk of confusion between humans and Gen AI platforms is particularly problematic for non-native English speakers, whose writing more often exhibits language patterns that Gen AI detectors attribute to AI. The implications include:
- This confusion exposes them to false accusations of cheating, impacting their careers and psychological well-being.
- The mistaken bias could lead to loss of valuable job opportunities and applications being rejected.
- Their work product may be downgraded, resulting in non-native English speakers being marginalized.
- It exposes lawyers using such tools to claims of discrimination and human rights complaints.
- Clearly communicate to clients when and how Gen AI is used.
- Inform recipients when email is generated by Gen AI.
- Use watermarks to identify content generated by Gen AI tools.
- Be Careful: As with any powerful tool, Gen AI can become dangerous if not used carefully. Attention and vigilance are needed at all times.
- Inform Clients: Be clear with your clients about your safe use of Gen AI in your retainer letter. Let them know that you use it, what you use it for, the reasons why, as well as its benefits and limitations.
- Know the Rules: Gen AI technology is constantly evolving, and the rules are constantly changing. Keep current and adapt your practice to align with the evolving regulatory and technological landscape.
- Protect Client Confidentiality: Take care to protect the confidentiality of all client information and communications. Gen AI tools do more than just respond to your prompts. They also use your data to train and improve themselves. There is no guarantee that they will keep your information confidential. While you may be able to opt out of your data being used for training purposes, given Gen AI’s rapidly developing capabilities, you should not submit confidential information as part of a prompt.
- Protect Client Privacy: If Gen AI systems use information without consent, they may violate federal and provincial privacy laws. In 2023, the Privacy Commissioner of Canada announced that it had launched an investigation into OpenAI because of a complaint alleging the collection, use and disclosure of personal information without consent. Understand how Gen AI systems use the data you input. Don’t enter sensitive or personal information you would not want published on the internet. Don’t submit prompts that could undermine public trust in the legal system if they were disclosed.
- Gen AI can Supplement, Not Replace, Legal Judgment: Gen AI is a tool that should complement your legal skills, not replace them. It is no substitute for the exercise of your professional judgment, legal analysis and expertise.
- Use But Verify: For all Gen AI’s remarkable capabilities, research conducted by Gen AI remains problematic. It may contain hallucinations putting your reputation and your clients’ interest in jeopardy. It may not be up to date. It may rely on biased or incomplete source material. Use the technology when appropriate but always verify the outputs through analysis and fact-checking.
- Supervise: Junior lawyers and inexperienced staff may be using Gen AI, whether you realize it or not. Treat any Gen AI output as a first draft only. Consider anything generated by Gen AI as being produced by a law student who requires supervision. Implement and publicize policies on the appropriate use and dangers of misuse of Gen AI.
- Understand the Technology: Take the time to familiarize yourself with Gen AI’s capabilities, functionalities, risks and legal implications. This will let you make informed decisions and help you decide when and how to use it most effectively in your practice.
- Don’t Adopt a False Sense of Comfort: Platforms like ChatGPT are designed to sound confident in order to appear more human-like. Users may be induced into a false sense of comfort and may uncritically accept Gen AI outputs simply because they are generated by automated systems.
- Time and Billing Budgets: Make sure that time and budget pressures do not incentivize junior lawyers to rely excessively on Gen AI.
- Experiment with Gen AI: Innovate. Gen AI represents a new and powerful technology that you can use in a multitude of ways to improve the client experience, the delivery of legal services and the practice of law. Be on the lookout for new ways to improve all of these. But know your limits.
Written by Len Polsky, Manager, Legal Technology and Mentorship, with the assistance of Michael Ward, J.D. candidate at the University of Calgary and Summer Intern at the Law Society of Alberta