2 Capabilities, Benefits, and Drawbacks
What can AI tools do well, and what are their limitations?
This chapter explores:
- capabilities and uses of generative AI tools;
- benefits of AI tools in education;
- drawbacks and limitations of AI tools.
Capabilities of AI Tools
As introduced in What is Generative AI?, AI tools can accept input from users in the form of text, audio, video, and images in order to:
- generate text, including essays, reports, stories, and correspondence that reasonably replicates human writing;
- review documents in order to find information, summarize, analyze, or give feedback;
- analyze large amounts of data to identify trends and patterns;
- translate between languages or regional dialects;
- produce, debug, and optimize computer code in a variety of languages;
- generate graphics and illustrations;
- create audio content including voice, music, and sound effects;
- create original video content, as a standalone video or to extend an existing video.
Not all of these outputs are possible with every type of input (text, audio, video, or images); however, the capabilities of AI tools continue to broaden.
Video: Dreams, a poem by IN-Q, accompanied by a video created by Wayne Price using OpenAI’s Sora.
Benefits of AI Use in Education
This section provides a high-level overview of some benefits of AI in education. Each subsection links to a relevant chapter for further reading.
Benefits for Students
AI tools can act as a 24/7 personalized tutor, supplying students with practice questions or giving feedback on their work. When course and institutional policies allow, students can also use AI tools to assist with aspects of assignments that are time-consuming but have limited learning potential, freeing them to focus on higher-value learning opportunities. AI tools can also power interactive and immersive simulations that allow students to practice real-world tasks and receive feedback. A number of studies on student engagement with AI tools have reported increases in student motivation, engagement, happiness, and achievement when AI tools are integrated into coursework.[1] Additionally, learning how to use AI tools effectively and responsibly prepares students to use AI in the workforce. For more information, see the Using AI to Enhance Student Learning chapter.
Benefits for Instructional Staff
For instructional staff at educational institutions, AI tools have the potential to help with preparatory and administrative tasks, including lesson planning, idea generation, editing, and assessment development. This kind of AI-based ‘cognitive offloading’ can result in a more manageable workload, better work-life balance, and increased job satisfaction, as well as more working hours available for student contact and support, for updating and enhancing curriculum, and for continuous learning. For more information, see the Using AI to Enhance Student Learning chapter, as well as each chapter in the Practical Uses for Preparatory and Administrative Work section.
Benefits for Accessibility
AI tools can be used to support accessibility in teaching and learning. Examples include AI-powered text-to-speech and speech-to-text tools, auto-captioning and auto-transcription, summarizing or adapting a resource, writing assistance, and time-management tools. AI tools can also help instructional staff increase the accessibility of course materials, including by generating a first draft of alt text for images, changing the language level or format of a resource, and checking a document for accessibility.
It should also be noted that although AI tools bring a host of potential benefits for accessibility, according to a series of tests on nine AI tools in 2023 at Langara College, some AI tool interfaces are lacking in accessibility—for example, they may be difficult to navigate with a screen reader, voice control software, or using keyboard-only navigation.[2]
For more information about AI tools and accessibility, see the Generative AI Tools for Education chapter.
Drawbacks and Limitations of AI Tools
Errors and Inaccuracies
Large language models rely on predicting which tokens (words or strings of characters) will be used to satisfy a particular prompt, and in which order. These predictions often result in correct and factual information, but they also commonly produce inaccurate content, often called hallucinations. Hallucinations may result from falsehoods in the training data or from a faulty prediction by the LLM.[3] AI tools can also be suggestible, and may produce inaccurate or biased content when a prompt implies that is what the user wants; for an example, see ChatGPT and Generative Artificial Intelligence (AI): Potential for bias based on prompt from the University of Waterloo Library. Since AI tools are now being used to write volumes of content for websites, videos, and social media accounts, errors and inaccuracies in AI outputs spread misinformation and erode trust in the accuracy of available information.
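To make the token-prediction idea concrete, here is a minimal Python sketch of how a language model chooses each next token by sampling from a probability distribution over continuations. The vocabulary, the probabilities, and the generate function are all invented for this illustration; no real LLM works from a hand-written table like this, but the sampling step is analogous.

```python
import random

# Toy next-token table: for each context token, a probability for each
# possible continuation. These words and numbers are invented purely for
# illustration; real models learn distributions over tens of thousands
# of tokens from vast amounts of text.
next_token_probs = {
    "the": {"capital": 0.6, "moon": 0.4},
    "capital": {"of": 1.0},
    "of": {"australia": 1.0},
    "australia": {"is": 1.0},
    # A frequently repeated error in training text can outweigh the truth:
    "is": {"sydney": 0.7, "canberra": 0.3},
}

def generate(prompt: str, max_tokens: int = 8) -> str:
    """Repeatedly sample a next token from the model's distribution."""
    tokens = [prompt]
    while tokens[-1] in next_token_probs and len(tokens) < max_tokens:
        dist = next_token_probs[tokens[-1]]
        choices, weights = zip(*dist.items())
        tokens.append(random.choices(choices, weights=weights)[0])
    return " ".join(tokens)

print(generate("the"))  # often: "the capital of australia is sydney"
```

The likeliest completion here, “the capital of australia is sydney,” is fluent but false (the capital is Canberra). This is the mechanism behind many hallucinations: the model optimizes for statistically plausible continuations, not for verified facts.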
Bias
Numerous tests, studies, and everyday interactions with AI tools have uncovered patterns of bias in AI outputs. These include gender bias related to occupations (a hypothetical doctor is more likely to be male and a nurse female), racial and religious bias related to criminality, and even preference for particular national political parties. AI tool biases are caused by a variety of factors, including the model’s training data, its settings, and decisions made by its creators and trainers.[4] A 2023 study of 5,100 images produced by the AI image generator Stable Diffusion found that generated images of individuals in higher-paying jobs had lighter skin tones and were more likely to be male, while images of individuals in lower-paying jobs had darker skin tones and were more likely to be female.[5]
AI-tool training data relies heavily on content produced by communities with a high level of technological literacy and access. As a result, AI tool outputs reflect the common, dominant viewpoints of these groups and exclude other voices. Users relying on AI tools as a source of information may miss out on authentic and diverse viewpoints, which further sidelines already underrepresented groups, including communities in the Global South and marginalized populations in the Global North.[6]
Privacy Issues
Tools like ChatGPT may use input from users to train their models unless a user has created an account and switched off the data-collection setting. When a user enters sensitive data into an AI tool, there is a risk that others could intentionally or inadvertently gain access to that information through future interactions with the model[7] or through cybersecurity breaches during the transmission, storage, or processing of data.[8] AI-tool creators have been secretive about their training methods and data sets, leading to a lack of trust even when safeguards like opting out of data collection exist.
Copyright Issues
AI companies are deliberately opaque about what information their models have been trained on. However, it is clear from the models’ familiarity with copyrighted works that they have been fed large volumes of copyright-protected material without the consent of copyright holders. Lawsuits filed on behalf of artists and corporations are working their way through the courts, alleging that copyrighted works have been used without permission and that AI outputs amount to unauthorized derivatives.[9]
Indigenous groups globally have raised concerns about data sovereignty and generative AI. The Indigenous Protocol and Artificial Intelligence Working Group’s 2020 position paper argues that “Indigenous communities must control how their data is solicited, collected, analyzed and operationalized” and that AI tools risk perpetuating the cultural homogenization of Indigenous communities.[10] These concerns are well-founded: the training methods used for AI tools rely on collecting vast datasets without regard for Indigenous data sovereignty, and the generic outputs of AI tools can exacerbate cultural erasure and homogenization.
Labour Concerns
AI companies have been accused of exploiting low-wage labour pools in Kenya, India, and elsewhere to carry out the psychologically difficult task of filtering objectionable content from AI training data. As part of a project to train AI-powered filtering tools, workers were tasked with evaluating text passages to flag objectionable content. These text passages included descriptions of suicide, sexual abuse, murder, and other disturbing topics. Workers have reported lasting mental health struggles from exposure to this information.[11]
Environmental Impact
AI tools perform an enormous number of calculations to respond to a single prompt. Running and cooling the servers that power AI tools uses large amounts of water: up to one bottle of water per 100-word email, according to a study by the Washington Post and the University of California, Riverside.[12] AI tools also have a massive energy and carbon footprint; ChatGPT alone is estimated to use 500,000 kilowatt-hours of electricity per day, roughly 17,000 times the daily consumption of an average U.S. household.[13] The environmental impacts of AI tools have been greatly amplified by their recent integration into everyday platforms like Google, Facebook, and Instagram, where users are served AI-produced content without asking for it. Those interested in reducing their AI-related carbon footprint can use the Web tab when searching in Google and mute AI features on Facebook, Instagram, and Snapchat.
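As a rough check on the scale of that energy figure, the following sketch divides the estimated daily consumption by a commonly cited average for a U.S. household, about 29 kilowatt-hours per day; both numbers are published estimates used here for illustration, not measurements of our own.

```python
# Rough scale check on the energy estimate above. Both inputs are
# published estimates, not independent measurements.
chatgpt_kwh_per_day = 500_000  # estimated daily electricity use of ChatGPT
household_kwh_per_day = 29     # commonly cited average daily use of a U.S. home

equivalent_homes = chatgpt_kwh_per_day / household_kwh_per_day
print(f"Roughly {equivalent_homes:,.0f} average U.S. homes")
# -> Roughly 17,241 average U.S. homes
```

Note the unit: kilowatt-hours measure energy used over time, while kilowatts measure instantaneous power, so daily consumption figures are always expressed in kilowatt-hours.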
Drawbacks for Students and Educators
Several more drawbacks exist that are specific to the educational context. One issue is the disparity in educational opportunities arising from use of AI tools—students who can afford premium subscriptions benefit from advanced features and no wait times, while those using free versions face reduced capabilities and potential wait times. There is also a risk that AI tools may be intentionally or unintentionally used by students to violate academic integrity policies, which is leading educators to rethink teaching and assessment approaches. Some educators are concerned that excessive reliance on AI tools will lead to a loss of critical skills, like document organization, problem solving, critical thinking, and creativity. For both educators and students, failing to recognize the limitations of AI tools could have consequences if AI outputs are used without fact checking, adapting, and other forms of human refinement. These consequences could include damaged professional reputations due to inaccurate or unoriginal work and potential legal or ethical violations from the misuse of AI-generated content.
Recommendations for Educators
This chapter outlines numerous issues with AI tools, including inaccuracies, bias, and concerns related to privacy, copyright, labour, and the environment. All of these issues warrant caution when using AI in educational environments. Users of AI tools should be informed about AI’s drawbacks and should weigh them when deciding if and how to use AI tools in their work. When introducing students to AI, educators should provide a rundown of its drawbacks and ethical issues, along with guidance on responsible and ethical use of AI in academia and the workforce. Librarians and library reference technicians are also an excellent resource for understanding and navigating AI tools, and may be available to support students in using these technologies.
Key Takeaways
- AI tools are capable of generating many kinds of outputs, tailored to the user’s needs.
- AI tools have a number of potential benefits for students and instructors, as well as potential drawbacks.
- AI outputs may contain errors and bias, and need to be thoroughly vetted and adapted for real-world use.
- There are a number of downsides to using generative AI tools, including copyright, privacy, labour, and environmental issues.
- Educators who use AI tools need to be aware of their risks and drawbacks, and should ensure students are informed as well.
Exercises
- Consider the capabilities of AI tools. How would you consider using AI to help with your work? What are some tasks that you would never consider offloading to an AI tool?
- Use an AI tool to generate content related to a sensitive or controversial subject. Review the content—can you spot bias or disinformation?
- Reflect on the drawbacks of AI tools, including bias, disinformation, and issues related to privacy, copyright, labour, and the environment. Draft a preliminary plan for addressing these limitations with your classes. What do you want students to know about the limitations of AI, and how will you communicate this information?
Notes
1. Tufan Adiguzel, Mehmet Haldun Kaya, and Fatih Kürşat Cansu, "Revolutionizing Education with AI: Exploring the Transformative Potential of ChatGPT," Contemporary Educational Technology 15, no. 3 (July 1, 2023): ep429, https://doi.org/10.30935/cedtech/13152.
2. Luke McKnight, "Accessibility of AI Interfaces," Langara College, accessed November 25, 2024, https://langara.ca/about-langara/academics/edtech/AI-Accessibility.html.
3. Karen Weise and Cade Metz, "When A.I. Chatbots Hallucinate," New York Times, last modified May 9, 2023, https://www.nytimes.com/2023/05/01/business/ai-chatbots-hallucination.html.
4. Golnoosh Babaei, David Banks, Costanza Bosone, Paolo Giudici, and Yunhong Shan, "Is ChatGPT More Biased Than You?," Harvard Data Science Review, July 31, 2024, https://hdsr.mitpress.mit.edu/pub/qh3dbdm9/release/2.
5. Leonardo Nicoletti and Dina Bass, "Humans Are Biased. Generative AI Is Even Worse," Bloomberg, June 9, 2023, https://www.bloomberg.com/graphics/2023-generative-ai-bias/.
6. Fengchun Miao and Wayne Holmes, "Guidance for Generative AI in Education and Research," UNESCO, 2023, https://doi.org/10.54675/EWZM9535.
7. Tom McKay, "Generative AI Data Leaks Are a Serious Problem, Experts Say," IT Brew, May 8, 2023, https://www.itbrew.com/stories/2023/05/08/generative-ai-data-leaks-are-a-serious-problem-experts-say.
8. Office of the Provost & Vice-President, "Privacy Considerations and Generative AI Tools," McMaster University, accessed October 16, 2024, https://provost.mcmaster.ca/office-of-the-provost-2/generative-artificial-intelligence-2/privacy-considerations-and-generative-ai-tools/.
9. Gil Appel, Juliana Neelbauer, and David A. Schweidel, "Generative AI Has an Intellectual Property Problem," Harvard Business Review, April 7, 2023, https://hbr.org/2023/04/generative-ai-has-an-intellectual-property-problem.
10. Jason Edward Lewis et al., "Indigenous Protocol and Artificial Intelligence Position Paper," 2020, https://doi.org/10.11573/spectrum.library.concordia.ca.00986506.
11. Billy Perrigo, "Exclusive: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic," Time, January 18, 2023, https://time.com/6247678/openai-chatgpt-kenya-workers/.
12. Pranshu Verma and Shelly Tan, "A Bottle of Water per Email: The Hidden Environmental Costs of Using AI Chatbots," The Washington Post, September 18, 2024, https://www.washingtonpost.com/technology/2024/09/18/energy-ai-use-electricity-water-data-centers/.
13. Cindy Gordon, "ChatGPT and Generative AI Innovations Are Creating Sustainability Havoc," Forbes, March 12, 2024, https://www.forbes.com/sites/cindygordon/2024/03/12/chatgpt-and-generative-ai-innovations-are-creating-sustainability-havoc/.
Glossary
Large language model: AI tools that can generate human-like text, based on predictions made after learning from vast amounts of written text.
Hallucination: Incorrect information supplied by an AI tool as if it were factual.