5 Supporting Students to Use AI Effectively

What do students need to know to prepare them for using AI tools in the classroom?

This chapter explores how to help students:

  • learn to use an AI tool through experimentation and research;
  • evaluate and adapt AI outputs for real-world use;
  • differentiate between ethical and unethical uses of AI tools.

Teaching Effective Use of AI

Interacting with AI Tools

It’s likely that most students in a Canadian polytechnic classroom have already engaged with AI tools. However, many lack effective strategies for prompting, reprompting, and adapting AI outputs. The Writing and Refining Prompts chapter outlines three strategies for prompting an AI tool, which can be explored in the classroom and even tested against one another. One activity could involve having different student groups use competing strategies to complete the same task, then comparing the quality of the final products. Heidi McKee of Miami University suggests having students submit a record of their prompting and reprompting, with annotations explaining their reasoning, then present their strategy to the class.[1] Students may also benefit from learning how to adapt AI outputs, including through reprompting and human refinement strategies.
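For classes comfortable with a little code, the sketch below shows one way a student could keep the kind of prompt-and-reprompt record McKee describes. It is a minimal illustration only: the OpenAI Python client, the model name, and the ask() helper are assumptions chosen for this example, not part of McKee’s activity, and any chat-capable tool could be substituted.

    # A minimal sketch of logging a prompt/reprompt sequence with annotations
    # that explain the reasoning behind each revision. Assumes the OpenAI
    # Python client (pip install openai) and an API key in the environment.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    log = []  # each entry pairs a prompt with the student's reasoning

    def ask(prompt: str, annotation: str) -> str:
        """Send a prompt and record it alongside the student's annotation."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model name; substitute as needed
            messages=[{"role": "user", "content": prompt}],
        )
        reply = response.choices[0].message.content
        log.append({"prompt": prompt, "annotation": annotation, "reply": reply})
        return reply

    ask("Summarize the causes of the 1919 Winnipeg General Strike.",
        "First attempt: a broad prompt to see the tool's default output.")
    ask("Rewrite the summary for first-year students in under 150 words, "
        "citing at least two specific events.",
        "Reprompt: the first reply was long and general, so I added an "
        "audience, a length limit, and a specificity requirement.")

A log like this gives students a concrete artifact to annotate and present, which is the point of the activity; the programming itself is incidental.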

The way we interact with AI tools is likely to change rapidly as the tools evolve, and the methods currently used to prompt and coach them may be less relevant by the time students enter the workforce.[2] For this reason, building the skills needed to learn a new AI tool through research and experimentation, and learning to critically evaluate an AI tool’s outputs, may be a more productive use of limited class time than an exhaustive study of today’s best practices for prompting.

Evaluating AI Outputs

One starting point for promoting effective AI use is helping students understand the strengths and weaknesses of AI tools. Classroom activities can address this either as their primary goal or as collateral learning when AI tools are used in service of another skill or concept.

Comparing AI Output and Human Writing

RRC Polytech Civil Engineering Technology instructor Ellowyn Nadeau has students evaluate AI outputs as part of a unit on report writing. After completing their own report-writing exercises, students ask ChatGPT to address the same report-writing task, then analyze the quality of the AI output. In a previous term, students overwhelmingly agreed that ChatGPT’s raw output provided some usable structure but was ineffective overall until it had been significantly adapted.[3] Mollick and Mollick argue that activities like Nadeau’s, which involve critical reflection on AI outputs, “sharpen [student] skills while having the AI serve as a supportive tool for their work, not a replacement.”[4] Follow-up activities could focus on the kind of reprompting and human refinement needed to turn an AI-produced report into one that is usable in professional and academic settings.

Identify the AI Output

Another approach to investigating AI tools’ strengths and weaknesses is having students attempt to distinguish between human- and AI-produced works. In one activity, students use AI tools to create a work, reprompting to refine the output until it reads as if it were produced by a human. Students then produce their own work on the same topic, and the class attempts to identify which works were produced by AI and which by fellow students. In addition to prompting critical discussion of the strengths and weaknesses of AI outputs, this activity gives students practice writing prompts, reprompting to refine AI outputs, and coaching AI tools to produce text that closely resembles their own voice.[5]

Fact-Checking AI Output

Another way to build AI literacy is by fact-checking AI-generated content. As part of RRC Polytech Library and Academic Services presentations on information literacy, reference technician Fatima DeMelo has students analyze an AI-produced biography of a prominent member of the RRC Polytech community, using the SIFT method to verify facts and correct errors.[6] RRC Polytech communications instructor Jacob Carewick has explored AI-produced deepfake videos and images as part of a unit on media literacy, in which students learned to recognize common hallmarks of faked imagery. Carewick’s activity also included an exploration of online sources for fact-checking viral disinformation, giving students strategies for verifying the authenticity of potential fakes.[7] Both activities equip students with verification strategies that are likely to remain useful as the strengths and weaknesses of AI tools continue to evolve.

AI Literacy as Collateral Learning

Most uses of AI in education aren’t intended primarily to teach AI literacy but to enhance the learning of another skill or concept. Even so, instructors can add extension activities that help students develop AI expertise. The following list pairs each original activity with an extension that builds students’ capacity for using AI tools effectively.

  • Original activity: Write an essay or report with limited use of AI tools allowed. Extension: Write a short reflection evaluating the strengths and weaknesses of the AI tool’s output.
  • Original activity: Produce a table, chart, or graph with an AI tool. Extension: In small groups, share final drafts and the prompting process that produced them.
  • Original activity: Conduct a research project in which AI use is allowed during the initial research and planning stages. Extension: In pairs, compare information from the AI tool to that of secondary sources and reflect on its accuracy and relevance.
  • Original activity: Create a web application using an AI tool as a coding assistant. Extension: After reviewing and debugging the AI-produced code, engage in a class discussion about its strengths and weaknesses (a code sketch following this list shows the kind of bug students might find).
  • Original activity: Use an AI tool to produce custom images. Extension: Compare AI-produced images to human-produced images to evaluate their appropriateness for a task.
  • Original activity: Incorporate AI into recipe creation or menu planning. Extension: Compare AI output to culinary practices and industry norms to identify what changes are needed and why.
  • Original activity: Prompt an AI tool to produce a set of instructional slides about a healthcare workplace process (like hand washing). Extension: Evaluate the AI output against local knowledge, checking for accuracy.

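To make the coding-assistant extension above more concrete, here is a small, hypothetical example of the kind of flaw students might be asked to find during review: an AI-suggested Python helper that looks plausible but mishandles an edge case. Both the snippet and its bug are invented for illustration.

    # Hypothetical AI-suggested helper for a web application: split a list
    # of records into pages. It looks plausible but silently drops the
    # final, partial page.
    def paginate(records, page_size):
        pages = []
        for i in range(len(records) // page_size):  # bug: floor division
            pages.append(records[i * page_size:(i + 1) * page_size])
        return pages

    # A student testing an edge case would catch the bug: records 8 and 9
    # are missing from the result.
    assert paginate(list(range(10)), 4) == [[0, 1, 2, 3], [4, 5, 6, 7]]

    # Corrected version: step through the list directly so the last page
    # is kept even when it is shorter than page_size.
    def paginate_fixed(records, page_size):
        return [records[i:i + page_size]
                for i in range(0, len(records), page_size)]

    assert paginate_fixed(list(range(10)), 4) == [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]

Debriefing an example like this in class discussion highlights why AI-produced code needs the same review and testing as any other code.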

Teaching Responsible and Ethical Use of AI

It’s likely that students who learn to use AI responsibly and ethically during their studies will carry these habits into the workforce. Consequently, it is essential to equip students with the skills and attitudes needed to use AI appropriately in their program of study.

Instructor Disclosure of AI Use

Promoting ethical and responsible use of AI tools begins with an open dialogue between students and instructors. Instructor use of AI tools (when allowed by institutional and departmental policy) should be disclosed to students to open or continue the conversation and to model responsible usage. Instructors can share how AI tools have helped them brainstorm case studies or project topics, produce a set of images, or draft outlines for lesson plans. AI usage can be disclosed in the course outline, in class meetings, or in materials posted to the course learning management system (LMS) to demonstrate transparency, build trust, and position the instructor alongside students as someone who is also learning about and experimenting with AI.[8] The following image shows how one RRC Polytech communications instructor disclosed use of an AI tool in developing an assignment.


[Image: a screenshot from the text of an assignment. It reads, “While based on real issues, these scenarios are fictional/include fictional details. Your instructor used AI tools to help brainstorm the list of project scenarios for the Winter 2024 term. Topics 2 and 8 are adapted from ideas generated by ChatGPT.”]
An example of instructor disclosure of AI use.

Course AI Policies

Clear expectations can be set by crafting an AI policy for a course that lists acceptable and unacceptable uses of AI. Co-creating the policy with your class makes room for deeper conversation and reflection, and can lead to stronger relationships, greater trust, and increased student ownership of the policy.[9] For more on how to craft course AI policies and an example, please see the Setting Policy and Communicating Expectations chapter.

Addressing Inappropriate Student Use of AI Tools

It can be difficult to handle a breach of course AI policy (which is potentially also a breach of institutional academic integrity policy). Suspected inappropriate use of AI tools is hard to prove, since AI-detection tools are unreliable and their use may itself violate institutional policies around privacy and ownership of student work.

An exploratory meeting with the student offers an opportunity to gather more information about their work: asking questions about their drafting process, content choices, and the supports they used, and verifying that the submitted work matches the student’s knowledge and abilities. Such a meeting also gives instructors a chance to clarify expectations around AI use, guide students in developing strategies for completing academic work appropriately, and make referrals to academic supports. The information gathered can also help instructors refine how they communicate academic integrity and AI-use policies.

Case Studies

Exploring case studies about appropriate and inappropriate uses of AI in academia, the workforce, and the community may also be impactful. Case studies and their corresponding discussion questions can be adapted for use in the classroom.


Exploring Bias

Bias enters AI tools through the decisions of their designers and engineers and through the data sets on which large language models are trained. Michael McKenna calls bias “the backbone of machine learning” and highlights the need to differentiate between useful bias, like a medical AI tool that treats patients with a family history of breast cancer as being at higher risk of breast cancer, and harmful bias, like a law enforcement AI tool that treats people of some races as more likely to commit crimes.[10] Students need to be made aware of the potential for AI tools to produce outputs with gender, racial, socioeconomic, disability-related, and other forms of bias. Some example activities to explore bias include the following (a short code sketch after the list illustrates how bias enters a model through its training data):

  • analyzing AI tools’ text and image outputs for gender and other biases;[11]
  • exploring case studies, such as criminal justice risk-assessment systems like COMPAS or OASys, hiring algorithms, or facial recognition software, to raise awareness of the real-world impacts of AI bias;
  • organizing a jigsaw activity in which students each explore one type of AI bias, its implications, and how to spot it, then share their findings with the class.
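For programs where students can read a little code, the toy sketch below shows the mechanism in miniature: a classifier trained on skewed historical decisions reproduces the skew when scoring new candidates. The hiring data and the scikit-learn setup are invented for illustration and are not drawn from any real system.

    # Toy illustration of data-driven bias. Hypothetical past hiring
    # decisions favoured group 0 regardless of experience; a model trained
    # on those decisions learns to do the same.
    from sklearn.linear_model import LogisticRegression

    # Features: [group, years_of_experience]; label: 1 = hired in the past
    X = [[0, 2], [0, 3], [0, 5], [0, 1],
         [1, 2], [1, 3], [1, 5], [1, 6]]
    y = [1, 1, 1, 1,
         0, 0, 0, 1]  # group 1 was mostly rejected despite experience

    model = LogisticRegression().fit(X, y)

    # Two candidates identical except for group membership: the model will
    # likely favour the group that was favoured in the training data.
    print(model.predict([[0, 4], [1, 4]]))

Even a toy example like this can anchor a discussion of McKenna’s distinction: the model’s bias is not a malfunction but a faithful reflection of its training data, which is exactly why harmful patterns in that data matter.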

Key Takeaways

Students need to be taught:

  • basic prompting skills;
  • strategies for learning a new AI tool;
  • how to critically evaluate AI output;
  • how to adapt AI outputs for real-world use; and
  • how to use AI tools ethically at school and in the workplace.

Exercises

  • Reflect on your students’ level of aptitude with AI tools. What do they know already and what do they need to learn?
  • Plan ways to integrate AI into an upcoming class. How can the lesson include instruction on using AI tools effectively?
  • Consider how you will disclose your use of AI tools to your next classes.
  • Develop a strategy for incorporating learning about AI bias and the unethical use of AI tools in upcoming courses.

  1. Heidi McKee, "Rhetorical Strategies for Prompting: Developing Prompt Literacies," Exploring AI Pedagogy: A Community Collection of Teaching Reflections, last modified November 27, 2023, https://exploringaipedagogy.hcommons.org/2023/11/27/rhetorical-strategies-for-prompting-developing-prompt-literacies/.
  2. Phillip Dawson, "Don't Fear the Robot," YouTube video, 1:25:54, last modified July 18, 2023, https://www.youtube.com/watch?v=ZEhRFVO6rr0.
  3. Ellowyn Nadeau, focus group, November 22, 2023.
  4. Ethan Mollick and Lilach Mollick, "Assigning AI: Seven Approaches for Students, with Prompts," SSRN, last modified September 23, 2023, https://doi.org/10.2139/ssrn.4475995.
  5. Mark Marino and Mary Traester, "The Turing Test," Exploring AI Pedagogy: A Community Collection of Teaching Reflections, last modified November 27, 2023, https://exploringaipedagogy.hcommons.org/2023/11/27/the-turing-test/.
  6. Fatima DeMelo, focus group, November 22, 2023.
  7. Jacob Carewick, focus group, November 6, 2023.
  8. Paul R. MacPherson Institute for Leadership, Innovation and Excellence in Teaching, "Generative Artificial Intelligence in Teaching and Learning at McMaster University," McMaster University, last modified 2023, https://ecampusontario.pressbooks.pub/mcmasterteachgenerativeai/.
  9. Jessica Gemella and Anwen Burk, "Digital Pedagogy Toolbox: Digital Literacy Outcomes," BCcampus, last modified December 8, 2023, https://bccampus.ca/2023/12/08/digital-pedagogy-toolbox-digital-literacy-outcomes/.
  10. Michael McKenna, "Bias in AI: How to Mitigate AI Bias," Toptal, accessed January 24, 2024, https://www.toptal.com/artificial-intelligence/mitigating-ai-bias.
  11. Chrissi Nerantzi, Sandra Abegglen, Marianna Karatsiori, and Antonio Martínez-Arboleda, "101 Creative Ideas to Use AI in Education, A Crowdsourced Collection," Zenodo, last modified June 23, 2023, https://doi.org/10.5281/zenodo.8072950.

License


Generative Artificial Intelligence: Practical Uses in Education Copyright © 2024 by Troy Heaps is licensed under a Creative Commons Attribution 4.0 International License, except where otherwise noted.
