Principles for Using AI in the Classroom and Acknowledgement Statements

Joel Gladd and Liza Long

Why do I need to understand generative AI?

College courses aim to provide students with durable skills—meaning those strategies and critical thinking skills that translate most obviously into workplace environments. Today we’re seeing a transformation in professional workflows because of how generative AI and other forms of machine learning can augment what professionals do.

In May 2024, Microsoft reported that generative AI (GenAI) usage had doubled in the preceding months, “with 75% of global knowledge workers using it,” and those who use it say it helps them save time, focus, be more creative, and enjoy their work more.

In August 2024, another report showed that 86.5% of employees used GenAI at work. Here’s how some work departments are using it, covering a range of backgrounds from marketing and business to STEM-related fields such as computer programming:

  • social media content
  • planning and building marketing strategies
  • search engine optimization (SEO)
  • content ideation (brainstorming, etc.)
  • writing content
  • content research
  • proofreading
  • bug-fixing and debugging software
  • testing code snippets
  • code generation and research
  • drafting messages to customers
  • analyzing customer feedback

Those in healthcare may think this is all about writing and coding, but AI is transforming the healthcare industry as well. In addition to the above, AI models are now:

  • automating documentation;
  • helping with data entry and extraction;
  • managing communication;
  • monitoring regulatory compliance;
  • helping with administrative workflows and task prioritization;
  • facilitating patient outreach;
  • enhancing images for better diagnosis;
  • augmenting data;
  • reducing noise and predicting pathology;
  • personalizing treatment plans;
  • supporting clinical decisions;
  • researching and developing new drugs.

Even the more hands-on side of patient care, such as the tasks normally undertaken by nurses and other practitioners, is being transformed by GenAI, including:

  • patient data analysis;
  • predicting potential complications;
  • real-time suggestions for interventions and medication dosages;
  • documentation assistance, including clinical notes from voice recordings and summarizing patient interactions;
  • simulating patient scenarios for training;
  • patient communication, including real-time translation services;
  • generating personalized patient education materials.

What about those in Career and Technical Education? GenAI will be disruptive there as well, though in very different ways. Those in the automotive repair industry, for example, will begin seeing GenAI:

  • analyze vehicle data, symptoms, and repair history to suggest potential issues and solutions more accurately;
  • interpret complex diagnostic codes and sensor readings;
  • analyze images or descriptions of parts to identify them accurately;
  • anticipate parts needs based on common repair patterns, improving inventory management;
  • create realistic simulations and training scenarios to practice complex repairs;
  • help mechanics explain technical issues to customers in simpler terms;
  • provide estimated repair times and costs more accurately;
  • analyze repair shop data to optimize workflow.

Some of these applications look similar across industries, especially producing and analyzing content (helping with customer communication and outreach, for example), but we are also seeing a wide variety of uses emerge as these tools are adapted to individual professions. Each of you will need to research and understand how AI is affecting your own field of interest.

Principles for Using AI in the Workplace

No matter what your career interests are, GenAI and machine learning are becoming everyday tools. Understanding the basics of how they work is an important first step. But it’s also important to foster a mindset and adopt certain principles that you find empowering and productive.

In his book Co-Intelligence, Ethan Mollick presents guiding principles that can help you navigate AI in your work life effectively. Those who wish to incorporate these tools into their workflows may find these principles useful.

Principle 1: Invite AI to the Table

Working with AI is increasingly a valuable skill that complements many others. One power move is to approach any situation with the question, “How can I use AI here?” By learning to use these tools now, you’re setting yourself up to adjust seamlessly as they evolve and become more powerful. Embracing AI in your workflow today means it will feel more familiar as new capabilities emerge.

Principle 2: Be the Human in the Loop

In Co-Intelligence, Mollick emphasizes the importance of “being the human in the loop.” This means actively checking AI’s outputs for accuracy, maintaining ethical standards, and applying your own judgment. Unlike a simple calculator, AI requires you to bring oversight, critical thinking, and responsibility to the collaboration.

For students, remaining the human in the loop means you also need to build a foundation in your area of study that gives you enough insight to evaluate AI outputs. You will often need to build competence without AI’s assistance before you can critique its work competently. For this reason, when faced with a problem or challenge, consider using AI, but remain the thoughtful human guiding the process.

Centaurs, Cyborgs, and Resisters: Understanding Your AI Style

How you use AI may depend on your comfort level. Some people blend AI seamlessly into their work, others prefer a clear boundary between human-created and AI-generated content, and others may take a more antagonistic stance towards these tools. Mollick uses the metaphors of centaurs and cyborgs to describe these approaches. We’re adding the third category, resisters.

Centaurs

A centaur’s approach draws clear lines between human and machine tasks, like the mythical centaur with its distinct human upper body and horse lower body. Centaurs divide tasks strategically: the person handles what they’re best at, while AI manages other parts. Here’s Mollick’s example: you might use your expertise in statistics to choose the best model for analyzing your data but then ask AI to generate a variety of interactive graphs. AI becomes, for a centaur, a tool for specific tasks within their workflow.

Cyborgs

Cyborgs deeply integrate their work with AI. They don’t just delegate tasks; they blend their efforts with AI, constantly moving back and forth between human and machine. A cyborg approach might look like writing part of a paragraph, asking AI to suggest how to complete it, and revising the AI’s suggestion to match your style. Cyborgs may be more likely to violate a course’s AI policy, so be aware of your instructor’s preferences!

Resisters: The Diogenes Approach

Mollick does not suggest this third option, but we find it important to recognize that some students and professionals feel deeply uncomfortable with even the centaur approach; our institution and faculty will support this preference as well. Not everyone will embrace AI. Some may prefer to actively resist its influence, raising critical awareness about its limitations and risks. Like the ancient Greek philosopher Diogenes, who made challenging cultural norms his life’s work, you might focus on warning others about AI’s potential downsides and advocating for caution in its use. Of course, those taking this stance should understand the tools as well as centaurs and cyborgs do. In fact, resisters may need to study AI tools even more deliberately.

It’s probably not practical to always identify as a cyborg, centaur, or resister. These are styles of interacting with an emerging technology, not identities. The most sophisticated cyborgs will occasionally become centaurs, and sometimes even resisters, when the situation calls for it. Likewise, someone who feels more drawn to the resister mode will have to “grok” what it means to be a cyborg or centaur if they intend to offer critical guidance to others.

Principles for Using AI in the Classroom

Educational environments foster the same durable skills that prepare you for workplace and lifelong success. However, there is a key difference between the classroom and the workplace: while you’re learning those skills, instructors need to be able to assess the choices you’re making, often under challenging circumstances, so they can offer guidance about how to succeed.

Unlike most workplace scenarios, classroom environments are designed to assess student learning so that students are prepared for the future. This means instructors must be able to see the labor, that is, the choices a student made in order to figure out how to respond to a challenge. That labor usually requires effort, what some like to call “friction,” and it’s often uncomfortable at first. It also takes time. GenAI can often reduce that friction.

What’s becoming clear, though, is that using AI effectively requires human input on many different levels (remain the human in the loop). If you want to be successful in the future, the difference between you and the person you’re competing with will be how much base knowledge and AI savviness you bring to problem-solving. The base knowledge requires a deep familiarity with the models and concepts relevant to your domain, and that is what courses are designed to help you build. It’s true, to a certain extent, that computer programmers can now “program with words,” relying on higher-order thinking rather than typing out routine functions again and again. But accessing those higher-order ways of thinking (prompting with models and concepts in mind) is the proficiency you need to acquire first. Without it, you’ll be as replaceable as any other worker who can type things into a chatbot. With it, and with the comfort of working in challenging environments, you will be far better positioned to unlock the potential of AI.

The difference between a course’s core models or concepts and busy work will sometimes be obvious, but at other times it won’t. In a math course, understanding how basic algorithms and matrices work, for example, is incredibly important for understanding how machine learning works and provides insight into a completely different way of processing information, which in turn will allow you to prompt AI in powerful ways. In a writing course, knowing that there are such things as rhetorical appeals and distinct genres unlocks a wide range of tasks. In a philosophy course, you become more aware of the ethical frameworks companies use to align AI, and you learn which ideas and concepts are relevant when asking whether an output is ethical.
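To make the matrix point concrete, here is a minimal, purely illustrative sketch (our own example in Python using the numpy library, not part of any particular CWI course): a single “layer” of a neural network is essentially a matrix multiplication plus a bias, exactly the kind of operation an introductory linear algebra course covers.

    # Purely illustrative: one "layer" of a neural network is
    # a matrix multiplication plus a bias (linear algebra in action).
    import numpy as np

    x = np.array([0.5, -1.0, 2.0])        # an input: three numbers describing something
    W = np.array([[0.2, 0.8, -0.5],       # a 2x3 weight matrix the model has "learned"
                  [1.0, -0.3, 0.7]])
    b = np.array([0.1, -0.2])             # a bias vector

    h = W @ x + b                         # the layer's output: two new numbers
    print(h)                              # the result: -1.6 and 2.0

Modern language models stack enormous numbers of operations like this one, which is why even a little linear algebra goes a long way toward understanding what these tools can and cannot do.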

Higher education is beginning to adjust to this new world in which chatbots can help students at any moment. This may reduce the amount of busy work you encounter in the classroom. At the same time, you will be expected to demonstrate the choices you’ve made in order to solve certain challenges; that will take work and struggle, and course policies and sanctions are there to provide guardrails to ensure it happens.

Understanding AI Syllabus Policies

This final section offers guidance on how to understand AI course policies at CWI. There are three options that your instructors choose from: 1) most restrictive, 2) moderately restrictive, and 3) least restrictive. In each part below, you will find the official language followed by some guidance on how to interpret what’s allowed and what’s prohibited. Note that any use of GenAI that impacts a submission must be accompanied by an acknowledgement statement.

Most Restrictive Language

Aligned with my commitment to academic integrity and a teaching focus on creating original, independent work, the use of generative artificial intelligence (AI) tools, including but not limited to ChatGPT, DALL-E, and similar platforms, to develop and submit work as your own is prohibited in this course. Using AI for assignments constitutes academic dishonesty, equivalent to cheating and plagiarism, and will be met with sanctions consistent with any other Academic Integrity violation.

What this allows:

  • Since the language focuses on generative AI such as ChatGPT and other Large Language Models (LLMs), this does not restrict using other forms of machine learning, such as transcription tools that help with accessibility, or basic tools such as grammar and spell-check. Note that some grammar tools, such as Grammarly, have generative AI options (Grammarly Pro), and the GenAI options to paraphrase or revise writing would not be allowed under this policy.

What this prohibits at the course level:

  • For longer writing tasks, using AI to outline may be prohibited; ask your instructor for clarification.
  • Using AI to draft responses (written, math-based, programming, etc.) to assignments is prohibited.
  • Using AI to revise or alter responses is likely prohibited.

What may be allowed (but you should ask):

  • Whether brainstorming is allowed depends on what your instructor means by “develop.” Some forms of brainstorming may be permitted, though a policy prohibiting any brainstorming with generative AI would also be impossible to enforce.

Moderately Restrictive Language

Aligned with my commitment to academic integrity and the ethical use of technology, this course allows AI tools like ChatGPT, DALL-E, and similar platforms for specific tasks such as brainstorming, idea refinement, and grammar checks. Using AI to write drafts or complete assignments is not permitted, and any use of AI must be cited, including the tool used, access date, and query.  It is the expectation that in all uses of AI, students critically evaluate the information for accuracy and bias while respecting privacy and copyright laws.

What this allows:

  • Anything allowed under the category above (Most Restrictive) is also allowed here.
  • Brainstorming and outlining are allowed or even encouraged.

What this prohibits at the course level:

  • Drafts you intend to submit to the course (written, verbal, math-based, etc.)  cannot be generated by AI.
  • Any other use of generative AI to help with submitted coursework must be acknowledged and explained.

What may be allowed (but you should ask):

  • Your instructor may allow generative AI for improving certain aspects of a completed draft, such as revising topic sentences, etc. Ask before doing this and acknowledge AI use.
  • Since this applies at the course level, your instructor may allow or even ask you to use AI for certain tasks.

Least Restrictive Language

Aligned with my commitment to academic integrity, creativity, and the ethical use of technology, AI tools like ChatGPT, DALL-E, and similar platforms are encouraged as a supplementary resource to enhance learning, not as a replacement for personal insight or analysis. Any use of AI must be cited, including the tool used, access date, and query. I expect that in all uses of AI, students critically evaluate the information for accuracy and bias while respecting privacy and copyright laws.

What this allows:

  • Anything from the allowed categories above (Most Restrictive and Moderately Restrictive).

What this prohibits at the course level:

  • You cannot use generative AI to assist with submitted coursework unless it is acknowledged and explained.
  • Since this applies at the course level, your instructor may ask you not to use AI for certain tasks.

Acknowledging and Citing Generative AI (GenAI) in Academic Work

Liza Long

As an instructor and a writer, I have found that generative artificial intelligence tools can help explore ideas, refine research questions, outline arguments, and break down difficult concepts for students. When my students use ChatGPT, I ask them to include a citation to the tool they used and also to provide a brief reflection about how they used ChatGPT and how they checked the information for accuracy. Here is an example of a reflection from Luka Denney’s essay in Beginnings and Endings, a student-created open education resource.

For this essay, I used ChatGPT as a resource to give me a summary of the feminist and queer theory analysis lens, “Feminist queer theory is a critical analysis lens that combines feminist theory and queer theory to examine how gender and sexuality intersect and shape social power dynamics. This approach challenges the dominant cultural norms that promote heteronormativity, gender binary, and patriarchy, which result in marginalizing individuals who do not conform to these norms.” With this, it helped me better understand the material so I could write better essays. This information was accessed on, May 6th, 2023.

Reflecting on how and why you are using generative AI can help you to ensure that you are not plagiarizing from this tool.

Luka’s reflection is an example of an acknowledgment statement, which is separate from a citation. Students will increasingly need to become familiar with AI acknowledgment statements and to clarify with their instructors when these statements are needed.

Suggestions for Acknowledging Use of AI

Monash University provides helpful recommendations for how to acknowledge when and how you’ve used AI-generated material as part of an assignment or project. If you decide to use generative artificial intelligence such as ChatGPT for an assignment, it’s a best practice to include a statement that does the following:

  • Provides a written acknowledgment of the use of generative artificial intelligence.
  • Specifies which technology was used.
  • Includes explicit descriptions of how the information was generated.
  • Identifies the prompts used.
  • Explains how the output was used in your work.

The format Monash University provides is also helpful. Students may include this information either in a cover letter or in an appendix to the submitted work.

I acknowledge the use of [insert AI system(s) and link] to [specific use of generative artificial intelligence]. The prompts used include [list of prompts]. The output from these prompts was used to [explain use].

Academic style guides such as APA already provide guidelines for appendices after essays and reports. Review Purdue OWL’s entry on Footnotes and Appendices for help.

Citing AI Chatbots

In some situations, students may want to cite information from a chatbot conversation, such as a definition or discussion of a concept they want to use in an essay. The American Psychological Association (APA) and the Modern Language Association (MLA), two of the most frequently used style guides for college writing, have both provided guidelines for how to do this.

ChatGPT includes the ability to share links to specific chats. It’s a best practice to include those links in your reference. For other tools, like Google Docs’ Writing Assistant, links are not yet available, so it’s important to be transparent with your reader about how and when you are using AI in your writing.

Here’s an example of a shared chat link in ChatGPT 3.5. When you click on the link, you’ll be able to see both the author’s prompts and the chatbot’s responses. Including links provides transparency for your writing process.

APA Style

According to the American Psychological Association (APA), ChatGPT should be cited like this:

When prompted with “Is the left brain right brain divide real or a metaphor?” the ChatGPT-generated text indicated that although the two brain hemispheres are somewhat specialized, “the notion that people can be characterized as ‘left-brained’ or ‘right-brained’ is considered to be an oversimplification and a popular myth” (OpenAI, 2023).

Reference

OpenAI. (2023). ChatGPT (Mar 14 version) [Large language model]. https://chat.openai.com/chat

MLA Style

The Modern Language Association (MLA) uses a template of core elements to create citations for a Works Cited page. MLA asks students to apply this approach when citing any type of generative AI in their work. They provide the following guidelines:

  • Cite a generative AI tool whenever you paraphrase, quote, or incorporate into your own work any content (whether text, image, data, or other) that was created by it.
  • Acknowledge all functional uses of the tool (like editing your prose or translating words) in a note, your text, or another suitable location.
  • Take care to vet the secondary sources it cites. (MLA)

Here are some examples of how to use and cite generative AI with MLA style:

Example One: Paraphrasing Text

Let’s say that I am trying to generate ideas for a paper on Charlotte Perkins Gilman’s short story “The Yellow Wallpaper.” I ask ChatGPT to provide me with a summary and identify the story’s main themes. Here’s a link to the chat. I decide that I will explore the problem of identity and self-expression in my paper.

My Paraphrase of ChatGPT with In-Text Citation

The problem of identity and self-expression, especially for nineteenth-century women, is a major theme in “The Yellow Wallpaper” by Charlotte Perkins Gilman (“Summarize the short story”).

Image of "Yellow Wallpaper Summary" chat with ChatGPT

Works Cited Entry

“Summarize the short story ‘The Yellow Wallpaper’ by Charlotte Perkins Gilman. Include a breakdown of the main themes” prompt. ChatGPT, 24 May version, OpenAI, 20 Jul. 2023, https://chat.openai.com/share/d1526b95-920c-48fc-a9be-83cd7dfa4be5.

Example Two: Quoting Text

In the same chat, I continue to ask ChatGPT about the theme of identity and self-expression. Here’s an example of how I could quote the response in the body of my paper:

When I asked ChatGPT to describe the theme of identity and self-expression, it noted that the eponymous yellow wallpaper acts as a symbol of the narrator’s self-repression. However, when prompted to share the scholarly sources that formed the basis of this observation, ChatGPT responded, “As an AI language model, I don’t have access to my training data, but I was trained on a mixture of licensed data, data created by human trainers, and publicly available data. OpenAI, the organization behind my development, has not publicly disclosed the specifics of the individual datasets used, including whether scholarly sources were specifically used” (“Summarize the short story”).

It’s worth noting here that ChatGPT can “hallucinate” fake sources. As a Microsoft training manual notes, these chatbots are “built to be persuasive, not truthful” (Weise & Metz, 2023). The May 24, 2023 version will no longer respond to direct requests for references; however, I was able to get around this restriction fairly easily by asking for “resources” instead.

When I ask for resources to learn more about “The Yellow Wallpaper,” here is one source it recommends:

“Charlotte Perkins Gilman’s The Yellow Wallpaper: A Symptomatic Reading” by Elaine R. Hedges: This scholarly article delves into the psychological and feminist themes of the story, analyzing the narrator’s experience and the implications of the yellow wallpaper on her mental state. It’s available in the journal “Studies in Short Fiction.” (“Summarize the short story”).

Using Google Scholar, I look up this source to see if it’s real. Unsurprisingly, this source is not a real one, but it does lead me to another (real) source: Kasmer, Lisa. “Charlotte Perkins Gilman’s ‘The Yellow Wallpaper’: A Symptomatic Reading.” Literature and Psychology 36.3 (1990): 1.

Note: ALWAYS check any sources that ChatGPT or other generative AI tools recommend.

A Checklist for Acknowledging and Citing Generative A.I. Tools

In conclusion, it’s important to follow these five steps if you are considering whether or not to use and cite generative artificial intelligence in your academic work:

  1. Check with your instructor to make sure you have permission to use these tools.
  2. Reflect on how and why you want to use generative artificial intelligence in your work. If the answer is “to save time” or “so I don’t have to do the work myself,” think about why you are in college in the first place. What skills are you supposed to practice through this assignment? Will using generative artificial intelligence really save you time in the long run if you don’t have the opportunity to learn and practice these skills?
  3. If you decide to use generative artificial intelligence, acknowledge your use, either in an appendix or a cover letter.
  4. Cite your use of generative artificial intelligence both in text and on a References/Works Cited page.
  5. Always check the information provided by a generative artificial intelligence tool against a trusted source. Be especially careful of any sources that generative artificial intelligence provides.

These tools are rapidly evolving and have the potential to transform the way that we think and write. But just as you should not use a calculator to solve a math equation unless you understand the necessary steps to perform the calculation, you should also be careful about “outsourcing” your thinking and writing to ChatGPT.

References

Denney, L. (2023). Your body, your choice: At least, that’s how it should be. Beginnings and Endings: A Critical Edition. https://cwi.pressbooks.pub/beginnings-and-endings-a-critical-edition/chapter/feminist-5/

McAdoo, T. (2023, April 7). How to cite ChatGPT. APA Style Blog. https://apastyle.apa.org/blog/how-to-cite-chatgpt  

Modern Language Association. (2023, March 17). How do I cite generative AI in MLA style? https://style.mla.org/citing-generative-ai/

Monash University. (n.d.). Acknowledging the use of generative artificial intelligence. https://www.monash.edu/learnhq/build-digital-capabilities/create-online/acknowledging-the-use-of-generative-artificial-intelligence

OpenAI. (2023). Yellow Wallpaper themes. ChatGPT (May 24 version) [Large language model]. https://chat.openai.com/share/70e86a32-6f04-47b4-8ea7-a5aac93c2c77

Weiss, K. & Metz, C. (2023, May 9). When A.I. chatbots hallucinate. The New York Times. https://www.nytimes.com/2023/05/01/business/ai-chatbots-hallucination.html 

 

License


Pathways to College Success Copyright © by Joel Gladd and Liza Long is licensed under a Creative Commons Attribution 4.0 International License, except where otherwise noted.
