Key Messages | Practical Implications
---|---
Generative AI is a language prediction tool and doesn’t understand what it says. Therefore, it can produce text that appears convincing but may have factual errors. | You must fact-check everything produced by generative AI against trustworthy sources. |
Generative AI is inherently biased because it is trained on a biased data set: the entire open internet. | You must use critical thinking to ensure that you do not reproduce the biases, assumptions, and blind spots of generative AI.
It may be permissible to use generative AI as a tool to support the research and writing process, but always within the academic integrity guidelines. | Instead of using AI to generate text, consider ways of using the tools to support your research and refine your writing.
There are ethical problems around the use of generative AI, such as privacy and intellectual property concerns, human labour costs, and environmental impact. | You must make an informed choice about how you want to engage with generative AI, remembering that when you use these tools you give these private companies your data and ultimately help to improve their products.
Generative AI presents a plagiarism risk when it is not used carefully. | Check with your module coordinator about whether you can use generative AI and in what circumstances. If you are allowed to and choose to use it, you must acknowledge your use of generative AI. If in doubt, ask your module coordinator for advice.
Note on Use of Generative AI
Generative AI was not used to create any of these key messages. Although it might have generated similar content, we wanted to ensure that our messages were authentic and accurately reflected the many discussions we have had as a team and in the College more broadly. We did not want generative AI to have any part in shaping these key messages because of the risk of bias and omission.