Generative AI in Computing Education: Wrecking Ball or Holy Grail?
Brett A. Becker and Brian Mac Namee, School of Computer Science, University College Dublin
Is Generative AI¹ over-hyped, or is it going to revolutionise education? In this article we (extremely briefly) summarise the potential impacts of GenAI in the domain of education – specifically computing education – in terms of what has been done so far, as well as the challenges and opportunities to come. In December 2023 Yasmin B. Kafai – a developer of the hugely popular visual programming language Scratch – said in her ACM CompEd² keynote address that in her time there has never been a conversation that has so ubiquitously dominated the education community as the conversation about GenAI³. Perhaps that is not surprising when you consider that PCs took decades to come into homes and revolutionise our lives, and that the internet was also a relatively slow burner, taking years to truly impact society. Even then, personal computing and the internet largely supported us in doing the same things we’ve always done – communicating, moving money around, getting work done, finding entertainment, and so on. In much less time than this, GenAI has eaten the internet for breakfast and accomplished something that the PC and the internet could not really do – generate stuff. Sure, PCs and the internet could be argued to generate stuff in some circumstances, but by and large they help people make and share stuff. Until some recent point in history, most of the information on the internet was human-generated, or human-generated with the help of computers and programs (which humans had to write). GenAI, on the other hand, can generate content – some of it quite novel – with comparatively little human input, input which typically takes the form of natural language.
Clearly there are many opportunities and challenges in this domain (Becker et al. [3]). However, before we ask how GenAI might affect Computer Science (CS) education, let’s look at a sampling of what GenAI has done so far. Here we provide a very incomplete whirlwind tour with references – should you feel like going rabbit-chasing – many of them led by UCD CS academics and PhD students⁴. In 2023 GPT-4 aced the Leaving Certificate and A-Level CS papers (Mahon et al. [15], Weckler [22]). This wasn’t that surprising, however, as it was shown 2.5 years ago – in what has become known as “The Robots Are Coming” paper (Finnie-Ansley et al. [9]) – that Codex (the predecessor of GPT-3⁵) ranked in the top 25% of real (human) university introductory programming (CS1) students, in terms of performance on the same exams those students took. A year later Codex performed almost as well on data structures (CS2) exams (Finnie-Ansley et al. [10]). By last summer, GPT-4 scored 100% on every question on those same exams, except for one on which it scored 90% (Denny et al. [6], Prather et al. [17]). For more on that front, see a paper by UCD researchers and PhD students, co-authored with colleagues from around the world, in the February 2024 Communications of the ACM (Denny et al. [8]). Also, well over a year ago, GenAI could create themed and student-ready CS1 programming assignments, complete with solutions and test cases – yes, solutions that properly answered the questions, and tests that properly checked solutions, student- or AI-generated, against the correct ones (Sarsa et al. [20]). At UC San Diego, Leo Porter just wrapped up teaching his CS1 course using his new book, co-authored with Dan Zingaro at the University of Toronto: “Learn AI-Assisted Python Programming With GitHub Copilot and ChatGPT” (Porter and Zingaro [16]). Overall the AI-first approach went very well, and it shows promise in overcoming some well-known and long-standing barriers in programming education (Ismael et al. [11]), including syntax errors, which GenAI has independently and empirically been shown to largely overcome (Leinonen et al. [13]). For 52 pages of current GenAI research, including student and instructor survey and interview results, check out “The Robots are Here” paper (Prather et al. [17]), co-led by UCD CS researchers.
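To make this concrete, here is a minimal Python sketch of how one might prompt an LLM for a themed, student-ready exercise, in the spirit of (but not reproducing) the pipeline of Sarsa et al. [20]. The model name, theme, and prompt wording are illustrative assumptions on our part:

```python
# Minimal sketch: asking an LLM to generate a themed CS1 exercise with a
# sample solution and tests. Assumes the openai Python package (>= 1.0)
# and an OPENAI_API_KEY environment variable; "gpt-4" is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

theme = "space exploration"          # contextual theme for the exercise
concept = "loops and list indexing"  # CS1 concept to practise

prompt = (
    f"Write a CS1 programming exercise themed around {theme} that "
    f"practises {concept}. Include: (1) a problem statement, "
    "(2) a sample Python solution, and (3) three pytest test cases."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

In practice one would still validate the generated solution against the generated tests (and with a human eye) before releasing an exercise to students.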
So, where is this going presently? The “assessment apocalypse”, during which many educators felt that GenAI should be banned (Lau and Guo [12]), seems to have simmered down – after all, cheating is not a new phenomenon, and GenAI was just yet another means to the same end (Prather et al. [17]). However, it is clear that university policy needs to directly address the impacts of GenAI, including on academic integrity (Russell et al. [19]), and that this needs to be clearly articulated to students (Prather et al. [17]). Filling the void left by the initial panic are opportunities where GenAI can seriously impact teaching and learning (Becker et al. [3]) beyond, but very much including, programming (Becker et al. [2]). For starters, we need to understand more about how students interact with GenAI (Prather et al. [18]) and provide tooling and strategies for effective use (Denny et al. [7]). On the classroom practice front, it is very likely that in a few short years GenAI will be powering virtual TAs (Bryan [5]) that are always available, always approachable, and never tire of hearing the same questions over and over. Most likely these will be fine-tuned on a specific module or lecturer’s notes and other materials so that responses are tailored to a specific student’s context. These virtual TAs – also called personalised learning assistants – could also have access to student data, allowing them to aid student progression via time-tested techniques, such as mastery learning, that have proven extremely difficult to scale effectively. This has been one goal of the Artificial Intelligence in Education (AIEd) community for years (Becker [1]). Trials are already underway, for instance in Harvard’s CS50 course (Liu et al. [14]).
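As a sketch of the virtual TA idea – ours, not the approach of CS50 [14] or any other system cited above – the snippet below uses a simpler alternative to fine-tuning: grounding answers by placing a lecturer’s notes directly in the system prompt. The file name, model, and prompt wording are assumptions:

```python
# Minimal "virtual TA" sketch: ground the model in a module's own notes
# via the system prompt rather than fine-tuning. "week3_notes.md" is a
# hypothetical file; assumes the openai package and an OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()

with open("week3_notes.md", encoding="utf-8") as f:
    notes = f.read()

system_prompt = (
    "You are a patient teaching assistant for an introductory "
    "programming module. Answer using only the course notes below, and "
    "nudge the student towards the answer rather than handing over "
    "complete solutions.\n\n--- COURSE NOTES ---\n" + notes
)

question = "Why does my for loop skip the last element of my list?"

reply = client.chat.completions.create(
    model="gpt-4",  # illustrative model choice
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": question},
    ],
)

print(reply.choices[0].message.content)
```

A production system would add retrieval over larger collections of materials, guardrails, and access to (suitably protected) student data; the point here is only that tailoring responses to a module’s own materials is already straightforward.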
What about the concerns (Becker et al. [3])? A very incomplete list of broad-ranging concerns spans bias, ethics, and equity. For a taxonomy of risks posed by LLMs, see Weidinger et al. [23]. In education there is also the issue of how GenAI affects existing educational processes and structures, too many of which are (finally, unignorably) no longer fit for purpose. It is also possible that GenAI will change who studies what (including CS). This could be good, or bad, or both. The economy and the media are big factors in influencing student interest, and GenAI has impacted both. Not all of the concerns may turn out to be negative or come to fruition, however – in this arena, many possible pitfalls are intricately intertwined with possible upsides. For instance, GenAI could narrow some existing gaps and disparities. Leo Porter, after teaching the course mentioned above, said “We believe LLMs lower the barrier for understanding how to program and, in turn, may help us bring in a broader and more diverse group of students and professionals to the field” (Ismael et al. [11]). However, new gaps and disparities could arise in terms of equity and participation, such as the development of an AI-divide (Seger et al. [21]). Topics such as these are where the conversation is currently turning. Only now, at the start of 2024, are we in a period where the dust has settled to the point that such important issues can be investigated with some semblance of clarity. However, longer-term effects will remain to be seen for some time. What do we do if GenAI-assisted students start to ace all of their material? Grade harder? Cover more content more quickly? Completely rethink how assessment in higher education works? Options such as these would have drastic effects on the curriculum and would raise profound professional, economic, societal, ethical, and other issues. Although here we focus on education, it is important to note that research and education cannot be cleanly separated. It is critical to point out that AI is still only a tool that can help researchers, and that AI by itself is not capable of conducting or producing science on its own (Birhane et al. [4]).
It is easy to point the finger at the technology of GenAI when looking for the cause of the rapid disruption that accompanied its arrival, and through which we are still working. However, it is arguably only indirectly responsible. In a way, GenAI is just the messenger of news that was coming at some point anyway. It is likely that “catalyst” is a better term than “cause”. Any serious disruption to education is just as much a function of the fact that measuring learning with grades and GPAs is an imperfect process to begin with. Education is a fragile system that is easy to upset and notoriously slow to change.
Perhaps GenAI isn’t over-hyped at all when it comes to education. Could it be that GenAI is a saviour, here to catalyse long overdue change in a system that needed a massive overhaul in the first place? Ask ChatGPT. We just did. Here’s what it said – verbatim – when presented with the first two sentences of this paragraph:
It’s an interesting perspective to consider GenAI as a catalyst for change in the education system. The potential of generative AI in education is quite significant. Here are a few ways it could impact the sector:
However, it’s also important to approach this with caution. Challenges such as ensuring data privacy, preventing biases in AI algorithms, and maintaining the human element in education are crucial. The integration of GenAI in education should be done thoughtfully to enhance the learning experience without replacing the invaluable human elements of teaching and mentorship.
Not too shabby! Perhaps the future does look bright – but as usual, long-term gain often involves short-term pain.
FOOTNOTES
1 Generative AI (GenAI) is an umbrella term for artificial intelligence that generates content – for instance, Large Language Models (LLMs) that power chatbots such as OpenAI’s ChatGPT and Google’s Bard, and diffusion models that generate images, such as OpenAI’s DALL·E and Midjourney. For a brief overview of Artificial Intelligence, see https://www.ibm.com/topics/artificial-intelligence
2 The ACM Global Computing Education Conference: https://comped.acm.org
3 Not a direct quote, but from my memory – Brett
4 [1–3, 6–10, 13, 15, 17–19] are available open-access from https://brettbecker.com/publications
5 GPT-3 is obviously the predecessor of GPT-4, currently the most powerful model behind ChatGPT.
REFERENCES
[1] Brett A. Becker. 2017. Artificial Intelligence in Education: What is it, Where is it Now, Where is it Going? In Ireland’s Yearbook of Education 2017-2018, Brian Mooney (Ed.). 30, Vol. 1. Education Matters, Dublin, Ireland, 42–48. https://educationmatters.ie/artificial-intelligence-in-education ISBN: 978-0-9956987-1-0.
[2] Brett A. Becker, Michelle Craig, Paul Denny, Hieke Keuning, Natalie Kiesler, Juho Leinonen, Andrew Luxton-Reilly, Lauri Malmi, James Prather, and Keith Quille. 2023. Generative AI in Introductory Programming. https://csed.acm.org/large-language-models-in-introductory-programming First Draft, to be published in the CS2023: ACM/IEEE-CS/AAAI Computer Science Curricula.
[3] Brett A. Becker, Paul Denny, James Finnie-Ansley, Andrew Luxton-Reilly, James Prather, and Eddie Antonio Santos. 2023. Programming Is Hard - Or at Least It Used to Be: Educational Opportunities and Challenges of AI Code Generation. In Proceedings of the 54th ACM Technical Symposium on Computer Science Education V. 1 (Toronto, ON, Canada) (SIGCSE 2023). Association for Computing Machinery, New York, NY, USA, 500–506. https://doi.org/10.1145/3545945.3569759
[4] Abeba Birhane, Atoosa Kasirzadeh, David Leslie, and Sandra Wachter. 2023. Science in the Age of Large Language Models. Nature Reviews Physics (2023), 1–4. https://www.nature.com/articles/s42254-023-00581-4
[5] Claire Bryan. 2023. Chatbots Might Disrupt Math and Computer Science Classes. Some Teachers See Upsides. Associated Press (Oct 2023). https://apnews.com/article/chatgpt-math-computer-science-3fc4b72d69d34627ba3f2fa74491ea21
[6] Paul Denny, Brett A. Becker, Juho Leinonen, and James Prather. 2023. Chat Overflow: Artificially Intelligent Models for Computing Education - RenAIssance or ApocAIypse?. In Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education V. 1 (Turku, Finland) (ITiCSE 2023). Association for Computing Machinery, New York, NY, USA, 3–4. https://doi.org/10.1145/3587102.3588773 Video: https://www.youtube.com/watch?v=KwVcRXQc3IU
[7] Paul Denny, Juho Leinonen, James Prather, Andrew Luxton-Reilly, Thezyrie Amarouche, Brett A. Becker, and Brent Reeves. 2024. Prompt Problems: A New Programming Exercise for the Generative AI Era. In Proceedings of the 55th SIGCSE Technical Symposium on Computer Science Education (Portland, OR, USA) (SIGCSE ’24). Association for Computing Machinery, New York, NY, USA. https://doi.org/10.1145/3626252.3630909 Preprint available: https://arxiv.org/abs/2311.05943
[8] Paul Denny, James Prather, Brett A. Becker, James Finnie-Ansley, Arto Hellas, Juho Leinonen, Andrew Luxton-Reilly, Brent N. Reeves, Eddie Antonio Santos, and Sami Sarsa. 2024. Computing Education in the Era of Generative AI. Commun. ACM 67, 2 (Jan 2024), 56–67. https://doi.org/10.1145/3624720 Magazine Version: https://cacm.acm.org/magazines/2024/2/279537-computing-education-in-the-era-of-generative-ai/fulltext
[9] James Finnie-Ansley, Paul Denny, Brett A. Becker, Andrew Luxton-Reilly, and James Prather. 2022. The Robots Are Coming: Exploring the Implications of OpenAI Codex on Introductory Programming. In Proceedings of the 24th Australasian Computing Education Conference (Virtual Event, Australia) (ACE ’22). Association for Computing Machinery, New York, NY, USA, 10–19. https://doi.org/10.1145/3511861.3511863
[10] James Finnie-Ansley, Paul Denny, Andrew Luxton-Reilly, Eddie Antonio Santos, James Prather, and Brett A. Becker. 2023. My AI Wants to Know If This Will Be on the Exam: Testing OpenAI’s Codex on CS2 Programming Exercises. In Proceedings of the 25th Australasian Computing Education Conference (Melbourne, VIC, Australia) (ACE ’23). Association for Computing Machinery, New York, NY, USA, 97–104. https://doi.org/10.1145/3576123.3576134
[11] Katie E. Ismael, Ioana Patringenaru, and Kimberley Clementi. 2023. In This Era of AI, Will Everyone Be a Programmer? UC San Diego Today (Dec 2023). https://today.ucsd.edu/story/in-this-era-of-ai-will-everyone-be-a-programmer
[12] Sam Lau and Philip Guo. 2023. From “Ban It Till We Understand It” to “Resistance is Futile”: How University Programming Instructors Plan to Adapt as More Students Use AI Code Generation and Explanation Tools Such as ChatGPT and GitHub Copilot. In Proceedings of the 2023 ACM Conference on International Computing Education Research - Volume 1 (Chicago, IL, USA) (ICER ’23). Association for Computing Machinery, New York, NY, USA, 106–121. https://doi.org/10.1145/3568813.3600138
[13] Juho Leinonen, Arto Hellas, Sami Sarsa, Brent Reeves, Paul Denny, James Prather, and Brett A. Becker. 2023. Using Large Language Models to Enhance Programming Error Messages. In Proceedings of the 54th ACM Technical Symposium on Computer Science Education V. 1 (Toronto, ON, Canada) (SIGCSE 2023). Association for Computing Machinery, New York, NY, USA, 563–569. https://doi.org/10.1145/3545945.3569770
[14] Rongxin Liu, Carter Zenke, Charlie Liu, Andrew Holmes, Patrick Thornton, and David J. Malan. 2024. Teaching CS50 with AI. In Proceedings of the 55th SIGCSE Technical Symposium on Computer Science Education (Portland, OR, USA) (SIGCSE ’24). Association for Computing Machinery, New York, NY, USA. https://doi.org/10.1145/3626252.3630938 Preprint available: https://cs.harvard.edu/malan/publications/V1fp0567-liu.pdf
[15] Joyce Mahon, Brian Mac Namee, and Brett A. Becker. 2023. No More Pencils No More Books: Capabilities of Generative AI on Irish and UK Computer Science School Leaving Examinations. In Proceedings of the 2023 Conference on United Kingdom & Ireland Computing Education Research (Swansea, Wales, UK) (UKICER ’23). Association for Computing Machinery, New York, NY, USA, Article 2, 7 pages. https://doi.org/10.1145/3610969.3610982
[16] Leo Porter and Daniel Zingaro. 2023. Learn AI-Assisted Python Programming with GitHub Copilot and ChatGPT. Manning, Shelter Island, NY, USA. https://www.manning.com/books/learn-ai-assisted-python-programming
[17] James Prather, Paul Denny, Juho Leinonen, Brett A. Becker, Ibrahim Albluwi, Michelle Craig, Hieke Keuning, Natalie Kiesler, Tobias Kohn, Andrew Luxton-Reilly, Stephen MacNeil, Andrew Petersen, Raymond Pettit, Brent N. Reeves, and Jaromir Savelka. 2023. The Robots Are Here: Navigating the Generative AI Revolution in Computing Education. In Proceedings of the 2023 Working Group Reports on Innovation and Technology in Computer Science Education (Turku, Finland) (ITiCSE-WGR ’23). Association for Computing Machinery, New York, NY, USA, 108–159. https://doi.org/10.1145/3623762.3633499
[18] James Prather, Brent N. Reeves, Paul Denny, Brett A. Becker, Juho Leinonen, Andrew Luxton-Reilly, Garrett Powell, James Finnie-Ansley, and Eddie Antonio Santos. 2023. “It’s Weird That It Knows What I Want”: Usability and Interactions with Copilot for Novice Programmers. ACM Trans. Comput.-Hum. Interact. 31, 1, Article 4 (Nov 2023), 31 pages. https://doi.org/10.1145/3617367
[19] Seán Russell, Simon Caton, and Brett A. Becker. 2023. Online Programming Exams - An Experience Report. In Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education V. 1 (ITiCSE 2023). Association for Computing Machinery, New York, NY, USA, 436–442. https://doi.org/10.1145/3587102.3588829
[20] Sami Sarsa, Paul Denny, Arto Hellas, and Juho Leinonen. 2022. Automatic Generation of Programming Exercises and Code Explanations Using Large Language Models. In Proceedings of the 2022 ACM Conference on International Computing Education Research - Volume 1 (Lugano and Virtual Event, Switzerland) (ICER ’22). Association for Computing Machinery, New York, NY, USA, 27–43. https://doi.org/10.1145/3501385.3543957
[21] Elizabeth Seger, Aviv Ovadya, Divya Siddarth, Ben Garfinkel, and Allan Dafoe. 2023. Democratising AI: Multiple Meanings, Goals, and Methods. In Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society (Montréal, QC, Canada) (AIES ’23). Association for Computing Machinery, New York, NY, USA, 715–722. https://doi.org/10.1145/3600211.3604693
[22] Adrian Weckler. 2023. ChatGPT Scored up to H1 on Leaving Cert Computer Science Exam. Irish Independent (May 2023). https://www.independent.ie/business/technology/chatgpt-scored-up-to-h1-on-leaving-cert-computer-science-exam/a215051642.html
[23] Laura Weidinger, Jonathan Uesato, Maribeth Rauh, Conor Griffin, Po-Sen Huang, John Mellor, Amelia Glaese, Myra Cheng, Borja Balle, Atoosa Kasirzadeh, Courtney Biles, Sasha Brown, Zac Kenton, Will Hawkins, Tom Stepleton, Abeba Birhane, Lisa Anne Hendricks, Laura Rimell, William Isaac, Julia Haas, Sean Legassick, Geoffrey Irving, and Iason Gabriel. 2022. Taxonomy of Risks Posed by Language Models. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (Seoul, Republic of Korea) (FAccT ’22). Association for Computing Machinery, New York, NY, USA, 214–229. https://doi.org/10.1145/3531146.3533088
Published February 7th 2024