Carnegie Education

Artificial Intelligence: a paradigm shift for education

In education, it is clear that Artificial Intelligence (AI) had a significant impact on the lexicon in 2023. As we entered the calendar year, AI for most people in society - and for the microcosm of society that is the UK education system - was the stuff of newspaper headlines and science fiction storylines. As we leave 2023, we can all look back on a calendar year that has changed the educational landscape arguably as much as the Covid-19 pandemic did. This was the year that brought ChatGPT, Google Bard, Microsoft Copilot, and many others firmly into the zeitgeist. For those of us privileged enough to work in education, it was the year that Artificial Intelligence became an active member of, and contributor to, our classrooms, whether we were expecting or welcoming it or not.


But with great power comes great responsibility. Professor Richard Reece returned to this axiom in the keynote speech that introduced The University of Kent's conference on 'Unleashing Creativity through Generative AI' in October 2023. It exemplifies the tension between the use of AI in education and wider attitudes towards AI in society more generally. On the one hand, we have the power to create through and with AI, which can bring a multitude of benefits to both teachers and students. The UK government advocates a pro-innovation approach to the development of AI in both public and private sectors, which opens up a creative and developmental space to advance the capabilities and uses of AI. On the other hand, as educators we have a responsibility to help our students navigate the harms associated with AI, and to ensure that students have sufficient digital literacy skills, as originally advocated by Paul Gilster (1997), to promote critical thought when engaging with AI tools. By prioritising the pro-innovation approach, the UK government is enabling the private sector to advance the development of AI tools, making the UK a prime contributor to AI development - but will the private sector take responsibility for this great power seriously?

Open access generative AI tools have already been found to provide information that demonstrates a bias towards the global north, the English language, and white experiences. For example, AI has been found to provide health information which does not consider issues specific to certain racial groups (Goldman, 2023). The logical answer to this issue would be to provide more data for the technology to work from, although it has been found that training AIs on larger data sets could actually worsen their tendency to produce racially biased results (Hsu, 2023). The issue goes beyond text-based generative AI and has affected AI art and image generation (Small, 2023) as well as facial recognition technologies (Buolamwini, 2019). These examples serve as stark reminders that AI has the potential not only to perpetuate but to reinforce inequalities. However, perhaps this is exactly why space needs to be created for critical engagement and discussion around its use in our classrooms.

History tells us that education is often a laggard in adopting new technology. Despite the often transformative benefits of well-considered technological innovations, education - the tip of the spear when it comes to personal development, improvement, and the development of skills - is often more traditional and cautious in its approach to innovation. Whilst we all utilise the benefits of the web now, education was late to that party too, and although virtual and immersive reality is commonplace in many industries and in private training, it is almost entirely overlooked as an approach to pedagogy in mainstream education at any level. So in our view, it is imperative that education, as a sector and as a national interest, injects itself into the emerging conversation about AI now.

Artificial intelligence approaches are not set in stone: each major provider of AI uses a different approach to development, and even a different working definition of what AI means in detail. However, all systems utilise:

  • learning: the system must be able to learn from data
  • reasoning: the system must be able to make judgements based on its knowledge and understanding, and
  • problem-solving: the system must be able to solve either simple or complex problems.

Sound familiar? Through established computing approaches such as machine learning, the use of search algorithms, and the storage and organisation of data through knowledge representation principles, AI systems are able to learn, reason and problem-solve based on the data they have access to, which over time could theoretically be boundless.
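To make the three capabilities above concrete, here is a minimal sketch of "learning from data" and "making judgements" using a k-nearest-neighbour classifier written in plain Python. This is a toy illustration of the general principle, not any provider's actual implementation; the data and class names are invented for the example.

```python
from collections import Counter
import math

def euclidean(a, b):
    """Distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class TinyKNN:
    """Toy k-nearest-neighbour learner (illustrative only)."""

    def __init__(self, k=3):
        self.k = k
        self.examples = []  # stored (features, label) pairs

    def learn(self, features, label):
        # "Learning": retain labelled data to generalise from later.
        self.examples.append((features, label))

    def predict(self, features):
        # "Reasoning" / "problem-solving": judge an unseen case by
        # majority vote among the k most similar stored examples.
        nearest = sorted(self.examples,
                         key=lambda ex: euclidean(ex[0], features))[:self.k]
        votes = Counter(label for _, label in nearest)
        return votes.most_common(1)[0][0]

model = TinyKNN(k=3)
# Invented toy data: (hours studied, practice questions attempted) -> outcome
for features, label in [((1, 2), "fail"), ((2, 1), "fail"), ((2, 3), "fail"),
                        ((7, 8), "pass"), ((8, 9), "pass"), ((9, 7), "pass")]:
    model.learn(features, label)

print(model.predict((8, 8)))  # -> pass
```

The point of the sketch is simply that the system's "judgement" is entirely a product of the data it was given - which is also why the biased outputs discussed above arise from biased training data.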

So if we are open to this, it is clear that AI can benefit education in many ways. AI systems can create more personalised learning experiences for individual pupils and students, once they understand the desired curriculum and the preferred approaches to learning of each learner. They can also adapt learning by analysing real-time student performance: if a student is cruising through an element of the curriculum, AI can provide more challenge, and if a student is clearly struggling with a concept, AI can rethink the approach to the subject and simplify things for the learner. Just like a good teacher, then! Factor in 24/7 access to a chatbot that is essentially an expert in both the subject at hand and the most successful approaches to learning for each individual student, and we have a tool that could have a significant impact on learning.

So far in this post, we have shown that AI behaves like a good student in order to create learning opportunities like a good teacher. However, the majority of the narrative around AI from educators in 2023 has centred on how learners may use the technology to plagiarise, to cheat, and to subvert what we consider to be traditional approaches to education, and to assessment in particular. Whilst these are undoubtedly important considerations, as educators with a responsibility for inclusive and innovative learning, we must be open to the strides forward that AI can offer our pedagogical approaches.

References

  • Buolamwini, J. (2019) ‘Artificial Intelligence Has a Problem With Gender and Racial Bias. Here’s How to Solve It’, Time, 7 February. Available at: https://time.com/5520558/artificial-intelligence-racial-gender-bias/ (Accessed: 18 November 2023).
  • Gilster, P. (1997) Digital Literacy. New York: Wiley Computer Publications.
  • Goldman, M. (2023) ‘Study: Some AI chatbots provide racist health info’, Axios, 23 October. Available at: https://www.axios.com/2023/10/23/ai-chatbot-racism-medicine (Accessed: 18 November 2023).
  • Hsu, J. (2023) ‘Using bigger AI training data sets may produce more racist results’, New Scientist, 13 July. Available at: https://www.newscientist.com/article/2381644-using-bigger-ai-training-data-sets-may-produce-more-racist-results/ (Accessed: 18 November 2023).
  • Small, Z. (2023) ‘Black Artists Say A.I. Shows Bias, With Algorithms Erasing Their History’, The New York Times, 4 July. Available at: https://www.nytimes.com/2023/07/04/arts/design/black-artists-bias-ai.html (Accessed: 18 November 2023).

Dr Steve Burton

Head of Subject / Carnegie School Of Education

Dr Steve Burton is the Head of Subject for Digital Transformative Education in the Carnegie School of Education. He undertakes research in digital learning, and lectures in the areas of digital learning, safeguarding, digital safety, leadership, and professionalism in education.