Artificial Intelligence (AI) for School Leaders
6 November 2023 | Written by Rosie Kelly

Read the latest blog post written by Rosie Kelly, Information Governance Manager at Veritau, which focuses on artificial intelligence (AI) and its impact on schools.
There is no denying that this year has been a whirlwind of news, conversation, and debate around the use of artificial intelligence (AI). Although AI has been around for several years, its development and popularity have recently accelerated. This is starting to have a visible impact on everyday life, including in schools.
While some uses of AI seem to promise a positive social impact, there are other aspects that many people find problematic. According to a BBC article, concerns are shared by professionals across the education sector.
We understand that many schools have already begun using applications that deploy AI. Others may be more cautious or want to find out more. We appreciate that this is a complex topic, which people are only starting to get to grips with!
Over the coming months, we will be providing guidance to our school DPO clients to support them in navigating this tricky area. The articles will cover what AI is, how it relates to data protection, and the actions to consider.
In this article, we want to provide a broad overview of what the technology is and the benefits it could bring. We will also summarise the key risks your Senior Leadership Team should be aware of, including those that specifically relate to privacy and data protection compliance.
What is artificial intelligence?
Artificial intelligence is, ultimately, the use of machines, including computer programs, to solve problems based on large datasets.
Most AI programs in popular use today are trained to perform specific tasks. These systems use complex rules and instructions (known as algorithms) to classify data, make predictions, or produce various types of content.
Generative AI is a particular form of AI that is used to generate content which imitates materials produced by humans, such as text, videos, or images. The interfaces are generally simple to use, and the outcomes are often very convincing, which is why this type of AI has become incredibly popular.
Many AI programs now use large language models (LLMs) which have billions or even trillions of parameters. They are trained using huge datasets and are capable of realistic communication.
What are the benefits of AI?
The main driver behind the use of AI worldwide is efficiency and cost savings. Since the field of AI aims to mimic the intelligence and decision-making abilities of humans, it frees up individuals to spend time on other things. Technology does not require breaks, so AI is permanently available.
AI technologies have numerous innovative uses, particularly in the medical and technology sectors. AI can complete many tasks and decisions, even complex ones, far more quickly than humans. It is particularly useful for automating repetitive and tedious jobs.
If AI is programmed properly using high-quality, unbiased data, it can reduce the risk of human error and improve precision and consistency.
In 2021, the UK Government published a National AI Strategy in recognition of the “power of AI to increase resilience, productivity, growth and innovation across the private and public sectors”.
In March 2023, the Department for Education (DfE) also released a position statement titled Generative Artificial Intelligence in Education. This stated that, used appropriately, the technology has the potential to reduce workload and resource pressures, and potentially improve learning outcomes.
What are the risks?
Most AI developers have positive intentions and ambitions. However, people have raised legitimate concerns about ethical dilemmas and practical implications arising from the use of AI.
One key issue is that machines can learn the unconscious biases of humans through the datasets used to train them. This can result in discrimination. This is compounded by the fact that machines lack human emotions, which often act as our inner moral compass.
There is also speculation that AI could eventually replace many human workers, significantly altering our societies. This could affect low-paid unskilled workers the most, which would exacerbate inequality. It is also believed that AI could replace skilled jobs, such as those in the finance, medical and legal sectors.
Over-dependence on AI could also hamper human creativity, leading to the loss of useful skills like logical thinking and writing. In the education sector, the use of AI could allow pupils to cheat in their homework and even their assessed coursework.
Another concern is how AI could be misused maliciously in the wrong hands. It is already being used by criminals to develop increasingly realistic phishing scams and other cyber-attacks. It could also be used to negatively influence people, for example by generating and spreading false news.
There is also a risk of inaccuracy if the datasets used to train the AI are outdated or not robust. Many AI models are trained on data scraped from the internet, which could be inaccurate or otherwise flawed.
There are numerous examples of generative AI producing content that is factually incorrect. For example, the International Forum for Responsible Media recently demonstrated that Google Bard cites fictional case law.
What are the privacy and data protection implications?
In addition to the broader concerns, the use of AI carries risks to privacy and data protection. There is currently no specific legislation regulating AI in the UK. However, any AI that involves the use of personal data is within the scope of data protection laws.
Often personal data is used to train or test an AI model, or AI is deployed to make a decision (including a prediction, recommendation, or categorisation) about an individual based on their personal data. In these situations, the processing must comply with the UK GDPR and Data Protection Act 2018.
One of the main conflicts between AI and data protection is the potential for lack of transparency. Transparency is a key principle of the UK GDPR.
The UK GDPR also gives individuals certain rights in relation to entirely automated decision-making which has a legal or significant effect. This includes the right to meaningful information about the logic involved. They also have the right to obtain human intervention, express their view and contest the decision.
However, it can be difficult to communicate to data subjects the logic behind extremely complicated AI algorithms. Decisions can therefore be inscrutable and difficult to challenge.
AI must also comply with the fairness principle. This means AI should only use personal data in a way people would reasonably expect. The AI should not have discriminatory or unjustified negative effects on people.
What does this mean for us as school leaders?
Although the ethical concerns outlined above sound troubling, please be reassured that we envisage AI used in educational apps should raise fewer ethical issues than AI in other sectors. This is because educational technology is usually designed for the sole purpose of assisting schools to educate and protect pupils.
However, if you do intend to use apps that involve AI in school, there are specific steps you should take to remain compliant with the UK GDPR. You should also be cautious of the more practical risks, such as the use of AI for cheating and the possibility of data breaches caused by AI cyber scams.
For more information and a quote for ongoing DPO support in this area, please do not hesitate to get in touch with our schools service on [email protected].