The rapid development of artificial intelligence (AI) over the past few years has opened the door to many opportunities, including streamlining operations across organisations.
But it is no secret that the UK suffers from an AI skills gap. With technology advancing rapidly and showing no signs of slowing down, it is difficult for professionals and organisations to keep up with the new demands of the workplace.
A 2024 Salesforce survey found that 60% of public-sector IT professionals identified a shortage of AI skills as their top challenge in implementing AI. While the skills gap is universal, public-sector organisations likely feel the strain most acutely due to budget constraints and heavy workloads.
Could AI be the public sector’s ‘knight in shining armour’?
Many government departments are seeking to benefit from AI, from the NHS, which is using it to increase efficiency by detecting record-keeping inaccuracies, to local councils deploying AI assistants.
The education sector is another area looking to use AI to improve its day-to-day operations. Teachers are reportedly struggling with increased working hours and poor well-being.
With over 11 days of teacher strikes across the country last year, triggered in part by concerns about overwork, the introduction of AI in classrooms could make a real difference to teachers’ workloads.
A Schools Week article, ‘It’s more than just pay’, sets out the reasons for the teacher strikes. Chief among them are teachers’ excessive workloads and lack of work-life balance. The government hopes to combat some of these issues by using AI to aid teachers.
Teachers could soon use AI to create lesson plans and mark homework, thanks to a recent £4 million government project set to launch just before the new academic year begins. The project aims to provide teachers with more trustworthy AI technology to help with their daily teaching tasks and save them time.
So, how do parents feel about this?
As part of its announcement, the government published research showing a degree of nervousness around AI, especially among parents. This touches on an issue relevant to this project and every AI deployment: trust.
While it should come as no surprise that schools were “most trusted to make decisions about the use of pupils’ work and data,” the same cannot be said for tech companies with a possible interest in building AI platforms and tools.
“Trust in tech companies was extremely limited and there was little to no support for them to be granted control over AI and pupil work and data use,” the government said.
While no one can doubt the potential good that comes from AI-driven applications, this must be balanced against public perceptions around fairness and privacy. For many people, the algorithms behind AI can appear opaque. So, when AI systems fail, whether through error, bias, or misuse, it’s little wonder the public becomes sceptical.
Aligning innovation with public confidence in AI
Trust is fundamental to successfully implementing AI, whether in a classroom or in the workplace. Achieving accountability in AI requires embedding it deeply within the system through a comprehensive observability strategy, an approach that allows IT professionals to monitor and understand a system based on the data it generates.
As the name suggests, observability provides in-depth visibility into an IT system. It is an essential resource for overseeing sprawling toolsets and intricate public-sector workloads, and vital for helping ensure AI operations function correctly and ethically. It can also play a crucial role in regulatory compliance by providing detailed data points for auditing and reporting.
Beyond improving operational efficiency, observability is key to fostering public trust: it helps ensure AI systems are transparent and aligned with the needs of the people who use them.
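To make that idea concrete, here is a minimal sketch of what such auditable telemetry could look like for an AI-assisted marking tool. It is an illustration only, built on Python’s standard library; the function and field names are hypothetical rather than taken from any particular product, and a real deployment would ship these records to a central observability platform.

    import hashlib
    import json
    import logging
    import time

    # Plain JSON-lines audit log; hypothetical field names for illustration.
    logging.basicConfig(level=logging.INFO, format="%(message)s")
    audit_log = logging.getLogger("ai_audit")

    def record_ai_decision(model_version: str, pupil_work: str,
                           grade: str, confidence: float) -> None:
        """Emit one auditable record for an AI-assisted marking decision."""
        audit_log.info(json.dumps({
            "timestamp": time.time(),
            "model_version": model_version,
            # Hash the pupil's work rather than logging it verbatim,
            # so the audit trail itself holds no personal data.
            "input_sha256": hashlib.sha256(pupil_work.encode()).hexdigest(),
            "grade": grade,
            "confidence": confidence,
        }))

    # Example: log a single (hypothetical) homework-marking decision.
    record_ai_decision("marker-v1.2", "Pupil essay text...", "B+", 0.87)

Records like these give auditors and regulators something verifiable to inspect: which model version made each decision, when, and with what confidence, without exposing the underlying pupil data.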
Addressing concerns
It’s not just a question of accountability, however. Other issues are likely to come under significant public scrutiny, making an ‘AI by Design’ framework essential.
Research conducted in August 2024 by the UK government on parent and pupil attitudes towards AI in education highlighted that protecting data privacy is often a top concern for parents and teachers alike. Addressing these concerns requires putting proper protocols in place: embedding robust data encryption, stringent access controls, and comprehensive vulnerability assessments as standard. These steps help safeguard sensitive information and protect systems against external attacks and internal leaks.
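As a rough illustration of the first two of those protocols, the sketch below encrypts a pupil record before storage and gates decryption behind a simple role check. It assumes the open-source Python cryptography library; the role names and helper functions are hypothetical, and a real system would hold keys in a managed secrets store rather than in code.

    from cryptography.fernet import Fernet  # pip install cryptography

    # Illustrative only: in production the key lives in a secrets store.
    key = Fernet.generate_key()
    cipher = Fernet(key)

    # Hypothetical role table: only these roles may read pupil records.
    READ_ROLES = {"teacher", "data_protection_officer"}

    def store_record(plaintext: str) -> bytes:
        """Encrypt a pupil record before it touches disk or the network."""
        return cipher.encrypt(plaintext.encode())

    def read_record(token: bytes, role: str) -> str:
        """Decrypt a record only for roles on the access list."""
        if role not in READ_ROLES:
            raise PermissionError(f"role {role!r} may not read pupil records")
        return cipher.decrypt(token).decode()

    token = store_record("Pupil: A. Example, homework grade B+")
    print(read_record(token, "teacher"))   # allowed
    # read_record(token, "vendor")         # raises PermissionError

Pairing encryption at rest with explicit access rules means that even if data is intercepted or a system is breached, pupil records remain unreadable to anyone without both the key and an authorised role.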
Finally, more practical considerations must be factored into rolling out AI in the education sector. As with any AI project, education leaders must consider the end user. There needs to be a gradual build-up of trust in the tools rather than a jarring change, which can immediately put users on the defensive.
AI’s unique opportunity in education
The use of AI in the education sector offers a unique opportunity to alleviate the pressure on teachers while enhancing the quality of learning. If teachers feel they have more time for higher-value work, they are more likely to be productive and to see teaching as a long-term career.
For this to work, open communication between agencies and the public about AI’s capabilities and limitations is crucial. Keeping parents updated on the latest changes to their child’s education can smooth out concerns before they grow into bigger issues.
The education sector needs a shake-up to truly be fit for the future. Instead of being wary of technology, we must all be open to it—from students to teachers to parents. By balancing innovation with a commitment to public trust, AI can be the game-changing tool that revolutionises education.
Sascha Giese
Sascha Giese is Global Tech Evangelist, Observability, at SolarWinds, a leading provider of simple, powerful and secure solutions designed to help organisations accelerate business transformation in today’s hybrid IT world.