AI and the Public Sector: Understanding the current landscape


In the first of a series of articles for Public Sector journal, Stephen Clarke, Data and Information Management Consultant and former Chief Archivist, provides an overview of the use of artificial intelligence (AI) technologies within the public sector and their impact on our work, and asks what issues AI raises for public servants.

AI is rapidly transforming the public sector globally, with the potential to improve efficiency, effectiveness, and public service delivery. However, it is important to understand the current AI landscape, including the existing applications, policy and regulatory frameworks, and potential issues, to maximise the benefits of this technology while mitigating the risks and challenges it raises.

Current applications of AI in the public sector

AI is already being used in a variety of ways across the New Zealand public sector, including:

  • automating tasks, such as processing applications, issuing invoices, and responding to customer enquiries
  • improving decision-making by analysing large amounts of data to identify patterns and insights
  • personalising services, with chatbots providing support to users and algorithms recommending services relevant to the individual.

These applications, for the most part, accelerate existing practice. What is new, however, are the generative AI use cases, where traditional data processing can be applied to unstructured text, speech, and audio-visual content. The ability to generate documents, auto-summarise meetings, and write machine code was brought to public consciousness by ChatGPT.
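To make this concrete, here is a minimal sketch of the auto-summarisation use case: a short script that asks a hosted large language model to summarise an unstructured meeting transcript. It assumes the OpenAI Python client and an API key; the model name, prompt, and file name are illustrative assumptions, not a recommendation of any particular product.

```python
# Minimal sketch: auto-summarising a meeting transcript with a hosted LLM.
# Assumes the OpenAI Python client (pip install openai) and an OPENAI_API_KEY
# environment variable; the model name is an illustrative assumption.
from openai import OpenAI

client = OpenAI()

def summarise_meeting(transcript: str) -> str:
    """Return a short, structured summary of an unstructured transcript."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any capable chat model would do
        messages=[
            {"role": "system",
             "content": "Summarise this meeting in five bullet points, "
                        "listing the decisions made and the actions assigned."},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    with open("meeting_transcript.txt") as f:  # hypothetical input file
        print(summarise_meeting(f.read()))
```

Note that in this pattern the whole transcript is sent to an external provider, which is exactly the data-leakage concern discussed below.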

Policy and regulatory frameworks for AI in the public sector

The government has committed to the responsible and ethical use of AI in the public sector. In 2023, it released the Interim Generative AI guidance for the public service, built on the following principles:

  • Be transparent and accountable: Public sector agencies should be transparent about their use of AI and accountable for the decisions that AI systems make.
  • Be fair and equitable: AI systems should be designed and used in a way that is fair and equitable to all users.
  • Be respectful of privacy and security: AI systems should be designed and used in a way that respects the privacy and security of citizens.
  • Consider Te Tiriti o Waitangi: Public sector agencies should consider the principles of Te Tiriti o Waitangi when designing and using AI systems.

However, the Ministry of Business, Innovation and Employment has banned staff from using artificial intelligence technology, citing privacy risks. Bans like this may drive unauthorised or covert use of ‘Shadow AI’. The productivity advantage that staff get from AI means bans will inevitably be ignored, and not just by a small rogue minority. A recent Cybernews study reported that:

  • Forty-four per cent of workers have used generative AI (GenAI), and 25 per cent of those visits leaked data.
  • Confidential information being input into GenAI tools includes internal business data, source code, and personally identifiable information.
  • Separately, a study by professional social network Fishbowl found that 68 per cent of staff don’t disclose their AI usage to their bosses.

The answer, in my view, is not the King Canute approach of ordering the tide of AI availability to recede. It is to invest in trust frameworks, governance, and sensible use case identification: taking control of AI through clearly articulated use cases and providing a pathway for public servants to have legitimate access to these capabilities.

Developing public sector use cases for AI

The public sector needs to take ownership of AI and bring it in-house to mitigate Shadow AI and reliance on external providers. By teaching AI our own organisational language, culture, and values, and by training and fine-tuning models in-house, we retain data sovereignty and develop AI systems that fit our local context.
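As a sketch of what training and fine-tuning in-house can look like, the following uses the open-source Hugging Face libraries to fine-tune a small open-weights model on an agency’s own documents, entirely on local infrastructure. The model name, corpus file, and hyperparameters are illustrative assumptions only.

```python
# Sketch: fine-tuning a small open-weights model on in-house text so that the
# data never leaves the agency's infrastructure. Assumes the transformers,
# datasets, and peft libraries; all names below are illustrative.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

BASE = "EleutherAI/pythia-160m"  # assumption: any small open model would do
tokenizer = AutoTokenizer.from_pretrained(BASE)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE)

# LoRA keeps the fine-tune small, cheap, and easy to audit or roll back.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16,
                                         task_type="CAUSAL_LM"))

# "agency_corpus.txt" stands in for a de-identified set of in-house documents.
data = load_dataset("text", data_files="agency_corpus.txt")["train"]
data = data.map(lambda x: tokenizer(x["text"], truncation=True, max_length=512),
                remove_columns=["text"])

Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetune_out", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```

Whether this is done with a small local model, as here, or a larger one in a sovereign cloud, the design choice is the same: the training data and the resulting model stay within the organisation’s digital boundary.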

Public sector organisations should therefore bring in smaller, independent New Zealand-based AI experts to act as a bridge to the multinationals: helping agencies design, build, and operate their own AI and machine learning (ML) practice and embed their own culture and ethics within their own digital boundaries. This builds local capability, supports explainable decision-making, and, most importantly, keeps public sector data in-house, sovereign, and private.

Why can’t we copy what other countries do?

Why not just pick up the European Union’s AI Act or the United States’ Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence? This is not a sustainable approach in my view, as:

  • AI regulations need to be tailored to our specific technological landscape and skills base
  • AI requires local customisation to support local practices and to reflect our local contexts
  • our legal system differs from those of other jurisdictions
  • we have different social and cultural values and Tiriti obligations to meet.

There is therefore a real opportunity to overhaul the whole information and data regulatory environment into a cohesive approach that serves the public sector and society, and regulates the private sector.

Developing information, data, and AI standards

Standards can play a significant role in the regulation of AI. They can provide a framework for assessing the safety, reliability, and fairness of AI systems. This is well articulated in the Standards Australia Responsible and Inclusive AI Whitepaper.

The New Zealand public sector should adopt international standards where practicable and adapt them to our context. AI is unusual in having a single, globally harmonised international working group for standards, bringing together ISO, IEC, and the EU’s CEN, with only China publishing independent standards.

Issues raised by AI for public servants

While AI has the potential to transform the public sector for the better, it also raises challenges for public servants, including:

1. Job displacement

    The cost of AI technology is decreasing, and it is becoming more accessible. It is therefore increasingly feasible for public sector agencies to adopt AI technologies and replace many traditional roles, particularly those that can be easily automated, are process-driven, are based on data analysis, or involve summarising or producing text-based outputs such as policies, reports, and briefings.

    While AI is likely to displace some public sector jobs, it is also likely to create new jobs and opportunities. For example, public servants will be needed to design, develop, and manage AI systems, and public servants will be required to interpret the results of AI systems and to make decisions based on those results in order to mitigate bias.

    Against this threat of job displacement, we must look at the existing skills that public servants bring to AI, including:

    • Domain expertise: Deep knowledge of the public sector and the problems it faces, essential for designing and deploying AI solutions.
    • Stakeholder engagement: Essential for ensuring that AI solutions meet stakeholders’ needs.
    • Critical thinking and problem-solving: Essential for evaluating AI solutions and using them to solve complex public sector problems.

    Also, the poor state of information management and data governance across the public sector, including the lack of standardisation, interoperability, classification, descriptive metadata, and provenance, means there is much work to be done to get public sector content ready for AI.

2. Bias and discrimination

    Specific issues within the New Zealand context are:

    • Māori and Pasifika

    One of the biggest risks associated with AI is bias. AI systems are trained on data, and if that data is biased, the AI system will be biased. An AI system used to assess bail risk may be more likely to recommend detention for Māori and Pasifika people, even if they pose no greater risk, because the historical training data encodes the bias that Māori and Pasifika people were more likely to be arrested and charged with crimes (a toy demonstration of this feedback loop appears at the end of this section).

    • Privacy risks

    AI systems can be used to track and monitor people’s movements and activities. This can serve legitimate purposes, such as preventing crime, but it can also be used for surveillance and social control. For example, AI was used to track the movements of people returning from overseas during the COVID-19 pandemic, to monitor self-isolation compliance. This raises concerns about the government’s ability to track and monitor people’s movements without their consent.

    • New Zealand data sovereignty

    Data sovereignty is the right of individuals and communities to own and control their own data. The risk is that data will be collected, used, or shared without the consent of the individuals or communities to whom it belongs, and, because of the way AI systems are trained, it is difficult to retrospectively remove training data from them.

    These concerns are particularly high for Māori. If Māori do not have control over their own data, they are at risk of being exploited and marginalised: the data may be used to make decisions about Māori without their consent, and they may be discriminated against on the basis of their own data.
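The bail-risk example above can be illustrated with a toy model. In the sketch below, the two groups have identical underlying risk, but the historical detention labels were harsher on one group, and a model trained on those labels reproduces the bias. All data here is synthetic; this demonstrates the mechanism, not any real system.

```python
# Toy demonstration: bias in historical labels propagates into predictions.
# All data is synthetic; this is not a real risk model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)      # 0 = majority, 1 = minority
risk = rng.normal(0.0, 1.0, n)     # true underlying risk, identical for both

# Historical decisions were harsher on the minority group at the same risk,
# so that bias is baked into the labels the model learns from.
detained = (risk + 0.8 * group + rng.normal(0.0, 0.5, n)) > 0.5

model = LogisticRegression().fit(np.column_stack([group, risk]), detained)

# At identical risk, the model recommends detention far more often for the
# minority group: the second probability printed is markedly higher.
print(model.predict_proba([[0, 0.0], [1, 0.0]])[:, 1])
```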

3. Opaque decision-making

    Automated decision-making by the public sector is challenging, not only for the obvious ethical reasons but also for maintaining the public sector’s licence to operate within a democratic society. There should always be a ‘human in the loop’ when using AI to ensure:

    • Explainability: To mitigate the risk of black box decision-making.
    • Ethics: AI systems are used in a way that is consistent with human values and interests.
    • Accuracy: AI is good at processing large amounts of data and identifying patterns, but humans are better at making decisions in complex situations.

    Government decision-making must be explainable for transparency and accountability. We must be able to replicate or understand how a determination was made, and a human must be responsible and accountable for it. By having humans and AI systems working together, we can get the best of both worlds.
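One way to make the ‘human in the loop’ requirement concrete is to design systems so the AI can only ever recommend, while a named officer decides and every step is logged for later explanation. The sketch below shows that shape; the field names and workflow are illustrative assumptions, not a prescribed design.

```python
# Sketch of a human-in-the-loop decision record: the AI recommends, a named
# human officer decides, and an audit trail supports explainability.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Decision:
    case_id: str
    ai_recommendation: str
    ai_rationale: str                 # why the model recommended this
    human_officer: str = ""
    human_decision: str = ""
    audit_log: list[str] = field(default_factory=list)

    def log(self, event: str) -> None:
        stamp = datetime.now(timezone.utc).isoformat()
        self.audit_log.append(f"{stamp} {event}")

def decide(case_id: str, recommendation: str, rationale: str,
           officer: str, accept: bool, reason: str = "") -> Decision:
    """Every AI recommendation requires a human decision; none auto-applies."""
    d = Decision(case_id, recommendation, rationale)
    d.log(f"AI recommended '{recommendation}' because: {rationale}")
    d.human_officer = officer
    if accept:
        d.human_decision = recommendation
        d.log(f"{officer} accepted the recommendation")
    else:
        d.human_decision = "referred for manual review"
        d.log(f"{officer} overrode the recommendation: {reason}")
    return d
```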

4. AI hallucinations

    AI hallucinations, where a system produces incorrect results from flawed analysis or provides misleading information, are a known issue. They can lead to inaccurate public service decision-making or misleading information being presented to the public, damaging trust in AI and the public service through:

    • Misinformation: An AI-powered chatbot could be used to generate fake news articles or social media posts.
    • Bias and discrimination: AI hallucinations can themselves be biased and discriminatory.
    • Public trust: If the public believes that AI cannot be trusted to provide accurate or reliable information, the whole system is damaged.

    So we need to take steps to mitigate the risks posed by AI hallucinations (a simple grounding check is sketched after this list), such as:

    • Quality assurance: Improving source data quality and the accuracy and reliability of AI system outputs, and detecting and preventing AI hallucinations.
    • Governance and stewardship: Implementing ethical guidelines, governance, and a right of appeal for the use of AI in the public sector.
    • Openness and education: Publishing large language model (LLM) methodologies, training models, and risk assessments, and educating the public about the risks and benefits of AI.
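As one example of the quality assurance point above, a simple ‘grounding’ check can flag generated sentences that share few words with the source material they are supposed to be based on, and route them to a human for review. Real systems use far stronger checks; this toy heuristic only illustrates the principle.

```python
# Toy grounding check: flag generated sentences with little lexical overlap
# with the source documents. A crude heuristic, for illustration only.
import re

def sentences(text: str) -> list[str]:
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def words(text: str) -> set[str]:
    return set(re.findall(r"[a-z']+", text.lower()))

def unsupported(generated: str, source: str, threshold: float = 0.5) -> list[str]:
    """Return generated sentences sharing under `threshold` of their words
    with the source; these are candidate hallucinations for human review."""
    source_words = words(source)
    return [s for s in sentences(generated)
            if words(s)
            and len(words(s) & source_words) / len(words(s)) < threshold]
```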

By taking a proactive approach, the public sector can help to ensure that AI is used safely and responsibly. We must not think that we are regulating technology; we are regulating how humans wield it. We regulate the behaviour of drivers, not cars: a speed limit isn’t broken by an engine, it’s broken by the driver.

In conclusion, the rise of AI/ML hasn’t really created new risks for organisations. It has exposed the general lack of underlying information and data governance, and the weakness of data and information regulation and practice. If the public sector has well-established ethical trust foundations in place for its information and data, then it should be well placed to introduce AI and any new technologies as they arise.

So the key to safely introducing AI is getting the foundational data governance and information management in place, introducing a well-governed AI Use Case Assessment model, and engaging staff on this journey.

Good intentions, with good governance, usually mean good outcomes.

This article is published in the Public Sector Journal - Summer 2023, Issue 46.4.

