Interview: AI and the Government, UK Experience / Sébastien Krier (Office for AI, UK)

Automation is not just replacing (repetitive) jobs; it is coming for government as well, and computer algorithms are already widely deployed across many public sector operations, in many cases without the awareness or understanding of the general population. What is happening with AI in government, and how does government affect and interact with the development of technologies like artificial intelligence? We wanted to dig deeper into this subject through a conversation with an AI policy expert who had a front-row seat to all of this while working for the Office for AI, the UK government unit responsible for overseeing the implementation of the United Kingdom's AI strategy.


The role of government is to serve its citizens. In doing so, it adopts the latest technologies, becoming smarter, more accessible, even interactive. Not so long ago, it was governments and government-funded organizations that invented new technologies. While R&D power now seems to be shifting to so-called "big tech" companies, the relationship between government (and politics) and technology remains of both existential and practical importance for everyone. Government is there to, at the very least, initiate, oversee, and regulate. One of the most interesting and important technological developments is artificial intelligence. Hopes are high, with many believing that the AI revolution will be bigger than the Agricultural, Industrial, and Digital revolutions. What is the role of government in that process?

The world is changing fast, and so is government. With the digital and computer revolution and the advent of the internet and the world wide web, government is going through one of the most significant transformations in its history. "Software is eating the world" (Marc Andreessen); it is eating government too. The $400 billion #govtech market (Gartner) has emerged, with startups and other companies building technology for governments and improving many aspects of how they work. Some estimates say it will hit a trillion dollars by 2025. From a historical perspective, this is probably just the start. The digital-first government of the future will probably look totally different from what it used to be and what it is now.

New realities create fields of study and practice that did not exist before. One of those fields is AI policy, which is primarily concerned with the intersection of AI and government. The United Kingdom is leading the way in many respects. It is the birthplace of some of the most important AI research companies in the world, DeepMind being just one of them. Its higher education system, traditionally among the best in the world, produces scientific leaders and researchers. And if you want to seriously study the long-term effects of a technology like artificial intelligence on society and humanity at large, you have already stumbled upon Oxford's Future of Humanity Institute. How is the UK government approaching artificial intelligence?

Sébastien is an AI policy expert. After graduating from UCL and spending quite some time in law, he joined the Office for Artificial Intelligence, a joint-unit of the British government responsible for designing and overseeing the implementation of the United Kingdom's AI strategy, as an Adviser in 2018. He now helps public and private organizations design strategies and policies that maximize the benefits of AI while minimizing potential costs and risks.

Sébastien's former role involved designing national policies to address novel issues such as the oversight of automated decision-making systems and the responsible design of machine learning solutions. He led the first comprehensive review of AI in the public sector and has advised foreign delegations, companies, regulators, and third sector organizations. He has also represented the UK at various panels and multilateral organizations such as the D9 and the European Commission.

He is the perfect person to talk to about all things AI and government. We had a chat with him about AI and government in the UK.


You spent quite some time working at the Office for AI in the UK government. Can you tell us more about the purpose and the work of that agency? How does the UK government approach artificial intelligence?

The Office for AI is a joint-unit between two ministries – BEIS and DCMS – and is responsible for overseeing the implementation of the £1bn AI Sector Deal. The AI Sector Deal was the Government’s response to an independent review carried out by Professor Dame Wendy Hall and Jérôme Pesenti. Our approach was therefore shaped by these commitments and consisted of the following workstreams:

Leadership: the aim here was to create a stronger dialogue between industry, academia, and the public sector, with the Government establishing the AI Council.

Adoption: the aim of this work was to drive public and private sector adoption of AI and Data technologies that are good for society. This included a number of activities, such as the publication of A Guide to Using AI in the Public Sector, which includes a comprehensive chapter on safety and ethics drafted by Dr. David Leslie at the Alan Turing Institute. 

Skills: given the gap between demand and supply of AI talent, we worked on supporting the creation of 16 new AI Centres for Doctoral Training at universities across the country, delivering 1,000 new PhDs over the next five years. We also funded AI fellowships with the Alan Turing Institute to attract and retain the top AI talent, as well as an industry-funded program for AI Masters.

Data: we worked with the Open Data Institute to explore how the Government could facilitate legal, fair, ethical, and safe data sharing that is scalable and portable to stimulate innovation. You can read about the results of the first pilot programs here.

International: our work sought to identify global opportunities to collaborate across jurisdictional boundaries on questions of AI and data governance, and to formulate governance measures that have international traction and credibility. For example, I helped draft the UK-Canada Joint-Submission on AI and Inclusion for the G7 in 2018.

Note that the Government also launched the Centre for Data Ethics and Innovation, which was tasked with researching, reviewing, and recommending the right governance regime for data-driven technologies. They're great and I recommend checking out some of their recent outputs, but I do think they'd benefit from being truly independent of Government.

How would you define “AI Policy“? It’s a relatively new field and many still don’t properly understand what the government has to do with AI. 

I like 80,000 Hours’ definition: AI policy is the analysis and practice of societal decision-making about AI. It’s a broad term but that’s what I like about it: it touches on different aspects of governance, and isn’t necessarily limited to the central government. There are many different areas a Government can look at in this space – for example:

  • How do you ensure regulators are adequately equipped and resourced to properly scrutinize the development and deployment of AI?
  • To what extent should the Government regulate the field and incentivize certain behaviors? See, for example, the EC's recent White Paper on AI.
  • What institutional mechanisms can ensure long-term safety risks are mitigated? 
  • How do you enable the adoption and use of AI? For example, what laws and regulations are needed to ensure self-driving cars are safe and can be rolled out? 
  • How do you deal with facial recognition technologies and algorithmic surveillance more generally? 
  • How do you ensure the Government’s own use of AI, for example, to process visa applications, is fair and equitable?

I recommend checking out this page by the Future of Life Institute, which touches on a lot more than I have time to do here!

The UK is home to some of the most advanced AI companies in the world. How does the government include them in the policy-making processes? How exactly does the government try to utilize their work and expertise?

The Hall-Pesenti Review mentioned earlier is an example of how the Government commissioned leaders in the field to provide recommendations. Dr. Demis Hassabis was also appointed as an adviser to the Office for AI.

CognitionX co-founder Tabitha Goldstaub was asked last year to chair the new AI Council and become an AI Business Champion. The AI Council is a great way to ensure the industry's expertise and insights reach the Government. It's an expert committee drawn from the private, public, and academic sectors that advises the Office for AI and the Government. You can find out more about them here.

Say the government decides to implement a set of complex algorithms in the public sector, in whatever field. How does that happen in most cases? Do companies pitch their solutions first, or does the government explicitly seek solutions to previously well-defined problems? How does AI in the public sector happen?

That's a good question. It really depends on the team, the department, the expertise available, and the resources available. To be honest, people overestimate how mature Government departments are when it comes to actually developing and using AI. Frequently they'll buy products off the shelf (which comes with a host of issues around IP and data rights).

Back in 2019 I helped lead a review of hundreds of AI use cases in the UK Government and found that while there are some very high-impact use cases, there are also a lot of limitations and barriers. AI Watch recently published its first report on the use and impact of AI in public services in Europe. They found limited empirical evidence that the use of AI in government is successfully achieving the intended results.

The procurement system is also quite dated and not particularly effective at bringing in solutions from SMEs and start-ups, which is why the Government Digital Service launched a more nimble technology innovation marketplace, Spark. The Office for AI also worked with the WEF to develop Guidelines for AI Procurement.

Part of your work was focused on educating others working in the government and the public sector about AI and its potential and challenges. How does the UK government approach the capacity building of its public officials and government employees? 

There are various initiatives that seek to upskill civil servants and ensure there's the right amount of expertise in Government. As part of the 2018 Budget, the Data Science Campus at the ONS and the GDS were asked to conduct an audit of data science capability across the public sector, to "make sure the UK public sector can realize the maximum benefits from data". There are also specific skills frameworks for data science-focused professions. Ultimately, though, I think a lot more should be done. A minimum level of data literacy will be increasingly necessary for policymakers to properly understand the implications new technologies will have on their policy areas.

The recently published National Data Strategy also finds that "the lack of a mature data culture across government and the wider public sector stems from the fragmentation of leadership and a lack of depth in data skills at all levels. The resulting overemphasis on the challenges and risks of misusing data has driven chronic underuse of data and a woeful lack of understanding of its value."

What is your favorite AI in the public sector use case in the UK (or anywhere) – and why?

One of my favorite use cases is how DFID used satellite images to estimate population levels in developing countries: this was the result of close collaboration between the Government, academia, and international organizations. And this is exactly how AI should be developed: through a multidisciplinary team.

Outside of the UK, I was briefly in touch with researchers at Stanford University who collaborated with the Swiss State Secretariat for Migration to use AI to better integrate asylum seekers. The algorithm assigns asylum seekers to the cantons that best fit their skills profiles, rather than allocating them randomly, as under the current system. That's an impactful example of AI being used by a Government, and in fact I think Estonia is trialing similar use cases.
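For intuition, that placement idea can be framed as a classic optimal assignment problem: predict each applicant's employment probability in each canton, then choose the assignment that maximizes the total. The sketch below is a minimal illustration of that framing, not the actual Stanford/SEM system; the probability matrix and the one-slot-per-canton simplification are hypothetical.

```python
# Minimal sketch: skills-based placement as an optimal assignment problem.
# The scores are hypothetical predicted employment probabilities per
# (applicant, canton) pair; a real system would estimate them from data.
import numpy as np
from scipy.optimize import linear_sum_assignment

# Rows: applicants, columns: canton slots.
predicted_employment = np.array([
    [0.30, 0.55, 0.20],
    [0.60, 0.35, 0.45],
    [0.25, 0.40, 0.70],
])

# Find the assignment that maximizes total predicted employment.
rows, cols = linear_sum_assignment(predicted_employment, maximize=True)
for person, canton in zip(rows, cols):
    p = predicted_employment[person, canton]
    print(f"applicant {person} -> canton {canton} (p_employment={p:.2f})")
```

Compared with random allocation, picking the globally best matching like this is where the reported integration gains come from.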

On the nerdier side, Anna Scaife (who was one of the first Turing AI Fellows) published a fascinating paper where a random forest classifier was used to produce a new catalog of 49.7 million galaxies, 2.4 million quasars, and 59.2 million stars!
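For a flavor of what that looks like in practice, here is a toy random forest classification sketch in the same spirit, using scikit-learn. The four features and three classes are synthetic stand-ins; the actual paper works with real survey photometry, not this made-up data.

```python
# Toy sketch: a random forest separating stars, galaxies, and quasars
# from tabular features (e.g. photometric color indices). All data here
# is synthetic and only illustrates the classification workflow.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 3000
X = rng.normal(size=(n, 4))            # pretend: four color indices
y = rng.integers(0, 3, size=n)         # 0=star, 1=galaxy, 2=quasar
X += y[:, None] * 0.8                  # shift classes so they're separable

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.3f}")
```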

What are the hottest AI regulation dilemmas and issues in the UK at this moment?

Until recently, the key one was how to govern/oversee the use of facial recognition. Lord Clement-Jones, the chair of the House of Lords Artificial Intelligence Committee, recently proposed a bill that would place a moratorium on the use of facial recognition in public places. That won’t pass but it’s a strong signal that the Government should consider this issue in more detail – and indeed the CDEI is looking into this.

But with the A-levels scandal in the UK, I think there is a growing acknowledgment that there should be more oversight and accountability on how public authorities use algorithms.

You’ve spent quite some time collaborating with European institutions. Can you tell us more about AI policy approaches and strategies on the European level? What is happening there, what’s the agenda? 

The European approach is a lot more interventionist so far. There are some good proposals, and others I'm less keen on. For example, I think a dualistic approach that splits AI into low-risk and high-risk categories is naïve. Defining risk (or AI) will be a challenge, and a technology-neutral approach is unlikely to be effective (as the European Parliament's JURI committee also notes).

It’s better to focus on particular use cases and sectors, like affect recognition in hiring or facial recognition in public spaces. I also think that it’s dangerous to have inflexible rules for a technology that is very complex and changes rapidly.  

Still, I think it’s encouraging they’re at least exploring this area and soliciting views from industry, academia, and the wider public. 

As for their general approach, it’s worth having a look at the white papers on AI and data they published back in February.

What happens when something goes wrong – for example, major harm or even death – when an AI system is used for government purposes? Who is responsible? How should governments approach the accountability challenge?

It’s very difficult to say without details on the use case, context, algorithm, human involvement, and so on. And I think that illustrates the problem with talking about AI in a vacuum: the details and context matter just as much as the algorithm itself. 

In principle, the Government remains liable, of course. Just because a system learns over time, doesn't require human involvement, or cannot be scrutinized because of black-box issues doesn't mean the usual product liability and safety rules don't apply.

Outside the public sector context, the European Commission is seeking views on whether and to what extent it may be needed to mitigate the consequences of complexity by alleviating/reversing the burden of proof. Given the complexity of a supply chain and the algorithms used, it could be argued that additional requirements could help clarify faults and protect consumers. 

The European Parliament's JURI committee's report on liability is actually very good and has some interesting discussions on electronic personhood and why trying to define AI for regulatory purposes is doomed to fail. They also find that product liability legislation needs to be amended for five key reasons:

  1. The scope of application of the directive does not clearly cover damage caused by software, or damage caused by services.
  2. The victim is required to prove the damage suffered, the defect, and the causal nexus between the two, without any duty of disclosure of relevant information on the producer. This is even harder for AI technologies.
  3. Reference to the standard of “reasonableness” in the notion of defect makes it difficult to assess the right threshold for new technologies with little precedent or societal agreement. What is to be deemed reasonable when technologies and use cases evolve faster than they are understood?
  4. Damages recoverable are limited, and the €500 threshold means a lot of potential claims are not allowed.
  5. Technologies pose different risks depending on use: e.g. FRT for mass surveillance or FRT for smartphone face unlocks. Therefore, there is a need for a “sector-specific approach that does not prioritize the technology, but focuses on its application within a given domain”.

How would you define “AI Ethics” and “AI Safety”? How are governments shaping the development and deployment of AI systems in ways that are safe and ethical? Which policy instruments are used for that?

The definition we used in our safety & ethics guidance with the Alan Turing Institute defines ethics as “a set of values, principles, and techniques that employ widely accepted standards of right and wrong to guide moral conduct in the development and use of AI technologies”.

It's tricky to define comprehensively since it could relate to so many aspects: for example, the use case itself – e.g. is it ethical for a bank to offer more favorable terms to people with hats if the data shows they're less likely to default? There are also questions about the mathematical definitions of fairness and which ones we value in a particular context: see for example this short explanation.
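To make that concrete, here is a small sketch of two common formal fairness criteria – demographic parity and the equal-opportunity (true-positive-rate) gap – computed on made-up lending decisions. The data and the binary protected attribute are hypothetical; the point is only that the two metrics measure different things and can disagree.

```python
# Sketch: two common fairness metrics on hypothetical loan decisions.
# Different definitions formalize different intuitions and generally
# cannot all be satisfied at once.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])  # e.g. actually repaid a loan
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])  # model's approve/deny decision
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # hypothetical protected attribute

def demographic_parity_diff(y_pred, group):
    # Difference in approval rates between the two groups.
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_gap(y_true, y_pred, group):
    # Difference in true-positive rates (approval rate among good risks).
    tprs = [y_pred[(group == g) & (y_true == 1)].mean() for g in (0, 1)]
    return abs(tprs[0] - tprs[1])

print("demographic parity diff:", demographic_parity_diff(y_pred, group))
print("equal opportunity gap:  ", equal_opportunity_gap(y_true, y_pred, group))
```

On this toy data the approval rates are identical across groups (parity holds) while the true-positive rates differ, which is exactly the kind of tension policymakers have to weigh per context.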

Safety to me relates more to questions of accuracy, reliability, security, and robustness. For example, some adversarial attacks on machine learning models maliciously modify input data – how do you protect against that? 
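A classic illustration of that kind of attack is the fast gradient sign method (FGSM): nudge each input feature in the direction that most increases the model's loss. The sketch below, assuming a toy PyTorch linear classifier rather than any real deployed system, shows the core idea.

```python
# Minimal FGSM (fast gradient sign method) sketch in PyTorch: perturb the
# input in the direction that increases the loss, bounded by epsilon.
# The tiny linear model is a toy stand-in for a real classifier.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(10, 2)                 # toy two-class classifier
x = torch.randn(1, 10, requires_grad=True)
y = torch.tensor([1])                    # true label

loss = nn.functional.cross_entropy(model(x), y)
loss.backward()                          # populates x.grad

epsilon = 0.1
x_adv = x + epsilon * x.grad.sign()      # adversarial example
print("original prediction:   ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```

Defenses such as adversarial training or input sanitization target exactly this gap between accuracy on clean data and robustness to crafted inputs.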

Do you ever think about the role of AI in the long-term future of government, when technological improvement potentially accelerates exponentially? What do you contemplate when you find yourself lost in thoughts about the decades to come?

Definitely. In fact, the book that initially got me into AI was Nick Bostrom's Superintelligence. To me, this is an important part of AI safety: preparing for low-probability but high-impact developments. Rapid acceleration can come with a number of dilemmas and problems, like an intelligence explosion leading to the well-documented control problem, where we get treated by machines the same way we treat ants: not with any malign intent, but without much thought for their interests when they don't align with our objectives (like a clean floor). On this, I highly recommend Human Compatible by Stuart Russell. On superintelligence, I actually found Eric Drexler's framing of the problem a lot more intuitive than Bostrom's (see Reframing Superintelligence).

Horizon scanning and forecasting are two useful tools for Governments to monitor the state of AI R&D and AI capabilities – but sadly this type of risk is rarely on Government’s radar. And yet it should be – precisely because there are fewer private-sector incentives to get this right. But there are things Governments are doing that are still helpful in tackling long-term problems, even though this isn’t necessarily the primary aim. 

There was a recent Twitter spat between AI giants at Facebook and Tesla on this, actually. I don't really buy Jérôme Pesenti's arguments: no one claims we're near human-level intelligence, and allocating some resources to these types of risks doesn't necessarily mean ignoring other societal concerns around fairness, bias, and so on. Musk, on the other hand, is too bullish.

What can and should governments in Serbia and the Balkans region learn from the UK AI policy experience? Can you share three simple recommendations?

That’s a good but difficult question, particularly as I have limited knowledge of the state of affairs on the ground!

I think, firstly, there is a need for technology-specific governance structures – a point one of my favorite academics, Dr. Acemoglu, emphasized during a recent webinar. A team like the Office for AI can be a very helpful central resource, but only if it is sufficiently funded, has the right skill set, and is empowered to consider these issues effectively.

Second, there should be some analysis to identify key gaps in the AI ecosystem and how to fix them. This should be done in close partnership with academic institutions, the private sector, and civil society. In the UK, the focus very early on was essentially on skills and data sharing. But there are so many other facets to AI policy: funding long-term R&D, or implementing open data policies (see e.g. how TfL opening up its data led to lots of innovation, like CityMapper).

Lastly, I really liked Estonia’s market-friendly AI strategy, and I think a lot of it can be replicated in Serbia and neighboring countries. One particular aspect I think is very important is supporting measures for the digitalization of companies in the private sector. It’s important for markets to not only be adequately equipped from a technological point of view but also to fully understand AI’s return on investment. 

Careers in AI policy are relatively new. Can you recommend the most important readings for those interested in learning more?

The 80,000 Hours guide on this is excellent, and so is this EA post on AI policy careers in the EU. I was recently appointed as an Expert to a fantastic new resource, AI Policy Exchange – I think it's super promising and highly recommend it. Lastly, definitely check out these amazing newsletters, in no particular order:
