A summary of AI developments and policy in the United Kingdom.


  • Candidates in the Liberal Democrat and Conservative leadership races address automation in their leadership bids

  • Information Commissioner’s Office calls for code to regulate police use of facial recognition and the Greater London Authority’s policing ethics panel sets out a future framework

  • Law Society Commission examining the use of algorithms in the justice system finds a lack of standards, best practice, openness or transparency

  • Information Commissioner’s Office and The Alan Turing Institute publish an interim report on public and industry views on explaining AI decision-making

  • UK signs up to OECD’s Principles on Artificial Intelligence

  • Department of Health and Social Care bans NHS Trusts from signing deals giving a tech company exclusive access to patient data & interim NHS People Plan released

  • Civil Aviation Authority launches innovation sandbox, including commercial autonomous drones and automated air traffic control

  • Miscellaneous Links

  • Interesting Upcoming Events

Candidates in the Liberal Democrat and Conservative leadership races address automation in their leadership bids

Liberal Democrats: Jo Swinson, one of the two candidates for leader of the Liberal Democrats, has made “Harnessing the technological revolution for Britain’s future” one of three pillars of her potential leadership. In this she follows the outgoing leader Vince Cable, who also made preparing Britain for advances in automation and artificial intelligence one of his three national priorities near the beginning of his leadership. In contrast, her opponent Ed Davey made no mention of AI or automation in his launch speech (which, to his credit, instead focused on combatting the far-right and tackling climate breakdown).

She argues that

  • Greater use of automation and robotics could lead to a shorter working week
  • The benefits from automation need to be felt by people, rather than just increased profit margin for corporates. She highlights retraining and upskilling as one way of doing this.
  • We need to clarify our digital rights and responsibilities in the face of AI

In this, her solutions sound closer to ideas espoused by more left-wing organisations, like Autonomy’s plan for a shorter working week or the newly launched Commonwealth’s idea of a Digital Commons. It’s clear Swinson takes these issues seriously. She has been talking about the effects of automation since at least November 2017, and the vision she has outlined in her leadership pitch flows naturally from the focus of the Technology and Artificial Intelligence Commission, which she set up last year and which is due to report back by the Liberal Democrat Conference in mid-September.

With the Liberal Democrats performing the best they have in a decade in Westminster polls after their success in the local and European elections, a new leader putting the issues of automation and data ethics at the heart of her policy could have significant influence over how the UK approaches AI. See here for more on the Liberal Democrats’ approach to AI.

Conservative Party: On BBC Question Time, Conservative Party leadership candidate Rory Stewart endorsed a universal training income to enable mid-career retraining to ameliorate the effect that automation and the deployment of robotics will have on the labour market.

Matt Hancock, the Health Secretary and the only other candidate with a track record suggesting he will seriously prioritise automation and digital technology, has pledged to raise the amount the U.K. spends on research and development to 3% of GDP by 2025, moving forward and substantially up the government’s current target of 2.4% by 2027. With artificial intelligence and the future of mobility as two of the government’s current grand challenges, there’s no doubt this would result in significant investment in AI and supporting technologies if the target were reached.

However, Stewart trails most of the other candidates (the favourite of only 1% of Conservative voters in a recent poll) and has pledged not to serve in the government of a Prime Minister in favour of no-deal, such as the frontrunner Boris Johnson; Hancock is polling similarly poorly. The effects of automation are therefore likely to remain a second-tier issue in the political executive, another victim of Brexit’s crowding out of domestic policy.

See also — David Cameron appointed to chair advisory board of Afiniti, a US-based firm using machine learning to automatically pair call centre staff with customers based on behavioural profiling.

Information Commissioner’s Office calls for code to regulate police use of facial recognition and the Greater London Authority’s policing ethics panel sets out a future framework

The ICO: South Wales Police was taken to court by Ed Bridges over claims that it had violated his privacy and data protection rights by using automated facial recognition, the first legal challenge to the use of this technology.

During the case, a barrister for the ICO told the court that the current guidelines around automated facial recognition were “ad hoc” and that a clear code was needed. A legal framework, they argued, should address the nature of a watchlist and the circumstances in which the technology is deployed, as well as the training operators should receive, how to ensure the technology is not hacked, and whether people can refuse to be scanned.

Greater London Authority: Just days after the case and the ICO’s call, the Greater London Authority’s independent policing ethics panel set out new guidelines on how facial recognition technology should be used by the Met Police. It recommends that live facial recognition software be deployed only if the five conditions below are met, and that the Met conduct no further trials until it has fully reviewed the results of the independent evaluations and is confident it can meet the conditions:

1. The benefit to public safety must be great enough to outweigh any potential public distrust in facial recognition technology.

2. There is evidence it will not generate gender or racial bias in policing operations.

3. Each deployment must be assessed and authorised to ensure it is both necessary and proportionate for a specific policing purpose.

4. Operators are trained to understand the risks of use and understand they are accountable.

5. Both the Met and the Mayor’s Office for Policing and Crime develop strict guidelines to ensure that deployments balance the benefits of this technology with the potential intrusion on the public.

Why this matters: London isn’t going as far or as fast as San Francisco, which pre-emptively prohibited all use of facial recognition by public agencies. However, if the ethics panel’s recommendations are fully implemented, this will be a pretty significant restriction on police use of facial recognition.

Given that the Met police is by far the largest force in the country, its adoption of these conditions in its use of facial recognition is likely to set a strong informal standard in what’s expected of police forces across the country. Further, given the increasing public awareness, and therefore political pressure, around the use of facial recognition and the legal questions being raised by the aforementioned case, if these conditions prove sufficient to retain public trust then they may well form the basis for any forthcoming national guidelines.

Information Commissioner’s Office and The Alan Turing Institute publish an interim report on public and industry views on explaining AI decision-making

What happened: The ICO and The Turing Institute have published an interim report from their joint Project ExplAIn. The project is intended to produce practical guidance for organisations to assist them in explaining AI decisions to the individuals affected. So far, they have conducted citizens’ juries and industry roundtables to gather views on the subject from across stakeholders, the results of which form the basis for the interim report.

They identified three key themes:

1. The importance of context in explaining AI decisions. The importance of explanations to individuals, and the reasons for wanting them depended significantly on what the decision was about, e.g. justice requiring a much greater level of explanation than healthcare.

2. The need for education and awareness around AI.

3. Technical issues were not a barrier to explainability for Industry representatives. However, cost, commercial sensitivities like intellectual property, gaming of the system and the lack of a standard approach to establishing internal accountability are more difficult challenges for the industry.

Why this matters: Citizen engagement in the development and governance of AI systems is important, and the ICO appears to have deployed citizens’ juries very effectively to gauge the informed views of a wide cross-section of the public.

The point that technical issues aren’t the barrier seems particularly interesting to me. The report says: “Some organisations used a perceived lack of technical feasibility as an excuse for not implementing explainable AI decision-systems. Participants thought that, in reality, cost and resource were more likely the overriding factors.”

This suggests that if legislation making explainability compulsory were implemented, companies could deliver. However, if cost and resources are the limiting factors, then an increased compliance and legal burden could actually entrench the existing large technology companies whose influence is rightly being questioned. Is oligopoly the price of meaningful transparency?

Are there policy levers to allow competition without sacrificing standards? Is it possible to create an easily transferable and deployable transparency overlay? I look forward to seeing what answers the ICO and Turing have in their final report, due out in the autumn.

UK signs up to OECD’s Principles on Artificial Intelligence

What happened: The UK, along with 41 other countries, has signed up to the OECD’s Principles on Artificial Intelligence — the first set of intergovernmental policy guidelines on AI. The OECD sets out five principles for the responsible stewardship of trustworthy AI:

  • AI should benefit people and the planet.
  • AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity. They should include safeguards to ensure a fair and just society.
  • There should be transparency and responsible disclosure around AI systems to ensure people understand when they are engaging with AI and can challenge the outcomes of those systems.
  • AI systems must function in a robust, secure and safe way with risks continually assessed and managed.
  • AI developers and deployers should be held accountable for their proper functioning in line with the above principles.

The OECD recommends governments:

  • Facilitate investment in R&D towards trustworthy AI.
  • Foster accessible AI ecosystems with digital infrastructure and mechanisms to share data and knowledge.
  • Create a policy environment to enable the deployment of trustworthy AI systems.
  • Provide training for the skills needed in an automated economy and ensure a just transition to that economy.
  • Co-operate across borders and sectors to share information, develop standards and work towards responsible stewardship of AI.

Why this matters: These principles and recommendations are not legally binding. However, they set a clear precedent for future international binding standards and treaties. They match fairly closely with the European Commission’s Ethics Guidelines for Trustworthy AI and have the endorsement of the United States, unlike previous international principles.

The UK has been leading on implementing a national AI strategy and explicitly making ethics part of it. However, it lacks the clout to deal alone with the likely major players in AI. If these principles translate into national-level policies across even a plurality of the signatory states, they may exert enough influence on the multinational corporations leading the development of AI to set an effective ethical and safety standard around the world.

Civil Aviation Authority launches innovation sandbox, including commercial autonomous drones and automated air traffic control

What happened: The Civil Aviation Authority has launched an ‘Innovation Sandbox’. The sandbox will allow companies to discuss, explore, trial and test emerging concepts with the regulator before deployment. Announced participants include an Amazon delivery system using unmanned aerial vehicles and the air traffic control body NATS, which is working to implement new technology such as AI into traffic control towers.

Why it matters: Launching an innovation sandbox is an important part of the UK achieving more agile aerospace regulation, a key aim of the upcoming Aviation 2050 strategy currently being consulted on. As Jack Clark explores in the most recent Import AI, it is increasingly possible to train and test drones in a completely simulated environment.

For regulators, this makes it possible to assess the capabilities of drones and set guidelines accordingly with very limited real-world trials. It would allow them to do so in a programmatic way and set outcome-based regulation, e.g. a maximum failure rate across simulated trials, which lets regulation keep pace with rapid developments in the technology and places the technical burden more on the companies than on the regulators, who often lack technical capacity.
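To make the outcome-based idea concrete, here is a minimal sketch of what such a check could look like: run many simulated trials, estimate the system’s failure rate, and pass or fail it against a regulator-set threshold. Everything here is hypothetical — the toy trial function, the 1% threshold and the function names are illustrative assumptions, not anything the CAA has specified.

```python
import random

def estimate_failure_rate(run_trial, n_trials, seed=0):
    """Estimate a system's failure rate from repeated simulated trials.

    run_trial: callable taking an RNG and returning True on success.
    """
    rng = random.Random(seed)  # fixed seed so the assessment is reproducible
    failures = sum(1 for _ in range(n_trials) if not run_trial(rng))
    return failures / n_trials

def meets_outcome_threshold(failure_rate, max_failure_rate):
    """Outcome-based check: pass only if the observed rate is within the limit."""
    return failure_rate <= max_failure_rate

# Toy stand-in for a full flight simulation: succeeds 99.5% of the time.
def toy_trial(rng):
    return rng.random() < 0.995

rate = estimate_failure_rate(toy_trial, n_trials=100_000)
print(meets_outcome_threshold(rate, max_failure_rate=0.01))  # True for this toy model
```

The design point is that the regulator only specifies the outcome (the threshold and trial protocol); the company supplies the simulation, which is what shifts the technical burden onto the developer.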

Miscellaneous Links

