The UK's AI safety review will include opportunities to directly study the technology of some companies. In a speech at London Tech Week, Prime Minister Rishi Sunak revealed that Google DeepMind, OpenAI and Anthropic have pledged to provide "early or priority access" to AI models for research and safety purposes. This, Sunak says, should improve oversight of these models and help the government identify "opportunities and risks".
It's not yet clear what data the tech firms will share with the UK government. We've asked Google, OpenAI and Anthropic for comment.
The announcement comes weeks after officials said they would conduct a preliminary assessment of AI model accountability, safety, transparency and other ethical concerns, with the country's Competition and Markets Authority expected to play an important role. The UK has also committed an initial £100 million (about $125.5 million) to create a foundation model taskforce that will develop "sovereign" AI meant to grow the British economy while minimizing ethical and technical problems.
Industry leaders and experts have called for a temporary halt to AI development, arguing that creators are pressing ahead without adequate safeguards. Generative AI models such as OpenAI's GPT-4 and Anthropic's Claude have been praised for their potential, but have also raised concerns over inaccuracies, misinformation and abuses like fraud. The UK's move could, in principle, limit those issues by catching problematic models before they do much damage.
The pledge doesn't necessarily give the UK complete access to these models or their underlying code, and there's no guarantee the government will catch every major issue. Still, the access could provide relevant insights. If nothing else, the effort promises to increase transparency around AI at a time when the long-term impact of these systems is far from clear.