
Can we trust ‘big tech’ to support regulation of AI?


Over the last year, AI has swept into our lives, with OpenAI’s ‘ChatGPT’ – and the similar chat-based AI tools launched by the other ‘big tech’ firms – making AI accessible (and visible) to the masses.

New and emerging uses of AI technologies are appearing across different industries and sectors at a rapid pace. This is opening up a whole new way of interacting with data and is presenting new and exciting opportunities to evolve the way we work.

However, with new opportunities also come new challenges and new risks – for example, concerns about data privacy and the risk of building data biases into new technology, each of which poses ethical as well as legal challenges. Regulation is needed to ensure that AI systems are trustworthy, that AI risks are mitigated, and that those developing, deploying and using AI technologies can be held accountable if things go wrong. Regulation will also help to set minimum ethical standards, although the ethical concerns are complex and regulation will not be a complete solution here.

In my view, the greatest challenge (and one which the British Prime Minister, Rishi Sunak, and his team are known to be grappling with) is how to put in place a regulatory framework which is effective but also sufficiently nimble to evolve with the rapid pace of AI development.

With the technology shifting so rapidly, it seems clear that governments everywhere will be heavily reliant on ‘big tech’ to support attempts to develop and evolve regulation. In my view, it will be critical that those global tech firms – which understand the technology better than anyone – are fully engaged in the process.

However, there is an obvious conflict of interest here – by being given a seat at the table to set the regulatory environment for AI, big tech is effectively marking its own homework. And so it will be vital that the big tech firms are challenged in this role and are held accountable by politicians and society.

The UK Government is perhaps somewhat ahead of the curve here and will hold the first global summit on AI safety at Bletchley Park in November. The summit will bring together representatives from the leading tech companies, academics and researchers to agree on common standards and best practices for evaluating and monitoring the most significant risks from AI.

This looks like the first real test on a global stage of whether we can trust the big tech firms to engage actively and openly in shaping an objective regulatory ecosystem for AI – and I believe it is in their interests to do so.

In my view, consumers will gravitate to the AI tools of the tech firms which are best able to demonstrate that they support the regulatory environment and have adequate controls and protections in place to earn consumers’ ‘digital trust’.

In this context, I am optimistic that the big tech firms will recognise that helping to develop, and working within, a regulatory system which is fit for purpose and which effectively addresses the risks posed by AI will ultimately support their commercial interests. After all, a focus on the ‘bottom line’ is likely to be an effective way of concentrating the attention of the big tech players.

Arguably, the firms that take a prominent role in developing and fostering the regulatory environment for AI will be the ones that earn consumers’ digital trust and will therefore be at a competitive advantage.