Public trust in AI is moving in the opposite direction to the one that companies like Google, OpenAI, and Microsoft are hoping for, according to a recent survey based on Edelman data.
The study suggests that trust in companies building and selling AI tools dropped to 53 percent, compared to 61 percent five years ago.
While the decline was milder in developing countries, in the US it was sharper still, falling from 50 percent to just 35 percent.
We are told that AI will cure disease, clean up the damage we’re doing to the environment, help us explore space, and create a fairer society.
So, what is causing this decline in trust? Is it just an image problem? And what can we do about it, given that this technology has enormous potential to do good when it’s implemented in an ethical, human-first way?
Why Is AI Trust Essential?
Firstly, what does the term “trust” mean when it comes to AI?
Well, it isn’t just about trusting AI to give us the right answers. It’s about the broader trust that society places in AI. This means it also encompasses the question of whether we trust those who create and use AI systems to do so ethically, with our best interests at heart.
Take self-driving cars, for example. Despite assurances from manufacturers that they would be a common sight on our roads by the first half of this decade, this hasn’t (yet) proven to be the case. It seems likely that this is due to a lack of trust on the part of both regulators, who have been slow to approve legislation, and the general public, who still express some hesitation.
Other studies have shown that public trust in AI varies according to the use case. One KPMG study, for example, carried out in late 2023, suggests that projects connected to HR are the least likely to be trusted, whereas projects in healthcare are more likely to be trusted.
It’s important to remember, however, that trust is fundamental to achieving the widespread support that’s needed to integrate AI across the most world-changing use cases.
The danger is that a lack of trust in AI could stall progress, hindering the potential of AI to solve real-world problems.
Building A Trustworthy AI Ecosystem
Of course, the simplest way to look at this challenge is that in order for people to trust AI, it has to be trustworthy. This means it has to be implemented ethically, with consideration of how it will affect our lives and society.
Just as important as being trustworthy is being seen to be trustworthy. This is why the principle of transparent AI is so important. Transparent AI means building tools, processes, and algorithms that are understandable to non-experts. If we are going to trust algorithms to make decisions that could affect our lives, we must, at the very least, be able to explain why they are making those decisions. What factors are being taken into account, and how are they prioritized?
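To make that idea concrete, here is a minimal sketch in Python of one common transparency technique: training a simple, interpretable model and reporting the weight each factor carries in its decisions. The loan-approval scenario, the feature names, and the synthetic data are all invented for illustration; this is one possible approach, not a prescribed method.

```python
# A minimal sketch of model transparency: train a simple classifier
# and report which factors drive its decisions, and how strongly.
# The dataset and feature names are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
feature_names = ["income", "debt_ratio", "years_employed"]

# Synthetic "loan approval" data: 200 applicants, 3 features.
X = rng.normal(size=(200, 3))
# The outcome leans on income (positively) and debt ratio (negatively).
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

# Standardizing the features puts the coefficients on a comparable scale.
X_scaled = StandardScaler().fit_transform(X)
model = LogisticRegression().fit(X_scaled, y)

# The coefficients answer: "what factors, with what weight?"
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name:>15}: {coef:+.2f}")
```

A simple linear model like this trades some predictive power for the ability to answer "which factors, and how heavily?" in a single readable table, which is exactly the kind of answer a non-expert can interrogate.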
If AI needs the public’s trust (and it does), then the public needs to be involved in this aspect of AI governance. This means actively seeking their input and feedback on how AI is used (and, just as importantly, when it shouldn’t be used). Ideally, this needs to happen at both a democratic level, via elected representatives, and at a grassroots level.
Last but definitely not least, AI also has to be secure. This is why we have recently seen a drive towards private AI – AI where data isn’t sent to and processed on huge public cloud servers like those behind ChatGPT or Google Gemini.
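As an illustration of what that can look like in practice, here is a minimal sketch of running a small open model entirely on local hardware, so that prompts and outputs never leave the machine. It assumes the Hugging Face transformers library is installed; distilgpt2 is just an example of a small, openly downloadable model, not a recommendation.

```python
# A sketch of "private AI": inference runs entirely on local hardware,
# so prompts and outputs are never sent to a public cloud service.
# Assumes the Hugging Face transformers library is installed;
# distilgpt2 is simply a small, openly downloadable example model.
from transformers import pipeline

# The model weights are downloaded once, then run locally.
generator = pipeline("text-generation", model="distilgpt2")

prompt = "Trustworthy AI systems should"
result = generator(prompt, max_new_tokens=30, num_return_sequences=1)
print(result[0]["generated_text"])
```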
Transparency, accountability, and security are all fundamental to the concept of trustworthy AI. Increasingly, I believe we will find that any AI project that overlooks any of these principles is likely to fall at the first hurdle – public acceptance.
The Future Of Trustworthy AI
I firmly believe that AI has tremendous potential to be a transformative force for good in the world.
However, it’s also clear that it could cause a great deal of damage. This “dark side” of AI ranges from spreading fear and misinformation and undermining democratic processes through deepfakes, to enabling cyberattacks and security threats more sophisticated than anything that’s been seen so far.
Even where there’s no malicious intent, poorly executed initiatives could end up reinforcing biases and discrimination or infringing on privacy and personal freedoms.
Navigating a path through these dangerous waters will require a wide-reaching, collaborative effort to ensure AI is harnessed for the greater good of humanity and the planet.
Of course, given the vast sums of money on the table – estimated to be trillions of dollars – this won’t always be easy. There will always be a temptation to take shortcuts or skirt around ethical issues in the race to be first to market. But doing so is only likely to create problems that set the whole AI industry back and further damage the public’s trust. And in the long term, that isn’t likely to be good for the companies involved or for the world.