Can humans trust AI? As CEO of an AI tech firm, I have one thing to say to those still in two minds about it: you’re right.
Right now, humans are right to be distrustful of AI, not because it’s a Skynet plotting to rid the world of human beings but because, as far as AI is concerned, we’re still in frontier territory. There is a lot of snake oil and no appellation contrôlée to guarantee what’s on the label.
Tech products list AI as a feature as if it were just a box to tick. The same thing happened with the Internet of Things (IoT) a few years ago.
Healthy scepticism is a good thing
But the broad group of technologies people refer to as machine learning covers many things, and there are fundamental differences between true AI and pattern-matching, however efficient, against datasets. Asking yourself whether you can trust what you’re being told is an absolutely valid question.
What’s not valid is questioning the consistency of AI after it’s embedded and has proven itself. There is no human fallibility: it doesn’t just wake up and have a bad day. AI is based on data.
Your job, as a healthy sceptic, is to arm yourself with some key knowledge before venturing into the AI unknown.
Good AI needs good data
The first thing to recognise is that, however clever it is, AI is built by humans inputting data. Good AI is built on good data – absolute truths – and lots of it. A few hundred samples of training data is not enough.
FIDO’s deep-learning neural networks were trained only on data we knew to be correct. FIDO learned the veracity of its predictions from proven results.
Our training library was built on real physical interactions with data events. That meant digging a hole to prove a leak existed or, just as importantly, that there wasn’t one. It’s very difficult to build learning algorithms without this solid foundation.
So, ask where your AI’s data comes from, and how reliable it is. Technology which looks for patterns based on questionable data, even if there’s 30 years’ worth of it, is not AI.
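For the technically minded, here is a minimal sketch of what that provenance test means in practice. It’s illustrative Python, not FIDO’s actual code: `Sample`, its fields and `build_training_set` are hypothetical names, and the rule is simply that only field-verified labels are allowed into the training set.

```python
from dataclasses import dataclass

@dataclass
class Sample:
    audio_path: str       # recording from the pipe network
    label: str            # "leak" or "no_leak"
    field_verified: bool  # True only if a dig (or equivalent) confirmed the label

def build_training_set(library: list[Sample]) -> list[Sample]:
    """Keep only samples whose labels were proven on the ground.

    Unverified labels -- however plentiful -- are exactly the
    'questionable data' to be wary of.
    """
    return [s for s in library if s.field_verified]

library = [
    Sample("a.wav", "leak", True),
    Sample("b.wav", "no_leak", True),
    Sample("c.wav", "leak", False),  # never confirmed by a dig, so dropped
]
print(len(build_training_set(library)))  # -> 2
```

Thirty years of unverified records fail this test; a smaller library of proven outcomes passes it.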
Good AI makes mistakes
Second, like any brain, true deep-learning technology goes through an embryonic phase where it makes mistakes. This is how it learns and continues to learn. The first time we trialled FIDO, its results were 32 per cent accurate. Within six weeks it was over 92 per cent accurate. One hundred per cent accuracy is not possible, but by learning from the occasional verified mistake FIDO AI is edging ever closer towards it.
So, ask how many samples your AI has processed, or how many iterations it’s been through. If a product won’t admit it’s made mistakes, be sceptical.
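To see why iteration count matters, here is a toy version of that feedback loop. The ‘model’ is a one-number threshold on a synthetic signal standing in for a deep network, and the numbers it prints are illustrative only; the point is the shape of the loop: predict, verify a sample of outcomes on the ground, fold them back into the library, retrain, and watch accuracy climb.

```python
import random

random.seed(1)
LEAK_THRESHOLD = 0.6  # ground truth: a dig finds a leak above this signal level

def truth(signal: float) -> bool:
    return signal > LEAK_THRESHOLD

def make_model(verified: list[tuple[float, bool]]):
    """Fit the simplest possible classifier: a cut halfway between the
    loudest confirmed no-leak and the quietest confirmed leak."""
    leaks = [s for s, is_leak in verified if is_leak]
    quiets = [s for s, is_leak in verified if not is_leak]
    if not leaks or not quiets:
        return lambda s: s > 0.1  # naive first guess: almost everything "leaks"
    cut = (max(quiets) + min(leaks)) / 2
    return lambda s: s > cut

verified: list[tuple[float, bool]] = []
for iteration in range(1, 7):
    model = make_model(verified)
    signals = [random.random() for _ in range(500)]  # this week's recordings
    accuracy = sum(model(s) == truth(s) for s in signals) / len(signals)
    print(f"iteration {iteration}: {accuracy:.0%} accurate")
    # Field teams verify a handful of predictions -- right or wrong -- and
    # those proven outcomes join the training library for the next round.
    verified += [(s, truth(s)) for s in random.sample(signals, 25)]
```

Accuracy starts poor under the naive first guess and climbs towards, but never quite reaches, 100 per cent: the trajectory a credible AI provider should be able to show you.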
Good AI learns from negatives too
Third, be wary of AI built from assumptions based only on positive outcomes. It will not work. FIDO’s training library is now approaching two million files, and we continue to add boots-on-the-ground verified data to prove or disprove those critical ‘no leak found’ events.
This pushes FIDO’s accuracy levels even higher. Right now, FIDO is creating its own set of false negatives and false positives from data so it can learn from them, through an incredibly clever process called audio mapping, in which FIDO generates noises it knows shouldn’t fit into a particular category and then self-corrects.
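As described, audio mapping reads like deliberate negative-example generation, so here is a hypothetical sketch of that idea, not FIDO’s implementation. `synth_non_leak`, `classify` and `retrain` are stand-in names: synthesise a sound known not to be a leak, and if the classifier calls it one, feed the correction straight back in.

```python
import math
import random

random.seed(2)

def synth_non_leak(n: int = 800, freq: float = 50.0) -> list[float]:
    """Generate a tone we know is NOT a leak signature (e.g. mains hum)."""
    return [math.sin(2 * math.pi * freq * t / n) for t in range(n)]

def classify(clip: list[float]) -> str:
    """Stand-in for the trained network's verdict on a clip."""
    return random.choice(["leak", "no_leak"])

def retrain(clip: list[float], correct_label: str) -> None:
    """Stand-in: fold the corrected example back into the training library."""
    print(f"added corrected example -> {correct_label}")

# Every synthetic clip has a known-correct label of "no_leak", so any
# "leak" verdict is a false positive the model can self-correct from.
for _ in range(5):
    clip = synth_non_leak(freq=random.uniform(40.0, 60.0))
    if classify(clip) == "leak":
        retrain(clip, "no_leak")
```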
So, ask your AI about the balance of positive and negative outcomes in its training library.
Now the scepticism ends
Here’s where the scepticism ends. Once AI like FIDO proves that it works, there’s no need to keep proving it. At that point, acceptance becomes cultural. The remaining hurdle is how the AI product interacts with humans: you can’t just impose it on people and expect it to be a success.
By the time FIDO reaches front-line staff, the higher-ups are already convinced, but often this assurance has not made it down through the ranks to the people who will actually be using it. They will have the same serious questions, which, left unanswered, act as a barrier to their really investing in it.
When we first introduced FIDO AI into the leakage process during its test phase, we realised we needed to take the people with operational responsibility on the journey with us. By this I mean the people who bear the consequences of acting on FIDO’s decisions: sending an engineer to site, for instance, or instructing a complex dig.
People need buy-in
These people need to buy in to AI just as much as their managers and directors do. They need to understand why it’s correct, and why they can trust it. By letting them question it, you build their confidence.
Counter-intuitively, one of our most persuasive arguments is proving when there isn’t a leak. It shows people that their natural instinct isn’t always reliable. Done well, this human validation process is worth its weight in gold, encouraging people to invest the time and effort to incorporate the AI into their daily workflow.
FIDO’s onboarding process takes this into account. We show new clients how FIDO AI was built, what it does, why FIDO makes a certain decision, why they, as humans, may have come to a different decision, and what the consequences are for them.
It tackles the fear that AI is ultimately about automation and job removal by showing that good AI doesn’t threaten the status of their job. It makes the job easier by giving them clear calls to action at the point where their expert input is needed.
Can humans trust AI?
We always go into a lot more detail during the first few days, while we are embedding the reports. We ask for feedback. We’re honest that AI is not infallible. We encourage people to come back to us with their ‘I don’t believe that’ moments. It’s a partnership, and we’ll always run it this way, whether we have 50 clients or 5,000.
So, a bit of scepticism when you’re talking to AI providers is not a bad thing. In fact, it’s great. AI is built on human fallibility and learns from it. Challenge your AI provider by all means, but once you’ve got an AI that works, the time for questioning is over. So, can humans trust AI? For true AI, the answer is yes.
As lead sponsor of this year’s Global Water AI Innovation and Data Usage Congress, FIDO is conducting a survey with the conference organisers on people’s attitudes to AI. The results will be revealed at the conference on April 26. Visit our conference page below to take the survey and get 25% off entry to the Congress using our discount code.