Much has been said about generative AI’s tendency to hallucinate, inventing facts to answer a question. Obviously, nobody wants AI to (literally) spread fake news. But can AI make us more honest if used properly?
A group of researchers led by Alicia von Schenk used a standard BERT AI model to assess whether a given statement is truthful or a lie. They then tested whether humans are better than the machine at spotting lies, and whether the AI can help humans discover more of them.
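For readers who want a feel for what such a classifier looks like in practice, here is a minimal sketch of a BERT-based lie detector built with the Hugging Face libraries. To be clear, this is my own illustration under assumed choices (base model, toy data, hyperparameters), not the authors’ actual pipeline.

```python
# A minimal sketch (not the authors' actual pipeline) of a BERT-based
# lie detector: binary sequence classification over short statements.
# Model name, toy data, and hyperparameters are illustrative assumptions.
from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Toy training data: label 0 = truthful, label 1 = lie.
data = Dataset.from_dict({
    "text": [
        "I spent last summer working in Rome.",
        "I have never once missed a deadline.",
    ],
    "label": [0, 1],
})

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

def tokenize(batch):
    # Pad/truncate statements to a fixed length for batching.
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=128)

data = data.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lie-detector", num_train_epochs=3),
    train_dataset=data,
)
trainer.train()  # fine-tune; predictions then come from trainer.predict(...)
```

In other words, nothing exotic: a stock pre-trained language model with a two-way classification head on top, fine-tuned on labelled statements.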
To do this, they asked a group of 986 volunteers to write down one truthful and one false statement each. Their task was to write convincingly: they won £2 if their false statement was judged to be truthful, or if their truthful statement was judged to be a lie. The researchers then asked 2,040 ‘judges’ to evaluate the statements and classify them as true or false. For every correct identification, the judges received a monetary reward to incentivise them to do as good a job as possible.
However, one twist was that the judges not only had to assess a statement as true or false, they also had to accuse the person who made the statement of being a liar. If you are like me, that will make you very uncomfortable. Most people don’t want to ruffle feathers, so we’d rather not accuse someone of a lie unless we are sure it is one. The result is that we live in a society where compulsive liars often get away with their lies. In some cases, being a compulsive liar can even be career-enhancing, because people don’t find out about the lies or don’t put a stop to a person’s rise by calling them out.
And this is where the study gave us hints on how to make the world a better place…
First, when it came to assessing the truthfulness of the statements, humans were no better than chance. If people had guessed at random, they would have been correct 50% of the time. In the experiment, the judges were correct 47% of the time. Meanwhile, the AI lie detector was 67% accurate – much better than chance, and much better than humans.
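To get a feel for how far 67% is from chance, here is a quick back-of-the-envelope significance check. The sample size below is an assumption for illustration; the study’s actual number of judged statements will differ.

```python
# Hypothetical check that 67% accuracy beats a 50% chance baseline.
# n is an illustrative sample size, not the study's actual count.
from scipy.stats import binomtest

n = 1000                                 # assumed number of judged statements
correct = int(0.67 * n)                  # 67% accuracy, as reported
result = binomtest(correct, n=n, p=0.5, alternative="greater")
print(f"p-value: {result.pvalue:.2e}")   # vanishingly small: far above chance
```

With a sample anywhere near that size, a 17-point gap over coin-flipping is not a fluke.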
Humans may be no better than chance at detecting lies, but do they at least call people out for lying? Nope. Only 19% of judges accused people of lying when judging by themselves. But when judges were supported by the AI lie detector, their behaviour changed: about 30% of them now accused people of lying, and they did so predominantly when the algorithm told them that the other person was lying.
But there was an interesting twist to that as well. Judges who were automatically shown the algorithm’s assessment accused 40% of presumed liars. Judges who could consult the algorithm themselves (i.e. they actively had to ask it for help) accused 85% of presumed liars. This huge difference between being handed information and having to actively ask for it is nothing new. It is the reason why, even today, cake mixes ask people to add an egg or some other ingredient, even though we are perfectly capable of making mixes that need no human input before they go into the oven.
Which brings me back to where we started. The answer to the question at the top is yes, AI can make us more honest, but only if we ask AI to detect lies for us AND we go the extra mile to call people out for their lies…
…Ok, I just re-read the last sentence and I think we must give up hope that the world will become a more honest place anytime soon.
"Any headline that ends in a question mark can be answered by the word no." https://en.wikipedia.org/wiki/Betteridge%27s_law_of_headlines
Your summary of the research study provides a compelling look at the interplay between AI and human honesty. It highlights both the potential of AI to enhance our ability to detect lies and the inherent challenges in motivating people to confront dishonesty.
Key Takeaways:
AI vs. Human Detection: The study demonstrates that AI can outperform humans in identifying lies, with a 67% accuracy rate compared to the 47% accuracy of human judges. This indicates that AI can serve as a valuable tool in assessing truthfulness.
Human Reluctance: Despite the AI’s capabilities, human judges showed a significant reluctance to call out dishonesty, reflecting a broader social discomfort with confrontation. Only 19% of judges accused others of lying when acting alone, underscoring the challenges of social dynamics in truth-telling.
Engagement with AI: The difference in outcomes based on how judges interacted with the AI is particularly interesting. Active engagement (asking the algorithm for help) led to a much higher rate of accusations (85%) than passive reception of information. This suggests that fostering active engagement with AI tools could be crucial in promoting honesty.
Cultural Implications: The findings point to a need for a cultural shift that encourages accountability and openness. Even with AI’s capabilities, the willingness to confront dishonesty often relies on social norms and individual comfort levels.
Hope for the Future: While the study suggests that AI can promote honesty if used correctly, it also highlights that systemic change is needed to create a more honest society. This may involve training people to be more comfortable with confrontation, improving communication skills, and fostering environments where honesty is rewarded.
In conclusion, while AI has the potential to make us more honest, its effectiveness will largely depend on how we integrate it into our social fabric and encourage individuals to act on the insights it provides. Your reflection on the challenges we face in achieving a more honest society is a sobering reminder of the complexity of human behavior, even in the face of technological advancements.
By the way, the entire block of text after the Betteridge’s Law of Headlines quote above was generated by AI ... we won’t even have to think up and type out our own clever comments anymore! ;-)
This whole article is untrue.
There, I've said it.
I've told AI to agree with me, so I must be right.