New York Stock Exchange. Photo: Spencer Platt (Getty Images)
When it comes to investing and planning your financial future, do you trust humans or computers?
This is no longer a hypothetical question.
Major banks and investment firms are using artificial intelligence (AI) to make financial predictions and advise customers.
Morgan Stanley uses AI to reduce the potential bias of financial analysts when it comes to stock market predictions. Goldman Sachs (GS), one of the world’s largest investment banks, recently announced that it is piloting the use of AI to help write computer code, although it did not say in which department the technology is being used. Other companies also use AI to predict which stocks will go up or down.
But are people actually entrusting their money to these AI advisors?
Our new study examines this question. It turns out that it really depends on who you are and your prior knowledge about AI and how it works.
Differences in trust
To explore the issue of trust when using AI for investing, we asked 3,600 people in the US to imagine they were being advised about the stock market.
In these imaginary scenarios, some people received advice from human experts, some received advice from AI, and others received advice from humans working alongside AI.
Generally, people are less likely to follow advice when they know AI is involved in its creation. They seemed to trust human experts more.
However, distrust of AI was not universal. Some people were more receptive to AI advice than other groups.
For example, women were more likely than men to trust AI advice (by 7.5%). People who know more about AI were more willing to listen to the advice it provides (by 10.1%). Politics also mattered: people who identify as Democrats were more open to AI advice than others (by 7.3%).
We also found that people were more likely to trust simpler AI methods.
When we told study participants that the AI used something called “ordinary least squares” (a basic mathematical technique that uses straight lines to estimate the relationship between two variables), they were more likely to follow its advice than when we said it used “deep learning” (a more complex AI method).
This may be because people tend to trust what they understand. It’s like someone trusting a simple calculator over a complex piece of scientific equipment they’ve never seen before.
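To make the distinction concrete, here is a minimal sketch, not taken from the study, of the “ordinary least squares” idea participants were told about: fitting a single straight line to two variables. The data below are invented for illustration (say, an economic indicator versus a stock’s return).

```python
import numpy as np

# Hypothetical data: a predictor variable and an outcome to estimate.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])

# Ordinary least squares with one predictor: find the slope and intercept
# of the straight line y ≈ slope * x + intercept that minimizes the sum
# of squared errors between the line and the observed points.
slope, intercept = np.polyfit(x, y, deg=1)

# Use the fitted line to make a prediction for a new value of x.
prediction = slope * 6.0 + intercept

print(round(slope, 2), round(intercept, 2), round(prediction, 2))
# → 1.99 0.05 11.99
```

The appeal of this method, as the study suggests, is its transparency: the entire model is two numbers and a straight line, whereas a deep-learning model may involve millions of parameters whose individual roles are hard to explain.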
Trust in your financial future
As AI becomes more commonplace in the financial industry, companies need to find ways to improve trust levels.
This includes teaching people more about how AI systems work, clarifying how and when AI is used, and finding the right balance between human experts and AI. It may also mean tailoring how AI advice is presented to different groups of people and demonstrating how well AI performs over time compared to human experts.
The future of finance may involve more AI, but only if people learn to trust it. It’s a bit like learning to trust a self-driving car. Technology may be great, but if people aren’t comfortable using it, it won’t catch on.
Our research shows that building this trust is not just about creating better AI. It’s about understanding how people think and feel about AI. It’s about bridging the gap between what AI can do and what people believe it can do.
We need to continue to study how people respond to AI in the financial industry. We need to find ways to make AI not just a powerful tool, but a trusted advisor that people can rely on to make important financial decisions.
The world of finance is changing rapidly, and AI is a big part of that change. But ultimately, it’s still people who decide where to put their money. Understanding how to build trust between humans and AI is key to shaping the future of finance.
Gertjan Verdickt is a lecturer in the School of Business at the University of Auckland’s Waipapa Taumata Rau. This article is republished from The Conversation under a Creative Commons license. Read the original article.