Algorithms, particularly in the form of artificial intelligence and machine learning, are proliferating in the financial sector and are changing the way decisions are made.
Algorithms have been around for a while in the form of risk models, forecasts, robo-advisors, and more, but now generative AI is making waves in how people make choices about their money. Even the Morningstar Medalist Ratings have incorporated machine learning to expand coverage and build on human expertise.
It’s hard to deny the power of algorithms, but many people may not be completely comfortable relying on the output of algorithms when making important choices.
This is not a new phenomenon. Indeed, for the past decade, researchers in the behavioral sciences have been studying algorithm aversion: a phenomenon in which people prefer advice from humans over advice generated by algorithms, despite recognizing that algorithm-based recommendations are often better, on average, than human ones.
But given the power of algorithms in financial decision-making, it’s important to help more people become comfortable using them.
Here are some top tips to help people use algorithms for decision-making.
- Try a simpler model first.
- Start by explaining how the algorithm benefits users.
- Incorporate human interaction.
- Use algorithms only for tasks they are clearly suited to.
Simpler may be just as good (or even better)
At a high level, AI allows experts to feed data into a system, have a model find patterns in the data, and wait for the system to spit out an answer.
The problem is that these models are so complex that they are hard to understand, and how they work is even harder to explain. Understandably, many people are resistant to algorithms that even experts don’t understand, especially when the algorithm in question is used in high-stakes situations.
Fortunately, research shows that a more complex algorithm is not necessarily a better one. In fact, in some situations, a simple algorithm based on a few straightforward rules may perform better, and be more robust, than a complex model.
When incorporating algorithms into business processes, don’t assume that more complexity is better. Instead of reaching for a complex black-box model right away, test a model based on simple rules of thumb first. Simpler models may perform comparably in accuracy, with the added benefit of being easier for humans to understand.
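To make this concrete, here is a minimal sketch (with entirely hypothetical synthetic "loan default" data, invented for illustration) comparing a one-rule heuristic against a weighted-score stand-in for a more complex model. The point is not the specific numbers but the workflow: establish the simple rule's accuracy as a baseline before committing to something opaque.

```python
import random

random.seed(0)

# Hypothetical synthetic data: (debt_to_income, late_payments, defaulted).
# In this toy world, defaults are mostly driven by late payments.
def make_record():
    debt_to_income = random.uniform(0.0, 1.0)
    late_payments = random.randint(0, 5)
    defaulted = late_payments >= 3 or (late_payments == 2 and debt_to_income > 0.8)
    return debt_to_income, late_payments, defaulted

data = [make_record() for _ in range(1000)]

# Rule of thumb: a single interpretable threshold.
def simple_rule(debt_to_income, late_payments):
    return late_payments >= 3

# Stand-in for a "complex" model: an opaque weighted score with a cutoff.
def complex_model(debt_to_income, late_payments):
    score = 0.9 * late_payments + 1.5 * debt_to_income
    return score > 2.9

def accuracy(model):
    hits = sum(model(dti, late) == truth for dti, late, truth in data)
    return hits / len(data)

print(f"simple rule:   {accuracy(simple_rule):.1%}")
print(f"complex model: {accuracy(complex_model):.1%}")
```

On this toy data both models land in the same accuracy range, yet only the simple rule can be explained to a client in one sentence, which is exactly the trade-off the research highlights.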
Start with the (user) benefits
Humans, even experts, are not perfect and face many biases when making predictions and giving advice. For example, research shows that many financial professionals are poorly calibrated and hold many of their predictions with unwarranted confidence. This doesn’t just apply to the financial industry. Decades of research have found similar results in experts in other fields, including psychologists, doctors, lawyers, geopolitical experts, and even professional golfers.
Fortunately, many experts have found that incorporating algorithms into decision-making improves accuracy. By helping experts make better decisions when advising individuals, algorithms improve outcomes for end consumers. But this benefit, however real, is often not communicated clearly to the individuals it serves.
When presenting an algorithm to a user, focus on how the user benefits from it. For example, clearly state how well the algorithm performs and how it has helped others make better decisions. Another option is to compare the algorithm’s accuracy to human accuracy, giving the user a baseline for comparison. Our own research shows that people are more accepting of financial advisors who use generative AI if the advisor explains how using AI frees up time for other activities that benefit their clients.
Where are the humans?
Ultimately, no matter how accurate an algorithm is, it’s still an abstract procedure that many people have a hard time understanding. When algorithms are used to make decisions, some may feel they are relinquishing control to a complex entity they cannot comprehend. Framed this way, it’s no wonder people are averse to algorithms.
One important step that professionals can take to alleviate this concern is to bring humans back into the process.
For example, our research shows that investors may feel more comfortable with a financial advisor who uses generative AI if the advisor checks the AI’s output before incorporating it into their decision-making; that is, if there is a “human accuracy check” before the algorithm’s output is accepted.
Other research has shown that users are more comfortable when they have some control over an algorithm: for example, when they can teach a recommendation system what they like and dislike, or when they can treat the algorithm’s output as a suggestion. The user chooses the action, rather than the algorithm determining the outcome.
Task selection is key
The power of algorithms, especially when it comes to generative AI, seems limitless. But before implementing an algorithm, we all need to ask ourselves, “Is this an activity that should be outsourced to an algorithm?”
Research shows that people think some tasks shouldn’t be handled by algorithms. Generally, people accept algorithms handling objective tasks but believe the subjective realm should be reserved for humans. In finance, this means investors are open to outsourcing functional tasks such as portfolio construction and tax management to algorithms, but they would rather keep activities that require personal connection and human interaction with actual humans.
In other words, let machines do the number crunching, but don’t outsource the human connections, relationship building, and empathetic communication.
Summary
Considering the efficiency and effectiveness of algorithms, their proliferation in the financial industry is inevitable. However, we must be aware of the impact that algorithmic solutions have on trust and acceptance, especially as they are introduced to individuals.
Algorithm aversion is a well-documented phenomenon, but that doesn’t mean these solutions are doomed to fail. Research has already pointed to several interventions that help people become comfortable with algorithms: choose simpler algorithms when possible, be clear about how the algorithm benefits users, incorporate some form of human oversight, and be deliberate about which tasks you delegate to algorithms.