Humans are always in the loop
Can we delegate at least some of our business processes and decision-making to artificial intelligence without human intervention? Trust is the deciding factor, and AI has not yet earned the trust needed to run most processes without any humans involved. Experts agree that this is unlikely to change anytime soon.
Yes, we order things online and interact with digital agents and processes along the way, but we hope there is someone behind the scenes keeping things in check. Similarly, we now drive cars with many autonomous features, but we hope their capabilities have been well tested and are monitored by their human creators.
Taking automation a step further, Ed Ratner, head of machine learning at Verseon International Corporation, says AI is far from making unattended automated processes a reality. “Even today, AI decisions are not transparent or reliable enough to be made without human intervention,” he explained. “Technologies are becoming available that increase transparency and enable more advanced real-time monitoring of AI output. There will be less need for human oversight as time goes on.”
But not all processes are the same. The urgency of human oversight differs between low-risk processes, which could be completely AI-driven, and high-risk ones. “The extent of oversight may change depending on the consequences of using AI,” said Joshua Wilson, an associate professor at the University of Delaware. “For low-risk applications, full automation or minimal monitoring is appropriate. For high-stakes applications, the need for monitoring increases.”
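For illustration only, a risk-tiered routing policy along the lines Wilson describes can be sketched in a few lines of Python; the tier names, example tasks, and confidence threshold below are hypothetical, not drawn from his research.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical risk tiers for routing AI decisions; names and the
# 0.8 threshold are illustrative assumptions.
class Risk(Enum):
    LOW = "low"    # e.g., automated essay feedback
    HIGH = "high"  # e.g., drug dosage recommendations

@dataclass
class Decision:
    task: str
    risk: Risk
    output: str
    confidence: float

def route(d: Decision) -> str:
    """Auto-apply confident low-risk outputs; everything else goes to a human."""
    if d.risk is Risk.LOW and d.confidence >= 0.8:
        return "auto-apply"
    return "human-review"

print(route(Decision("essay feedback", Risk.LOW, "Add a topic sentence.", 0.92)))  # auto-apply
print(route(Decision("dosage recommendation", Risk.HIGH, "10 mg", 0.97)))         # human-review
```

Note that in this scheme a high-risk decision is escalated to a human no matter how confident the model is, which matches the distinction Wilson draws.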
Wilson points to automated essay assessment in education as an example of a low-risk application. “AI can provide valuable feedback to students without human supervision or involvement. Feedback mechanisms are usually fully automated.”
However, Wilson says that as the stakes increase, “the need for human oversight becomes more pronounced. In medicine, for example, AI can assist in drug selection and dosage recommendations, but to avoid lethal consequences, these decisions must be monitored by trained physicians. Similarly, AI-guided weapons must always be under human control to ensure ethical and safe deployment. Full automation may never be possible, and should never be desirable, in these situations.”
The challenge in minimizing human oversight is that “many AI models are black boxes, developed without proper consideration of output interpretability, ethics, or safety,” said Scott Zoldi, FICO’s chief analytics officer. This heightens the need for responsible AI to help define the conditions under which some transactions require less human oversight and others require more.
Even the best-performing AI models “will produce a large number of false positives, or mistakes, so you need to treat all outputs carefully and have a defined strategy to validate, counter, and support the AI,” Zoldi said.
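What such a “defined strategy” might look like in its simplest form is sketched below; the alert threshold, review window, and the idea of tracking analyst verdicts are illustrative assumptions, not FICO’s actual process.

```python
from collections import deque

# Minimal sketch of a validation loop: every alert the model raises is
# checked by a human, and recent verdicts are tracked so the false-positive
# rate stays visible. Threshold and window size are assumptions.
REVIEW_THRESHOLD = 0.7
recent_verdicts = deque(maxlen=500)  # True = confirmed hit, False = false positive

def handle_score(score: float, analyst_confirms) -> None:
    """Raise an alert above the threshold and record the analyst's verdict."""
    if score >= REVIEW_THRESHOLD:
        recent_verdicts.append(bool(analyst_confirms(score)))

def false_positive_rate() -> float:
    """Share of recent alerts the analysts rejected; used to recalibrate."""
    if not recent_verdicts:
        return 0.0
    return 1.0 - sum(recent_verdicts) / len(recent_verdicts)

# Example: an analyst who only confirms very high scores.
handle_score(0.95, lambda s: s > 0.9)
handle_score(0.75, lambda s: s > 0.9)
print(false_positive_rate())  # 0.5
```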
Ratner said there are two complementary approaches to increasing trust in AI. “The first is the explainability of AI models, meaning that users understand how an AI model makes decisions about its inputs, both in general and in a specific case.”
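One widely used explainability technique is permutation importance, sketched below with scikit-learn; this is a generic illustration of the idea, not Ratner’s or Verseon’s method.

```python
# A minimal sketch of one common explainability technique, permutation
# importance. Requires scikit-learn; the toy dataset is synthetic.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature and measure the accuracy drop: the bigger the drop,
# the more the model relies on that feature, which gives users a general
# view of how the model reaches its decisions.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")
```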
The second approach is to “continuously monitor the AI output,” Ratner continued. “Recently, a number of new tools have come onto the market that can track the output of AI models and detect inconsistent or anomalous output. The combination of explainability and real-time monitoring is an effective way to keep humans in the loop.”
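A bare-bones version of such output monitoring could be a rolling statistical check like the sketch below; the window size, warm-up length, and anomaly threshold are assumptions, and commercial tools are considerably more sophisticated.

```python
import math
from collections import deque

# Minimal sketch of real-time output monitoring: flag model scores that
# drift far from the recent baseline using a rolling z-score.
class OutputMonitor:
    def __init__(self, window: int = 200, z_threshold: float = 3.0):
        self.scores = deque(maxlen=window)
        self.z_threshold = z_threshold

    def observe(self, score: float) -> bool:
        """Return True if the new score is anomalous versus the rolling window."""
        anomalous = False
        if len(self.scores) >= 30:  # wait for a minimal baseline first
            mean = sum(self.scores) / len(self.scores)
            variance = sum((s - mean) ** 2 for s in self.scores) / len(self.scores)
            std = math.sqrt(variance) or 1e-9
            anomalous = abs(score - mean) / std > self.z_threshold
        self.scores.append(score)
        return anomalous

monitor = OutputMonitor()
for s in [0.5, 0.52, 0.48] * 20 + [0.51, 0.49, 5.0]:  # sudden outlier at the end
    flagged = monitor.observe(s)
print(flagged)  # True: the 5.0 is flagged for human review
```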
Zoldi acknowledges that there have been frequent instances where AI-driven decisions and processes have been overturned or reversed by humans. “Things like this happen all the time,” he said. “Responsible AI codifies all the important human-driven decisions that guide how AI is developed, including approving or denying the use of data, eliminating unethical relationships, and ensuring adherence to regulatory standards.”
Part of this responsible AI process also involves codifying, on a blockchain, how decision authority is split between the AI and human operators in production, including situations in which operations transition to a new AI model.
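The underlying idea, stripped of any particular blockchain, is a tamper-evident, hash-chained record of those handoffs; the toy sketch below illustrates the concept and is not FICO’s implementation.

```python
import hashlib
import json
import time

# Toy illustration of blockchain-style governance records: a tamper-evident,
# hash-chained log of model deployments and handoffs of decision authority.
# Event fields and names are hypothetical.
class GovernanceLog:
    def __init__(self):
        self.entries = []

    def record(self, event: dict) -> None:
        """Append an event, chaining it to the hash of the previous entry."""
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {"event": event, "timestamp": time.time(), "prev": prev_hash}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append({**body, "hash": digest})

log = GovernanceLog()
log.record({"action": "model_deployed", "model": "risk_v2", "approver": "chief_ai_officer"})
log.record({"action": "authority_transfer", "from": "ai", "to": "human_operator"})
```

Because each entry embeds the hash of the one before it, altering any past record would break the chain, which is what makes such a log useful for auditing who held decision authority and when.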
Decisions about disabling or reversing AI should generally be left to people with experience in the relevant field, Zoldi said. Organizations also need a “chief analytics officer or AI officer to set standards, help business units balance business risks and regulations, and enforce thresholds for AI oversight.”
“AI + humans is the strongest solution,” Zoldi said. “AI alone should not be making decisions.”