Math formulas to help AI solve ethical problems

An interdisciplinary research team has devised a plan to create algorithms that more effectively embed ethical standards into artificial intelligence (AI) decision-making programs. The project focused specifically on technologies through which humans interact with AI, particularly virtual assistants and systems that help patients in hospitals.

“Technologies like medical robots are supposed to help keep hospital patients, seniors, and others who need medical supervision or physical assistance safe and comfortable,” says Veljko Dubljevic, one of the study’s authors and an associate professor at North Carolina State University.

“Plainly speaking, this means that these technologies will be employed in situations where the AI must make ethical judgments.”

“For example, let’s say a medical aid robot is in a situation where two people need care. One of the patients is unconscious, but needs emergency treatment; the other patient has less urgent needs, but wants the robot to treat them first. How will the robot decide who to help first? Should this robot, in fact, treat an unconscious person who cannot consent to receive treatment?”

“Previous attempts to incorporate ethical decision-making into AI programs have been limited in scope and hinged on utilitarian reasoning, which neglects the complexity of human moral decision-making,” Dubljevic added. “Our work aims to correct this shortcoming, and while I use medical robots as an example, it also applies to many technologies that combine humans and AI.”

Utilitarian decision-making focuses on outcomes and consequences. But when humans make moral judgments, they also consider two other factors.

The first is the intent behind an action and the character of the person performing it. In other words, who is performing the action, and what is that person trying to accomplish? Is their intent benevolent or malevolent?

The second factor is the action itself. For example, some people tend to view specific actions, such as lying, as fundamentally wrong.

And all of these factors interact with each other. For example, the researchers note, most of us agree that lying is wrong; but if a nurse lies to a patient who is making unacceptable demands in order to prioritize treatment for a second patient with more urgent needs, most people would see that as morally acceptable.

To account for this complexity in moral decision-making, the researchers developed a mathematical formula and a series of decision trees that can be incorporated into AI programs. These tools are based on the Agent, Deed, and Consequence (ADC) model, which Dubljevic and colleagues designed to reflect how people make moral judgments in real life.
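The article does not reproduce the formula itself, so the following is only a minimal sketch, in Python, of how the ADC model’s three components might be combined into a single acceptability score. The ADCJudgment structure, the [-1, 1] scoring scale, and the weights are illustrative assumptions, not the study’s actual formalization.

```python
# Illustrative sketch only: the study's actual formula and decision trees are
# not reproduced in the article. The ADCJudgment fields, the [-1, 1] scale,
# and the weights below are assumptions made for demonstration.

from dataclasses import dataclass


@dataclass
class ADCJudgment:
    agent: float        # evaluation of the agent's intent/character, in [-1, 1]
    deed: float         # evaluation of the action itself, in [-1, 1]
    consequence: float  # evaluation of the outcome, in [-1, 1]


def moral_acceptability(j: ADCJudgment,
                        w_agent: float = 1.0,
                        w_deed: float = 1.0,
                        w_consequence: float = 1.0) -> float:
    """Combine the three ADC components into one score in [-1, 1].

    A purely utilitarian evaluator would set w_agent = w_deed = 0 and
    judge only consequences; the ADC model weighs all three factors.
    """
    total = w_agent + w_deed + w_consequence
    return (w_agent * j.agent
            + w_deed * j.deed
            + w_consequence * j.consequence) / total


# The nurse example from the article: lying is a negatively valued deed,
# but it is done with benevolent intent and produces a strongly positive
# outcome, so the combined judgment comes out acceptable.
nurse_lies = ADCJudgment(agent=0.8, deed=-0.6, consequence=0.9)
print(f"Acceptability: {moral_acceptability(nurse_lies):+.2f}")  # -> +0.37
```

Note that setting w_agent and w_deed to zero collapses this evaluator into the purely consequence-driven, utilitarian scoring the researchers describe as too limited.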

“Our goal here was to translate the ADC model into a format that allows it to be incorporated into AI programming,” he said. “We are not just saying that this ethical framework might work well for artificial intelligence; we are presenting it in a language that is usable in a computing context.”

As the researcher puts it, “with the development of AI and robotic technologies, society needs such collaborative efforts between ethicists and engineers. Our future depends on it.”
