Is AI a subjective concept?


A study demonstrates that people can be primed to think a certain way about the motivations of an artificial intelligence (AI) chatbot, and that this priming affects how they interact with the chatbot.

From Adam Zewe of MIT News

According to a new study, a person’s prior beliefs about an AI agent, such as a chatbot, have a significant effect on their interactions with that agent and on how they judge its reliability, empathy, and effectiveness.

Researchers from MIT and Arizona State University found that priming users, by telling them in advance whether the agent was caring, neutral, or manipulative, changed how they interacted with a conversational AI agent for mental health support, even though every participant was speaking with the same chatbot.

The majority of people who were told that the AI agent was caring believed that it was, and they gave it higher performance ratings than those who believed it was manipulative. Fewer than half of the users who were told the agent had manipulative motives actually believed the chatbot was malicious, suggesting that people may try to “see the good” in AI the same way they do in their fellow humans.

The study revealed a feedback loop between users’ mental models of an AI agent, meaning how they perceive that agent, and their responses. If a user believed the AI was caring, the tone of their conversations with it tended to become more positive over time, while the opposite was true for users who believed it was malicious.


“From this study, we see that to some extent, the AI is the AI of the beholder,” explains Pat Pataranutaporn, a graduate student in the MIT Media Lab’s Fluid Interfaces group and co-author of a paper presenting this study. “When we explain what an AI agent is to users, it not only alters their mental model but also their behavior. Additionally, because the AI reacts to the user, it also changes as the user alters their behavior.”

Pataranutaporn is joined on the paper by co-lead author and fellow MIT graduate student Ruby Liu; Ed Finn, an associate professor in the Center for Science and Imagination at Arizona State University; and senior author Pattie Maes, a professor of media technology and director of the Fluid Interfaces group at MIT.

Because the media and popular culture strongly shape our mental models, the study, published today in Nature Machine Intelligence, underscores the importance of examining how AI is portrayed to society. The same kinds of priming statements used in this study could be employed to mislead people about an AI’s intentions or capabilities, the researchers warn.

“A lot of people believe that the success of AI is solely an engineering challenge, but this is incorrect. The success of these systems when presented to people can be greatly influenced by the way we discuss AI, and even the name we give it in the first place. We need to give these issues greater thought,” adds Maes.

AI: A friend or a foe?

The researchers set out to determine how much of the empathy and effectiveness people perceive in AI stems from their subjective expectations and how much stems from the technology itself. They also wanted to investigate whether priming could be used to shape those perceptions.

“Since AI is a mystery to us, we often relate it to something else we can comprehend. We frequently use similes and metaphors. But which metaphor is most appropriate for thinking about AI? The answer is not simple,” Pataranutaporn says.

The researchers designed a study in which participants interacted with a conversational AI mental health companion for roughly 30 minutes to determine whether they would recommend it to a friend, and then rated the agent. The 310 recruited participants were randomly divided into three groups, each of which was given a different priming statement about the AI.

The first group was told that the agent had no motives; the second group was told that the AI had benevolent intentions and cared about the user’s welfare; and the third group was told that the agent had malicious intentions and would try to deceive users. Although it was difficult to limit the study to only three primers, the researchers chose the statements they believed best matched the most widespread AI misconceptions, according to Liu.
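To make the setup concrete, here is a minimal sketch of how participants might be randomly assigned to the three priming conditions described above. The group names, statement wording, and function names are illustrative assumptions, not the study’s actual materials or code.

```python
# Illustrative sketch only: random assignment of participants to the three
# priming conditions described in the article. Wording is paraphrased.
import random

PRIMING_STATEMENTS = {
    "neutral": "The agent you will chat with has no particular motives.",
    "caring": "The agent you will chat with has good intentions and cares about your well-being.",
    "manipulative": "The agent you will chat with has malicious intentions and may try to deceive you.",
}

def assign_conditions(participant_ids, seed=0):
    """Shuffle participants and split them as evenly as possible across conditions."""
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)
    conditions = list(PRIMING_STATEMENTS)
    return {pid: conditions[i % len(conditions)] for i, pid in enumerate(ids)}

if __name__ == "__main__":
    assignments = assign_conditions(range(310))  # 310 participants, as in the study
    # Each participant would see only the statement for their assigned condition.
    print(assignments[0], "->", PRIMING_STATEMENTS[assignments[0]])
```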

Half of the participants in each group interacted with an AI agent built on the GPT-3 generative language model, a powerful deep-learning model that can produce human-like text. The other half interacted with an implementation of ELIZA, a chatbot created at MIT in the 1960s that uses a much simpler, rule-based natural language processing approach.
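The contrast between the two backends can be sketched roughly as follows. The pattern-matching rules are simplified examples in the spirit of ELIZA, not the study’s implementation, and the generative backend is left as a clearly labeled placeholder rather than a real model call.

```python
# Illustrative sketch: a rule-based, ELIZA-style responder versus a
# placeholder for a generative-model backend. Not the study's code.
import re

ELIZA_RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bbecause\b", re.IGNORECASE), "Is that the real reason?"),
]

def eliza_reply(user_message: str) -> str:
    """Return the first rule-based reflection that matches, else a generic prompt."""
    for pattern, template in ELIZA_RULES:
        match = pattern.search(user_message)
        if match:
            return template.format(*match.groups())
    return "Please tell me more."

def generative_reply(user_message: str) -> str:
    # Hypothetical placeholder: a GPT-3-style agent would instead send the
    # message to a large language model and return its sampled completion.
    raise NotImplementedError("call a generative language model here")

if __name__ == "__main__":
    print(eliza_reply("I feel anxious about work"))
    # -> "Why do you feel anxious about work?"
```

The design point the article hints at is that both backends can be wrapped in the same chat interface, so any difference participants report can be attributed to the priming statement and the underlying model rather than to the presentation.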
