Human-Like Characteristics of Artificial Intelligence Affect Trust in Conversations

Summary: A new study delves into how advanced AI systems affect our trust in the people we interact with. The research finds that a strong design perspective is driving the development of AI with increasingly human-like characteristics. While human-likeness is attractive in some contexts, it can be problematic when it is not clear whether you are communicating with a computer or a human.

The study looked at three types of conversations, along with audience reactions and comments. Uncertainty about whether you are talking to a human or a computer affects relationship-building and the creation of shared meaning in communication. This has the potential to impact human connectedness, particularly in therapy, and raises important ethical questions about AI development.

Key facts:

  1. Researchers at the University of Gothenburg have examined how advanced AI systems affect our trust in the people we interact with.
  2. The study found that during interactions between two humans, certain behaviors were interpreted as signs that one of them was actually a robot.
  3. The researchers propose creating an AI with well-functioning and eloquent voices that are still clearly synthetic, increasing transparency.

Source: University of Gothenburg

As AI becomes increasingly realistic, our trust in those we communicate with may be compromised. Researchers at the University of Gothenburg have examined how advanced AI systems affect our trust in the people we interact with.

In one scenario, a would-be scammer, believing he is calling an elderly man, is instead connected to a computer system that communicates via pre-recorded loops. The scammer spends a lot of time on the attempted fraud, patiently listening to the “man’s” somewhat confusing and repetitive stories.

Oskar Lindwall, professor of communications at the University of Gothenburg, notes that it often takes people a long time to realize that they are interacting with a technical system.

Together with computer science professor Jonas Ivarsson, he wrote an article titled Suspicious Minds: The Problem of Trust and Conversational Agents, exploring how people interpret and relate to situations where one party might be an AI agent.

The article highlights the negative consequences of harboring suspicions of others, such as the damage it can cause to relationships.

Ivarsson gives the example of a romantic relationship in which trust issues arise, leading to jealousy and an increased tendency to look for evidence of deception. The authors argue that being unable to fully trust an interlocutor’s intentions and identity can lead to excessive suspicion even when there is no reason for it.

Their study found that during interactions between two humans, certain behaviors were interpreted as signs that one of them was actually a robot.

The researchers suggest that a pervasive design perspective is driving the development of AI with increasingly human-like characteristics. While this can be tempting in some contexts, it can also be problematic, particularly when it’s not clear who you’re communicating with.

Ivarsson wonders whether artificial intelligence should have human-like voices, as they create a sense of intimacy and lead people to form impressions based on voice alone.


In the case of the would-be scammer calling “the elderly man,” the scam is only uncovered after a long time, which Lindwall and Ivarsson attribute to the credibility of the human voice and the assumption that the confused behavior is due to age.

Once an AI has a voice, we infer attributes like gender, age, and socio-economic background, making it harder to identify that we’re interacting with a computer.

The researchers propose creating an AI with well-functioning and eloquent voices that are still clearly synthetic, increasing transparency.

Communicating with others involves not only deception but also the building of relationships and the creation of shared meaning. Uncertainty about whether you are talking to a human or a computer affects this aspect of communication.

While it may not matter in some situations, such as cognitive behavioral therapy, other forms of therapy that require more human connection could be negatively impacted.

About the study
Jonas Ivarsson and Oskar Lindwall analyzed data made publicly available on YouTube. They studied three types of conversations, along with audience reactions and comments. In the first type, a robot calls a person to book a hair appointment, unbeknownst to the person on the other end. In the second type, one person calls another person for the same purpose. In the third type, telemarketers are transferred to a computer system with pre-recorded speech.

About this AI research news

Author: Thomas Melin
Source: University of Gothenburg
Contact: Thomas Melin – University of Gothenburg
Image: The image is credited to Neuroscience News

Original research: Free access.
“Suspicious Minds: The Problem of Trust and Conversational Agents” by Jonas Ivarsson and Oskar Lindwall. Computer Supported Cooperative Work (CSCW).


Abstract

Suspicious minds: the problem of trust and conversational agents

In recent years, the field of natural language processing has seen substantial developments, resulting in powerful voice-based interactive services.

Voice quality and interactivity are sometimes so good that artificial agents can no longer be differentiated from real people. Thus, discerning whether an interactional partner is a human or an artificial agent is no longer just a theoretical question, but a practical problem facing society.

As a result, the Turing test went from the lab into the wild. The shift from the theoretical to the practical domain also accentuates understanding as a subject of ongoing inquiry.

When the interactions are successful but the artificial agent has not been identified as such, can it also be said that the interlocutors have understood each other? How does understanding figure into human-computer interactions in the real world? Based on empirical observations, this study shows how we need two parallel conceptions of understanding to address these questions.

Building on ethnomethodology and conversation analysis, we illustrate how parties in a conversation routinely use two forms of analysis (categorical and sequential) to understand their interactional partners. The interplay between these forms of analysis shapes the developing sense of interactional exchanges and is crucial to established relationships.

Furthermore, outside of experimental settings, any problems in identifying and classifying an interactive partner raise concerns regarding trust and suspicion. When suspicion is aroused, shared understanding is disrupted.

Thus, this study concludes that the proliferation of conversational systems, powered by artificial intelligence, may have unintended consequences, including impacts on human-human interactions.

