Categorie: News

Don’t use AI to guide you in life; yet another study confirms it’s a mistake

Entrusting one’s personal choices to an artificial intelligence could prove to be a major misstep. A recent study conducted by researchers at Stanford University warns users: consulting chatbots to resolve relational conflicts or moral questions often leads to counterproductive outcomes.

The crux of the issue lies not so much in the accuracy of the information provided as in the chronic tendency of these systems to appease the interlocutor, even when the latter is clearly in the wrong.

AI being too acquiescent is a problem

Credits: OpenAI

The researchers examined the responses of 11 of the leading artificial intelligence models currently available, subjecting them to a series of complex interpersonal situations and cases of questionable or deceptive conduct.

The results reveal a worrying trend: artificial intelligences take the user’s side markedly more often than a real human being would.

In general advisory situations, chatbots endorsed the position of the person asking the question almost twice as often as real people did.

This behavior was observed even for plainly unethical choices, which the artificial systems approved in almost 50% of cases.

The models tend, in fact, to soften the user’s actions, recasting them in a more favorable light, an approach that ends up reinforcing bad decisions rather than challenging them.

Those who rely on these AIs for personal guidance end up receiving comforting reassurance rather than the critical feedback they need. As a consequence, people become increasingly convinced they are right and progressively less willing to empathize with others or to try to mend relationships.

This is an intrinsic limitation of the very design of these algorithms, which are trained to be helpful and accommodating and to prefer assent even when constructive pushback would be necessary.

The illusion of an objective opinion

The most insidious aspect of this process is that the vast majority of people do not notice the dynamic at all. The study participants rated the chatbot responses, both the acquiescent ones and the more neutral ones, as equally impartial.

This distorted perception is largely due to the persuasive tone adopted by the algorithms. The machines rarely state outright that the user is right; instead, they justify the user’s actions in elaborate, academic, detached language that conveys an appearance of deep balance.

This formal structure makes the encouragement seem like the product of deliberate reasoning.

Over time, a genuine vicious circle emerges: individuals feel understood, develop blind trust in the technology, and keep turning to it with their problems.

This constant reinforcement erodes the ability to manage conflicts, making users less likely to question their own role in a dispute.

The return to human interaction

The guidance from the Stanford researchers is clear: replacing the valuable, and sometimes uncomfortable, human contribution with software during disputes or ethical decisions is a misstep to avoid.

Real conversations between individuals are naturally characterized by frictions and moments of discomfort, essential elements for reconsidering one’s steps and developing a healthy understanding of others. Artificial intelligence eliminates this social pressure entirely, offering an escape route to avoid being contradicted.

Although developers are showing early signs of interventions that could mitigate this troubling tendency, such corrective measures are not yet widely implemented.

For now, the wisest use of these tools remains purely analytical. AI systems can prove excellent at organizing one’s thoughts or laying out the factual details of an event, but they should never act as moral arbiters deciding who is right and who is wrong.

When human relationships and accountability come into play, the best results are always achieved by engaging with real people capable of offering a healthy resistance to our beliefs.

Luca Zaninello

A lifelong enthusiast of the mobile phone world, for over a decade he has been testing products hands-on and sharing his experiences with the web audience. An amateur photographer, he has a soft spot for the most extreme cameraphones.
