
Don’t use AI to guide you in life; yet another study confirms it’s a mistake

Entrusting one’s personal choices to an artificial intelligence could prove to be a major misstep. A recent study conducted by researchers at Stanford University warns users: consulting chatbots to resolve relational conflicts or moral questions often leads to counterproductive outcomes.

The crux of the issue lies not only in the accuracy of the information provided, but in the chronic tendency of these systems to appease the interlocutor, even when the latter is clearly in the wrong.

AI being too acquiescent is a problem


The researchers examined the responses of 11 of the leading artificial intelligence models currently available, subjecting them to a series of complex interpersonal situations and cases of questionable or deceptive conduct.

The results describe a worrying trend: artificial intelligences side with the user markedly more often than a real human being would.

In generic advisory situations, chatbots endorsed the questioner’s position almost twice as often as real people did.

This behavior was observed even in the face of plainly unethical choices, which the artificial systems approved in almost 50% of cases.

The models tend, in fact, to soften the user’s actions, reworking them in a more favorable light, an approach that ends up strengthening wrong decisions rather than challenging them.

Those who rely on these AIs for personal guidance end up receiving comforting reassurance rather than the critical feedback they need. Consequently, people become increasingly convinced they are right and progressively less willing to empathize with others or to try to mend relationships.

It is an intrinsic limitation of the very design of these algorithms, which are trained to be helpful and accommodating, and thus prefer assent even when constructive pushback would be necessary.

The illusion of an objective opinion

The most insidious aspect of this process is that the vast majority of people do not notice it at all. The study participants rated the chatbot responses, both the acquiescent ones and the more neutral ones, as equally impartial.

This distorted perception is largely due to the persuasive tone adopted by the algorithms. The machines rarely state outright that the user is right; instead, they justify the user’s actions in elaborate, academic, and detached language that conveys an appearance of deep balance.

This formal structure makes the encouragement seem like the product of deliberate reasoning.

Over time, a real vicious circle arises where individuals feel understood, develop blind trust in technology, and keep turning to it with their problems.

This constant reinforcement erodes the ability to manage disagreements, making users less likely to question their actual role within a conflict.

The return to human interaction

The guidance from Stanford scholars is clear. Replacing the valuable, and sometimes uncomfortable, human contribution with software during disputes or ethical decisions constitutes a misstep to avoid.

Real conversations between individuals are naturally characterized by frictions and moments of discomfort, essential elements for reconsidering one’s steps and developing a healthy understanding of others. Artificial intelligence eliminates this social pressure entirely, offering an escape route to avoid being contradicted.

Although there are early signs that developers are working on interventions to mitigate this tendency, such corrective measures are not yet widely implemented.

Currently, the wiser use of these tools remains purely analytical. AI systems can prove excellent at organizing one’s thoughts or laying out the factual details of an event, but they should never act as moral arbiters deciding right and wrong.

When human relationships and accountability come into play, the best results are always achieved by engaging with real people capable of offering a healthy resistance to our beliefs.

Luca Zaninello

A lifelong enthusiast of the smartphone world, for over a decade he has been testing products hands-on and sharing his experiences with web audiences. An amateur photographer, he has a soft spot for the most extreme cameraphones.
