Artificial intelligence has now become a ubiquitous assistant in our lives, ready to draft emails, analyze complex texts, or plan trips.
With such versatility at hand, it might seem entirely natural to entrust these sophisticated tools with the creation of our digital access keys (https://gizchina.it/2026/01/149-milioni-dati-di-accesso-trapelati-cambia-password/).
However, delegating your security to a chatbot is a serious misjudgment that risks exposing your sensitive data to cybercriminals.
Large language models (LLMs), by their very engineering nature, are trained to calculate and predict the next term in a sequence, drawing on vast quantities of historical data.
This precise characteristic makes them exceptional conversationalists, but it proves disastrous when asked to generate truly unpredictable sequences. True cybersecurity requires absolute entropy and uniform randomness, essential properties that conversational systems simply cannot simulate.
This serious vulnerability was confirmed by a study from the cybersecurity company Irregular. The researchers tested the most widely used systems, including ChatGPT, Claude and Gemini, and found that the alphanumeric sequences returned to users are highly repetitive.
The results are clear: across fifty distinct requests, Claude produced only twenty-three unique keys. One specific string was proposed as many as ten times, while the remaining ones exhibited extremely similar logical and textual structures.
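A repetition test like the one Irregular ran can be sketched in a few lines: collect the generated strings and count how many are actually distinct. The sample below is hypothetical, constructed only to mimic the kind of duplicate-heavy output the study describes, not real chatbot data.

```python
from collections import Counter

def uniqueness_report(passwords):
    """Return (number of unique strings, most repeated string with its count)."""
    counts = Counter(passwords)
    return len(counts), counts.most_common(1)[0]

# Hypothetical sample of 50 "generated" passwords: one string repeated
# ten times, twenty others appearing twice each, echoing the study's pattern.
sample = ["Tr0ub4dor&3"] * 10 + [f"Chatbot{i}Pass!" for i in range(20)] * 2

unique, (top, freq) = uniqueness_report(sample)
print(unique, top, freq)  # 21 unique strings; the top one appears 10 times
```

A truly random generator asked for fifty 16-character passwords would, for all practical purposes, never produce a single duplicate.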
This marked predictability delivers an invaluable strategic advantage to malicious actors.
Hackers routinely use automated software to launch so-called dictionary attacks, a technique that attempts unauthorized access by rapidly testing enormous archives of common words and of combinations already compromised in past breaches.
For an attacker, updating their malicious databases by inserting the limited standard variants proposed by chatbots requires practically no effort.
Consequently, even if the website you are registering on were to rate your new access key as complex and secure, its real effectiveness would be annihilated by the fact that hackers already have it in their ready-to-use lists. In short, it’s like reusing a leaked password.
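The core of a dictionary attack is nothing more than a fast membership check against a corpus of known passwords. A minimal sketch, with a hypothetical stand-in for a real breach archive:

```python
# Hypothetical miniature "leak archive"; real ones hold billions of entries.
leaked = {"password123", "Dragon2024!", "Xk9#mQ2$vL!"}

def is_compromised(candidate, leaked_set):
    # Set membership is O(1), which is why attackers can test
    # millions of candidates per second against a leaked list.
    return candidate in leaked_set

# A password can look complex yet fall instantly if it is already on the list.
print(is_compromised("Xk9#mQ2$vL!", leaked))
```

This is exactly why a chatbot-generated string that hackers have already catalogued offers no real protection, regardless of how a strength meter rates it.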
To protect yourself adequately, it is necessary to understand the difference between text generated by a language model and true cryptographic generation.
Traditional password managers do not “invent” characters following probabilistic or cognitive schemes. Instead, they communicate directly with the operating system to extract bits generated through rigorous mathematical processes.
These mechanisms rely on elements of real entropy sourced from hardware, thereby ensuring the complete absence of patterns traceable by attack software.
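The difference is visible even in a few lines of code. Python's standard-library `secrets` module, for example, draws from the operating system's cryptographically secure random source rather than predicting likely characters. A minimal sketch (the function name and length are illustrative choices):

```python
import secrets
import string

# Letters, digits and punctuation: 94 printable ASCII symbols.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length=16):
    # secrets.choice pulls from the OS CSPRNG (os.urandom under the hood),
    # so the output carries no learnable pattern, unlike an LLM's
    # next-token predictions.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_password())
```

With 94 possible symbols per position, a 16-character password has 94^16 possible values, far beyond what any dictionary attack can enumerate.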
The path to truly impregnable protection runs solely through tools designed for that specific purpose. Where websites and applications support them, passkeys represent the modern and airtight alternative par excellence.