Google Expert Warns About Sharing Data With AI Chatbots

As artificial intelligence chatbots become increasingly integrated into personal and professional workflows, a security professional from Google’s own AI teams has issued a stark warning about the cybersecurity risks of sharing sensitive information with them. Harsh Varshney, who has worked on Google’s privacy and Chrome AI security teams, urges users to treat every interaction with a public AI chatbot like writing on a postcard: a message that can be read by anyone.

This cautionary stance stems from how these models fundamentally operate: they ingest vast amounts of data, including user conversations, both to generate responses and to train future versions of the model. Varshney specifically advises against entering highly personal data such as Social Security numbers, credit card details, home addresses, or private medical records into these public tools. The core danger is that this information can be stored and later accessed by harmful entities, including cybercriminals and data brokers, who could exploit it for malicious purposes, turning a tool of convenience into a significant liability for personal data security.
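To make the advice concrete, the sketch below shows one way a cautious user or developer might mask obvious identifiers before a prompt ever reaches a public chatbot. The regular expressions are simplified illustrations of the data types Varshney names (Social Security numbers, card numbers, and similar), not a vetted detection scheme, and `scrub` is a hypothetical helper rather than anything from a real SDK.

```python
import re

# Hypothetical illustration: mask common sensitive patterns before text
# is sent to a public chatbot. These patterns are simplified assumptions;
# production PII detection typically requires dedicated tooling.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def scrub(text: str) -> str:
    """Replace anything matching a known PII pattern with a placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

if __name__ == "__main__":
    prompt = "My SSN is 123-45-6789 and my card is 4111 1111 1111 1111."
    print(scrub(prompt))
    # -> My SSN is [SSN REDACTED] and my card is [CREDIT_CARD REDACTED].
```

A filter like this only catches well-formatted identifiers; free-text disclosures such as a home address or a medical history still require the user's own judgment, which is precisely Varshney's point.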

1. Employing Corporate Safeguards and Proactive Data Hygiene

For professional contexts where confidentiality is paramount, the expert strongly recommends enterprise-grade AI tools designed with enhanced security protocols. The distinction between public and enterprise versions can be crucial, as the latter often include more robust data protection features. Varshney recounted a surprising experience in which an enterprise Gemini chatbot recalled their exact home address, a detail shared in a previous conversation. This illustrates the powerful “long-term memory” capabilities being built into these systems, which, while useful, pose a serious risk if confidential work discussions or proprietary data are retained indefinitely.

To mitigate these risks, users are advised to practice rigorous data hygiene: regularly delete chat histories to prevent the accumulation of sensitive information, and use temporary or “incognito” modes when available. It is also critical to visit the privacy settings of any AI platform and explicitly opt out of having conversations used for model training, a simple yet effective step toward reclaiming control over one’s data and ensuring that convenience does not come at the cost of security.
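For developers wiring chatbots into internal tools, the same hygiene can be enforced at the application layer. The sketch below is a minimal illustration, assuming a hypothetical `send_to_chatbot` stand-in for whatever SDK a platform actually provides: it blocks prompts containing obviously sensitive terms and issues each request statelessly, so no conversation history accumulates for a “long-term memory” feature to recall later.

```python
# Hypothetical sketch of client-side data hygiene. `send_to_chatbot` is
# a stand-in for a real SDK call (an assumption), not a library function.

BLOCKED_TERMS = ("password", "api key", "social security")  # illustrative policy

def send_to_chatbot(prompt: str) -> str:
    # Stand-in: replace with your platform's actual SDK call.
    return f"(model reply to {len(prompt)} characters of input)"

def guarded_send(prompt: str) -> str:
    """Refuse obviously sensitive prompts; send each request statelessly."""
    lowered = prompt.lower()
    for term in BLOCKED_TERMS:
        if term in lowered:
            raise ValueError(f"Refusing to send prompt containing '{term}'.")
    # No history is accumulated or stored between calls, so nothing
    # sensitive lingers for the service to recall in a later session.
    return send_to_chatbot(prompt)

if __name__ == "__main__":
    print(guarded_send("Summarize this public press release for me."))
```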

2. A Comparative Look at Platform Privacy Policies

The growing concern over generative AI’s appetite for data prompted a comprehensive analysis of the privacy landscape earlier this year. A report from the data privacy firm Incogni evaluated major AI platforms, offering users a clearer picture of which services prioritize data protection. The findings ranked Mistral AI’s Le Chat as the safest option, closely followed by ChatGPT and Grok; these platforms were commended for clear privacy policies and for giving users straightforward options to opt out of data collection for training purposes. In stark contrast, the report identified Meta AI, Google’s Gemini, and Microsoft’s Copilot as the most aggressive data collectors, citing a lack of transparency about what information is gathered and how it is used.

The trend carried over to mobile apps, where Le Chat, Pi AI, and ChatGPT were found to pose the lowest risks, while apps like Meta AI were flagged for collecting highly sensitive data, including user emails and physical locations. The analysis underscores that privacy risk varies significantly between platforms: users who review their privacy settings and make informed choices are better positioned to safeguard their personal information.
