AI users need to rethink what counts as a private conversation.
In May, a U.S. federal judge ordered OpenAI to preserve chat logs it had been routinely deleting, a ruling that exposes something most users never consider: every conversation with ChatGPT gets logged, stored, and in some cases destroyed according to what the company calls its “default policy.”
This news should force us to confront an uncomfortable truth about today’s artificial intelligence landscape. Large language models – AI systems trained on vast amounts of text to generate human-like responses – are surveillance infrastructure that we have not yet recognized as such, and we are feeding them the most sensitive details of our lives. Users should therefore be cautious and strategic about the personal data they share with AI tools.
AI’s risk to our privacy is the same threat in a new form. Over the past two decades, the titans of surveillance capitalism – Google, Microsoft, and Meta – built trillion-dollar empires by transforming human attention and behavior into algorithmic fuel, teaching us that “free” digital services invariably extract payment through data harvesting. Search queries became behavioral profiles, social media interactions became advertising targets, and location data became market intelligence. We became the product, and we accepted it – fair enough.
Now we find ourselves repeating this exchange with these new technologies, except the stakes have grown considerably higher. The conversations we have with ChatGPT, Claude, or Gemini represent the externalization of our cognitive processes. We share creative projects, business strategies, personal information, and half-formed thoughts, essentially inviting these systems and their owners to observe the raw mechanics of human reasoning and problem-solving. Anyone using AI tools to craft financial strategies, for example, should keep in mind that there is no guarantee of confidentiality down the line.
Privacy Meets AI Reality
This development represents a notable shift when viewed against the trajectory of digital rights over the past decade. The Snowden revelations of 2013 sparked renewed interest in privacy-conscious technology: end-to-end encryption – a method that scrambles data so only the sender and receiver can read it – went mainstream, private messaging applications displaced SMS, and browser developers began blocking trackers by default.
Yet the rise of conversational AI appears to have disrupted established privacy expectations. These systems feel so naturally interactive that we’ve forgotten the cardinal rule of digital privacy: If you don’t own your data, you don’t control it.
For users seeking to reclaim some measure of privacy while still accessing these powerful tools, the options remain frustratingly limited. Running models locally provides complete data sovereignty but requires substantial technical expertise and hardware investment, typically several thousand dollars for adequate GPU computing power.
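For readers who want a concrete picture of what “local” means in practice, here is a minimal sketch. It assumes a self-hosted model server exposing an OpenAI-compatible chat endpoint on your own machine; the URL, port, and model name are placeholders to adapt to whatever server you actually run. The point is that the prompt never leaves your hardware.

```python
import requests

# Hypothetical local setup: many self-hosted model servers expose an
# OpenAI-compatible API on localhost. The URL, port, and model name below
# are placeholders, not any specific product's interface.
LOCAL_ENDPOINT = "http://localhost:8080/v1/chat/completions"

def ask_local_model(prompt: str) -> str:
    """Send a prompt to a locally hosted model; the text never leaves this machine."""
    payload = {
        "model": "local-model",  # whatever model file your server has loaded
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }
    resp = requests.post(LOCAL_ENDPOINT, json=payload, timeout=120)
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask_local_model("Summarize the tax implications of selling my house."))
```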
Anonymizing proxy services offer a more accessible alternative, stripping identifying metadata and creating a privacy buffer between users and model providers. While these services represent meaningful harm reduction, they still depend on the same centralized infrastructure that creates privacy problems.
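The mechanics of such a buffer are easy to sketch. Below is a toy relay, not any particular service: it discards the caller’s identifying metadata and submits the prompt upstream under the relay’s own pooled credentials, so the provider sees the relay rather than the individual user. The upstream URL and credentials are illustrative assumptions.

```python
import requests
from flask import Flask, request, jsonify

app = Flask(__name__)

# Placeholder upstream provider; a real relay would also handle abuse, billing,
# and a strict no-logging policy.
UPSTREAM = "https://api.example-llm-provider.com/v1/chat/completions"

@app.post("/relay")
def relay():
    # Build fresh headers instead of forwarding the caller's: no cookies,
    # no auth tokens, no client IP hints reach the upstream provider.
    clean_headers = {
        "Content-Type": "application/json",
        "Authorization": "Bearer RELAY_OWNED_API_KEY",  # pooled key, not tied to the user
    }
    upstream = requests.post(UPSTREAM, json=request.get_json(),
                             headers=clean_headers, timeout=120)
    # Note: the relay itself still sees the prompt, which is why this is
    # harm reduction rather than a complete fix.
    return jsonify(upstream.json()), upstream.status_code

if __name__ == "__main__":
    app.run(port=5000)
```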
The most promising short-term solutions may emerge from cryptographic innovations that enable confidential computation on sensitive data. Trusted execution environments – secure hardware areas that isolate sensitive computations from outside interference – can process AI workloads while protecting data even during active use, ensuring that neither the infrastructure provider nor the app operator can access user inputs or outputs.
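Heavily simplified, the client-side trust flow looks like this: verify a signed attestation that a genuine enclave holds a particular key, then encrypt your prompt to that key so only code inside the enclave can read it. Real attestation schemes (Intel SGX, AWS Nitro Enclaves and the like) are far more involved; the sketch below merely simulates both sides with ordinary RSA keys to show the shape of the protocol.

```python
# Simplified illustration of the TEE trust flow; not a real attestation protocol.
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# --- Simulated hardware vendor and enclave (in reality these live elsewhere) ---
vendor_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
enclave_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

enclave_pub_pem = enclave_key.public_key().public_bytes(
    serialization.Encoding.PEM, serialization.PublicFormat.SubjectPublicKeyInfo
)
# "Attestation": the vendor signs the enclave's public key. Real schemes also
# bind a measurement of the exact code running inside the enclave.
attestation_sig = vendor_key.sign(enclave_pub_pem, padding.PKCS1v15(), hashes.SHA256())

# --- Client side ---
vendor_pub = vendor_key.public_key()
vendor_pub.verify(attestation_sig, enclave_pub_pem,
                  padding.PKCS1v15(), hashes.SHA256())  # raises if forged

enclave_pub = serialization.load_pem_public_key(enclave_pub_pem)
ciphertext = enclave_pub.encrypt(
    b"Draft a settlement offer for my divorce.",
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)
# Only the attested enclave holds the matching private key, so the cloud
# operator relaying `ciphertext` never sees the prompt in the clear.
print(len(ciphertext), "encrypted bytes ready to send")
```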
Advanced cryptographic techniques like zero-knowledge proofs – mathematical methods that allow verification of information without revealing the information itself – could allow users to demonstrate legitimate queries without revealing their contents. Meanwhile, decentralized inference networks – systems that spread AI processing across multiple nodes – might split computation in ways that prevent any single entity from observing a complete user interaction.
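To make “proving something without revealing it” less abstract, here is a toy Schnorr-style proof, made non-interactive with the Fiat-Shamir heuristic: the prover convinces anyone that it knows a secret x behind the public value y = g^x mod p without ever disclosing x. The tiny parameters and scenario are purely illustrative; this is a textbook construction, not how an AI query system would actually be built.

```python
import hashlib
import secrets

# Toy group parameters for illustration only: p = 2q + 1 with q prime, and g
# generating the subgroup of prime order q. Real systems use large, vetted groups.
p, q, g = 23, 11, 4

def prove(x: int):
    """Prove knowledge of x such that y = g^x mod p, without revealing x."""
    y = pow(g, x, p)
    r = secrets.randbelow(q)          # one-time random nonce
    t = pow(g, r, p)                  # commitment
    c = int.from_bytes(hashlib.sha256(f"{g}{y}{t}".encode()).digest(), "big") % q  # Fiat-Shamir challenge
    s = (r + c * x) % q               # response
    return y, t, s                    # the secret x is never included

def verify(y: int, t: int, s: int) -> bool:
    c = int.from_bytes(hashlib.sha256(f"{g}{y}{t}".encode()).digest(), "big") % q
    return pow(g, s, p) == (t * pow(y, c, p)) % p  # holds only if the prover knew x

secret_x = 7                          # stays on the prover's side
print(verify(*prove(secret_x)))       # True
```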
These privacy-preserving technologies suggest that the tension between AI capability and user privacy can be resolved. While still largely unknown to the general public, they could ultimately provide the cryptographic foundations needed to make artificial intelligence truly trustworthy.
The Lesson For AI Users? Be Careful
Until these solutions mature, users may want to approach large language models with greater awareness of their data-handling practices. These are not neutral intellectual tools but commercial products designed to extract value from their users. They are not your friend.
The court order forcing OpenAI to preserve user conversations is only the first glimpse of how these systems could threaten our private thoughts. Cognitive liberty is a right we will need to fight for, and until then, ask yourself whether you’d be comfortable with your prompts appearing in a data breach or a court filing. Never share names or addresses, rotate between providers, and actively use features like ChatGPT’s temporary chat before you whisper about crypto trades.
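“Never share names or addresses” can even be partly automated. The sketch below is a crude, regex-based redactor that strips obvious identifiers (emails, phone numbers, and names you list yourself) from a prompt before it goes anywhere; treat it as a harm-reduction habit rather than a guarantee, since plenty of identifying detail survives simple pattern matching.

```python
import re

# Crude pre-send redactor: a harm-reduction habit, not a guarantee.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def scrub(prompt: str, known_names: list[str]) -> str:
    """Replace obvious identifiers with placeholders before sending a prompt anywhere."""
    cleaned = EMAIL_RE.sub("[EMAIL]", prompt)
    cleaned = PHONE_RE.sub("[PHONE]", cleaned)
    for name in known_names:          # names you know appear in your own text
        cleaned = re.sub(re.escape(name), "[NAME]", cleaned, flags=re.IGNORECASE)
    return cleaned

raw = "Email jane.doe@example.com or call +1 415 555 0100 about Jane Doe's mortgage."
print(scrub(raw, ["Jane Doe"]))
# -> Email [EMAIL] or call [PHONE] about [NAME]'s mortgage.
```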
Let us not come full circle and give up our privacy so easily. It always has a price, and that price should be higher than a ghiblified image.