You have probably typed something into ChatGPT, Claude or Gemini that you would not want stapled to your name forever. Maybe a medical question. A draft of a difficult work email. A fight with a partner.
Here is the unsettling part. By default, nearly every major chatbot uses your conversations to train future versions of itself. Your prompts can become part of the model itself. Not as raw text you can easily delete, but baked into the statistical weights of the AI, where no delete button reaches.
OpenAI started this trend, and everyone else followed. Google trains Gemini on your chats unless you opt out. Microsoft Copilot does the same. Claude changed its policy in late 2025 to start hoovering up user conversations unless you flip a single switch. Anthropic even extended data retention to five years for people who do not opt out. That means a conversation you have today could be feeding model improvements half a decade from now.
In ChatGPT, open Settings, then Data Controls. You will see a toggle labeled “Improve the model for everyone.” Switch it off. Done. OpenAI says future conversations will not be used for training. Past chats might already be inside the model, but you can stop the bleeding.
In Claude, go to Settings, then Privacy. Look for "Help improve Claude" and toggle it off. Anthropic does present the choice, but if you miss it or leave the default in place, your chats become training data.
In Gemini, the setting is called “Gemini Apps Activity” or “Keep Activity” depending on when you last looked. Head to myactivity.google.com/product/gemini, turn the toggle off, and uncheck the box for audio recordings while you are there.
In Microsoft Copilot, open the sidebar, click your profile, then Privacy. Turn off both “Model training on text” and “Model training on voice.”
One thing the chatbots do not advertise. Opting out stops future training, but any conversation you already had may already be baked into the model. That is why you should also delete old chats and start using temporary chat modes when you plan to share anything sensitive. ChatGPT has a “Temporary Chat” option right in the model picker. It does not appear in history and is not used for training.
Is this a perfect solution? No. The companies say they anonymize data before training, but you are trusting them on that. A future technique could re-identify supposedly anonymous prompts. The safest approach is to assume nothing you type into a free consumer chatbot is truly private.
But flipping these switches is the difference between making AI companies a little richer with your data and keeping your conversations where they belong. Just you and the bot. Not the training set.