How to use AI safely: 5 privacy rules worth knowing
By Simone Andrea Pozzi
AI tools like ChatGPT, Gemini, and Claude are genuinely useful. They can help you write emails, explain confusing letters, plan a trip, or summarise a long document in seconds.
But they're not like a private conversation. What you type into them may be stored, reviewed, and used to improve the AI. That doesn't mean they're dangerous — it means it's worth knowing what you're sharing, and with whom.
These five rules take less than a minute each to understand, and they cover the situations where most people's privacy is at risk.
1 Don't share personal details that could identify you
There's no need to tell the AI your full name, address, date of birth, or national ID number to get useful help. Instead of typing "My name is Maria Rossi, born 15 June 1952, and I live at Via Roma 14, Milan," you can write "I'm a 73-year-old woman living in Italy." The AI doesn't need your identity to give you good answers.
2 Never type passwords, PINs, or security codes
There's no situation where an AI tool needs your banking password, your email password, or a one-time verification code. If you're ever in a situation where you think you need to share one, you don't — you're in the wrong place or the wrong conversation. Type the question, not the credential.
3 Be careful with medical and financial details
AI tools can be helpful for understanding medical terms or making sense of a financial document. But there's a difference between "what does 'atrial fibrillation' mean?" and "here is my full medical history, my current medications, and my doctor's name." Use the first kind of question freely. For the second, consider whether a professional — your doctor, your accountant — is the right place to go instead.
For financial questions, you can describe your situation in general terms ("I'm retired, with a small pension and some savings") without sharing actual account numbers or balances.
4 Check the privacy settings of the tool you're using
Most AI tools offer a setting that controls whether your conversations are used to train the AI. In ChatGPT, go to Settings → Data Controls and turn off "Improve the model for everyone." In Claude, similar controls are available under your account's privacy settings. If you're using Google Gemini, look for Gemini Apps Activity under My Activity in your Google account.
Turning this off doesn't delete your history — it just means your conversations aren't fed into future training. Worth doing.
5 Always verify important information before acting on it
AI tools can be wrong — confidently, fluently wrong. They don't always know when they don't know something. If an AI tells you that a medication is safe to take with another one, or that a particular law applies to you, or that a financial strategy is sound — check that information with a qualified professional before acting on it.
This isn't a privacy rule exactly, but it's the most important safety habit you can build. Use AI to help you understand and explore, then verify anything consequential with a real expert or official source.
The bottom line
AI tools are useful, and they're safe enough for everyday tasks when you use them thoughtfully. The risks aren't dramatic — you're not likely to be hacked by asking ChatGPT to help you write a birthday message. The risks are subtler: sharing more than you intended, or trusting an answer that turns out to be wrong.
These five rules keep those risks small while letting you get real value from the tools.
Want to go deeper?
Added Intelligence — Volume I covers privacy habits, prompting techniques, and how to verify AI outputs — so you can use these tools with confidence, not anxiety.