
Using AI Safely:
What most guides don't tell you

AI tools can genuinely help your job search. They can also expose your personal data, violate your employer's policies, and produce results you can't trust. Here's what's actually happening — and how to use these tools without the risks most people don't know exist.

🔐
No federal regulation of AI data practices yet — informed use is your only protection

What actually happens to the information you put into AI tools

When you type something into ChatGPT, Claude, Gemini, or any other AI tool, that text goes somewhere. Understanding where it goes — and what happens to it — is the foundation of using these tools safely.

Most AI platforms operate under one of two data models. In the training data model, your inputs may be used to improve the AI system itself, meaning the content you enter can become part of the dataset that future versions of the model learn from. In the retained data model, your conversations are stored on the company's servers for a period of time — sometimes indefinitely, unless you actively delete them.

The default settings on most free AI tools lean toward data retention and potential training use. This is not hidden — it is disclosed in terms of service that almost no one reads. The practical implication is this: anything you put into a free AI tool should be treated as potentially non-private.

There is currently no comprehensive federal regulation governing what AI companies can do with the data you give them. Until that changes, the only protection available to you is knowing what not to share in the first place.

From Practice

This matters particularly for job seekers because the information most useful to an AI tool — your full name, address, employment history, credentials, references, salary expectations — is also the most sensitive. A resume contains more personally identifiable information than almost any other document most people regularly create.

The risk is not theoretical. Data breaches at major technology companies are routine. Information shared with an AI platform is subject to the same security vulnerabilities as any other cloud-stored data. And unlike a breach of your email or bank account, a breach of your resume data is difficult to detect and nearly impossible to remediate.

What you should never put into an AI tool — and what's genuinely fine

The goal is not to avoid AI tools. They are genuinely useful for job searching — resume language, cover letter drafts, interview preparation, research. The goal is to use them in a way that protects your information.

Generally Safe
Asking AI to improve phrasing or grammar in text you've written
Researching a company, industry, or role
Generating interview question practice
Getting suggestions for job titles or keywords
Asking for explanations of job requirements
Drafting a cover letter from a job description (without personal details)
Use With Caution
Uploading a resume with your full home address
Including the names of your references
Describing specific projects from your current job
Using free tools for salary negotiation scripts with real numbers
Asking AI to review your LinkedIn profile (profiles contain a lot of personally identifiable information)
Avoid
Your Social Security Number or government ID numbers
Confidential information about your current employer
Internal project details, client names, or business strategies
Medical or immigration status information
Submitting AI output without reviewing it yourself
⚡ If you're currently employed

Sharing confidential employer information with an AI tool — project details, client names, internal strategies, proprietary processes — may violate your employment agreement or confidentiality obligations, regardless of whether you intend to share it publicly. The AI platform is a third party. Treat it accordingly.

What this looks like in practice — the wrong way and the right way

Abstract rules are easy to forget. These scenarios show the difference between how most people use AI tools and how to use them safely — with the same results.

📄
Scenario 1 — Resume optimization
✗ How most people do it
"Here is my full resume. My name is Jane Smith, I live at 123 Main Street, Austin TX 78701. Please optimize it for ATS and add keywords for a project manager role."
✓ Safer approach
"I'm a program coordinator with 5 years of experience in federally funded workforce programs. I'm applying for a project manager role. Here is my experience section — please suggest stronger action verbs and relevant keywords."
✉️
Scenario 2 — Cover letter
✗ How most people do it
"Write a cover letter for me. My name is Jane Smith, I currently work at [Company Name] where I manage a team of 12 and our Q3 revenue was $2.4M. I'm applying to [Employer]."
✓ Safer approach
"Help me draft a cover letter for a workforce program director role. Key points to include: 8 years in program management, experience with federal compliance, track record of scaling programs nationally."
🎤
Scenario 3 — Interview prep
✗ How most people do it
"I have an interview at [Company] on Thursday for [Role]. My current salary is $67,000 and I want $85,000. Here is the confidential project I led — help me talk about it."
✓ Safer approach
"I'm preparing for an interview for a program director role. Generate 10 behavioral interview questions and help me structure strong STAR-format answers based on program management experience."

The common thread: describe your experience and goals without identifying yourself or your employer. AI tools don't need to know your name or your company to help you write better. They need context about your skills, your target role, and what you want to communicate.

How to de-identify your information before using AI — a practical procedure

De-identification means removing or replacing identifying information before inputting content into an AI tool. This is standard practice in regulated professional environments — healthcare, legal, social services — and it applies equally to your personal job search.

The procedure takes less than two minutes once you know what to look for.

De-Identification Checklist — Before Pasting Into Any AI Tool
1. Remove your full name. Replace it with "the applicant," "I," or a generic placeholder. Your name is not needed for AI to improve your resume language.
2. Remove your home address. Replace it with city and state only if location is relevant. Most AI optimization tasks don't require your street address at all.
3. Remove your phone number and personal email. These are not needed for content optimization. Add them back to the final document after AI editing is complete.
4. Generalize your employer name if sensitive. Instead of "[Specific Company Name]," use "a federally funded nonprofit" or "a mid-size technology company." The AI can still help with language without knowing exactly where you work.
5. Remove reference names and contact information. Replace with "Reference 1 — former supervisor" or simply omit the references section from what you share with the AI.
6. Review AI output before restoring identifiers. Once the AI has improved the content, restore your identifying information in the final document — in your own word processor, not in the AI tool.

De-identification is not about distrusting AI companies. It's about controlling what information exists in systems you don't manage. Once data leaves your device, you have no visibility into where it goes, how it's stored, or who has access to it. The two minutes it takes to de-identify are worth it.
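For readers comfortable with a little scripting, the checklist above can be automated as a first pass. The sketch below is illustrative, not a complete PII scrubber: the `deidentify` function, its placeholder labels, and its regex patterns are examples of the idea, and you should still review the result by hand before pasting it anywhere.

```python
import re

def deidentify(text, name=None, employer=None):
    """Rough first-pass scrub of common identifiers before sharing text
    with an AI tool. Patterns are a sketch; always review manually."""
    # Step 3: redact email addresses
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
    # Step 3: redact US-style phone numbers
    text = re.sub(r"(\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}",
                  "[PHONE]", text)
    # "Avoid" list: a Social Security Number should never be shared at all
    text = re.sub(r"\b\d{3}-\d{2}-\d{4}\b", "[SSN]", text)
    # Step 1: replace your full name with a neutral placeholder
    if name:
        text = text.replace(name, "the applicant")
    # Step 4: generalize the employer name if it is sensitive
    if employer:
        text = text.replace(employer, "my current employer")
    return text

sample = "Jane Smith, jane@example.com, (512) 555-0142, Acme Corp"
print(deidentify(sample, name="Jane Smith", employer="Acme Corp"))
# → the applicant, [EMAIL], [PHONE], my current employer
```

A script like this catches the mechanical identifiers (emails, phone numbers, ID-number formats), but step 6 of the checklist still applies: the human review before and after is what makes the procedure safe.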


Which AI tools are worth using — and what to know about each one

Not all AI tools handle data the same way. Default settings vary significantly — and knowing the difference matters when you're sharing resume content, cover letters, or career information.

Tool: ChatGPT (free)
Best for: Resume language, cover letters, interview prep
Data notes: Conversations stored until you delete them; removed from servers within 30 days of deletion. Can be used for training unless you opt out.
Recommendation: Use de-identified

Tool: ChatGPT Plus/Pro
Best for: Same as above, with better performance
Data notes: Same deletion policy as free. Opt out of training in Settings → Data Controls. Use Temporary Chat for sensitive content.
Recommendation: Opt out + use Temporary Chat

Tool: Claude (Anthropic)
Best for: Long documents, nuanced writing, detailed feedback
Data notes: Deleted chats removed from servers within 30 days. If model training is left on, data may be retained up to 5 years in de-identified form. Use Incognito mode for sensitive content.
Recommendation: Check settings + use Incognito

Tool: NotebookLM (Google)
Best for: Analyzing documents, research, summarizing long files
Data notes: Queries are NOT saved. Uploaded documents stored until you delete them. Does NOT train on your data. Best privacy of the major tools.
Recommendation: Strongest privacy option

Tool: Grok (xAI)
Best for: Research, writing, real-time web search
Data notes: Does NOT use conversations for training by default; you must opt in. Private Chat mode available. If accessed via X (Twitter), X's separate privacy policy applies.
Recommendation: Good default privacy

Tool: Grammarly (free)
Best for: Grammar, tone, professional writing
Data notes: Text processed on their servers. Free tier has limited privacy controls. Avoid pasting full resumes with personal details.
Recommendation: Use de-identified

Tool: Jobscan
Best for: ATS keyword matching, resume scoring
Data notes: Resume data stored for account use. Useful tool; review their privacy policy and delete data after use.
Recommendation: Review and delete after use

What actually happens when you delete a chat — the real answer

Most people assume that deleting a conversation means it's gone. The reality is more complicated — and varies significantly by platform.

Deletion Policy by Platform — What Actually Happens
ChatGPT — 30 days after deletion. When you delete a chat, it disappears from your view immediately. OpenAI removes it from their servers within 30 days. Use Temporary Chat mode for sensitive content — those conversations are never used for training and are automatically deleted within 30 days.
Claude (Anthropic) — 30 days, cleaner policy. Deleted conversations are removed from Anthropic's backend systems within 30 days. If you opted out of model training, deleted chats are not used for training under any circumstance. Use Incognito mode for the strongest protection: those conversations are never used for training regardless of settings.
NotebookLM — best option for sensitive documents. Your queries (the questions you type) are not saved at all. Uploaded documents remain stored until you delete them; when you delete a notebook, the documents go with it. NotebookLM does not use your data to train its models. For job seekers handling sensitive documents, this is currently the strongest privacy option among major AI tools.
One thing no deletion policy covers: if your data was already used in a completed model training cycle before you deleted it, it stays in that model permanently. Deletion stops future use; it doesn't reach back into models that have already been trained. This is why de-identifying before you share is more effective than deleting after.

Deleting is good practice. De-identifying before you share is better practice. The two minutes of de-identification before pasting are more protective than any deletion policy — because they prevent your data from entering the system in identifiable form in the first place.

⚡ Check your settings

Most AI tools allow you to turn off training data collection and delete conversation history. These options are usually buried in Settings → Privacy or Data Controls. Taking 5 minutes to review these settings on any tool you use regularly is time well spent.

Why you should never submit AI output without reviewing it yourself

AI tools produce text that sounds confident and professional. This is one of their most useful qualities — and one of their most dangerous ones. Fluent, professional-sounding language can contain factual errors, misrepresentations of your actual experience, and claims you cannot back up in an interview.

AI does not know your actual experience. It generates plausible-sounding text based on patterns from its training data. When you ask it to write about your background, it fills in gaps with assumptions. Those assumptions may be wrong.

The resume an AI writes for you describes someone who sounds like you, not necessarily you. Every line needs to be verified against your actual experience before it goes to an employer. You are the only person who knows whether what's written is true.

From Practice

Practically, this means treating AI output as a first draft, not a final product. Read every sentence. Ask yourself: did I actually do this? Can I speak to this in an interview? Would I be comfortable if a hiring manager asked me directly about this claim?

If the answer to any of these is no — edit or remove it. A shorter, accurate resume is stronger than a longer one with claims you can't support. Hiring managers notice inconsistencies between resumes and interview answers. Recruiters who work with a candidate multiple times notice when experience on a resume doesn't match what the person can discuss.

AI is a drafting tool. You are the author. The responsibility for what goes to an employer is yours — not the tool's.