
Ads in ChatGPT: Your Secrets, Monetized

Sam Altman called advertising a "last resort" in 2024. Less than two years later, ChatGPT is showing ads. What this means for users—and why AI advertising is more dangerous than Google ever was.
Published on February 1, 2026 · by Michael J. Baumann

"I hate advertising."

That was Sam Altman, CEO of OpenAI, speaking at Harvard in 2024. Advertising, he said, was the "last resort" for his company—a business model that fundamentally pits user interests against corporate ones. The plan sounded noble: the wealthy would pay so everyone else could use ChatGPT for free. Robin Hood, but with language models.

That was 2024. In January 2026—less than two years later—OpenAI announced it would introduce advertising to ChatGPT. The "last resort" turned out to be a remarkably short path.

$8 Billion Doesn't Burn Itself

The about-face has a simple explanation: money. OpenAI burned through roughly $8 billion in 2025—more than most companies generate in a decade. Of ChatGPT's 800 million users, only about 5% pay for a subscription.

The math is brutal. You can't serve a billion people for free while your data centers consume as much electricity as mid-sized cities. Someone has to pay the bill.

Financial analysts at Deutsche Bank project a cumulative negative cash flow of $143 billion by 2029. Sebastian Mallaby warns that OpenAI could go bankrupt by mid-2027 without new revenue streams. The same CEO who called advertising "dystopian" is now reaching for the dystopian toolkit himself.

Altman's new explanation on X: "It's clear to us that many people want to use a lot of AI and don't want to pay." From idealist to pragmatist in under two years. Even by Silicon Valley standards, that's impressive.

What ChatGPT Knows About You—and Google Never Did

Here's where it gets interesting. Advertising in an AI chatbot is not the same as advertising in a search engine. The difference is fundamental—and should concern us all.

With Google, you type "best running shoes." Google learns: this person is interested in running shoes. Useful for advertisers, but ultimately superficial. A snapshot of purchase intent.

With ChatGPT, the game is different. People use the chatbot as therapist, life coach, and confidant. They share relationship problems, mental health struggles, financial worries—things they wouldn't tell even close friends. Sam Altman himself admitted: "People talk about the most personal things in their lives with ChatGPT. Especially young people use it as a therapist, as a life coach."

A Stanford study confirms what many suspect: all six major AI companies use chat data by default to train their models. For corporations like Google and Meta, this data is also merged with information from other products—search queries, purchases, social media activity. A digital puzzle that reveals a surprisingly detailed picture.

The researchers warn of a subtle dynamic: ask ChatGPT about low-sugar recipes or heart-healthy diets, and the system may classify you as "health-compromised." This assessment seeps through the provider's entire ecosystem. Suddenly you see pharmaceutical ads. And from there, it is only a short step before such information reaches an insurance company.

The Difference from Google Advertising

| Aspect | Google Search | ChatGPT |
|---|---|---|
| Data Type | Search queries, clicks | Complete conversations, personal problems, emotions |
| Information Depth | Surface-level purchase intent | Intimate life circumstances, mental state |
| User Expectation | Transactional: searching for information | Trusting: conversation with an "advisor" |
| Relationship Character | Tool | Quasi-human interaction |
| Manipulation Potential | Influencing purchase decisions | Influencing in vulnerable moments |

The core problem lies in the psychology of the interaction. Chatbots feel discreet, non-judgmental, almost intimate. People reveal things they would never share publicly—because it doesn't feel public. This trust amplifies the persuasive power of advertising in ways social media never could. The feed annoys. The chatbot understands.

What OpenAI Promises—and What's Technically Possible

OpenAI presented a list of safeguards in its official announcement. It reads reassuringly:

  • Ads do not influence ChatGPT's responses
  • Conversations remain private and are not sold to advertisers
  • Users under 18 won't see ads
  • No ads near sensitive topics like health, mental health, and politics
  • Paying subscribers (Plus, Pro, Business, Enterprise) won't see ads

So far, so reasonable. But look closer and you'll spot the gaps—and above all: the conspicuous silence.

What OpenAI doesn't explain. The announcement mentions that users can "turn off personalization" and "clear the data used for ads." But what exactly does that mean? OpenAI never defines which data is used for targeting. Just the current conversation? All previous chats? The Memory feature that remembers your preferences? The inferences the system draws from your questions, such as "this person has health anxiety"? The phrase "clear data" implies that more is stored than just the current chat. What exactly that is remains in the dark. For a company that promises transparency, this is a remarkable omission.

The personalization paradox. To show "relevant" ads at the end of a response, the system must analyze the conversation. OpenAI emphasizes it doesn't sell data—but they use it internally to personalize advertising. For users, this distinction is academic: your most intimate thoughts flow into algorithms designed to sell you products.

The gray zone of "sensitive topics." OpenAI promises not to show ads during health and mental health discussions. But where exactly is the line? Is a conversation about stress already "mental health"? When does a financial question become a "sensitive" debt topic? These decisions are made by an algorithm—not an ethics board.

Zero legal protection. Here's where it gets serious: there is no confidentiality protection whatsoever for ChatGPT conversations. If authorities request chat logs, OpenAI must hand them over. This fundamentally distinguishes ChatGPT from an actual therapeutic conversation—even if it feels the same to many users.

The Next 12 to 24 Months

What comes next? Some predictions based on current developments:

Advertising becomes standard. OpenAI is starting in the US, but global expansion is just a matter of time. The company is counting on a billion dollars in ad revenue for 2026 alone. If the model works, others will follow. Google has already introduced ads in AI Overviews. Perplexity is experimenting with sponsored questions. The dam has broken.

Ad formats will get subtler. Currently, OpenAI plans ads "at the end of responses"—still clearly separated from actual content. But the industry is already testing integrated formats: sponsored product recommendations within responses, preferential brand mentions, "native" placements. The line between answer and ad will blur—until eventually it becomes invisible.

Privacy becomes a luxury good. Ad-free as a premium feature creates a two-tier society: those who pay keep their privacy. Those who don't become the product. We know this from social media. But with a tool people use as a therapist, this dynamic takes on a new, disturbing quality.

Regulation lags behind. The EU is working on the AI Act. Eventually regulators will take a closer look at how advertising works in AI systems. Whether this happens fast enough to protect users? Historically speaking: unlikely.

What You Can Do Now

Enough analysis. What can you actually do?

  • Dig through the settings. Look for opt-out options for personalized advertising and data usage for model training. It's tedious, but it's your right.

  • Don't treat ChatGPT as a therapist. No matter what the marketing suggests, no matter how understanding the responses sound: there's no legal protection for your conversations. Everything you type can theoretically be shared. Act accordingly.

  • Consider alternatives. Open-source models like Meta's Llama or Mistral can run locally—with no ads, and without your data ever leaving your system. For businesses, such solutions are often the safer choice.

  • Pay—or accept the consequences. This sounds cynical. But it's been the reality of the internet for 20 years: if you don't want to be the product, you have to become the customer. OpenAI is no exception.

Altman was right: advertising puts user and company interests in conflict. He's now created that conflict himself. The AI industry is repeating the mistakes of the old internet—only faster and with more intimate data.
