šŸ­ Why AI Hallucinates šŸ¤”

And How to Stop It

Good morning. Writers got $1.5B. The intern got a parking ticket.

Life isn’t fair.

Let’s dig in.

šŸ­ What’s Cookin’:

  • Anthropic Pays $1.5B

  • OpenAI Research Paper

  • ToolBoxā„¢: Speed Run Content Creation

  • And Everything Else

OpenAI Research Paper
šŸ¤·ā€ā™‚ļø Why AI Halucinates

The Bite: OpenAI just published a new research paper on why AIs make stuff up.

Common term for this: Hallucination

It turns out the issue isn’t spooky ā€œAI dreamsā€ but the way we train and test them.

Models learn patterns well (like spelling ā€œhippopotamusā€) but fumble unique facts (like your grandma’s birthday).

Then grading systems double down by rewarding confident guesses instead of an honest ā€œI don’t know.ā€

Snacks:

  • School Rules: Current evals treat guessing as better than silence — so the models learn to bluff.

  • Pattern Genius, Fact Dummy: Pretraining nails repeatable stuff, flops on unique details.

  • New Scorecard: OpenAI says: dock points for confident wrong answers, reward safe passes.

  • Numbers Game: On a factual-QA test, o4-mini answered just 24% correctly and was flat-out wrong 75% of the time; it almost never said ā€œI don’t know.ā€

  • IDK Power: GPT-5 Thinking models do better simply because they skip the guesses.

  • Press Spin: TechCrunch says the incentives are the problem; change the rules and hallucinations shrink.
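The ā€œnew scorecardā€ idea above can be sketched as a toy scoring function. This is a hedged illustration, not OpenAI’s actual eval code: the penalty value and the abstention handling are assumptions, but they capture the mechanic described in the paper — wrong answers cost points, ā€œI don’t knowā€ scores zero, so bluffing stops paying off.

```python
# Toy comparison of two grading schemes for a QA eval.

def binary_score(answer, correct):
    """Old-style grading: 1 for right, 0 for wrong OR for abstaining.
    Guessing is never worse than staying silent, so models learn to bluff."""
    return 1 if answer == correct else 0

def penalty_score(answer, correct, wrong_penalty=1):
    """Sketch of the proposed grading (penalty size is an assumption):
    abstaining scores 0, wrong answers score negative, right answers score 1."""
    if answer is None:  # model said "I don't know"
        return 0
    return 1 if answer == correct else -wrong_penalty

# A model that always guesses vs. one that abstains when unsure.
guesser   = ["Paris", "1887", "Naples"]   # one lucky guess, two wrong
abstainer = ["Paris", None, None]         # answers only when sure
truth     = ["Paris", "1912", "New York"]

for name, answers in [("guesser", guesser), ("abstainer", abstainer)]:
    old = sum(binary_score(a, t) for a, t in zip(answers, truth))
    new = sum(penalty_score(a, t) for a, t in zip(answers, truth))
    print(f"{name}: old grading = {old}, new grading = {new}")
```

Under the old grading the two models tie; under the penalty grading only the abstainer stays positive, which is exactly the incentive shift the paper argues for.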

Why it Bites: You’ve all had it happen: the confident wrong answer from your AI.

Nobody wants that.
Everybody would prefer the model to just say ā€œidkā€.

Better grading could mean less ā€œimprov actorā€ and more ā€œresponsible friendā€:
the kind of friend that admits when they don’t know instead of swearing France invented pizza.

Follow us on X šŸ‘‡


Stop pretending you read those email attachments

Just forward or Cc the offending emails to [email protected].

We’ll reply with smart answers.

(You can even try forwarding this newsletter).

Steal This Prompt
šŸ“ø Surreal Product Ad in Studio Style

This prompt transforms everyday products into dreamy cinematic ads — with rich textures, clean lighting, and a surreal twist inside the packaging.

Centers your object. Adds a surreal reveal.

Keep it minimalist, premium, and brand-aware.

Try it with your favorite product.

(Prompt below. Nutella vibes optional.)

  1. Click this link (prompt)

  2. Enter a brand and product

  3. Paste into GPT


    šŸ­ Try the prompt on Snack Prompt šŸ­

Everything Else
🧠 You Need to Know

🧩 Ex-Scale AI exec launches AI Agent to fix enterprise data

šŸ§’ Google Gemini flagged ā€˜High Risk’ for kids and teens

🧪 OpenAI reshuffles team behind ChatGPT’s personality

šŸŽ¬ Amazon-backed startup revives lost Orson Welles film with AI

āš–ļø Warner Bros. sues Midjourney over Superman and Batman

ToolBoxā„¢
🧰 5 AI Tools to Make Learning Feel Like Cheating


šŸ’Æ 100 Vibe Coding — 100 vibe‑coding challenges

→ Learn by doing: prototype in a live sandbox, share instantly, then follow step‑by‑step tutorials to ship your first real project.

šŸ—£ļø Fluently ā€” AI English coach

→ Real-time help with vocabulary, pronunciation, and grammar so non-native pros sound confident on work calls.

šŸ”¬ SciSpace ā€” Research AI agent

→ Explains papers, answers follow-ups, and helps with literature reviews and discovery.

šŸŽ“ nFactorial AI ā€” Live tutoring from top minds

→ Book expert screen-share sessions for tailored mini-lectures on your topic.

šŸ“š EverTutorAI ā€” Voice AI tutor

→ Interactive lessons that adapt to your pace with instant feedback for test prep and tough concepts.

Cost of Doing Business
šŸ’° Anthropic Buys Its Way Out

The Bite: Anthropic will pay $1.5 billion to settle claims it trained its AI models on copyrighted books.

After getting a $183B valuation last week, this may just be table scraps.

Snacks:

  • Record Bill: Largest publicly reported U.S. copyright settlement.

  • Fair Use Twist: In June, Judge William Alsup said training on copyrighted works can qualify as fair use; the problem was the pirated book downloads.

  • Who Gets Paid: Roughly 500,000 works at about $3,000 each (ā‰ˆ $1.5B total).

  • Not Enough: Critics argue nothing stops future scraping.

  • Next Up: The case adds pressure for clear copyright laws on AI training data.

Why it Bites: Big settlements may deter obvious piracy (maybe…), but without clear rules, companies can still scrape with plausible deniability.

Sitting here thinking about what the long-term solution could be…

We’ve seen a series of video generation models that are ā€œtrained on ethically sourced or licensed dataā€.

Is this possible for writing?
Absolutely.

The question: Can ethically trained AI ever rival models built on stolen work?
