
Welcome back to the Neural Net! 2025 was supposed to be the year of AI agents, but leading AI researcher Andrej Karpathy says we’ve got a decade to go, which means we’ve got front-row seats to the build-up.
In today’s edition: Stanford research concludes “Accept All Cookies” is now basically a life decision, companies are using AI as cover for layoffs, Anthropic drops a new Claude coding tool worth trying, and more.
▼
The Street

note: stock data as of last market close
▼
👀 When Big Tech Knows You Better Than Your Mom

AI is quietly turning our data into training fuel, and most users don’t even know it. If you’re not paying attention, your next “AI upgrade” might just be built on you.
According to a Stanford study, the biggest names in AI — OpenAI, Google, Meta, Microsoft, Amazon, and Anthropic — all train their models on user chats by default, with little transparency about what happens next.
Among their key findings:
Some de-identify data, but others don’t clarify how or when personal info is removed.
Several keep chat data indefinitely, without clear deletion timelines.
Human reviewers read user transcripts to fine-tune models.
Personal details, like health or financial info mentioned in a chat, can lead to algorithmic profiling.
Cross-product tracking merges your chatbot data with your social, shopping, and search history.
Meanwhile, Meta’s newest “helpful” feature for Facebook scans your entire camera roll (even photos you haven’t shared) and uploads them to its cloud. Meta says it won’t train its AI on those photos unless you edit or post them, but that’s exactly how data creep begins. And remember, Meta already admitted last year to training its models on every public Facebook and Instagram post by adult users since 2007. While this latest feature thankfully requires opt-in for now, even if you opt out, your face or data could still show up in someone else’s uploaded photo.
If your privacy alarms are going off, here’s where to start so you don’t become AI training data:
Don’t feed sensitive info into any AI chat unless you’re OK with it becoming public knowledge.
Actively opt out of training when possible.
Push your company to adopt privacy-preserving AI: tools that learn from patterns, not your personal data.
Check data-retention policies.
It’s not much, but it’s a start, especially since staying aware is half the game when the rules are still being written.
▼
In Partnership With Enterpret
How Canva, Perplexity, and Notion turn feedback chaos into actionable customer intelligence
Support tickets, reviews, and survey responses pile up faster than you can read them.
Enterpret unifies all feedback, auto-tags themes, and ties insights to revenue, CSAT, and NPS, helping product teams find high-impact opportunities.
→ Canva: created VoC dashboards that aligned all teams on top issues.
→ Perplexity: set up an AI agent that caught revenue-impacting issues, cutting diagnosis time by hours.
→ Notion: generated monthly user insights reports 70% faster.
Stop manually tagging feedback in spreadsheets. Keep all customer interactions in one hub and turn them into clear priorities that drive roadmap, retention, and revenue.
▼
Heard in the Server Room
U.K. broadcaster Channel 4 just pulled the ultimate TV plot twist, revealing that the host of its “Will AI Take My Job?” special was entirely AI-generated. Channel 4’s head of news, Louisa Compton, said the network won’t be swapping out journalists for AI anytime soon, and that the stunt was meant to make viewers question what (and who) they can trust on screen. But TV anchors already sound like robots, so maybe they should let the ratings decide.
OpenAI had a dream — that historical figures could freely appear in Sora videos. The estate of Martin Luther King Jr. quickly disagreed, asking the company to remove his likeness after users made disrespectful clips. OpenAI hit pause and promised stronger guardrails for historical figures in its latest walk-back, as Hollywood and rightsholders push back on AI-generated content. The move comes just weeks after Sam Altman said OpenAI “isn’t the moral police of the world” and planned to loosen restrictions for adults.
From tech to airlines, companies are blaming AI for mass layoffs, but some experts say that’s just good old-fashioned corporate spin. Oxford’s Fabian Stephany argues that AI has become the perfect scapegoat for overhiring and cost-cutting, allowing firms to look “innovative” while trimming the fat. Despite the panic, data from Yale and the New York Fed show no mass AI job apocalypse, with most companies using it to retrain workers rather than replace them. So far, the robots are still mostly taking tasks, not jobs.
▼
In Partnership With Morning Brew
Business news as it should be.
Join 4M+ professionals who start their day with Morning Brew—the free newsletter that makes business news quick, clear, and actually enjoyable.
Each morning, it breaks down the biggest stories in business, tech, and finance with a touch of wit to keep things smart and interesting.
▼
How to AI: Claude Code Debuts With Promises of Streamlined Dev Work

Anthropic just launched Claude Code on the Web, a new tool that lets teams delegate coding work right from the browser. Now in beta, it runs in the cloud, so you can hand off multiple projects at once and get clear summaries when the work’s done.
What it does:
Tackles tasks in parallel so you can move faster across projects.
Understands plain language, turning simple instructions into working code.
Tracks progress in real time so you can step in anytime.
Keeps data secure with locked-down cloud environments.
Works on mobile, letting you review or kick off work on the go.
Unlike most coding assistants out there, Claude Code is built for seasoned developers, so if you’re into vibe coding, this might not be your scene. That said, the more advanced features are still in their early days, so we’ll have to see how well they perform in everyday use. If you’re curious what the hand-off loop looks like one layer down, see the sketch below.
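Claude Code on the Web is a hosted product, not an API, but the “plain instructions in, working code out” loop it automates is easy to approximate with Anthropic’s Python SDK. Here’s a minimal sketch; the model ID and the example task are placeholders we chose for illustration, not anything tied to the product, so check Anthropic’s docs for current model names:

# Minimal sketch of a plain-language coding hand-off using Anthropic's Python SDK.
# This is NOT Claude Code on the Web itself, just the underlying Messages API.
# Assumes: `pip install anthropic` and ANTHROPIC_API_KEY set in your environment.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY automatically

# The kind of plain-English instruction you'd hand off to the tool
task = "Write a Python function that deduplicates a list while preserving order."

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder; swap in a current model ID
    max_tokens=1024,
    messages=[{"role": "user", "content": task}],
)

# The reply comes back as content blocks; the first text block holds the code.
print(response.content[0].text)

That single request is, roughly, what the web tool runs on loop across your projects: take an instruction, produce code, report back, repeat.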
▼
That’s it for today! Have a great week, and we’ll catch you Friday.





