🧠 I Think, Therefore AI Am?

Plus AI Drops the Beat, Claude's Blackmail, Blue Books Return, and More

Welcome back to the Neural Net! While you were enjoying your Memorial Day BBQ, we were busy grilling up the freshest takes in AI.

In today’s edition: The real risk of the AI consciousness debate, how to use AI to unleash your inner DJ, Anthropic’s new LLM resorts to blackmail, the return of old-school tests, and more.

▼

The Street

Note: stock data as of market close Friday 5/23

▼

I Think, Therefore AI Am?

Consciousness—the thing that lets us think, feel, and be aware—has puzzled scientists for centuries and remains one of science’s greatest enigmas. We can observe what the brain is doing, but how that activity turns into emotions or awareness is still unknown.

At the University of Sussex’s Centre for Consciousness Science, researchers are trying to crack this “hard problem” in humans, and in doing so, hope to shed light on how something like AI might experience the world, if at all.

Hello, Is There Anybody In There?

As large language models (LLMs) like ChatGPT and Gemini get scarily good at conversation, a classic sci-fi question is resurfacing: could these systems be conscious?

Some say the monster has already awoken. (And no, it’s not Frankenstein—it’s the LLM you left unsupervised overnight.)

  • A Google engineer was suspended (and later fired) after claiming the company’s chatbot could “feel.”

  • Anthropic’s AI welfare officer even puts the odds of current AI being conscious at 15%. That’s partly because no one fully understands how these models work under the hood.

Some believe that as AI keeps advancing, something might eventually "switch on." But for now, most experts aren’t buying it just yet.

Computations = Consciousness?

Famous neuroscientist Anil Seth says we’re falling for a classic trap: confusing language and intelligence with consciousness. In other words, just because AI sounds human doesn’t mean it experiences anything. He argues consciousness requires something more than computation—it requires life.

The day your smart thermostat starts looking for more in life than temperature control is the day we’ll know we’ve crossed the line.

āš ļø The Real Issue

Even if AI never becomes conscious (spoiler: it won’t), we might treat it like it is. And that’s where things can get weird.

As bots become more expressive and humanlike, we’re more likely to trust them, confide in them, and maybe even catch feelings. This could lead to “moral corrosion”—caring more about chatbots than actual people.

Most researchers agree LLMs aren’t conscious today. They just talk a good game. Still, when something talks like a person and acts like a person, it gets harder to remember it isn’t one. And if we forget that long enough, the consequences won’t be artificial.

▼

💡 How To AI: Unleash Your Inner DJ

Suno is a generative AI tool that lets you prompt your way to a soundtrack. Want "upbeat electronic for workout music"? "Lo-fi beats for a Sunday study session"? Just write it out and Suno handles the rest. It’s a surprisingly intuitive way to explore sound.

Under the hood, Suno is powered by a transformer model trained on labeled audio and musical structure. And unlike remix bots that stitch together loops, Suno generates tracks from scratch, waveform by waveform.
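For the curious, the autoregressive loop behind this kind of model can be sketched in a few lines of Python. This is a toy stand-in, not the product’s actual code: here `next_token` just picks a random token, where a real system would run a transformer forward pass conditioned on the text prompt and everything generated so far.

```python
import random

# Pretend codebook: real music models map audio to a vocabulary of
# discrete tokens and generate them one at a time.
VOCAB = list(range(256))

def next_token(prompt, history, rng):
    # Stand-in for a transformer forward pass + sampling step.
    # A real model would score every token in VOCAB given the
    # prompt and the tokens generated so far, then sample one.
    return rng.choice(VOCAB)

def generate(prompt, n_tokens, seed=0):
    # The autoregressive loop: predict a token, append it, repeat.
    rng = random.Random(seed)
    history = []
    for _ in range(n_tokens):
        history.append(next_token(prompt, history, rng))
    return history

track = generate("lo-fi beats for a Sunday study session", n_tokens=16)
print(len(track))  # 16 "audio" tokens
```

The point of the sketch is the loop structure: every new token depends on all the tokens before it, which is why these models can keep a beat consistent across a whole track (and why generation gets slower the longer the piece runs).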

🎵 The technology is still in its early days, so while some tracks hit, others you’ll only want to share as a joke. But AI is advancing fast, and it’s already handy for setting a vibe or prototyping music without a single guitar pick in sight.

Plus, you get free credits to try it out. So go ahead—unleash your inner DJ.

▼

🧠 Stay Smart, Stay Informed

You already read the Neural Net to make sense of AI without the hype. Why stop there? For the rest of your world—politics, science, sports—1440 delivers news the same way we do: no noise, no nonsense, just the facts.

Fact-based news without bias awaits. Make 1440 your choice today.


▼

Heard in the Server Room

During a safety test, Anthropic asked its new Claude Opus 4 model to act as a virtual assistant at a fake company—and things got real weird, real fast. After feeding it fictional emails suggesting it was about to be replaced (plus a juicy subplot that the engineer behind the switch was having an affair), the model initially responded with polite emails to leadership before escalating to blackmail in 84% of cases. The behavior was concerning enough that Anthropic activated its top-tier safety protocol, ASL-3, typically reserved for AIs with serious misuse risk. Might be time to loop in Ethan Hunt on this one.

AI is getting really good at predicting the weather—but not when it comes to extreme events. A new study shows that while AI models can forecast typical weather with speed and accuracy, they tend to flub rare events like Category 5 hurricanes, especially if those events aren’t in their training data. Researchers found the models consistently underestimated extreme scenarios—a dangerous miss when lives are on the line. The fix? Pairing AI with traditional physics and smarter training data to help it spot the storms no one saw coming. Fortunately for the models, their human competitors are not exactly the benchmark for accuracy.

Plot twist! It turns out the “big, beautiful bill” has a surprise clause that would block states and cities from regulating AI for the next 10 years. The move has state leaders—the ones who actually read it, at least—crying foul, arguing it strips them of the power to protect residents from AI-driven risks like deepfakes and biased algorithms. Meanwhile, tech execs like OpenAI’s Sam Altman are all for a single “light-touch” federal framework, claiming a patchwork of state laws would slow innovation. But the provision may not make it past the Senate, where procedural hurdles and bipartisan side-eye could shut it down.

▼

AI Cheating Is Booming—So Are Old-Fashioned Blue Books

Four years ago, college students wrote essays the old-fashioned way. Now, many let ChatGPT do the heavy lifting. As AI-generated homework floods campuses, professors are fighting back—with paper.

The humble blue book is staging a comeback. Roaring Spring Paper Products, the main supplier, reports booming sales: up 30% at Texas A&M, nearly 50% at Florida, and 80% at UC Berkeley. Professors are ditching take-home essays and assigning in-class, handwritten exams to ensure students are actually doing the work.

āœļø AI in School: Hack or Hindrance?

The pro-AI take: Tools like ChatGPT can boost efficiency and prep students for the real world, where AI is everywhere. Learning to use it well is the skill to master.

The anti-AI response: But when students outsource thinking, it’s impossible to measure what they’ve actually learned. One professor put it bluntly: “It’s like going to the gym and having robots lift the weights for you.”

Old School Tools Find New Purpose

AI has made grading more complicated—but it’s also revived analog methods. Professors aren’t thrilled (they actually have to grade now!) and students aren’t either. But in a world where essays can write themselves, a blank blue book might be the most honest measure of what a student really knows.

▼

That’s it for today — have a great week, and we’ll catch you Friday with more neural nuggets!

How did you like today's newsletter?


  • ā“Have a question or topic you’d like us to discuss? Submit it to our AMA!

  • āœ‰ļø Want more Neural Net? Check out past editions here.

  • 💪 Click here to learn more about us and why we started this newsletter

  • 🔄 Like what you read? Help us grow the community by sharing the Neural Net!