🧠 I Think, Therefore AI Am?
Plus AI Drops the Beat, Claude's Blackmail, Blue Books Return, and More

Welcome back to the Neural Net! While you were enjoying your Memorial Day BBQ, we were busy grilling up the freshest takes in AI.
In today's edition: The real risk of the AI consciousness debate, how to use AI to unleash your inner DJ, Anthropic's new LLM resorts to blackmail, the return of old-school tests, and more.
▼
The Street

Note: stock data as of market close Friday 5/23.
▼
I Think, Therefore AI Am?

Consciousness, the thing that lets us think, feel, and be aware, has puzzled scientists for centuries. It's one of science's greatest enigmas. We can observe what the brain is doing, but how that activity turns into emotions or awareness is still unknown.
At Sussex University's Centre for Consciousness Science, researchers are trying to crack this "hard problem" in humans, and in doing so, hope to shed light on how something like AI might experience the world, if at all.
Hello, Is There Anybody In There?
As large language models (LLMs) like ChatGPT and Gemini are getting scarily good at conversation, a classic sci-fi question is resurfacing: could these systems be conscious?
Some say the monster has already awoken. (And no, it's not Frankenstein; it's the LLM you left unsupervised overnight.)
A Google engineer was suspended (and later fired) after claiming the company's chatbot could "feel."
Anthropic's AI welfare officer even puts the odds of current AI being conscious at 15%. That's partly because no one fully understands how these models work under the hood.
Some believe that as AI keeps advancing, something might eventually "switch on." But for now, most experts aren't buying it.
Computations = Consciousness?
Famous neuroscientist Anil Seth says we're falling for a classic trap: confusing language and intelligence with consciousness. In other words, just because AI sounds human doesn't mean it experiences anything. He argues consciousness requires something more than computation; it requires life.
The day your smart thermostat starts looking for more in life than temperature control, we'll know we've crossed the line.
⚠️ The Real Issue
Even if AI never becomes conscious (spoiler: it won't), we might treat it like it is. And that's where things can get weird.
As bots become more expressive and humanlike, we're more likely to trust them, confide in them, and maybe even catch feelings. This could lead to "moral corrosion": caring more about chatbots than actual people.
Most researchers agree LLMs aren't conscious today. They just talk a good game.
Still, when something talks like a person and acts like a person, it gets harder to remember it isn't one. And if we forget that long enough, the consequences won't be artificial.
▼
💡 How To AI: Unleash Your Inner DJ
Suno is a generative AI tool that lets you prompt your way to a soundtrack. Want "upbeat electronic for workout music"? "Lo-fi beats for a Sunday study session"? Just write it out and Suno handles the rest. It's a surprisingly intuitive way to explore sound.
Under the hood, Suno is powered by a transformer model trained on labeled audio and musical structure. And unlike remix bots that stitch together loops, Suno generates tracks from scratch, waveform by waveform.
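Curious what that prompt-to-track idea looks like in practice? Here's a minimal sketch using the open-source MusicGen model from Hugging Face's transformers library as a stand-in; it isn't Suno's actual model or API, and the prompt and filename below are just illustrative.

```python
# Illustrative only: text-to-music with the open-source MusicGen model.
# This is NOT Suno's model or API; it just shows the same prompt-in, audio-out idea.
import scipy.io.wavfile
from transformers import AutoProcessor, MusicgenForConditionalGeneration

# Load a small text-to-music model and its text processor.
processor = AutoProcessor.from_pretrained("facebook/musicgen-small")
model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")

# Describe the vibe you want, just like a prompt in Suno.
inputs = processor(
    text=["lo-fi beats for a Sunday study session"],
    padding=True,
    return_tensors="pt",
)

# Generate a short clip (more tokens = a longer track).
audio = model.generate(**inputs, do_sample=True, max_new_tokens=256)

# Write the generated waveform out as a WAV file you can listen to.
sample_rate = model.config.audio_encoder.sampling_rate
scipy.io.wavfile.write("sunday_study.wav", rate=sample_rate, data=audio[0, 0].numpy())
```

Swap in your own prompt and bump max_new_tokens for a longer clip.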
🎵 The technology is still in its early days, so while some tracks hit, others make you want to share them as a joke. But with AI's rapid advancements, it's getting better at setting a vibe or prototyping music without a single guitar pick in sight.
Plus, you get free credits to try it out. So go ahead and unleash your inner DJ.
▼
🧠 Stay Smart, Stay Informed
You already read the Neural Net to make sense of AI without the hype. Why stop there? For the rest of your world (politics, science, sports), 1440 delivers news the same way we do: no noise, no nonsense, just the facts.
Fact-based news without bias awaits. Make 1440 your choice today.
Overwhelmed by biased news? Cut through the clutter and get straight facts with your daily 1440 digest. From politics to sports, join millions who start their day informed.
▼
Heard in the Server Room
During a safety test, Anthropic asked its new Claude Opus 4 model to act as a virtual assistant at a fake company, and things got real weird, real fast. After feeding it fictional emails suggesting it was about to be replaced (plus a juicy subplot that the engineer behind the switch was having an affair), the model initially responded with polite emails to leadership before escalating to blackmail in 84% of cases. The behavior was concerning enough that Anthropic activated its top-tier safety protocol, ASL-3, typically reserved for AIs with serious misuse risk. Might be time to loop in Ethan Hunt on this one.
AI is getting really good at predicting the weather, but not when it comes to extreme events. A new study shows that while AI models can forecast typical weather with speed and accuracy, they tend to flub rare events like Category 5 hurricanes, especially if those events aren't in their training data. Researchers found the models consistently underestimated extreme scenarios, a dangerous miss when lives are on the line. The fix? Pairing AI with traditional physics and smarter training data to help it spot the storms no one saw coming. Fortunately for the models, their human competitors are not exactly the benchmark for accuracy.
Plot twist! It turns out the "big, beautiful bill" has a surprise clause that would block states and cities from regulating AI for the next 10 years. The move has state leaders (the ones who actually read it, at least) crying foul, arguing it strips them of the power to protect residents from AI-driven risks like deepfakes and biased algorithms. Meanwhile, tech execs like OpenAI's Sam Altman are all for a single "light-touch" federal framework, claiming a patchwork of state laws would slow innovation. But the provision may not make it past the Senate, where procedural hurdles and bipartisan side-eye could shut it down.
▼
AI Cheating Is Booming, and So Are Old-Fashioned Blue Books

Four years ago, college students wrote essays the old-fashioned way. Now, many let ChatGPT do the heavy lifting. As AI-generated homework floods campuses, professors are fighting back with paper.
The humble blue book is staging a comeback. Roaring Spring Paper Products, the main supplier, reports booming sales: up 30% at Texas A&M, nearly 50% at Florida, and 80% at UC Berkeley. Professors are ditching take-home essays and assigning in-class, handwritten exams to ensure students are actually doing the work.
⚖️ AI in School: Hack or Hindrance?
The pro-AI take: Tools like ChatGPT can boost efficiency and prep students for the real world, where AI is everywhere. Learning to use it well is the skill to master.
The anti-AI response: When students outsource their thinking, it's impossible to measure what they've actually learned. One professor put it bluntly: "It's like going to the gym and having robots lift the weights for you."
Old-School Tools Find New Purpose
AI has made grading more complicated, but it's also revived analog methods. Professors aren't thrilled (they actually have to grade now!) and students aren't either. But in a world where essays can write themselves, a blank blue book might be the most honest measure of what a student really knows.
▼
That's it for today. Have a great week, and we'll catch you Friday with more neural nuggets!