
Welcome back to the Neural Net! It’s International Talk Like a Pirate Day but luckily this is a text-only publication, matey.
In today’s edition: MIT + IBM publish new research on how to scale LLMs, Meta announces a new generation of smart glasses, Nvidia faces a chip ban in China and competition at home, and more.
▼
The Street

note: stock data as of last market close
▼
💰 MIT & IBM Just Solved One of AI’s Biggest Money Problems

Training a cutting-edge AI model today is like having to buy a multimillion-dollar house sight unseen or a car without ever test-driving it. You commit the money upfront and only learn later whether it performs as promised.
Researchers at the MIT-IBM Watson AI Lab just changed the rules. They developed a universal guide to “scaling laws”, a method for using smaller, cheaper AI models to accurately predict how the giant, expensive ones will perform. It’s the difference between gambling millions on blind faith and running a real inspection before you buy.
What the Research Found
After analyzing 485 models across 40 AI families, including GPT, LLaMA, and Bloom, the team surfaced practical ways to forecast performance without breaking the bank:
Variety beats size: Training multiple small and mid-sized models improves prediction accuracy more than one giant run.
Partial training works: training a large model to just 30% completion can be enough to predict its final performance.
Mid-training checkpoints are gold: snapshots taken during training improve prediction accuracy at no extra cost.
Cross-model insights hold up: Performance trends often apply across different model families, not just one.
Together, these findings form a playbook for efficient AI development that helps teams train models smarter, not just bigger.
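To make the idea concrete, here's a minimal sketch of how a scaling-law prediction works in practice. It assumes the common power-law form loss(N) = a·N^(-b) + c and uses made-up losses from hypothetical small and mid-sized training runs; the real research fits far richer curves across hundreds of models, but the principle is the same: fit on cheap runs, extrapolate to the expensive one.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical results from small and mid-sized training runs.
# Values are illustrative, not from the paper.
params = np.array([10, 30, 100, 300, 1000], dtype=float)  # model sizes (millions of params)
losses = np.array([3.9, 3.4, 2.9, 2.55, 2.2])             # measured validation losses

# A common scaling-law form: loss falls as a power of model size,
# approaching an irreducible floor c.
def scaling_law(n, a, b, c):
    return a * n ** (-b) + c

(a, b, c), _ = curve_fit(scaling_law, params, losses, p0=(5.0, 0.3, 1.5))

# Extrapolate to a 70B-parameter model BEFORE committing the training budget.
predicted = scaling_law(70_000, a, b, c)
print(f"Predicted loss at 70B params: {predicted:.2f}")
```

The payoff is the last two lines: the cost of the forecast is a handful of small runs plus a curve fit, versus millions of dollars to find out the hard way.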
Why This Is Big for Business
Cuts costs dramatically: train less for similar results.
Democratizes AI innovation: smaller companies can better compete with the big players.
Accelerates timelines: fail fast, learn faster, and quickly identify what works.
The bottom line: this research could make AI innovation cheaper and faster. So what happens to the massive data centers Big Tech is pouring billions into, those headline-grabbing projects that seem to be announced almost every day? In a world where everyone is trying to do more with less, studies like this could shift the AI conversation from “how big can we go?” to “how smart can we scale?”
▼
In Partnership With Masters in Marketing
How 1,500+ Marketers Are Using AI to Move Faster in 2025
Is your team using AI like the leaders—or still stuck experimenting?
Masters in Marketing’s AI Trends Report breaks down how top marketers are using tools like ChatGPT, Claude, and Breeze to scale content, personalize outreach, and drive real results.
Inside the report, you’ll discover:
What AI use cases are delivering the strongest ROI today
How high-performing teams are integrating AI into workflows
The biggest blockers slowing others down—and how to avoid them
A 2025 action plan to upgrade your own AI strategy
Download the report. Free when you subscribe to the Masters in Marketing newsletter.
Learn what’s working now, and what’s next.
▼
Heard in the Server Room
Meta unveiled its latest lineup of smart glasses, including the Ray-Ban Display, Ray-Ban Gen 2, and Meta Oakley Vanguards (think Patrick Mahomes). New features include a built-in lens screen for texts, maps, translations, and calls, along with a neural wristband for gesture controls, and a water-resistant design. Only the wearer can see the display, making the glasses more private, practical, and closer to replacing quick tasks usually done on a phone.
Nvidia CEO Jensen Huang said he is “disappointed” after reports that China banned purchases of its RTX Pro 6000D AI chips as U.S.-China tensions heat up. Nvidia has told analysts to leave China out of forecasts given the uncertainty, following earlier U.S. export limits and a recent deal allowing some chip sales under strict conditions. For those keeping score at home, that means Nvidia got banned by the U.S., unbanned, then banned again, this time by China.
AI chip startup Groq (not to be confused with xAI’s chatbot Grok) just raised $750M at a $6.9B valuation, more than doubling in a year. Groq’s language processing units (its alternative to GPUs) are built for AI inference and are offered via cloud or on-prem racks. Usage has surged to more than 2 million developers running models from Meta, Google, and OpenAI. The rapid growth and adoption of these new chips add to what’s already been a rough week for Nvidia.
▼
In Partnership With the Marketing Millennials
The best marketing ideas come from marketers who live it. That’s what The Marketing Millennials delivers: real insights, fresh takes, and no fluff. Written by Daniel Murray, a marketer who knows what works, this newsletter cuts through the noise so you can stop guessing and start winning. Subscribe and level up your marketing game.
▼
💡How To AI: Remaster Old Photos With Remini

Remini AI gives your old, blurry photos a serious glow-up, turning grainy snapshots into sharp, shareable pics. It can smooth faces, swap backgrounds, and even crank out headshots that look ready for a corner office. It’s also great for rescuing those low-res iPhone shots from 2018 and turning them into crisp masterpieces.
Just upload a photo (or video) to their app or website, pick an enhancement, and let the AI do its thing. Like most new AI tools, you can try it for free, then subscribe if you can’t live without it.
▼
That’s it for today! Have a great weekend and we’ll catch you next week.




