📉 Nvidia Chips And The Dip

Plus New Gemini Features, Legions Of Optimus Bots, AI Drive-Thrus, And More!

Welcome to another edition of the Neural Net…

Where we demystify AI—because someone has to translate what the robots are saying.

Nvidia Stock Down As Investors Trade Hardware For Software

The race for profitable AI is heating up—but the focus is shifting from hardware infrastructure to software solutions, signaling a new stage in the AI investment landscape.

Nvidia, nearly synonymous with AI for powering many of the advancements we rely on today, is facing new pressure as investors look beyond hardware. While Nvidia's chips drove the initial surge of AI growth, investors increasingly see software as the logical next step. As AI technology matures and its real-world applications become clearer, software looks more and more like the primary driver of long-term profitability.

Well-known AI software-focused companies are benefiting: Palantir Technologies (+28.7% YTD), IBM (+13.0% YTD), and CrowdStrike (+7.3% YTD) are all up year to date. Nvidia, by contrast, is down 12.2% YTD. Even Nvidia's recent conference, featuring the launch of faster and cheaper AI chips, wasn't enough to stop the slide; its stock dropped more than 3% immediately after the announcement.

For Nvidia and other hardware leaders, adaptation is critical. Nvidia is responding by pivoting toward AI software applications, including agent tools that are now helping fast-food brands streamline operations (more on this below).

Meanwhile, AI Investment Faces Pressure at Home and Abroad

Investor caution has been heightened by broader macroeconomic uncertainty in the U.S., prompting a retreat from AI stocks. CNBC's Jim Cramer argues that AI stocks are simply too expensive amid the economic unease, and that investors are no longer willing to pay premium valuations.

China's AI advancements are also adding to the pressure. Jack Ma's Ant Group claims its newly unveiled semiconductor chip is capable of cutting model-training costs by 20%. Add in China's DeepSeek GenAI model, which reportedly cost significantly less to train than OpenAI's equivalents, and the need for such eye-watering AI investment in the U.S. looks increasingly uncertain.

U.S. companies are under growing pressure to demonstrate that their substantial AI investments are worth the premium—but that's just the cost of being on the bleeding edge: someone innovates, others replicate at half the price.

Heard In The Server Room

Google's Gemini Live is leveling up the AI assistant game with some seriously smart new features that let the tech peek at your screen and analyze live video in real time. Think asking Gemini about an article you're reading or a text you're sending, or pulling up your camera and asking for help choosing between paint colors. Rolling out first to Gemini Advanced subscribers, these capabilities from "Project Astra" put Google ahead of the pack in the competitive AI assistant landscape. With Apple announcing further delays to Siri, Google has solidified its lead in the AI-powered smartphone race.

In a recent all-hands, Musk indicated that Tesla is prepping to roll out 5,000 Optimus humanoid robots in 2025, dramatically comparing the initial batch to a "Roman legion" and projecting a bold scale-up to 50,000 units by 2026. The tech titan believes Tesla's ability to manufacture at scale will allow these bots to change the world, predicting their impact could be "10 times bigger than the biggest product ever made."

Netflix cofounder Reed Hastings just gave Bowdoin College a $50 million AI touchdown. The donation launches the Hastings Initiative for AI and Humanity, aimed at beefing up AI education through faculty hiring, research support, and critical discussions. Hastings, a Bowdoin alum ('83), wants to prep future leaders for an AI-driven world with a mix of technical skills and ethical insight. Or maybe just use AI to free up more time for Netflix binging.

Yum Brands (KFC, Taco Bell, and Pizza Hut) Latest To Try AI Drive-Thrus

Fast-food giant Yum Brands is going all-in on AI, teaming up with Nvidia to give its drive-thrus a high-tech makeover. The company behind Taco Bell, KFC, and Pizza Hut is rolling out AI-powered voice ordering at 500 restaurants this quarter alone, signaling a major tech pivot that's about to "Live Mas" in the world of quick-service technology.

The game plan? Ditch human order-takers and embrace digital channels. Yum's already crushing it, with digital orders (whether from AI drive-thrus or mobile and online ordering) jumping from 19% in 2019 to over 50% today. Joe Park, the company's tech chief, argues that digital ordering enables restaurants to personalize pitches and entice hungry customers to spend more.

This Isn't Your Average, Off-the-Shelf Tech Stack

We covered McDonald's attempt at AI drive-thrus using third-party tech from IBM, which wasn't exactly a hit. Yum is taking a decidedly different approach: developing its AI capabilities in-house, using Nvidia's development tools and frameworks, a strategic move that proves "No One Out-Techs the Hut". By leveraging Nvidia's NIM microservices and working closely with the chip giant, Yum's 2,000-strong tech team is building custom AI solutions that give it more control, potentially lower costs, and a competitive edge.

The AI wishlist goes way beyond voice ordering, too. Yum is exploring computer vision to catch order mistakes, developing AI that can analyze online customer reviews to generate actionable insights for managers, and building a proprietary platform called "Byte by Yum" to house these innovations.

It's a bold bet on technology that could reshape the quick-service restaurant industry, proving that in the world of fast food, the future is not just fast but frighteningly smart, and definitely "Finger Lickin' Good." The only question is how well these systems will handle the spirited clientele that shows up at 2 AM.

AI Me Anything - Answering Your Questions!

❝

How can you detect fake or inaccurate answers to questions or research given by AI? Short of checking all the sources and recalculating everything manually, is there an AI that checks the accuracy of AI?

Generative AI is such a powerful tool that it's easy to get lulled into a false sense of security, especially when you're not an expert in the field and can't quickly catch mistakes. Hallucinations, when an LLM outputs false or misleading information, can be frustrating and discourage users. While systematically reducing these hallucinations is a major focus today, current models are still plagued by the issue. Here are a few tips to help speed up your fact-checking and give you confidence to move forward with your GenAI-assisted research.

How to Fact-Check Your LLM Output

  • Use AI tools like Originality.ai, which offers to fact-check your LLM output, but comes with this important disclaimer: "This feature WILL sometimes provide inaccurate responses…" So you still have to fact-check your fact-checker.

  • Enter the prompt into multiple LLMs and cross-check the answers (see the sketch after this list). Different models have different biases, training data, and assumptions, which can lead to different answers.

  • Provide guardrails in your prompts - restricting research to public information, providing preferred data sources, and limiting the scope per prompt can reduce hallucinations.

  • Be mindful of AI training cutoffs - most LLMs operate on information that can be months or even years out of date, depending on their training timelines.

  • Brute force it - the only way to know for certain that your results are accurate is to ask the LLM to provide citations for all of its outputs, and then manually verify those sources.
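
If you want to make the cross-check tip concrete, here's a minimal sketch assuming the OpenAI Python SDK (openai>=1.0) and an OPENAI_API_KEY in your environment. The model names and the example prompt are just placeholders; in practice you'd spread the same prompt across entirely different providers, not two models from one vendor.

```python
# Minimal cross-check sketch, assuming the OpenAI Python SDK (openai>=1.0)
# and an OPENAI_API_KEY set in the environment. Model names are placeholders;
# ideally, spread the prompt across different providers entirely.
from openai import OpenAI

client = OpenAI()

PROMPT = (
    "When did the James Webb Space Telescope launch? "
    "Answer in one sentence and include a citation for your source."
)

# Ask the same question of more than one model and keep the answers side by side.
answers = {}
for model in ["gpt-4o", "gpt-4o-mini"]:  # placeholder model names
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
    )
    answers[model] = response.choices[0].message.content

for model, answer in answers.items():
    print(f"--- {model} ---\n{answer}\n")

# If the answers disagree on a key fact, or the citations don't check out
# when you open them, treat that as your cue to verify manually.
```

Disagreement between models doesn't tell you which one is wrong, but it's a cheap signal for where your manual verification time is best spent.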

Beyond some of these basic options, quality control in GenAI is very much up to the user (and Google).

How did you like today's newsletter?


  • ❓Have a question or topic you’d like us to discuss? Submit it to our AMA!

  • ✉️ Want more Neural Net? Check out past editions here.

  • 💪 Click here to learn more about us and why we started this newsletter

  • 🔥 Like what you read? Help us grow the community by sharing the Neural Net!

That’s all folks! Have a great week and catch you Friday.