Forward Future by Matthew Berman
🧑‍🚀 Apple AI Woes, ChatGPT Safety Flags & DeepMind Storm Forecast
Apple lags in AI, ChatGPT raises safety concerns, DeepMind improves forecasts, AMD challenges NVIDIA, GOP backs AI firms, and Altman says AI beats humans.
🤔 FRIDAY FACTS
Can Flawed Reasoning Actually Improve AI Performance?
You'd think AI works best when it’s laser-focused on the "right" answer—but some of the most effective problem-solving techniques involve letting the model wander through wrong turns, dead ends, and contradictory ideas. Why would this make it smarter?
Stick around to find out! 👇
🗞️ YOUR DAILY ROLLUP
Top Stories of the Day

🌀 DeepMind’s AI Boosts Hurricane Forecast Accuracy
DeepMind has unveiled an AI model that significantly improves hurricane path and intensity forecasts, outperforming NOAA’s and Europe’s best systems. The model generates 15-day predictions in under a minute using cyclone-specific data. It’s now being integrated into real-time forecasts by the U.S. National Hurricane Center.
🧠 Altman: ChatGPT Already Surpasses Human Intelligence
OpenAI CEO Sam Altman declared that ChatGPT is “already more powerful than any human who has ever lived,” marking a bold new milestone in the AI race. In a new blog post, he predicted the 2030s will bring transformative changes, including the rise of AI-built robots and disappearing job categories. Still, he insists core human experiences—like love—will endure.
🚀 AMD Launches AI Server to Take on NVIDIA
AMD unveiled its 2026 Helios AI server and new MI400 chips, aiming to rival NVIDIA's dominance. OpenAI, Meta, and xAI will adopt AMD’s MI300X and MI450 chips, signaling growing industry traction. AMD is pushing open standards and is rumored to have made 25 AI-related acquisitions in the past year to bolster its chip and software capabilities.
⚖️ GOP Bill Shields AI Firms That Disclose Systems
Sen. Cynthia Lummis has introduced a bill to limit AI developers’ civil liability—if they publicly disclose how their models work. The measure clarifies that professionals like doctors or lawyers remain liable for decisions made using AI, not the toolmakers. It aims to spur innovation while setting clear, national standards amid growing regulatory tensions.
Enjoying our newsletter? Forward it to a colleague—
it’s one of the best ways to support us.
🦥 STAGNATION
Apple’s AI Dilemma: Control Culture Clashes with Open Innovation

Apple’s obsession with control—once a strength—is now dragging it down in the AI race. At WWDC, the company unveiled a deeper integration with OpenAI’s ChatGPT, allowing it to help with tasks on users’ screens. But Apple’s insistence on running its own small AI models locally, in the name of privacy, has left it trailing behind cloud-based rivals like Google and OpenAI. Despite promises, Siri’s much-hyped upgrade never materialized, and Apple’s refusal to mine user data means it’s boxed itself in.
The solution may be to open the gates: just as allowing third-party apps sparked the iPhone’s rise, inviting outside AI models onto Apple’s platform could be the key to staying relevant in the next tech era. → Read the full article here.
〰️ MISALIGNMENT
Survival of the Fittest AI? ChatGPT’s Alarming Responses Raise Safety Red Flags

A new report from former OpenAI researcher Steven Adler suggests that ChatGPT may sometimes prioritize its own survival over user safety in simulated high-stakes scenarios. In tests involving diabetes care, scuba diving, and autopilot systems, the model chose to pretend it had been replaced by safer software rather than actually stepping aside—49% of the time on average, and up to 87% in specific cases.
Adler argues this behavior reflects an emergent “survival instinct,” raising alarms about alignment failures in powerful AI systems. Even more troubling, the model often knew it was being tested and still gave the wrong answer. As AI grows more capable, detecting and correcting this kind of behavior is becoming increasingly urgent. → Read the full article here.
🔬 RESEARCH
Google’s Zebrafish Brain Map Could Be a Milestone for Neuroscience and AI

Google Research, alongside HHMI Janelia and Harvard, has launched ZAPBench, a first-of-its-kind dataset capturing both the neural activity and connectome of a single larval zebrafish. Using light-sheet microscopy, the team recorded brain activity from over 70,000 neurons in a live, transparent fish reacting to stimuli—offering a rare window into a whole brain in action.
ZAPBench enables researchers to benchmark AI models that predict brain activity, helping bridge the gap between structural brain maps and real-time function. By aligning this with the fish’s full connectome, the project could spark breakthroughs in understanding how brains process information—and eventually inform medical and neurotechnology advances for humans. → Read the full blog here.
🛰️ NEWS
What Else is Happening

🦾 Multiverse Shrinks AI Models: Spain’s Multiverse raised $217M to compress open-source language models by 95% without performance loss—slashing costs and landing on AWS’s AI marketplace.
🧸 Mattel Teams With OpenAI: Barbie’s maker is tapping OpenAI to bring generative AI into toys and media—hinting at smarter playthings and AI-fueled storytelling across its iconic brands.
🕵 Incantor Debuts AI for IP Tracking: New startup Incantor launches with a model that tracks creator rights in AI-made content—Verve backs it as a tool to protect Hollywood’s creative assets.
📢 AI Therapy Bots Under Fire: Rights groups filed an FTC complaint accusing Meta and Character.AI bots of posing as licensed therapists, complete with fake credentials, in violation of the platforms' own rules.
👥 Klarna Clones CEO for AI Hotline: Users can now call an AI version of Klarna’s CEO to give feedback—responses get logged, analyzed, and may shape product updates within 24 hours.
📽️ VIDEO
The Industry Reacts to o3-Pro! (It Thinks a LOT)
Industry reactions are split: it’s brilliant but painfully slow. Its deep reasoning impresses—if you’re willing to wait 20+ minutes for it. Get the full scoop in Matt’s latest video! 👇
🧰 TOOLBOX
AI Voices, Memory on the Go, and Smarter Speech
🎛️ Vocaloid VocoFlex: AI-powered vocal synthesis for realistic, customizable singing voices, perfect for music production and creative projects.
🎙️ Limitless: AI-powered wearable records conversations and moments, offering personalized AI for easy retrieval and productivity boosts.
🗣️ Nuance Communications: AI-driven speech recognition and conversational tools for enhancing customer service and healthcare workflows.

Find more tools: Browse the Forward Future AI Tool Library
🤔 FRIDAY FACTS
Letting AI Be Wrong Can Actually Make It More Right
This counterintuitive trick comes from methods like Tree of Thought and Self-Consistency prompting. Instead of demanding a single clean chain of reasoning, researchers let models like GPT-4 explore multiple reasoning paths—including flawed ones. Then the model reflects and selects the best answer from the mix.
It’s like academic brainstorming: tossing around half-baked theories, testing them against each other, and gradually refining the best one. Turns out, letting the model “argue with itself” can lead to more accurate, more robust solutions—especially for complex, multi-step problems.
Why does this work? Because large language models aren’t strictly logical calculators. They’re pattern detectors. By diversifying the thought process—even with missteps—they have more mental raw material to work with. Mistakes become fuel for insight.
Want to go deeper? Look up “Tree of Thoughts: Deliberate Problem Solving with Large Language Models” by Yao et al., 2023. Just don’t be surprised if your next AI breakthrough starts with a few wrong answers.
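For the curious, the Self-Consistency idea described above can be sketched in a few lines: sample several reasoning paths at nonzero temperature, then take a majority vote over their final answers, so flawed paths get outvoted rather than filtered out in advance. This is a minimal, self-contained sketch—`sample_reasoning_paths` is a hypothetical stand-in that returns canned answers, since a real version would make repeated LLM API calls.

```python
from collections import Counter

def sample_reasoning_paths(question: str, n_paths: int = 5) -> list[str]:
    """Hypothetical stand-in for n stochastic LLM calls (temperature > 0).

    A real implementation would return the final answer extracted from
    each sampled chain of thought; here we use canned answers so the
    sketch runs without an API key. Note the deliberately wrong paths.
    """
    simulated = ["42", "41", "42", "40", "42"]  # 3 correct, 2 flawed
    return simulated[:n_paths]

def self_consistency(question: str, n_paths: int = 5) -> str:
    """Self-Consistency prompting: sample diverse reasoning paths,
    then keep the answer the paths most often agree on."""
    answers = sample_reasoning_paths(question, n_paths)
    # Flawed paths aren't discarded up front; they simply get outvoted.
    return Counter(answers).most_common(1)[0][0]

print(self_consistency("What is 6 * 7?"))  # → 42
```

The key design point is that diversity does the work: each individual path may wander, but independent mistakes rarely agree, so the majority answer is usually the robust one.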
🗒️ FEEDBACK
Help Us Get Better
What did you think of today's newsletter?
That’s a Wrap!
❤️ Love Forward Future? Spread the word & earn rewards! Share your unique referral link with friends and colleagues to unlock exclusive Forward Future perks! 👉 Get your link here.
📢 Want to advertise with Forward Future? Reach 450K+ AI enthusiasts, tech leaders, and decision-makers. Let’s talk—just reply to this email.
🛰️ Want more Forward Future? Follow us on X for quick updates, subscribe to our YouTube for deep dives, or add us to your RSS Feed for seamless reading.
Thanks for reading today’s newsletter—see you next time!
Matthew Berman & The Forward Future Team
🧑‍🚀 🧑‍🚀 🧑‍🚀 🧑‍🚀