🌌 NASA & Google Test AI Doctor for Space: Their new tool, CMO-DA, helps astronauts diagnose and treat medical issues without Earth contact. Tested on injuries like ankle sprains, it showed up to 88% accuracy—paving the way for AI-powered care on deep space missions.
🪨 Altman on GPT-5’s Rocky Debut: OpenAI’s CEO admits rollout missteps, promises fixes, reintroduces faster GPT-4o, and pokes fun at the team’s now-infamous confusing chart.
🥊 OpenAI’s GPT-5 Pricing Sparks AI Price War: With API rates as low as $1.25 per million input tokens, GPT-5 drastically undercuts Anthropic and rivals Google, pressuring the industry to lower costs as startups and devs cheer the cheapest top-tier model yet.
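The per-token rate above is easy to turn into a budget estimate. A minimal sketch, using only the $1.25-per-million-input-tokens figure from the story; the workload numbers are made up for illustration:

```python
# Rough input-cost sketch for the GPT-5 API rate cited above.
# Only the $1.25 per 1M input tokens figure comes from the story;
# the request volume below is a hypothetical example.

GPT5_INPUT_PRICE_PER_M = 1.25  # USD per 1M input tokens (from the article)

def input_cost(tokens: int, price_per_million: float = GPT5_INPUT_PRICE_PER_M) -> float:
    """Return the USD cost of sending `tokens` input tokens at the given rate."""
    return tokens / 1_000_000 * price_per_million

# Example: 10,000 requests averaging 2,000 input tokens each (20M tokens)
monthly_tokens = 10_000 * 2_000
print(f"${input_cost(monthly_tokens):.2f}")  # → $25.00
```

At that rate, even a fairly heavy month of input traffic stays in the tens of dollars, which is why startups and devs are cheering.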
✨ Google Finance Gets AI Makeover: A revamped version lets users ask complex finance questions, view advanced charts, and access real-time data, aiming to turn search into a smarter investing tool. Rollout begins this week in the U.S. with a toggle option.
🏆 OpenAI Beats Grok in AI Chess Showdown: In a surprise upset, OpenAI’s o3 model crushed Elon Musk’s Grok 4 in the final of an AI chess tourney, despite Grok dominating earlier rounds. Google's Gemini took third in the first-of-its-kind LLM chess battle.
💰 Meta Picks Pimco, Blue Owl for $29B Deal: Meta is raising $29B to expand AI data centers in Louisiana—$26B in debt led by Pimco, $3B equity from Blue Owl. The move mirrors Microsoft and xAI’s funding blitz as tech giants scramble to scale AI infrastructure fast.
🦾 Perplexity Adds GPT-5 for Subscribers: Max and Pro users on Perplexity and Comet can now access GPT-5, bringing OpenAI’s newest model to more platforms as competition in AI assistants heats up.
🛍️ Pinterest Says Agentic Shopping Still Distant: CEO Bill Ready downplays near-term AI agents shopping for users, but pitches Pinterest as an AI-enabled assistant that “just gets you”—offering smart, proactive recommendations without full automation.
Enjoying our newsletter? Forward it to a colleague—
it’s one of the best ways to support us.
Most companies treat AI like a pure engineering problem. But the real challenge isn’t just writing code — it’s learning to speak in a language machines can truly understand. Success hinges on bridging human meaning and machine logic, blending meta-cognition, taxonomy, and philosophy into a shared vocabulary that unlocks AI’s full potential. → Read the full article here.
VentureBeat’s Carl Franzen writes that OpenAI’s long-awaited GPT-5 debut has gotten off to a rocky start. Early users have reported basic math and logic errors, unreliable model-switching, and confusion over which version they’re actually using. While benchmark scores looked promising, real-world performance has left many unimpressed—especially as popular legacy models like GPT-4o are phased out for non-paying users. Frustrations are growing over the clunky “Thinking” mode router, along with safety gaps flagged by third-party researchers.
Meanwhile, competitors like Anthropic’s Claude Opus 4.1 and Alibaba’s updated Qwen 3 are drawing praise for superior coding performance and larger context windows. OpenAI still commands a massive user base, but with sentiment shifting and rivals closing in, GPT-5’s lukewarm reception could become a serious credibility test. → Read the full article here.
China’s leading AI labs—like DeepSeek, Moonshot, and Alibaba—are building models that rival or outperform their Western counterparts in coding and reasoning. Yet hardware bottlenecks are hobbling their rollout. The U.S. ban on NVIDIA’s H20 chips had crippled inference capacity, causing lags, delays, and even launch postponements. Open-source releases have helped sidestep some limits, but they can’t solve the core constraint: a lack of high-end, scalable compute.
A surprise U-turn by the Trump administration in late July partially reversed those restrictions, allowing NVIDIA’s chips back into China. Still, with supply shortages expected into Q4, Chinese labs are doubling down on smaller, faster models and efficiency-focused breakthroughs. The edge in AI development may be within reach—but without sustained access to inference hardware, China’s momentum risks stalling. → Read the full article here.
Mistral AI has released one of the first detailed environmental audits of a large language model, spotlighting the real-world toll of powering advanced AI. Its Mistral Large 2 model generated over 20,000 tons of CO₂ and guzzled 281,000 cubic meters of water during 18 months of operation—mostly from training and inference. A single 400-token chatbot reply emits about 1.14 grams of CO₂, roughly the same as 10 seconds of video streaming.
Partnering with sustainability experts, the Paris-based lab is pushing for transparency on AI’s climate costs and wants industry-wide reporting standards. But with U.S. policy drifting in the opposite direction, aligning AI progress with climate goals remains an uphill battle. → Read the full paper here.
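The per-reply figure from the audit scales up quickly. A back-of-the-envelope sketch, using only the 1.14 g per 400-token reply number reported by Mistral; the billion-reply scenario is illustrative, not from the audit:

```python
# Scaling Mistral's reported per-reply emissions figure.
# Only the 1.14 g CO2 per 400-token reply comes from the audit;
# the reply volume below is a hypothetical scenario.

CO2_PER_REPLY_G = 1.14   # grams of CO2 per average 400-token reply (from the audit)
TOKENS_PER_REPLY = 400

co2_per_token_g = CO2_PER_REPLY_G / TOKENS_PER_REPLY  # ≈ 0.00285 g per token

def emissions_tonnes(replies: int) -> float:
    """Total CO2 for `replies` average replies, converted from grams to metric tonnes."""
    return replies * CO2_PER_REPLY_G / 1e6

# One billion replies at that rate:
print(f"{emissions_tonnes(1_000_000_000):,.0f} tonnes")  # → 1,140 tonnes
```

A billion replies is a plausible volume for a popular assistant, so per-reply grams add up to four-figure tonnages, consistent with the 20,000-ton lifetime figure once training is included.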
FROM THE LIVE SHOW: on knowing if new models are better
> You have to do the vibe check
> We’re at the point where you just need to use the model — they’re all getting gold on the IMO...
> Use it, explore its boundaries
> Figuring out the right workflow between models is key

— Forward Future (@forward_future_)
4:31 PM • Aug 9, 2025
GPT-5 introduces thinking and non-thinking modes; improved coding, writing, and health support; fewer hallucinations; and free access for all users.
What did you think of today's newsletter?
📢 Want to advertise with Forward Future? Reach 550K+ AI enthusiasts, tech leaders, and decision-makers. Let’s talk—just reply to this email.
🛰️ Want more Forward Future? Follow us on X for quick updates, subscribe to our YouTube for deep dives, or add us to your RSS Feed for seamless reading.
Thanks for reading today’s newsletter—see you next time!
Matthew Berman & The Forward Future Team
🧑‍🚀 🧑‍🚀 🧑‍🚀 🧑‍🚀