šŸ§‘ā€šŸš€ Superintelligence Nears, Meta Eyes Closed Models, & Murati’s Rejection

Big Tech spends billions on AI, Murati rejects Meta, Meta may close models, Google joins EU code, youth lead AI use, AlphaEarth maps Earth precisely.

šŸ—žļø YOUR DAILY ROLLUP

Top Stories of the Day

šŸ‘“ļø Not Wearing AI Glasses Is a ā€œCognitive Disadvantageā€
Mark Zuckerberg positioned AI-powered eyewear as the future front line of human-computer interfaces, asserting during the July 30, 2025 earnings call that people who don’t adopt AI glasses could face a significant cognitive disadvantage. Meta is doubling down on this vision with steep investments ($66–72 billion in AI infrastructure in 2025) and the long-term aim to build personal ā€œsuperintelligenceā€ embedded in wearable glasses.

🧠 Zuck: Superintelligence Is in Sight as Meta Spends Billions on AI
In a memo released ahead of Meta’s second‑quarter earnings, Mark Zuckerberg declared that developing ā€œsuperintelligenceā€ā€”AI systems capable of self‑improvement—is now clearly within reach, thanks to massive investments in infrastructure, high‑profile talent and elite teams. Meta reported $47.5 billion in revenue (up 22%) and $7.14 EPS (up 36%), beating projections, even amid soaring capital expenditures.

šŸ‘Øā€šŸ’» Most U.S. Adults Use AI, But Young Users Lead
An AP-NORC poll finds 60% of U.S. adults use AI to search for information, but younger users dominate the more creative and task-based uses. Nearly 6 in 10 adults under 30 use AI for brainstorming, compared with just 2 in 10 over 60. Use for work, email, and entertainment is growing, while AI companionship remains rare, though it is more common among young adults.

šŸ“ Google’s AI Can Map Earth with New Precision
Google DeepMind’s AlphaEarth Foundations compresses massive volumes of satellite data into sharp 10-meter-resolution maps, reducing errors by 24% and cutting storage needs 16-fold. Used to track deforestation, climate change, and land use, it enables near-real-time global monitoring. Now available on Google Earth Engine, it gives organizations powerful, privacy-safe tools for planetary insight.

Enjoying our newsletter? Forward it to a colleague—
it’s one of the best ways to support us.

šŸ“Š MARKET PULSE

Ramp’s AI‑Driven Finance Platform Surges to $22.5 Billion Valuation

Ramp, the AI-powered finance operations platform, just raised a $500 million Series E-2 led by ICONIQ—pushing its valuation to $22.5 billion. → See how the agentic revolution is reshaping finance.

šŸ‘¾ FORWARD FUTURE ORIGINAL

The Risk of Personalized Learning Tools

Can AI personalize learning without isolating students? In this thought-provoking guest post, ClassWaves CEO Mandy McLean argues that real learning is social, rooted in dialogue, not just data. She explores the risks of hyper-personalization and makes a compelling case for building AI that amplifies classroom conversation rather than replacing it. → Read the full article here.

ā™Ÿļø MODEL STRATEGY

Zuckerberg: Meta Likely to Keep Superintelligence Closed

Meta is moving fast toward building artificial general intelligence, but that doesn’t mean the company plans to open-source its most advanced models. In a recent press briefing, CEO Mark Zuckerberg said Meta will likely not open-source future ā€œsuperintelligenceā€ systems due to safety risks, a reversal from its previous open-weight Llama strategy.

This shift reflects a broader tension in the AI world: openness vs. control. While Meta has championed openness as a competitive advantage, it now joins peers like OpenAI and Anthropic in putting up guardrails as the stakes rise. Still, Zuckerberg maintained that Meta’s current Llama models remain open and continue to improve, with Llama 4 on the way and training underway for Llama 5.

The pivot highlights growing concern about dual-use risks, misuse, and regulatory pressure as models approach human-level reasoning. Meta says it will support external scrutiny, but the age of open-source frontier AI may be fading. ā†’ Read the full article here.

šŸ„‡ TALENT WARS

Mira Murati Turns Down $1B Meta Offer, Doubles Down on Her AI Startup Vision

Mira Murati, former CTO of OpenAI and a key architect behind ChatGPT and DALLĀ·E, has rejected a $1 billion offer from Meta to join its new Superintelligence Lab. Instead, she and her team are betting on their own startup, Thinking Machines Lab—a stealth-mode company aiming to build interpretable, customizable AI systems.

Founded in early 2025, the startup has already raised $2 billion at a near-$12 billion valuation. Members of Murati’s team, reportedly offered packages of up to $1 billion each by Meta, unanimously declined, citing belief in their equity and the freedom to chart their own course. In a market where talent often follows the money, this rare defiance underscores Murati’s status as one of AI’s most influential leaders. → Read the full article here.

āš›ļø SCIENCE

AI That ā€˜Feels’ Guilt Could Encourage Greater Cooperation

A new study from the University of Stirling suggests that programming artificial agents with a form of guilt, modeled as a self-imposed penalty, can improve cooperation in social networks. In simulations using the prisoner’s dilemma, guilt-prone AI consistently outperformed selfish strategies, encouraging trust and collaboration.

While these agents don’t experience guilt like humans, their behavior mimics emotional accountability. It’s a glimpse into how emotion-inspired code could stabilize AI behavior, but experts caution the results are limited to simplified models. In the real world, saying ā€œsorryā€ is cheap, and not always sincere. → Read the full story here.
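The idea of guilt as a self-imposed penalty is easy to see in a toy simulation. The sketch below is a minimal illustration, not the Stirling team's actual model: it assumes guilt works as a fixed payoff deduction after defecting, which then suppresses defection until cooperation "works it off." The `GuiltyAgent` class, the payoff values, and the decay rule are all illustrative choices.

```python
# Payoffs for (my_move, their_move): C = cooperate, D = defect.
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

class GuiltyAgent:
    """Defects when guilt-free, but defecting triggers a self-imposed
    payoff penalty (guilt), which pushes the agent back to cooperating."""
    def __init__(self, guilt_penalty=4.0):
        self.guilt_penalty = guilt_penalty
        self.guilt = 0.0
        self.score = 0.0

    def move(self):
        # Any accumulated guilt suppresses the temptation to defect.
        return "C" if self.guilt > 0 else "D"

    def update(self, payoff, my_move, their_move):
        self.score += payoff
        if my_move == "D":
            self.guilt += self.guilt_penalty      # guilt as self-punishment
            self.score -= self.guilt_penalty      # the penalty is paid directly
        else:
            self.guilt = max(0.0, self.guilt - 1) # cooperating relieves guilt

class SelfishAgent:
    """Always defects; no internal penalty."""
    def __init__(self):
        self.score = 0.0

    def move(self):
        return "D"

    def update(self, payoff, my_move, their_move):
        self.score += payoff

def play(a, b, rounds=100):
    """Iterated prisoner's dilemma between two agents."""
    for _ in range(rounds):
        ma, mb = a.move(), b.move()
        pa, pb = PAYOFFS[(ma, mb)]
        a.update(pa, ma, mb)
        b.update(pb, mb, ma)
    return a.score, b.score
```

Paired with its own kind, the guilty agent settles into mostly mutual cooperation (3 points per round, minus occasional guilt penalties) and outscores a pair of always-defectors stuck at 1 point per round; against a selfish opponent, though, its cooperation is exploited. This mirrors the study's framing: guilt pays off within networks of similarly disposed agents, not in isolation.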

šŸ›°ļø NEWS

What Else is Happening

šŸŽ™ļø Voice Actors Battle AI Dubbing: As studios test synthetic voices, European actors demand EU regulation to protect their craft from being replaced by cheaper, less emotive AI alternatives.

šŸ“‹ Microsoft Ranks AI Job Risks: New study reveals writers, telemarketers, and PR pros top the list of AI-vulnerable roles, while roofers, nurses, and dishwashers remain safest.

šŸ™‹ Google Joins EU AI Code Pledge: Google says it will sign the EU’s voluntary AI Code of Practice, backing transparency and safety rules ahead of stricter regulation expected in 2026.

āš ļø Alibaba’s AI Coder Sparks Security Fears: Qwen3-Coder impresses on benchmarks but raises alarms in the West over potential backdoors, data exposure, and ties to China’s national security laws.

šŸŽ¬ Amazon Backs AI TV Startup Fable: With fresh funding, Fable launches Showrunner — a ā€œNetflix of AIā€ platform where users can generate entire TV episodes from a few typed prompts.

🪪 Google Rolls Out AI Age Checks: Using search and video habits, Google will automatically restrict users it estimates to be under 18, limiting ads, app access, and YouTube content, and will let mistakenly flagged users verify their age with an ID.

šŸ“½ļø VIDEO

Chinese Open-Source Dominates Coding

China’s GLM 4.5 rivals top closed models with strong reasoning, coding, and agentic skills—solving puzzles, simulating games, and powering interactive 3D demos.

šŸ—’ļø FEEDBACK

Help Us Get Better

What did you think of today's newsletter?

That’s a Wrap!

šŸ›°ļø Want more Forward Future? Follow us on X for quick updates, subscribe to our YouTube for deep dives, or add us to your RSS Feed for seamless reading.

Thanks for reading today’s newsletter—see you next time!

Matthew Berman & The Forward Future Team

šŸ§‘ā€šŸš€ šŸ§‘ā€šŸš€ šŸ§‘ā€šŸš€ šŸ§‘ā€šŸš€
