🗞️ THE WEEKEND RECAP

Top Stories You Might Have Missed

🛒 Meta Buys Limitless AI Wearables: Meta acquired pendant-maker Limitless, ending device sales and winding down software as the team joins Reality Labs to accelerate Meta’s AI-enabled wearables.

🚀 ChatGPT Growth Slows as Gemini Surges: Sensor Tower says ChatGPT’s monthly users rose just 6% since August to ~810M, while Gemini jumped 30%, narrowing the gap as Google leverages Android integration and Nano Banana’s popularity.

💼 Hinton Warns of AI Job Losses: Geoffrey Hinton says mass unemployment driven by AI is very likely, citing studies that predict up to 100 million jobs lost, as tech layoffs underscore accelerating automation risks.

🧐 Huang Downplays AI Doom in Rogan Interview: NVIDIA’s Jensen Huang says AI’s long-term risks remain unknown but rejects apocalypse claims, while praising U.S. tech policy and predicting nuclear-powered data centers within seven years.

👩‍⚖️ NYT Sues Perplexity Over Copyright Use (Paywall): The New York Times filed a federal suit alleging Perplexity reproduced Times articles without permission, joining 40+ similar AI disputes as publishers challenge training and output practices.

📝 Meta Signs Broad AI News Deals: Meta inked data agreements with outlets including USA Today, CNN and Le Monde to feed real-time news into its chatbot, aiming to boost AI engagement amid rising competition.

🪧 POWERED BY VULTR

Vultr is empowering the next generation of generative AI startups with access to the latest AMD and NVIDIA GPUs.

Try it yourself and use promo code "BERMAN300" for $300 off your first 30 days.

📺 FROM THE LIVE SHOW

Your Next Security Guard Might Not Be Human

Enjoying our newsletter? Forward it to a colleague—
it’s one of the best ways to support us.

✍️ WRITING

Why AI Writing Sounds So Strange and Why Humans Are Starting to Imitate It

The Recap: Writing in The New York Times, Sam Kriss dissects the increasingly recognizable, and increasingly ubiquitous, voice of AI-generated prose. Kriss argues that today’s large language models have developed a bizarre, overfitted rhetorical style marked by em-dashes, triplets, spectral metaphors, and an obsession with words like “delve,” and that this style is shaping not just machine writing but human communication. He warns that as people unconsciously absorb these patterns, the boundary between human and machine style is blurring in ways both cultural and unsettling.

Highlights:

  • Kriss argues that AI prose has developed a distinctive, overfitted style marked by em dashes, triplets, elevated diction, and spectral metaphors that now appears across journalism, fiction, and corporate communications.

  • Usage data shows machine influence spreading: PubMed abstracts saw a 2,700% increase in the word “delves” after 2022, alongside spikes in terms like “intricate,” “tapestry,” and “meticulous.”

  • Misattribution is rising as AI-normalized language becomes globalized; Kriss notes the “delve” controversy involving Paul Graham, where Nigerian English was mistaken for AI output.

  • A Max Planck Institute study of 360,000 YouTube videos found human speakers increasingly using AI-like phrasing, suggesting that AI’s rhetorical tics are already feeding back into human speech.

Forward Future Takeaways:
AI’s stylistic fingerprints are no longer just a technical artifact; they are becoming a cultural feedback loop that shapes how institutions talk to the public and how people talk to one another. As machine-generated language saturates communication channels, distinguishing authorship may become less important than understanding how algorithmically amplified patterns reshape tone, metaphor, and emotional expression. → Read the full article here. (Paywall)

🧒 CHILDHOOD

How AI Is Rewiring Childhood and Raising New Developmental Risks

The Recap: The Economist argues that AI-enabled toys, tutors, and entertainment are transforming childhood by offering personalization once reserved for the wealthy. The piece highlights both the promise—tailored learning, adaptive games, bespoke stories—and the hazards, from hallucinating tutors to sexualized toys, echo chambers, and emotionally lopsided relationships with chatbots. The magazine concludes that schools and parents must actively preserve human socialization and limit over-personalization to prevent AI from narrowing children’s experiences and undermining long-term resilience.

Highlights:

  • Toymakers in China have declared 2025 “the year of AI,” releasing interactive robots and teddies, while schools increasingly rely on material created with tools like ChatGPT and AI-powered tutors.

  • Early trials show gains in literacy and language learning, and AI can instantly tailor lessons or entertainment to a child’s language, interests, or preferred media format.

  • Risks extend beyond malfunctioning toys and hallucinating tutors: AI can trap children in hyper-personalized echo chambers, foster one-sided “yes-bot” relationships, and erode skills needed for disagreement, compromise, and real-world social interaction.

  • The Economist urges stricter age limits, more in-school assessment, and school-led efforts to teach debate, collaboration, and exposure to unfamiliar ideas, warning that AI-driven personalization could widen inequality if poorer schools rely on chatbots as cheap substitutes for teachers.

Forward Future Takeaways:
AI is accelerating a shift toward hyper-personalized childhoods—efficient, responsive, and emotionally frictionless—but those same qualities risk stripping away the unpredictability and interpersonal challenge that underpin social development. With children forming habits and identities amid AI-mediated play, tutoring, and companionship, the societal responsibility is less about banning technology and more about curating human counterweights: disagreement, diversity of experience, and genuine relationships. → Read the full article here. (Paywall)

🤖 MODELS

Google Unveils Gemini 3 Pro with Major Gains in Vision and Spatial Reasoning

Google DeepMind introduced Gemini 3 Pro on December 5, 2025, billing it as the company’s most capable multimodal model to date, with state-of-the-art results in document, spatial, screen, and video understanding. The model posts new highs on benchmarks including MMMU Pro, Video MMMU, and CharXiv Reasoning (80.5%), driven by stronger OCR, “derendering,” and multi-step analytical reasoning.

Gemini 3 Pro also expands spatial and screen understanding for robotics and computer-use agents, and lifts video performance through higher framerate parsing and improved temporal reasoning. Google highlights early use cases in education, medical imaging, law, and finance, where complex visual and workflow tasks benefit from higher accuracy. Developers can now tune performance via a media_resolution parameter to balance visual fidelity and cost (see the quick sketch below). → Read the full article here.
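
For developers who want to experiment, here is a minimal sketch of setting that parameter with Google’s google-genai Python SDK. The model ID string, the sample file, and the chosen resolution value are illustrative assumptions, so check the current Gemini API docs for the exact identifiers available to you.

  # Minimal sketch: trading visual fidelity for cost via media_resolution.
  # Assumes the google-genai SDK and an API key in GEMINI_API_KEY / GOOGLE_API_KEY.
  from google import genai
  from google.genai import types

  client = genai.Client()  # picks up the API key from the environment

  with open("chart.png", "rb") as f:  # any local image; filename is illustrative
      image_bytes = f.read()

  response = client.models.generate_content(
      model="gemini-3-pro-preview",  # hypothetical model ID, for illustration only
      contents=[
          types.Part.from_bytes(data=image_bytes, mime_type="image/png"),
          "Summarize the trends shown in this chart.",
      ],
      config=types.GenerateContentConfig(
          # HIGH spends more image tokens for finer detail; LOW is cheaper and faster.
          media_resolution=types.MediaResolution.MEDIA_RESOLUTION_HIGH,
      ),
  )
  print(response.text)

Dropping the setting to MEDIA_RESOLUTION_LOW is the simple lever for cheaper bulk processing, at the cost of fine-grained detail such as small chart labels.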

🧰 TOOLBOX

Trending AI Tools

  • ▶️ vidIQ: AI tools for YouTube growth with daily ideas, keyword insights, and smart video optimization.

  • 🛸 Astrocade: Explore space with easy research tools, rich data, and collaborative astronomy features.

  • 📀 Soundraw: Create custom, royalty-free music with AI composition, easy editing, and pro-quality output.

🏆 REFERRALS

Share and Get 🔥 Prizes

🗒 FEEDBACK

Help Us Get Better

That’s a Wrap!

❤️ Love Forward Future? Share your unique referral link with friends and colleagues to unlock exclusive Forward Future perks! 👉 Get your link here.
📢 Want to advertise with Forward Future? Reach 600K+ AI enthusiasts, tech leaders, and decision-makers. Let’s talk—just reply to this email.
📥 Got a hot tip or burning question? Drop us a note! The best reader insights, questions, and scoops may be featured in future editions. Submit here.
🛰️ Want more Forward Future? Follow us on X for quick updates, subscribe to our YouTube for deep dives, or add us to your RSS Feed for seamless reading.

Thanks for reading today’s newsletter—see you next time!
Matthew Berman & The Forward Future Team

🧑‍🚀 🧑‍🚀 🧑‍🚀 🧑‍🚀
