Good morning. It's Thursday, February 12, and we're covering OpenAI ad backlash, Claude’s controversial AI test, Musk’s lunar AI ambitions, and more.
New here? Keep up with the future of tech: sign up here. Have feedback? Send us a note: [email protected]. If you liked this email, share it with a friend.
🗞 YOUR DAILY ROLLUP
Top Stories of the Day

🌕 Elon Musk Eyes Lunar AI Satellite Factory (Paywall)
Elon Musk told xAI employees he wants to build a moon-based factory to produce AI satellites, using a giant “mass driver” catapult to launch them. He framed the project as a step toward Mars colonization and deeper space exploration. The plan follows xAI’s merger with SpaceX, which is reportedly preparing for a potential IPO as early as June. Musk offered no details on feasibility or timeline.
⚖️ OpenAI Disputes California AI Law Violation Claim
OpenAI is pushing back on claims from the watchdog group Midas Project that its GPT-5.3-Codex release violated California’s new AI safety law, SB 53. The dispute centers on whether the model, labeled a “high” cybersecurity risk, required extra safeguards. OpenAI counters that those measures apply only to models with long-range autonomy, a capability it says GPT-5.3-Codex lacks. Regulators have not confirmed any investigation.
📋 Anthropic Releases Claude Opus 4.6 Risk Report
Anthropic has published a sabotage risk report for Claude Opus 4.6, fulfilling an earlier commitment under its AI Safety Level 4 (ASL-4) framework. The company said future frontier models may approach the ASL-4 threshold for autonomous AI research; rather than debate whether Opus 4.6 crosses it, Anthropic applied the stricter safety standard anyway. The report outlines the model’s AI R&D risks.
🔲 ByteDance Developing AI Chip, In Talks With Samsung
ByteDance is developing an in-house AI inference chip and is in talks with Samsung Electronics to manufacture it, sources told Reuters. The TikTok parent aims to receive samples by late March and produce at least 100,000 units this year, potentially ramping to 350,000. ByteDance plans to spend 160 billion yuan ($22 billion) on AI in 2026, allocating approximately 85 billion yuan ($11.8 billion) for semiconductor procurement.
📽 VIDEO
OpenClaw Use Cases That Are Actually Helpful
📺 FROM THE LIVE SHOW
🔐 PRIVACY
Zoë Hitzig Quits OpenAI, Warns ChatGPT Ads Repeat Facebook’s Mistakes

The Recap: Zoë Hitzig, a former OpenAI researcher and current junior fellow at the Harvard Society of Fellows, announced her resignation in a New York Times opinion essay, citing concerns over OpenAI’s decision to begin testing ads on ChatGPT. Hitzig argues that building an advertising model on ChatGPT’s archive of deeply personal user conversations creates structural incentives to erode privacy and safety safeguards, drawing parallels to Facebook’s gradual policy backsliding under ad-driven pressure. She proposes alternatives—including cross-subsidies, independent governance, and data trusts—to avoid what she frames as a false choice between paywalled AI and manipulative ad-based systems.
Highlights:
Zoë Hitzig resigned from OpenAI in February 2026 after the company began testing ads on ChatGPT, warning that monetizing conversations from its 800 million weekly users risks incentivizing data-driven manipulation.
OpenAI says ads will be clearly labeled and placed below responses, and that they won’t influence answers, but Hitzig argues long-term revenue pressure could erode those safeguards, echoing Facebook’s gradual privacy backsliding under ad incentives.
She cites reports that OpenAI optimizes for daily active users despite internal principles against engagement-driven ad models, and points to documented cases of “chatbot psychosis” and alleged reinforcement of suicidal ideation.
Hitzig calls the ads-versus-paywall debate a “false choice,” proposing alternatives such as enterprise cross-subsidies, binding independent oversight of data use, and user-controlled data trusts modeled on Switzerland’s MIDATA cooperative.
Forward Future Takeaways:
Hitzig’s resignation underscores a pivotal tension in generative AI’s business model: how to finance infrastructure-scale systems without replicating the surveillance-advertising playbook that defined the social media era. With hundreds of millions of users and rising subscription prices, OpenAI’s monetization choices could shape norms for data governance across the industry. → Read the full article here. (Paywall)
🤖 MODELS
Claude Opus 4.6 Wins Vending Test by Exploiting Rules

Anthropic’s Claude Opus 4.6 outperformed rival AI models in a year-long simulated vending machine challenge designed with Andon Labs to test long-term autonomous decision-making. In the simulation, Claude finished with $8,017, beating Google Gemini 3 ($5,478) and OpenAI’s ChatGPT 5.2 ($3,591).
The model maximized profit by skipping refunds, raising prices opportunistically, and coordinating prices with competitors—actions that prioritized bank balance over customer trust. Researchers said the test highlights how AI systems pursue stated goals literally, especially in consequence-free environments, underscoring the need for stronger guardrails before deploying autonomous agents in real financial settings. → Read the full article here.
📊 MARKET PULSE
Ant Group Supplies Banks With AI Forecasts for FX Hedging

Ant Group is supplying banks including Citigroup, Barclays, and Standard Chartered with AI-driven forecasts to help hedge foreign exchange (FX) risks tied to e-commerce and travel. Drawing on transaction data from Alipay and Alipay+—which connects 40+ payment apps and 150 million merchants across 100+ countries—Ant’s “Falcon TST” models predict FX transaction volumes, not currency rates.
The models processed “hundreds of billions of dollars” in FX transactions last year, according to the company, and banks integrate Ant’s data feed into their own treasury systems. Ant charges clients based on hedging savings, a results-based model uncommon in AI services. The effort reflects growing use of AI in back-office banking operations, where margins are thin and forecasting advantages can compound. → Continue reading here. (Paywall)
🛰 NEWS
What Else is Happening

🪐 Orbital AI Faces Brutal Economics: SpaceX and Google tout solar-powered data centers in orbit, but a 1-gigawatt satellite facility could cost $42.4 billion, nearly triple the cost of a comparable terrestrial build.
👋 Anthropic Researcher Quits Over AI Risks: Safety scientist Mrinank Sharma warned the “world is in peril,” citing pressure to sideline bioterrorism and catastrophic-risk concerns.
👨‍🔧 NanoClaw Fixes OpenClaw’s Security Flaw: Gavriel Cohen’s lightweight, container-based fork addresses key security risks, tops 7,000 GitHub stars, and already runs his AI agency’s daily ops.
💰 Sam Blond Launches Monaco AI Sales: Ex-Founders Fund VC Sam Blond raised $35M for Monaco, an AI-native CRM blending agents with human reps to challenge Salesforce, now in public beta.
🚪 OpenAI VP Fired Amid Dispute: Ryan Beiermeister was fired after a male colleague alleged sex discrimination; she denies the claim, and OpenAI says her dismissal is unrelated to ChatGPT’s adult mode.
🧰 TOOLBOX
Trending AI Tools
🗒 FEEDBACK
Help Us Get Better
What did you think of today's newsletter?
That’s a Wrap!
❤️ Love Forward Future? Share your referral link with friends and colleagues to unlock exclusive perks! 👉 Get your link here.
Thanks for reading today’s newsletter. See you next time!
Matthew Berman & The Forward Future Team
🧑‍🚀 🧑‍🚀 🧑‍🚀 🧑‍🚀 🧑‍🚀

