Good morning, it’s Thursday. Does DeepSeek’s big win settle the open-source debate once and for all? Or does it just prove that bigger isn’t always better—especially when it comes to energy-guzzling data centers? We’re breaking it all down in today’s edition.
Plus, in our latest Forward Future University article, we dive into expert prompting techniques for image models like Midjourney and DALL·E—because great AI art starts with great prompts.
🗞️ YOUR DAILY ROLLUP
Top Stories of the Day
⚔️ OpenAI vs. DeepSeek: AI Theft Allegations Spark Tensions
OpenAI accuses Chinese AI firm DeepSeek of copying its research to build a ChatGPT rival using "knowledge distillation." Microsoft is investigating potential data misuse, while US officials warn of security risks. DeepSeek's claims of low-cost development are under scrutiny, and the US Navy has already banned its apps. This dispute highlights growing geopolitical tensions in AI innovation and intellectual property protection.
🤖 Alibaba Claims Its AI Beats DeepSeek & GPT-4o
Alibaba has unveiled Qwen 2.5-Max, an AI model it says outperforms DeepSeek-V3, GPT-4o, and Llama-3.1-405B. Released on Lunar New Year, the launch highlights China’s escalating AI race, driven by DeepSeek’s rapid rise. DeepSeek’s disruptive models have shaken Silicon Valley and triggered AI stock sell-offs. Meanwhile, ByteDance has joined the battle, claiming its latest model surpasses OpenAI’s o1 in key benchmarks.
⚠️ AI 'Godfather' Warns of DeepSeek’s Global Risks
AI pioneer Yoshua Bengio warns that DeepSeek’s rapid advances could heighten global AI risks by pushing companies to prioritize speed over safety. A new AI Safety Report, co-authored by Bengio and Geoffrey Hinton, highlights threats like AI-assisted bioweapons, deepfake scams, and cybersecurity dangers. As AI’s role in automation and national security grows, experts stress the need for global governance to balance innovation with safety.
🎓 Google Bets $120M on AI Education Amid Scrutiny
As AI regulations tighten, Google is investing $120 million in AI education to influence public perception and policy. Partnering with colleges and expanding its Grow with Google initiative, it aims to boost AI literacy while addressing fears of job displacement. However, Google faces growing regulatory scrutiny, including antitrust cases in the U.S. and EU. By reskilling workers, it hopes to steer the AI conversation in its favor.
📥 FF INTEL
Introducing: Forward Future Intel
Not to get all sappy, but you—the humans reading this—are the most important part of what we do. Full stop.
So, let’s make this a two-way street. Got a hot tip, a burning question, or something you wish we’d cover? We’re all ears. Drop us a note, and we’ll feature the best reader insights, questions, and scoops in future editions. Let’s build this thing together.
📩 Hit the button below and spill the tea!
🏆 OPEN-SOURCE WIN
Meta’s Open-Source Gamble Pays Off as DeepSeek Surges Ahead
The Recap: When the Chinese AI company DeepSeek unveiled a cutting-edge AI model that rivaled U.S. tech giants, it wasn’t just a warning sign about China’s AI progress—it was also a vindication of Meta’s controversial decision to open-source its AI technology two years ago. Meta engineers now see DeepSeek’s success as proof that freely sharing AI models accelerates progress, allowing smaller players to compete with billion-dollar firms.
In 2023, Meta released its Llama AI model as open-source, a move that was criticized for potentially helping competitors—including Chinese firms.
DeepSeek built its own powerful AI system using Meta’s technology and other open-source tools, achieving high performance with fewer resources than expected.
Meta is closely studying DeepSeek’s methods, creating “war rooms” to analyze how the Chinese company cut AI development costs.
Meta’s open-source strategy differs from rivals like OpenAI and Google, which have kept their AI models proprietary.
The decision to open-source AI aligns with Meta’s business model, which profits from ads rather than selling AI services.
Some experts worry that sharing advanced AI tech benefits China, but others argue that restricting access would just shift the open-source AI epicenter overseas.
AI leaders like Yann LeCun argue that DeepSeek’s success is proof that open-source AI is overtaking proprietary models.
Forward Future Takeaways:
Meta’s bet on open-source AI is reshaping the landscape, empowering smaller firms while challenging the dominance of companies like OpenAI. If DeepSeek’s cost-efficient approach proves replicable, AI development could become more accessible and decentralized, disrupting the industry’s current trajectory. → Read the full article here.
🏫 FORWARD FUTURE UNIVERSITY
Examples of Good Prompting With LLMs
When applying these prompting techniques to image models such as Midjourney, DALL·E, or Ideogram, some principles like “clarity” and “specificity” carry over directly, while other categories such as “lighting” and “composition” are, as expected, image-specific. In short, many prompting patterns from LLMs transfer, and the areas where image models genuinely require a different approach are relatively few.
As you can imagine, image prompting is more about describing a picture you already see in your mind. It helps to visualize the image in advance, decide on the perspective from which it is viewed, and include as many concrete details as possible. You then translate that mental picture into the prompt you give the image model, as in the sketch below.
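To make this concrete, here is a minimal sketch of a detail-rich image prompt sent through the OpenAI Python SDK to DALL·E 3. The SDK call and model name are illustrative choices for this example; the same prompt structure works just as well pasted into Midjourney or Ideogram.

```python
# Minimal sketch: a visually structured prompt for an image model.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

prompt = (
    "A lone lighthouse on a rocky coast at dusk, "                      # subject
    "viewed from a low angle on the beach, waves in the foreground, "   # perspective / composition
    "warm golden-hour lighting with long shadows, "                     # lighting
    "shot on 35mm film, soft grain, muted colors"                       # style details
)

result = client.images.generate(
    model="dall-e-3",
    prompt=prompt,
    size="1024x1024",
    n=1,
)

print(result.data[0].url)  # link to the generated image
```

Note how the prompt moves from subject to perspective to lighting to style: exactly the image-specific categories mentioned above.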
Not All Models Are Equally Suited to the Same Prompts
Once you’ve stuffed the model with as much context as possible — focus on explaining what you want the output to be. With most models, we’ve been trained to tell the model how we want it to answer us, e.g. “You are an expert software engineer. Think slowly + carefully.” This is the opposite of how I’ve found success with o1.
latent.space
Although reasoning models such as o1 are currently receiving a lot of attention and deliver excellent results, they are not necessarily the best fit for every task. There is no doubt that o1, with its chain-of-thought approach, performs significantly better in math, coding, and other domains where correctness can be clearly verified, while it falls behind regular LLMs such as GPT-4o on creative tasks. In the following, I therefore take a closer look at reasoning models and highlight their special characteristics. → Continue reading here.
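For illustration, here is a minimal sketch of the two prompting styles side by side, using the OpenAI Python SDK. The model names "o1" and "gpt-4o" and the incident_log.txt context file are assumptions for this example, not a prescription: the point is that the reasoning-model prompt front-loads context and states what the output should be, while the GPT-4o prompt leans on a persona and instructions about how to think.

```python
# Sketch of the two prompting styles described above (model names are assumptions;
# swap in whatever you have access to).
from openai import OpenAI

client = OpenAI()

# Hypothetical context file used only for this example.
report = open("incident_log.txt").read()

# Reasoning model (e.g. o1): load up the context, then state WHAT the output
# should be. No persona, no "think step by step" instructions.
o1_response = client.chat.completions.create(
    model="o1",
    messages=[{
        "role": "user",
        "content": (
            f"{report}\n\n"
            "Output: a root-cause summary in at most five bullet points, "
            "each citing the log line it is based on."
        ),
    }],
)

# Regular LLM (e.g. GPT-4o): the familiar style. Assign a persona and
# tell the model HOW to approach the task.
gpt4o_response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are an expert site-reliability engineer. Think step by step."},
        {"role": "user", "content": f"Analyze this incident log and summarize the root cause:\n\n{report}"},
    ],
)

print(o1_response.choices[0].message.content)
print(gpt4o_response.choices[0].message.content)
```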
🚦 AI CROSSROADS
AI’s Energy Obsession Just Got a Reality Check
The Recap: The AI industry is at a crossroads: OpenAI and its partners have announced a $500 billion data center project, Stargate, to supercharge AI with vast computing power—while a Chinese startup, DeepSeek, just released a model that challenges the assumption that bigger always means better. This raises a fundamental question: is raw compute the only path to AI breakthroughs, or is there a smarter, more efficient way forward?
The $500 billion Stargate project is a joint effort by OpenAI, Oracle, SoftBank, and MGX to build massive AI data centers across the United States, with Trump calling it the most important project of the era.
Environmental groups warn that these data centers could strain local power grids, drive up energy costs, and rely heavily on carbon-intensive energy sources.
A Chinese startup, DeepSeek, released an AI model called DeepSeek R1 that rivals OpenAI’s best reasoning models but operates with far greater efficiency, challenging the assumption that more compute always means better AI.
Meta has reportedly set up "war rooms" to analyze how DeepSeek achieved its efficiency, while also ramping up its own AI infrastructure spending by 70 percent.
The U.S. has tried to curb China’s AI progress through chip export controls, but DeepSeek’s success suggests these measures may not be as effective as hoped.
OpenAI researcher Noam Brown argued that more compute would still make DeepSeek even stronger, reinforcing the belief that raw computing power remains the key to AI advancement.
Forward Future Takeaways:
DeepSeek’s efficiency-first approach could force a rethink of the AI arms race, shifting focus from brute-force computing power to smarter optimization strategies. If the AI industry learns from DeepSeek’s methods, projects like Stargate might become less critical, reducing the environmental and financial burden of AI development. However, if OpenAI’s vision prevails, the AI winners will be those who control the most data centers—at any cost. The industry now faces a defining question: innovate smarter, or just build bigger? → Read the full article here.
🛰️ NEWS
Looking Forward
🏟️ AI Takes Over Super Bowl Ads: Super Bowl LIX will be flooded with AI-focused commercials as companies race to make AI mainstream. Meanwhile, movie studio and streaming ads are down, reflecting Hollywood’s struggles.
🙅‍♂️ Another OpenAI Safety Researcher Quits: Steven Adler warns AI labs are racing toward AGI recklessly, calling it a “very risky gamble.” His exit adds to OpenAI’s growing list of safety-focused departures.
📱 ChatGPT’s Mobile Users Are 85% Male: A report finds men overwhelmingly dominate ChatGPT’s user base, with younger users leading adoption. Meanwhile, skepticism about AI and its risks may be keeping many women away.
👤 93% of IT Leaders See AI Agent Value: A Salesforce survey finds most enterprises plan to deploy AI agents, yet integration challenges slow progress. Businesses are racing to harness AI’s full potential.
💰 SoftBank Eyes $4B Investment in Skild AI: Masayoshi Son is betting big on robotics, backing Skild AI’s effort to build a universal AI "brain" for robots. The robotics sector is shaping up as AI’s next frontier.
📽️ VIDEO
Try DeepSeek R1 Now: Hosted, Local, and Secure!
Today, Matt breaks down the best ways to run DeepSeek R1. You can access it directly via DeepSeek’s website, use Groq for lightning-fast cloud inference, or run a distilled version locally with LM Studio. Get the full scoop! 👇
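If you want to try the local route yourself, here is a minimal sketch of querying a distilled DeepSeek R1 served by LM Studio’s local, OpenAI-compatible endpoint. The port is LM Studio’s default, and the model identifier below is an assumption; use whatever name LM Studio shows for the build you downloaded.

```python
# Minimal sketch: chatting with a distilled DeepSeek R1 running locally in LM Studio,
# which exposes an OpenAI-compatible server (default: http://localhost:1234/v1).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",
    api_key="lm-studio",  # any placeholder; LM Studio does not check the key
)

response = client.chat.completions.create(
    model="deepseek-r1-distill-qwen-7b",  # assumed identifier; match your downloaded model
    messages=[{
        "role": "user",
        "content": "Explain, step by step, why 0.1 + 0.2 != 0.3 in floating point.",
    }],
)

print(response.choices[0].message.content)
```

The hosted route via Groq should follow the same pattern, with the client pointed at Groq’s OpenAI-compatible endpoint instead of localhost.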