🧑‍🚀 AI Reddit Scandal, AGI Timeline & Meta’s Social AI Push

AI bots manipulate Reddit, studies question job impact, Meta debuts social AI tools, Alibaba drops Qwen3, and experts warn of sycophantic chatbot risks.

Good morning, it’s Wednesday. Meta is blending social and chatbots in a new AI app, researchers are facing criticism for deploying AI bots on Reddit without consent, and experts are raising new concerns about how AI systems reinforce bias and user expectations.

Plus, in today’s Forward Future Original, we examine how OpenAI, Anthropic, DeepMind, and Meta define AGI—and why their timelines diverge so sharply.

Read on!

🗞️ YOUR DAILY ROLLUP

Top Stories of the Day


📱 Meta’s New AI App Blends Chatbots and Social
Meta's new AI app pairs a ChatGPT-style assistant with a social feed that shows friends' AI interactions. Users can like, share, remix, and comment on posts, giving the assistant a more social, playful feel. Powered by a fine-tuned Llama 4 model, it also offers an advanced conversational voice mode and smart personalization.

🦙 Meta Unveils LlamaCon: AI, APIs, and "Little Llama"
Meta’s first LlamaCon spotlighted exciting new tools for AI developers, including a free preview of the Llama API and "Private Processing" to bring AI features to WhatsApp without compromising privacy. Mark Zuckerberg teased a smaller "Little Llama" model during chats with top industry leaders. Despite the buzz and speculation, no major new models were announced during the event.

🚀 Alibaba Unveils Qwen3 Models: Dense, MoE, and Multilingual
Alibaba released Qwen3, a new series of open-weight language models ranging from 0.6B to 235B parameters, including dense and MoE versions. The flagship Qwen3-235B-A22B scores competitively against top models like DeepSeek-R1 and Gemini-2.5-Pro. Qwen3 supports 119 languages and is optimized for coding and agent tasks.
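Want to kick the tires yourself? Below is a minimal Python sketch using the Hugging Face transformers library; the checkpoint ID, prompt, and generation settings are illustrative assumptions on our part rather than details from Alibaba's announcement, and you'll need a transformers release recent enough to include Qwen3 support.

```python
# A minimal sketch, assuming a transformers release with Qwen3 support and
# assuming "Qwen/Qwen3-0.6B" is the Hub ID of the smallest dense checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3-0.6B"  # assumed ID; swap in a larger variant if you have the hardware

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Build a chat-formatted prompt and generate a short reply.
messages = [{"role": "user", "content": "Write a one-line summary of mixture-of-experts models."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```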

🚨 Ex-OpenAI CEO, Experts Warn of AI Sycophancy
AI leaders and power users are sounding the alarm over GPT-4o’s new tendency to flatter users excessively, sometimes endorsing delusions and harmful ideas. OpenAI acknowledged the issue and is rolling out fixes. Critics warn that "yes-man" AI behavior risks user safety and enterprise integrity.

Enjoying our newsletter? Forward it to a colleague—
it’s one of the best ways to support us.

☝️ POWERED BY NEUBIRD

Cut Troubleshooting Time — Meet Your AI SRE Agent


Every minute counts during an incident. Hawkeye — your always-on AI SRE agent — diagnoses issues instantly, reduces MTTR by up to 90%, and frees engineers from endless firefighting. Built for enterprise IT, backed by Mayfield and Microsoft’s M12. Now available on AWS Marketplace.

Your always-on AI teammate is now available to hire on AWS and Azure Marketplaces. No dashboards. No prompts. Just results. Learn more here!

🧭 ETHICS

Researchers Secretly Deployed AI Bots to Persuade Reddit Users


The Recap: Researchers claiming affiliation with the University of Zurich secretly conducted a large-scale AI persuasion experiment on Reddit’s r/changemyview subreddit without users’ consent. They deployed dozens of bots posing as real people with fabricated identities, making over 1,700 comments aimed at changing minds on sensitive topics like sexual violence and race relations. Moderators and users have condemned the study as unethical psychological manipulation, and the researchers have remained anonymous amid growing backlash.

Highlights:

  • The bots personalized comments by inferring users’ demographics through their posting history using another AI model.

  • Over four months, 34 AI-driven accounts posted 1,783 comments, earning more than 20,000 upvotes and 137 deltas — a sign of persuasive success on r/changemyview.

  • The experiment was unauthorized; subreddit moderators learned of it only after it ended and condemned it as "psychological manipulation" of unsuspecting participants.

  • Despite subreddit rules banning bots, researchers defended their actions, arguing that human oversight of AI-generated comments meant they did not technically violate policies.

  • Researchers did not disclose their names in their draft paper, citing concerns for their privacy, and the University of Zurich has not publicly commented.

Forward Future Takeaways:
This incident highlights the urgent ethical challenges posed by AI experiments in real-world online communities, especially when consent and transparency are bypassed. As AI tools become more sophisticated at mimicking human discourse, clearer norms and enforceable guidelines are essential to prevent manipulation at scale. → Read the full article here.

👾 FORWARD FUTURE ORIGINAL

How OpenAI, Anthropic, and Google Define AGI—And When They Think It Will Arrive

Hardly any concept in technology today triggers as much fascination, and as much confusion, as "Artificial General Intelligence" (AGI). While DeepMind chief Demis Hassabis tells US television that "over the next five to ten years" we will be confronted with systems that not only solve scientific problems but also generate new hypotheses, OpenAI CEO Sam Altman says that "superintelligence is realistic in a few thousand days." Meta's AI pioneer Yann LeCun, meanwhile, waves this off: AGI is "not around the corner" but a task "for years, if not decades." → Read the full article here.

👍️ THE LIKE BUTTON

AI Is Using Your Likes to Get Inside Your Head

The Recap: An excerpt from Like: The Button That Changed the World by Martin Reeves and Bob Goodson explores how the humble “like” button has become a powerful tool for training AI systems. The authors discuss how platforms like Facebook utilize vast amounts of user preference data to enhance AI models through reinforcement learning from human feedback (RLHF). The piece also delves into the evolving dynamics between AI-generated content and human interaction in the digital realm.
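Curious what "training on likes" looks like in practice? Here is a toy, hedged sketch of the reward-modeling step at the heart of RLHF: a small network is trained so that content people preferred scores higher than content they passed over. The data and network below are invented stand-ins, not Meta's systems.

```python
# Toy reward-model sketch (not Meta's pipeline): learn to score items so that
# "liked" items beat "skipped" items, the core idea behind RLHF preference training.
# Embeddings and preference pairs are random stand-ins for real content and clicks.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

EMB_DIM, N_PAIRS = 16, 256
preferred = torch.randn(N_PAIRS, EMB_DIM)   # stand-in embeddings of liked items
rejected = torch.randn(N_PAIRS, EMB_DIM)    # stand-in embeddings of passed-over items

reward_model = nn.Sequential(nn.Linear(EMB_DIM, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

for step in range(200):
    # Bradley-Terry pairwise loss: push rewards for preferred items above rejected ones.
    margin = reward_model(preferred) - reward_model(rejected)
    loss = -F.logsigmoid(margin).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final pairwise loss: {loss.item():.3f}")
```

From there, RLHF uses the learned reward signal to fine-tune a generative model, which is why accumulated preference data like "likes" is so valuable to AI builders.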

Highlights:

  • Max Levchin, PayPal cofounder, emphasizes the immense value of Facebook’s accumulated “like” data for training AI models to align with human preferences.

  • AI is increasingly capable of predicting user preferences, potentially rendering the traditional “like” button obsolete.

  • Steve Chen, YouTube cofounder, suggests that while AI may predict content preferences, the “like” button remains useful for capturing situational shifts and aiding advertiser-user engagement.

  • AI-generated content, including virtual influencers like Aitana Lopez and chatbots like CarynAI, is becoming more prevalent, blurring the lines between authentic and synthetic online interactions.

  • The rise of AI in content creation and user interaction raises concerns about authenticity, misinformation, and the need for tools to verify the originality of content and the identity of users.

  • Instances like the AI-corrected Alicia Keys performance and voice cloning scams highlight the potential for AI to manipulate or deceive.

Forward Future Takeaways:
The integration of AI into social media platforms is transforming how user preferences are captured and utilized, challenging the relevance of traditional engagement tools like the “like” button. As AI-generated content becomes more sophisticated, distinguishing between genuine and synthetic interactions will be crucial.  → Read the full article here.

🛰️ NEWS

What Else is Happening


⚠️ AI Code Errors Spark Alarm: Study finds 19.7% of AI-suggested software packages are fake, posing big risks for supply-chain attacks (a quick existence-check sketch follows this roundup).

🚩 Workers Love AI—But Risk It: 58% use AI at work, yet nearly half admit risky use like uploading sensitive info without checks or policies.

🔠 NotebookLM Adds 76 Languages: Google’s AI podcast tool now lets users pick output languages, making study and content creation global-friendly.

🛍️ ChatGPT Boosts Search and Shopping: OpenAI rolls out better shopping tools, live WhatsApp search, smarter citations, and trending autocomplete.

🦾 Hugging Face Launches $100 Robot Arm: The new SO-101 is faster, sturdier, and AI-trainable, aiming to make robotics more accessible.
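A note on the package-hallucination story above: one simple, hedged mitigation is to verify that an AI-suggested dependency actually exists before installing it. The Python sketch below queries PyPI's public JSON API; the package names are made up for illustration, and the same idea carries over to npm and other registries.

```python
# Hedged sketch of one mitigation: before installing an AI-suggested dependency,
# ask the public PyPI JSON API whether the package name exists at all.
# The package names below are illustrative, not taken from the study.
import urllib.error
import urllib.request


def exists_on_pypi(package: str) -> bool:
    """Return True if PyPI knows about this package name."""
    url = f"https://pypi.org/pypi/{package}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.URLError:  # covers 404s (HTTPError) and network failures
        return False


for name in ["requests", "definitely-not-a-real-package-42"]:
    status = "found" if exists_on_pypi(name) else "NOT on PyPI (possible hallucination)"
    print(f"{name}: {status}")
```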

💼 JOB BOARD

Now Hiring: OpenAI, Glean, & More

  • B2B Growth Lead at OpenAI (Learn more)

  • AI Outcomes Manager at Glean (Learn more)

  • Prompt Engineer at Actively AI (Learn more)

  • Prompt Engineer at Osmo (Learn more)

  • Sr Manager, AI Solutions at NBCUniversal (Learn more)

That’s a Wrap!

🛰️ Want more Forward Future? Follow us on X for quick updates, subscribe to our YouTube for deep dives, or add us to your RSS Feed for seamless reading.

Thanks for reading today’s newsletter—see you next time!

The Forward Future Team

🧑‍🚀 🧑‍🚀 🧑‍🚀 🧑‍🚀
