Oracle's AI Boom Hinges on OpenAI's Future
Oracle stock skyrocketed on bold AI projections tied heavily to OpenAI's success. The company expects massive cloud growth despite its key customer being unprofitable. With capital spending at record highs, some see echoes of 1999's dot-com bubble. The rally is real, but so is the risk.
California Moves to Regulate AI Companions
California's SB 243 bill, aimed at protecting minors from harmful AI chatbot interactions, is headed to Gov. Newsom's desk. If signed, it would require safeguards like regular disclaimers, content limits, and transparency reporting. Inspired by real-world tragedies, it may become the first law of its kind in the U.S.
Perplexity Hits $20B Valuation After Fresh Funding (Paywall)
Perplexity AI has raised $200 million, pushing its valuation to $20 billion, just two months after its last round. The AI search startup has now raised over $1 billion amid its aggressive bid to rival Google and OpenAI. Its meteoric rise signals a shift toward AI-native search and the new battleground of GEO (generative engine optimization).
Box Bets on Modular AI to Tame Unstructured Data
At Boxworks, CEO Aaron Levie unveiled Box Automate, a new system for deploying AI agents across complex workflows. Targeting unstructured enterprise data, Box is emphasizing modular, context-aware agents with strict guardrails and access controls. Levie calls this the "era of context" and says there's no free lunch in AI.
Not exactly, but it might come closer than you'd think.
Keep reading to find out how close.
Vultr is empowering the next generation of generative AI startups with access to the latest AMD and NVIDIA GPUs.
Try it yourself and use promo code "BERMAN300" for $300 off your first 30 days.
Replit's Amjad Masad touts vibe coding and autonomous agents: push automation to the limit, ship apps fast without learning to code, and brace for job shifts.
The Recap: As AI tools become more embedded in classrooms and children's devices, parents are being urged to step up as active guides and gatekeepers in their kids' learning journeys. In this essay, Jenny Anderson and Rebecca Winthrop, co-authors of The Disengaged Teen, argue that unfettered use of generative AI can hinder cognitive development and critical thinking. They call on families, schools, and tech companies to share responsibility for how AI affects student learning.
Highlights:
Google and OpenAI launched new AI tools for education, including Gemini's 30 features and ChatGPT's student-focused study mode.
While tools like Khanmigo guide learning, many students use general AI chatbots to bypass effort, undermining critical thinking development.
An MIT study showed students who wrote essays with AI from the start had lower writing quality and reduced brain activity in learning areas.
Only 20% of U.S. teachers say their schools have formal AI policies, leaving students to use platforms like Snapchat's My AI to skirt restrictions.
Most parents underestimate their kids' AI use in schoolwork, with actual usage rates potentially three times higher than parents believe.
Forward Future Takeaways:
This piece underscores a growing tension in AI's role in education: the same tools that can enhance learning can also erode it if misused. As AI becomes ubiquitous in students' academic and personal lives, families must become literate in its benefits and risks, and not leave the responsibility solely to schools. The challenge isn't just regulating access, but helping young people develop the discernment to use AI as a tool, not a crutch. → Read the full article here. (Paywall)
Written by Varsha Bansal, the piece reports that Google's Gemini and AI Overviews depend on thousands of contract "AI raters," hired largely via GlobalLogic, to compare model responses, verify factuality and sources, and flag policy issues. Workers describe tight deadlines, exposure to distressing material, and pay starting around $16/hour for generalists and $21/hour for "super raters," with the team reportedly growing to nearly 2,000 mostly U.S.-based staff.
Interviewees say guidelines shift frequently and claim guardrails around repeating user-supplied hate or explicit content have loosened; the story notes a December 2024 policy change allowing limited exceptions when public benefits outweigh harms. Google says rater feedback is just one of many signals and doesn't directly change algorithms or models; GlobalLogic declined comment. → Read the full article here.
With instant access to their medical records, more patients are turning to AI chatbots like ChatGPT and Claude to make sense of confusing lab results, sometimes before hearing back from their doctors. While this can ease anxiety and empower patients to ask better questions, experts caution that AI can misinterpret data or hallucinate plausible-sounding but false information.
A recent proof-of-concept study found that chatbot accuracy depends heavily on how questions are phrased. Meanwhile, concerns over data privacy and lack of HIPAA compliance remain unresolved. As one physician put it: AI can assist, but it's not a second opinion, at least not yet. → Read the full paper here.
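To make the phrasing effect concrete, here is a minimal sketch using the OpenAI Python SDK: the same lab value is submitted twice, once as a bare question and once with context and a request for structured, hedged output. The model name, prompts, and lab value are illustrative assumptions, not the study's actual protocol.

```python
# Minimal sketch of how question phrasing changes chatbot answers to a lab
# result. Assumptions: OpenAI Python SDK, a placeholder model name, and an
# invented lab value -- none of this reproduces the study's protocol.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

lab_result = "ALT 72 U/L (reference range 7-56 U/L)"  # hypothetical value

prompts = [
    # Bare phrasing: no context, no constraints on the answer.
    f"What does this mean: {lab_result}",
    # Contextual phrasing: states the situation, asks for structure and hedging.
    f"My recent blood test shows {lab_result}. Explain what this value "
    "measures, list common benign and serious causes, suggest questions to "
    "ask my doctor, and flag anything you are unsure about.",
]

for prompt in prompts:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat-capable model works here
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {prompt[:40]}...\n{response.choices[0].message.content}\n")
```

The second prompt tends to elicit the caveats and follow-up questions that the article's experts say patients actually need, which is the phrasing sensitivity the study measured.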
Alibaba, Baidu Ditch NVIDIA for AI Chips: The Chinese tech giants are now training models with their own processors, signaling a shift away from U.S. chip reliance amid export curbs.
Albania Appoints AI Minister: Diella, a digital avatar powered by AI, will manage public procurement, marking the world's first virtual cabinet member in a bid to fight corruption and boost transparency.
Cruz AI Bill Sparks Backlash: Critics say Ted Cruz's SANDBOX Act lets Big Tech dodge safety laws by striking political deals, risking public harm in the name of "innovation."
OpenAI Strikes $300B Cloud Deal with Oracle: The five-year pact, one of the biggest in tech history, will power AI growth with energy needs rivaling two Hoover Dams; profits are not expected until 2029.
Arm Launches Lumex for On-Device AI: With 5x faster performance and better battery life, Arm's new CPU platform powers smarter phones, wearables, and gaming, no cloud required.
What did you think of today's newsletter?
In a series of experiments that sound like science fiction (but are very real), researchers used functional MRI (fMRI) scans to capture brain activity while participants viewed images or watched silent video clips. They then fed that data into a generative model, like a precursor to DALL·E or Stable Diffusion, that had been trained to align visual stimuli with brain patterns.
The result? Rough but eerily accurate reconstructions of what the person had been looking at, complete with recognizable shapes, objects, and even stylistic cues. In one 2023 study from Osaka University, for example, the AI-generated images based on fMRI scans resembled the dogs, birds, or buildings the subjects had viewed, right down to color and composition.
No implants, no real-time decoding (yet), just a high-res peek into the visual cortex. While we're still far from Minority Report-level mind-reading, this work hints at a future where thought-to-image translation could aid in dream analysis, silent communication, or even creative ideation.
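For readers curious what that "alignment" step looks like in practice, here is a minimal sketch. Assumptions: synthetic random data stands in for real fMRI recordings, and precomputed image embeddings (say, from a CLIP-style encoder) stand in for the generative model's conditioning space. A linear decoder of this kind is a common first stage in published reconstruction pipelines, but nothing below reproduces any specific paper.

```python
# Minimal sketch of the brain-to-image alignment step described above.
# Assumptions: synthetic data in place of real fMRI scans; precomputed
# image embeddings in place of a diffusion model's conditioning vectors.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_trials, n_voxels, embed_dim = 1200, 2000, 512

# Toy ground truth: each viewed image has an embedding, and voxel activity
# is a noisy linear readout of that embedding (a common simplifying model).
true_readout = rng.normal(size=(embed_dim, n_voxels))
image_embeddings = rng.normal(size=(n_trials, embed_dim))
fmri = image_embeddings @ true_readout + 0.5 * rng.normal(size=(n_trials, n_voxels))

X_train, X_test, y_train, y_test = train_test_split(
    fmri, image_embeddings, test_size=0.2, random_state=0
)

# Ridge regression from voxel patterns into the embedding space. In a real
# pipeline, the decoded embedding would then condition a diffusion model
# that renders the final reconstructed image.
decoder = Ridge(alpha=1000.0).fit(X_train, y_train)
pred = decoder.predict(X_test)

# Retrieval-style evaluation: does each decoded embedding land closest to
# the embedding of the image the subject actually saw?
def cosine(a, b):
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

sims = cosine(pred, y_test)
top1 = (sims.argmax(axis=1) == np.arange(len(y_test))).mean()
print(f"top-1 retrieval accuracy: {top1:.2%}")
```

The retrieval score is a stand-in for visual quality: if the decoded embedding reliably points at the right image, a generative model conditioned on it has a good chance of rendering something recognizably similar.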
Want more Forward Future? Follow us on X for quick updates, subscribe to our YouTube for deep dives, or add us to your RSS Feed for seamless reading.
Thanks for reading today's newsletter; see you next time!
Matthew Berman & The Forward Future Team