
Salesforce Chief Scientist Silvio Savarese joined us on Forward Future Live at Dreamforce last month. This article distills the most important ideas and insights from that conversation.
Estimated Read Time: 6 minutes
When Salesforce's Chief Scientist talks about the future of AI, he doesn't describe armies of autonomous bots replacing human workers. Instead, he sketches something more nuanced: a world where professionals become conductors of intelligent systems, directing AI agents the way a maestro leads an orchestra.
It's an optimistic vision, but not a naive one. And understanding the distinction matters, because how we frame AI's role will shape how we deploy it.
From Copilots to Something More Capable
The term "AI agents" gets thrown around constantly in tech circles, often without much precision. So what actually distinguishes an agent from, say, a chatbot or a copilot?
According to Savarese, it comes down to autonomy and task execution. Agents don't just assist or suggest; they act. They break down complex objectives into component tasks, execute those tasks independently, and coordinate multiple functions to achieve specific goals.
Think of it this way: A copilot helps you fly the plane. An agent can file the flight plan, check weather conditions, calculate fuel requirements, and adjust the route based on air traffic, all while you focus on the destination.
This autonomy is what makes agents fundamentally different from previous generations of AI assistants. They're not waiting for constant input. Give them a goal and sufficient tools, and they'll figure out the path forward.
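The loop described above, where an agent decomposes a goal into sub-tasks and executes each with the tools it has, can be sketched in a few lines. This is a minimal illustration, not any specific product's API; the planner, tool names, and flight-planning example are all hypothetical stand-ins (a real agent would use a language model to plan).

```python
def plan(goal):
    """Stand-in planner: a real agent would use an LLM to decompose the goal."""
    return {
        "prepare the flight": ["check_weather", "calculate_fuel", "file_flight_plan"],
    }.get(goal, [])

def check_weather():
    return "weather: clear"

def calculate_fuel():
    return "fuel: 4200 kg"

def file_flight_plan():
    return "flight plan filed"

# The agent's available tools, keyed by name.
TOOLS = {
    "check_weather": check_weather,
    "calculate_fuel": calculate_fuel,
    "file_flight_plan": file_flight_plan,
}

def run_agent(goal):
    """Decompose the goal, then execute each sub-task without further prompting."""
    results = []
    for task in plan(goal):
        results.append(TOOLS[task]())  # acts autonomously, no human input per step
    return results

print(run_agent("prepare the flight"))
```

The point of the sketch is the shape, not the contents: the human supplies only the goal, and the agent decides and executes the intermediate steps.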
The technology is already here. Salesforce's Agentforce platform, which Savarese helped develop, exemplifies this approach, deploying specialized agents for customer service, sales development, and operational tasks that previously required human intervention at every step.
But here's where it gets interesting. As these agents become more capable, the question shifts from "what can they do?" to "how should humans relate to them?"
Humans as Orchestrators, Not Operators
Savarese sees the future workforce as collaborative teams where humans and AI agents work side by side, each playing to their strengths. Humans set direction, make judgment calls, handle nuanced situations. Agents execute, coordinate, and handle the structured, repetitive work that bogs down most knowledge workers.
"You have at your disposal a number of functions, a number of tools that can be implemented by those agents," Savarese explained in a recent conversation. "The human becomes an orchestrator of those agents."
But orchestration itself has limits.
As tasks grow more complex, humans may need help managing the agents themselves. This introduces what Savarese calls an "orchestrator layer"—meta-agents that handle coordination and task management.
It's agents managing agents, with humans providing high-level direction and judgment.
Sound familiar? It should. The structure mirrors how organizations already work. Just as managers coordinate teams who coordinate individual contributors, future work may involve humans directing orchestrator agents who coordinate specialist agents.
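That management hierarchy translates directly into code. The sketch below is an illustrative structure only, not Agentforce's architecture: a hypothetical orchestrator meta-agent routes work to specialist agents, while the human provides only high-level direction.

```python
class SpecialistAgent:
    """A narrow agent that handles one kind of work."""
    def __init__(self, skill):
        self.skill = skill

    def execute(self, task):
        return f"{self.skill} agent handled: {task}"

class OrchestratorAgent:
    """The 'orchestrator layer': a meta-agent that coordinates specialists."""
    def __init__(self, specialists):
        self.specialists = specialists

    def dispatch(self, task, skill):
        return self.specialists[skill].execute(task)

def human_direction(orchestrator):
    # The human sets goals and judgment calls; coordination is delegated.
    return [
        orchestrator.dispatch("draft outreach email", "sales"),
        orchestrator.dispatch("answer refund question", "service"),
    ]

orchestrator = OrchestratorAgent({
    "sales": SpecialistAgent("sales"),
    "service": SpecialistAgent("service"),
})
print(human_direction(orchestrator))
```

Just as in an organization, adding a coordination layer changes what the person at the top touches: individual tasks disappear from view, and direction-setting remains.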
Why the Human Must Stay in the Loop
Despite the power of AI agents, Savarese emphasized that humans must remain integral to the process, not just at the beginning and end, but throughout.
Why? Because agents can't handle uncertainty and conflict resolution the way humans can. When an agent lacks sufficient information or faces ambiguous situations, human judgment becomes essential. Agents may be capable, but they're not infallible. They need guidance when the path forward isn't clear.
Beyond troubleshooting, humans serve another critical function: continuous learning. Agents improve through feedback loops, and human input is what makes those loops meaningful. By providing corrections, refinements, and contextual understanding, humans help agents evolve and perform better over time.
This isn't just about catching errors. It's about teaching systems to understand nuance, context, and the unwritten rules that govern complex professional work. That kind of knowledge transfer can't be automated, at least not yet.
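Both human roles described in this section, escalation under uncertainty and the feedback loop, can be sketched together. The confidence threshold and all data below are illustrative assumptions, not a real system's values:

```python
def agent_answer(question, knowledge, feedback_log):
    """Answer autonomously when confident; otherwise escalate to a human."""
    # Human corrections from earlier rounds take priority over base knowledge.
    if question in feedback_log:
        return feedback_log[question], "learned from feedback"
    answer, confidence = knowledge.get(question, (None, 0.0))
    if confidence < 0.8:
        return None, "escalate to human"  # ambiguous: human judgment needed
    return answer, "autonomous"

def human_correct(question, correction, feedback_log):
    """The feedback loop: a human correction becomes part of what the agent knows."""
    feedback_log[question] = correction

knowledge = {"standard refund window?": ("30 days", 0.95)}
feedback_log = {}

print(agent_answer("standard refund window?", knowledge, feedback_log))
print(agent_answer("refund for a custom order?", knowledge, feedback_log))  # escalates
human_correct("refund for a custom order?", "case-by-case, route to support lead", feedback_log)
print(agent_answer("refund for a custom order?", knowledge, feedback_log))
```

The second call escalates because the agent has no confident answer; after the human supplies one, the third call succeeds. That round trip is the feedback loop made concrete.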
Expanding Roles, Not Just Replacing Them
Here's where Savarese's vision gets compelling. AI doesn't just automate existing work; it can expand what's possible within existing roles.
Consider a product manager who needs to create a prototype for a customer meeting. Traditionally, this might require involving designers and developers, scheduling time, and waiting for deliverables. With AI agents, that same product manager can code the prototype themselves through an iterative dialogue.
"I come up with an idea, I come up with another example, then it's 'okay, I don't like this, do this,' and it's an iterative process until I produce the right level of quality I want," Savarese described.
This isn't about product managers replacing developers. It's about empowering them to move faster on exploratory work, test ideas rapidly, and arrive at better-defined requirements before involving the broader team. The developers aren't eliminated—they're freed to focus on more complex architectural challenges and production-quality implementation.
The same dynamic applies across roles. Marketers can create more sophisticated campaigns without waiting for design resources. Sales representatives can generate personalized content at scale. Analysts can build their own analytical tools without submitting IT tickets.
AI doesn't just automate existing tasks; it unlocks capabilities previously beyond reach.
From Reactive to Proactive: The Age of Ambient Intelligence
Today's AI agents are reactive. They respond to commands, answer questions, execute tasks when prompted.
But Savarese sees a near-future shift to something far more powerful: ambient intelligence.
Imagine an agent that listens to your sales conversation and proactively surfaces relevant data about the customer, suggests responses to questions, or identifies opportunities you might have missed. Not because you asked, but because it understood the context and anticipated your needs.
This concept builds on recent advances in what Savarese called "sleep-time compute"—systems that run inference in the background, preparing for questions you might ask before you ask them. OpenAI's "Pulse" feature exemplifies this approach, where AI continuously processes information to stay ready with relevant insights.
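Mechanically, sleep-time compute is a precomputation pattern: spend idle cycles answering questions the user is likely to ask, so results are ready the moment the conversation turns that way. The sketch below is a simplified illustration of that pattern, not how Pulse or any specific system is implemented; the topics and the stand-in inference function are hypothetical.

```python
import time

def slow_inference(topic):
    """Stand-in for an expensive model call."""
    time.sleep(0.01)
    return f"prepared briefing on {topic}"

def sleep_time_prepare(likely_topics):
    """Runs in the background, before the user asks anything."""
    return {topic: slow_inference(topic) for topic in likely_topics}

def answer(topic, cache):
    """At conversation time: a cache hit is effectively instant."""
    return cache.get(topic) or slow_inference(topic)

cache = sleep_time_prepare(["pricing", "security review"])  # idle / overnight work
print(answer("pricing", cache))         # served from precomputed results
print(answer("contract terms", cache))  # cache miss: computed on demand
```

The hard part in practice is not the caching but the prediction: anticipating which topics are worth preparing, which is where the model's understanding of context comes in.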
For sales representatives, this could transform customer interactions. Before a meeting, AI prepares relevant documentation and context. During the conversation, it provides real-time insights tailored to what the customer is asking. If the discussion turns to a specific product feature, the agent surfaces talking points, case studies, or technical specifications, all without interrupting the flow of conversation.
As augmented reality glasses and advanced interfaces become mainstream, these insights could be delivered visually and contextually. A sales rep wearing AR glasses might see customer sentiment analysis, product recommendations, and relevant quotes floating in their field of vision during a pitch.
The shift from reactive to proactive agents marks a fundamental change in how we interact with technology. Instead of pulling information from systems, information flows to us based on context and anticipated needs.
When the Interface Disappears
Savarese envisions a future where traditional interfaces become obsolete. If your personal agent can book flights, reserve cars, and handle shopping on your behalf, why would you need a web interface at all?
"At some point, why do we need to have a web interface when the agent will work on our behalf?" he asked. "You have to imagine a completely different kind of fabric of our society."
This represents a profound shift in how software is built and delivered. Today's applications are designed for human interaction: buttons to click, forms to fill, dashboards to monitor. But if agents become the primary users, software interfaces could be optimized for machine-to-machine communication instead.
Glasses and earbuds emerge as the likely interface layer between humans and their AI agents. Rather than staring at screens, we might simply speak our intentions and receive auditory or visual feedback through lightweight wearables.
The smartphone as we know it could evolve into something entirely different, perhaps a personal AI hub that connects to multiple interface points rather than a single glowing rectangle.
Personal agents would be deeply customized to individual needs, preferences, and contexts. They'd know your work patterns, understand your communication style, and anticipate your requirements. The relationship between human and agent would be continuous and personalized, more like working with a longtime assistant than using a piece of software.
When Agents Start Negotiating With Other Agents
Perhaps the most intriguing frontier Savarese mentioned is cross-organizational agent communication.
What happens when your company's procurement agent starts negotiating directly with a vendor's sales agent? When hiring agents from different companies coordinate interview schedules and exchange candidate information?
"That's the new society where we're going," Savarese noted, "and I think it's going to be itself a big topic of conversation."
The implications are dizzying. How do we ensure agents represent their principals faithfully? What happens when agents disagree or make mistakes that affect multiple organizations? Who's responsible when an automated agent-to-agent transaction goes wrong?
The protocols and norms for this agent-mediated future are still being written. It will require new frameworks for authentication, authorization, audit trails, and dispute resolution.
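One ingredient such frameworks would need is verifiable, auditable messages between organizations' agents. The sketch below shows the idea with an HMAC signature and an append-only log; this is a simplification for illustration (real systems would use public-key identity, not a shared secret, and far richer policy), and all agent names and terms are hypothetical.

```python
import hashlib
import hmac
import json

SHARED_SECRET = b"demo-secret"  # illustrative only; real systems use PKI

def sign(payload):
    """Deterministic signature over a canonical JSON serialization."""
    body = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(SHARED_SECRET, body, hashlib.sha256).hexdigest()

def send_offer(sender, receiver, terms, audit_log):
    """One agent makes an offer to another; every exchange leaves an audit trail."""
    message = {"from": sender, "to": receiver, "terms": terms, "ts": 0}
    message["signature"] = sign(message)
    audit_log.append(message)
    return message

def verify(message):
    """The receiving agent authenticates the sender before acting."""
    body = {k: v for k, v in message.items() if k != "signature"}
    return hmac.compare_digest(sign(body), message["signature"])

audit_log = []
offer = send_offer("acme-procurement-agent", "vendor-sales-agent",
                   {"units": 100, "price": 4.50}, audit_log)
print(verify(offer))  # an unaltered, authenticated offer verifies
```

Authentication and audit trails are the tractable part; deciding what an agent is *authorized* to commit its principal to, and who is liable when it errs, is the open question the article raises.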
The technical challenges? Manageable. The organizational and legal questions? That's where it gets complicated.
Avoiding New Forms of Digital Drudgery
There's a legitimate concern in all this transformation: Are we simply exchanging one form of tedious work for another? Instead of manually processing invoices, will we just manage the agents that process invoices? Instead of writing reports, will we spend our days reviewing and verifying agent-generated reports?
Savarese acknowledged this risk but pushed back against the framing. The future isn't just humans at the beginning and end of agent workflows. It's humans integrated throughout—providing judgment, resolving conflicts, offering feedback, and making decisions when uncertainty arises.
The goal isn't to remove humans from the loop but to elevate what humans do within that loop. Less time on mechanical tasks means more time for strategic thinking, creative problem-solving, and the kinds of judgment calls that still require human intuition and experience.
Whether we achieve this ideal or simply create new forms of tedium will depend on how thoughtfully we design these systems and how intentionally we think about the human role.
The Evolution Is Already Underway
When asked about the timeline for this transformation, Savarese emphasized that we're in the middle of it right now. The tools are evolving rapidly, and humans are simultaneously learning how to use them effectively.
"Right now we are actually still learning how to use the tools, and these tools are evolving as we speak," he said. "It's a bit of an interesting process."
This co-evolution between technology and human capability means predictions are difficult. Roles will change in ways that aren't fully predictable because the changes depend on how people choose to adopt and adapt these tools.
The future isn't predetermined—it's being negotiated in real-time through millions of interactions between humans and AI systems.
What seems certain is that this is a continuum, not a cliff edge. We won't wake up one day to find work completely transformed. Instead, we'll experience a gradual shift where certain tasks become automated, new capabilities emerge, and roles evolve to incorporate these new possibilities.
Raising Kids in the Age of Agents
In a personal aside, the conversation touched on Savarese's marriage to Fei-Fei Li, the renowned AI researcher often called "the godmother of AI" and co-founder of World Labs. The intersection of two AI luminaries raises interesting questions about how those most immersed in this technology think about its implications.
Savarese shared that dinner conversations include both daily family logistics and technical discussions about the future of AI. With Li now focused on world models and spatial reasoning, and Savarese having moved from 3D vision to digital agents, the two have almost exchanged research domains.
The dinner table is likely one of the most informed forums on Earth for discussing AI's trajectory.
When it comes to raising children in the age of AI, Savarese indicated this is a topic of significant consideration in their household. How do you prepare the next generation for a world where AI is ambient, agents are collaborative, and the nature of work is fundamentally different?
It's a question that extends far beyond their family, and one that every parent and educator is beginning to grapple with.
What This All Means
The agent revolution isn't about replacing human workers with AI. It's about fundamentally rethinking the relationship between humans and technology.
Agents will serve as workforce multipliers, thought partners, and proactive assistants that extend our capabilities in ways we're only beginning to explore. But this future requires maintaining humans in the loop, not just as overseers, but as active participants who provide judgment, resolve ambiguity, and continuously improve these systems through feedback.
As interfaces evolve from screens to ambient intelligence delivered through wearables, and as agents begin communicating across organizational boundaries, we'll need new frameworks for governance, responsibility, and trust. The technical capabilities are advancing rapidly. The harder work is organizational, legal, and cultural.
We're in a transition period right now, learning to use these tools as they evolve in real-time. The future isn't written yet. It's being shaped by how we choose to deploy these technologies, what roles we imagine for them, and how thoughtfully we integrate them into the fabric of work and society.
The most important question isn't what agents can do. It's what we want humans to become when agents can handle so much of what we used to do ourselves.

Nick Wentz
I've spent the last decade+ building and scaling technology companies—sometimes as a founder, other times leading marketing. These days, I advise early-stage startups and mentor aspiring founders. But my main focus is Forward Future, where we’re on a mission to make AI work for every human.
👉 Connect with me on LinkedIn

