
He's the guy the chip industry reads before making a move. Dylan Patel, founder of SemiAnalysis, joins Matthew Berman to break down the stark realities of the AI landscape. From the billion-dollar talent war to the corporate grudges shaping the future of technology, Dylan delivers unfiltered analysis on who’s winning, who’s losing, and why.
In this must-watch interview, Dylan explains why Scale AI is “kind of cooked,” why he thinks GPT-4.5 fell short, and the surprising story behind Apple’s long-standing grudge against NVIDIA.
Key Moments from the Interview
00:00 – Intro
The high-stakes race for superintelligence.
01:30 – What's Wrong at Meta?
Why Llama struggled and Behemoth might never be released.
05:15 – The Billion Dollar Talent War
Zuck’s spending spree and the true motivation behind it: "It's not the money, it's more the power."
11:45 – OpenAI vs. Microsoft
Breaking down the "weird ass deal" and who really holds the cards.
18:20 – His Views on GPT-4.5
"Too slow, too expensive," and the months-long bug that derailed OpenAI's big bet.
23:00 – Why Apple Hates NVIDIA
The untold story of "Bumpgate" and a corporate feud that's hobbling Apple's AI ambitions.
28:10 – The Case Against On-Device AI
Why the cloud will dominate and the security argument doesn't hold up.
33:40 – NVIDIA's Iron Grip
Can AMD really compete? Dylan explains why NVIDIA is still "God" in the chip world.
41:00 – The Future of White-Collar Work
"The junior software engineering market is nuked."
45:30 – Who Wins the Superintelligence Race?
Dylan's final prediction on who will get there first.
Full Interview:
In His Own Words: What Dylan Patel Revealed
The Race to Superintelligence (05:15)
The stakes couldn't be higher. For the tech giants, it's an all-or-nothing game.
“If you believe superintelligence is the only thing that matters, then you need to chase it. Otherwise, you're a loser.”
What’s Wrong at Meta? (01:30)
Talent and compute aren't enough. Without the right leadership, even the best researchers go down the wrong path.
“Part of AI research is that you have all the wrong ideas too... and now what happens if your choosing of them is really bad?”
The State of Scale AI (06:40)
Dylan pulls no punches on the state of the data-labeling giant amidst Meta's acquisition of its top talent.
“For one, Scale AI is like, it's kind of cooked right now as a company.”
OpenAI vs. Microsoft (11:45)
The complex partnership is a minefield of risks for OpenAI, especially with one key player holding all the legal power.
“Microsoft will just sue the **** out of them. And Microsoft has more lawyers than God.”
The Release of GPT-4.5 (18:20)
OpenAI's much-hyped model was a massive, expensive bet that didn't pay off.
“In general, it's not that useful and it's too slow.”
Why Apple Hates NVIDIA (23:00)
It's not just business, it's personal. A years-old hardware failure created a lasting rift.
“The solder balls connecting the chip and the board would crack... It was called Bumpgate. Apple really hates NVIDIA because of that.”
The Case Against On-Device AI (28:10)
Dylan argues that for most valuable use cases, on-device AI is a dead end.
“No one actually cares about security, they say they do. The number of people who actually make decisions based on security are very little.”
The Future of Work (41:00)
AI is already reshaping the job market, and the impact on entry-level roles is severe.
“50% of white collar jobs could disappear... The junior software engineering market is nuked.”
Who Wins the Superintelligence Race? (45:30)
When forced to pick a winner, Dylan bets on the company that's consistently been first to the finish line.
“Superintelligence, reaching it first. Who are you picking and why? OpenAI. They're the first to every major breakthrough.”
Full Transcript
00:00:00–00:05:41
Matthew Berman:
All right, Dylan, thank you so much for joining me today. I'm really excited to talk to you. I've seen you do a number of talks and interviews. We're going to talk about a whole bunch of things.
First, I want to talk about Meta. Let's start with Llama 4. I know it's been a little while in the AI world since that was released, but there was a ton of anticipation. It was good, not great; it wasn't world-changing at the moment. And then they delayed Behemoth. What do you think is going on there?
Is it an expertise thing internally? I mean, they have to have some of the best people in the world—and we're going to get to some of their hiring efforts—but why haven't they been able to really do it?
Dylan Patel:
Yeah. So, it's funny, there are three different models, and they're all quite different. Behemoth got delayed. I actually think they might not ever release it; there are a lot of problems with it. The way they trained it, some of the decisions they made, didn't pan out. And then there's Maverick and Scout. One of those models is actually decent. It was comparable to the best Chinese model on release, but then Alibaba came out with a new model, DeepSeek came out with a new model, so it fell behind. The other one was objectively just bad. I know for a fact they trained it as a response to DeepSeek, trying to use more of the elements of DeepSeek's architecture, but they didn't do it properly. It was a rush job, and they really messed it up because they went too hard on the sparsity.
Funnily enough, if you actually look at the model, it oftentimes won't even route tokens to certain experts. So it was a waste of training, basically. In between every layer, the router can route to whatever expert it wants to, and it learns which expert to route to, and each expert learns its own independent things. What you can see is which experts tokens route to as they go through the model, and some of them just didn't get routed to. You have a bunch of empty experts that are not doing stuff, so there's clearly something wrong with the training.
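The "empty experts" failure Dylan describes can be sketched with a toy mixture-of-experts router. Everything here is illustrative (the expert counts, the collapsed-router bias), not Llama 4's actual routing; the point is just that counting per-expert token loads exposes experts that never get used:

```python
import numpy as np

rng = np.random.default_rng(0)

n_tokens, n_experts, top_k = 10_000, 16, 2

# Toy router: each token gets a logit per expert and is sent to its
# top-k experts, as in a mixture-of-experts layer.
logits = rng.normal(size=(n_tokens, n_experts))

# Simulate a collapsed router: a few experts get a large negative bias,
# so the router almost never selects them (the "empty experts" problem).
logits[:, -3:] -= 10.0

choices = np.argsort(logits, axis=1)[:, -top_k:]          # top-k experts per token
loads = np.bincount(choices.ravel(), minlength=n_experts)  # tokens routed per expert

# Experts receiving (almost) no tokens are wasted training capacity.
dead = [e for e, c in enumerate(loads) if c < n_tokens * top_k * 0.001]
print(dead)  # → [13, 14, 15]
```

In a real model you would run this kind of load accounting over activations from actual forward passes; the diagnosis is the same: experts with near-zero load contributed parameters and compute to training without learning anything useful.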
I think it's a confluence of things. Yes, they have tons of talent and tons of compute, but the organization of people is always the most challenging thing. Which ideas are actually the best? Who's the technical leader choosing the best ideas? If you have a bunch of great researchers, that's awesome, but if you put product managers on top of them and there's no technical lead evaluating what to choose, then you have a lot of problems.
At OpenAI, Sam is a great leader and gets all the resources, but the technical leader is Greg Brockman, and he's choosing a lot of stuff. There are other folks, like Mark Chen and others, who are the technical leaders really deciding which technical route to go down. A researcher is going to have their research, and they're going to think their research is the best. Who's evaluating everyone's research and then deciding, "That idea is great, let's use that one. That one sucks, let's not use that one"?
When you end up with researchers not having a technical leader who can choose the right things, you end up with a situation where you had all the right ideas, but part of AI research is that you also have all the wrong ideas and you learn from them. What happens if your process of choosing them is really bad, and you actually choose some wrong ideas? Then you go up that branch of research, and now you're branching off of a bad idea. Everyone's like, "Okay, we made that decision. Let's see what's researchable from here." So you end up with great researchers potentially wasting their time on bad paths.
There's this thing that researchers talk about which is "taste," which is very funny. You think these are nerds who won the International Math Olympiad, but there's actually a lot of taste involved. It's an art form to some extent—what is worth researching and what's not. And it's an art form choosing what's best, because you're doing experiments with 100 GPUs, and then all of a sudden you're like, "Great, now let's make a run with 100,000 GPUs with that idea." Things don't just translate perfectly, so there's a lot of taste and intuition here. It's not that they don't have good researchers; it's that who's choosing the taste is difficult. It's challenging, even if you have great people, to actually have good stuff come out because of organizational issues. The right people aren't in the right spot, and maybe the wrong person gets to be political and have their idea and research path put into the model when it's not a good idea.
00:05:41–00:11:15
Matthew Berman:
Let's continue down the path of who is making decisions. Last week, there was a lot of news about Zuck giving $100 million offers. Sam Altman literally said it. They acquired Scale AI, seemingly for Alexandr Wang and his team. He's in founder mode. What does the Scale AI acquisition actually give Meta?
It seems like the narrative throughout all of these major companies is now "superintelligence," even when it was "AGI" just a month ago. Why the transition, by the way?
Zuck, at least rumored, tried to acquire SSI and was rebuffed by Ilya. I also wanted to ask you about Daniel Gross and Nat Friedman. It seems like Zuck is trying to hire them as well. What do those two folks give Zuck?
Sam Altman also mentioned that Meta has been giving $100 million bonus offers to their top researchers. Apparently, none have left. Is that a successful strategy—to just throw money at the problem and get the best people in? It feels like the cultural element might be lacking.
Dylan Patel:
For one, Scale AI is kind of cooked right now as a company. Everybody's canceling their contracts. Google's backing out. OpenAI allegedly cut the external Slack connection. These companies don't want Meta to know what they're doing with their data. So clearly, Meta didn't buy Scale for Scale; they bought it for the purpose of having Alex and his few best colleagues. They bought them to bring them over. More importantly, it's about getting someone to help lead this superintelligence effort. Alex is stupendously successful. People can hate on him if they want, but he's obviously very successful, especially when he convinces Mark Zuckerberg—who's not an irrational person—to buy his company and chase superintelligence. If you look at Zuckerberg's interviews even a handful of months ago, he wasn't chasing superintelligence. He was like, "AI is good and great, but AGI is not going to happen soon." This is a big shift in strategy. He's basically saying, "Superintelligence is all that matters, we're on the path there. I believe now, what can I do to catch up because I'm behind?"
The word AGI has no meaning anymore. It's amorphous. You can look an Anthropic researcher in the face and ask what AGI means, and they literally think it just means an automated software developer. That's not artificial general intelligence. Ilya Sutskever saw everything first, and he started his company, Safe Superintelligence (SSI). I think that started the rebranding, and now, almost a year later, everyone's like, "Oh, superintelligence is a thing." So that's another direction Ilya got to first.
Zuck tried to buy SSI, Thinking Machines, Perplexity; these are all rumors. Mark tried to buy SSI, and Ilya obviously said no because he's committed to straight-shotting superintelligence, not worrying about products. He's probably not even that money-focused. If the rumors are true about Daniel Gross, then Daniel Gross was probably the one who wanted the acquisition. He comes from a venture fund background, not an AI research background. He probably wanted the acquisition, and when it didn't happen, it makes sense that that created a rift and he's leaving.
When you look at a lot of very successful people, it's not just the money; it's more the power. A lot of people going to Meta are going because now they have control over the AI path for a trillion-dollar-plus company. They're right there talking to Zuck, who has full voting rights. They can implement whatever AI technology they want across billions of users. That would make a lot of sense for an Alex Wang or a Nat Friedman or a Daniel Gross, who are much more product people.
If you believe superintelligence is the only thing that matters, then you need to chase it. Otherwise, you're a loser. Mark Zuckerberg certainly doesn't want to be a loser. So you go and try to acquire the best teams. That didn't work out. So now you go with Alex, who's tremendously connected and can help you build the team.
As far as Sam saying that no top researchers have gone, I don't believe that's accurate. I think initially, the top researchers definitely did say no. And you said $100 million; I've heard a number over a billion, actually, for one person at OpenAI. It's a ridiculous amount of money, but it's the same thing as buying one of these companies. If you know superintelligence is the end-all, be-all, $100 million, even a billion dollars, is a drop in the bucket compared to Meta's market cap and the total addressable market of AI.
00:11:15–00:16:47
Matthew Berman:
I want to talk about Microsoft and OpenAI's relationship. We're well past the honeymoon phase. It definitely seems to be the choppy waters of their relationship now. OpenAI's ambitions seem to have no bounds. Microsoft wants to restructure the deal; OpenAI does, but Microsoft really has no reason to. What do you think about the dynamics of this relationship going forward?
What did Microsoft get in that exchange where they gave up the exclusivity?
Dylan Patel:
OpenAI would not be where they are without Microsoft, and Microsoft signed a deal where they get tremendous power. It's a weird-ass deal because OpenAI wanted to be a nonprofit and they cared about AGI, but at the same time, they had to give up a lot to get the money. Microsoft didn't want to run into antitrust stuff, so they structured this deal really weirdly. There's revenue shares, profit guarantees, and all these different things, but nowhere is it like, "Oh yeah, you own X percent of the company." I think it's something like a 20% revenue share and a 49% or 51% profit share up until some cap. And Microsoft has the IP rights of all of OpenAI's IP until AGI. All of these things are just nebulous as hell.
The profit cap might be 10x what Microsoft gave, which was roughly $10 billion. So what incentive does Microsoft have to renegotiate now if they get $100 billion of profit from OpenAI? Until then, OpenAI has to give them half of their profit, they get this 20% rev share, and they have access to all of OpenAI's IP until AGI. But what is the definition of AGI? Theoretically, OpenAI's board gets to decide when they hit AGI, but if that happens, Microsoft will just sue the heck out of them. And Microsoft has more lawyers than God. It's a crazy-ass deal.
One of the main worrisome things in there for OpenAI got removed because Microsoft was really scared about antitrust: OpenAI had to exclusively use Microsoft for compute. They backed off of this last year. OpenAI is now going to Oracle, SoftBank, Crusoe, and the Middle East to build their Stargate clusters. They're still getting a bunch from Microsoft, of course, but before this change, OpenAI could not do any of that without going through Microsoft first.
What's been reported is that they just gave up the exclusivity, and in return, all they have is a first right of refusal. Anytime OpenAI goes and tries to get a contract for compute, Microsoft can provide that same compute at the same price in the same time frame. They did it to reduce risk from antitrust. From OpenAI's perspective, they were just really annoyed that Microsoft was way slower than they needed them to be.
The real challenging thing here is that Microsoft has the monorepo; it has the OpenAI IP, and they have rights to it all. They can do whatever they want with it. The possibilities are endless. The other thing is, if you're truly superintelligence-pilled, you have all the IP up until superintelligence is achieved. That would imply that the day before superintelligence is achieved, you have all of the IP, and then it gets cut off. But you have all the IP up until right there. Maybe it takes 10 days of work instead of one to get there, but Microsoft has access to it. That's the real big risk.
These sorts of things scare investors. Sam said it himself: OpenAI is going to be the most capital-intensive startup in the history of humanity. The valuation is going to keep soaring. They have no plans to produce a profit anytime soon. They're going to be losing money, and they need to keep raising money and convince every investor in the world, and these things are dirty. They're not clean and easy to understand.
00:16:47–00:22:18
Matthew Berman:
You talked a little bit about compute capacity, and specifically OpenAI being able to go beyond Azure to CoreWeave and elsewhere. I want to talk specifically about GPT-4.5. It was deprecated, I believe, last week. Was the model too big? Was it too costly to run? What went wrong with GPT-4.5, or Orion, as it was internally called?
So it was while they had already invested all this, while they were in the process of training this massive model, they realized that for a much lower cost, they could get so much more efficiency and higher quality out of a model because of reasoning?
Dylan Patel:
What they hoped would be GPT-5, they started training in early 2024. It was a bet on full-scale pre-training. They were just going to take all the data, make this ridiculously big model, and train it. It is much smarter than 4o and 4.1, to be completely clear. I've said it's the first model to make me laugh because it's actually funny. But in general, it's not that useful, and it's too slow and too expensive versus other models like o3, which is just better.
They went pure on pre-training scaling, but data doesn't scale as fast, so they weren't able to get a ton of data. Without data scaling so fast, they have this model that's really, really big, trained on all this compute, but you have this issue called overparameterization. Generally, in machine learning, if you build a neural network and feed it some data, it will tend to memorize first, and then it will generalize. To some extent, GPT-4.5 Orion was so large and so overparameterized that it memorized a lot. When it initially started training, people at OpenAI were so excited because it was already crushing the benchmarks, but that's because it had just memorized so much. Then it stopped improving. It finally did generalize, but it was such a big, complicated run that they actually had a bug in the training code for a couple of months that was messing up the training.
They also had to restart training from checkpoints a lot. It was so big, so complicated, so many things could go wrong. From an infrastructure perspective, just corralling that many resources and having it train stably was really, really difficult. But on the flip side, even if the infrastructure and code were pristine, you still have this problem of data. Everyone points to the Chinchilla paper from DeepMind. What it basically said is, for a dense model, what's the optimal ratio of parameters to tokens? And as you add compute, you want to add more data and parameters at a certain ratio. They didn't go there; they had to go to way more parameters versus tokens.
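The Chinchilla rule of thumb Dylan references can be turned into quick arithmetic. For a dense model, training compute is roughly C ≈ 6ND (N parameters, D tokens), and the compute-optimal ratio is roughly 20 tokens per parameter. The budget below is hypothetical, just to show the shape of the trade-off he describes:

```python
# Rough Chinchilla-style arithmetic for dense models:
#   compute C ~ 6 * N * D, compute-optimal at D ~ 20 * N.
# The FLOP budget below is illustrative, not OpenAI's actual figure.

def chinchilla_optimal(compute_flops: float) -> tuple[float, float]:
    """Return (params, tokens) that are compute-optimal for a budget."""
    # Solve C = 6 * N * (20 * N)  =>  N = sqrt(C / 120)
    n = (compute_flops / 120) ** 0.5
    return n, 20 * n

c = 1e25  # hypothetical pre-training budget in FLOPs
n, d = chinchilla_optimal(c)
print(f"~{n / 1e9:.0f}B params, ~{d / 1e12:.1f}T tokens")
# → ~289B params, ~5.8T tokens
```

The point Dylan is making is that OpenAI could not stay on this ratio: with data capped, adding compute forced them toward far more parameters per token than the optimum, which is exactly the overparameterized regime he describes.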
In the meantime, different teams at OpenAI figured out something magical, which is the reasoning stuff, the "strawberry." If you really try and boil down reasoning to first principles, you're giving the model a lot more data. Where are you getting this data from? You're generating it. And how are you generating it? You're creating these verifiable domains where the model generates data, and you throw away all the data where it doesn't get to the right answer.
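The generate-and-filter loop Dylan describes boils down to rejection sampling in a verifiable domain. This is a minimal sketch with a stub "model" that guesses at arithmetic; a real pipeline would sample chain-of-thought from an LLM and verify the final answer, but the keep-only-what-checks-out logic is the same:

```python
import random

random.seed(0)

def fake_model(a: int, b: int) -> int:
    # Stand-in for a model: correct ~70% of the time, off by one otherwise.
    return a + b if random.random() < 0.7 else a + b + random.choice([-1, 1])

def verify(a: int, b: int, answer: int) -> bool:
    # Verifiable domain: an exact checker decides right or wrong.
    return answer == a + b

dataset = []
for _ in range(1000):
    a, b = random.randint(0, 99), random.randint(0, 99)
    ans = fake_model(a, b)
    if verify(a, b, ans):          # throw away every wrong generation
        dataset.append((a, b, ans))

print(len(dataset))  # roughly 700 kept examples
```

Every example that survives the filter is correct by construction, which is how generated data in verifiable domains can be trusted as new training signal.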
So, looking backwards, the intuition makes a lot of sense: 4.5 failed because it didn't have enough data, and it was very complicated from an infrastructure perspective. And now this breakthrough from a different team is generating more data, and that data is good. From a first-principles basis, data is the wall. Just adding more parameters doesn't do anything.
00:22:18–00:29:43
Matthew Berman:
I want to talk about Apple for a second. I'm sure you have some thoughts. Apple is clearly behind. We're not getting much in the way of public models, leaks, or anything. What do you think is going on at Apple? Do you think they just made a misstep? Why aren't they acquiring companies?
I don't remember that.
I want to ask you one last question about Apple. They are very big on on-device AI, and I actually really like that approach for security and latency. What's your take on on-device AI versus having it in the cloud? Is it somewhere in the middle?
Dylan Patel:
Apple is a very, very conservative company. They've acquired companies in the past, but they've never done really big acquisitions. Their acquisitions have been really small. They buy these startups that haven't achieved product-market fit.
In terms of AI researchers, Apple has always had problems attracting them. AI researchers like to publish their research, and Apple has always been a secretive company. They actually changed that policy, but at the end of the day, they're still a secretive, old, antiquated company. It's hard to get talent to come to you.
How is Apple going to attract these best researchers? They're not. So it's really challenging for them to be competitive. Then there's the whole stigma: they hate Nvidia, maybe for reasonable reasons. Nvidia threatened to sue them over some patents at one point. Nvidia also sold them GPUs that ended up breaking. It was called Bumpgate.
No, you don't? Okay, so this is a very fun story. One generation of Nvidia's GPUs for laptops... chips have solder balls on the bottom that connect their I/O pins to the motherboard. Somewhere along the supply chain, the solder balls were not good enough. So when the temperature swung up and down, due to the coefficient of thermal expansion, the chip versus the solder balls versus the PCB would expand and shrink at different rates. What ended up happening is the solder balls connecting the chip and the board would crack. The connection was severed. It was called Bumpgate. I think Apple wanted compensation from Nvidia, and Nvidia was like, "No." Between that and Nvidia trying to sue everyone over GPU patents in mobile, Apple really doesn't like Nvidia. So Apple doesn't really buy much Nvidia hardware.
If I'm a researcher, first of all, I'm going to go where the talent is, where I have a culture fit, where the money is. Apple is not going to offer that crazy money, and they don't even have the compute. It's challenging for Apple.
I'm generally an on-device AI bear. I think security is awesome, but I know human psychology: free is better than free with ads, which is better than security. No one actually cares about security. They say they do, but very few people make decisions based on it.
On-device AI is limited by the hardware. How fast the model can inference is based upon your memory bandwidth. If I want to increase that, I spend $50 more on hardware, pass on $100 to the customer. With $100, I could have like a hundred million tokens from a cloud provider. Or better yet, I save the $100, and Meta will give me the model for free on WhatsApp, OpenAI will give it free on ChatGPT, and Google will give it free on Google.
Lastly, I don't agree with the latency standpoint. The AI workloads that are the most valuable are things like, "Find a restaurant at this time." My data—Gmail, calendar—is all in the cloud anyway. Or if it's an agentic workflow of, "Find an Italian restaurant between you and I that has gluten-free options with a reservation at 7 p.m.," this is a deep research query that takes minutes. Or we envision a future where AI books flights for us. This is not a low-latency task. Where is the necessity for it to be on-device? Because of the hardware constraints, your phone cannot run Llama 7B as fast as I can query a server. And no one wants to run Llama 7B; they want to use a good model like GPT-4.1 or Claude Opus, and those can't possibly run on-device.
00:29:43–00:36:20
Matthew Berman:
Speaking of chips, let's talk about Nvidia versus AMD. I've read a couple of articles out of SemiAnalysis lately that have said these new AMD chips are actually really strong. Do you think AMD, with their new chips, is that enough to really tackle the CUDA moat? Are they going to start taking market share from Nvidia?
The chip alone, not the ecosystem?
Dylan Patel:
It's a confluence of things. AMD is trying really hard. Their hardware is behind in some factors, especially against Blackwell, but there are some ways their hardware is better. The real challenge for them is software. The developer experience on AMD is not that great, but it's getting better. We've provided them a long list of recommendations, and they've implemented a number of them. But they're just so far behind on software, it's incredible.
Are they going to gain some share? I think they are. The challenge is, versus Nvidia's Blackwell, it's just objectively worse as a chip.
Yeah, because of the system. Because Nvidia is able to network their chips together with NVLink, they can build their servers where 72 of them work together really tightly. AMD can currently only have eight of them work together really tightly, and this is really important for inference and training.
Then Nvidia's got this software stack. It's not just CUDA. Most researchers don't touch CUDA; they call PyTorch, and PyTorch calls down to CUDA. Even beyond that, many people aren't even touching PyTorch. They're going to vLLM or SGLang, which are inference libraries, plugging in model weights from Hugging Face, and just saying "go." Those libraries call down the stack. The end user just wants to use a model. Nvidia's building libraries that make this so much easier.
Here, AMD is trying really hard, but it's still a worse user experience. It's not that it doesn't work, but for Nvidia, there might be 10 flags to set; for AMD, there are 50, and it's hard to know what the best performance is.
The other aspect is Nvidia is not doing themselves favors. There's this ecosystem of cloud companies, the Googles, Amazons, and Microsofts, and Nvidia has propped up all these other cloud companies like CoreWeave and Oracle. There are over 50 of them, and by propping them up, Nvidia is really helping to drive down the price of GPUs. But now they've made a major misstep. They acquired this company called Lepton, which does cloud software. And they're doing this thing called DGX Lepton, which is: if anyone has a cloud with spare GPUs, "give them to us, and we'll rent them for you." Now the cloud companies are really mad at this because Nvidia is directly competing with them. You don't mess with God, and what Jensen giveth, Jensen taketh away, but the clouds are really mad.
So some cloud companies are turning to AMD. And there's this third thing AMD is doing: they're getting clusters at Oracle, Amazon, Crusoe, and they're renting GPUs back from them. They're selling them GPUs and renting them back. This is their reasoning: "Buy our GPUs, we'll rent them back, you'll see that it's great, and you can try and rent some to your customers." For the neoclouds, it's, "Here's a contract to get you comfortable." This fosters really good relations. Now clouds like TensorWave and Crusoe love AMD because they're renting GPUs from them. Meanwhile, these clouds are like, "Well, Nvidia is trying to compete with me anyway." So it's an interesting confluence. I think AMD will do okay.
00:36:20–00:43:08
Matthew Berman:
I want to talk about xAI and Grok 3.5. Obviously, there's not a ton of public information about it. Elon Musk has said it's by far the smartest AI on the planet and it's going to operate on first principles. Is this all puffery? Have they actually discovered something new?
You're using o3 day-to-day, even though it takes so much time to get your response back?
I want to talk about the 50% of white-collar jobs that could disappear. I know you probably read about that.
Do you foresee, as human productivity increases like crazy, that humans are going to be managing AI in the future, or are we going to be reviewing the output of AI?
Do you believe that? And what timeline are you thinking?
If you had to pick one company to bet on reaching superintelligence first, who are you picking and why?
Dylan Patel:
I think Elon is a fantastic engineering manager, but I also think he's a fantastic marketer. I don't know what the new model will look like. I've heard it's good, but everyone's heard it's good. When Grok 3 came out, I was pleasantly surprised.
Day-to-day, I don't use it, but there are certain queries I do send to it. Their deep research is much faster than OpenAI's, so I use that sometimes. And sometimes, models are just pansies about giving me data that I want. I like human geography—how geography, politics, history, and resources interact. Grok is okay with doing that. It lets me understand things. For other models, if it's about Standard Oil, it'll be like, "Oh, it was a union buster." It's like, "No, just tell me what actually happened." Grok can sometimes get through the nonsense, but it's not the best model. The model I go to the most is either o3 or Claude.
It depends on the topic, but yeah, a lot of times I'm okay with waiting. That's why I use Claude sometimes, or Gemini at work for long context and document analysis.
Grok has a lot of compute, great researchers, and Elon is hyping them up. Will it be OpenAI-level? I don't know. Are they doing something fundamentally different? I don't think so. Generally, people are doing the same thing: pre-training large transformers and doing RL on top, mostly in verifiable domains.
Regarding jobs, populations are aging really rapidly, and generally, people work less than ever before. The average hours worked 50 years ago was way higher. Every metric is way better than 50 or 100 years ago, and AI should just enable us to work even less. Is it going to be that there are psychos like myself that work way too much, and normal people who work way less? The distribution of resources is the challenge. That's why I'm super excited about robotics as well.
Right now, we're in the transition from using models on a chat basis to a longer-horizon basis. Over time, these interactions with AI will become longer-horizon tasks, where AI is doing stuff for hours or days before coming back for me to review. And then eventually, there just won't be humans in the loop.
I'm generally more pessimistic on timelines. I don't think 20% of jobs get automated this decade. Maybe the end of this decade, maybe the beginning of the next. Meanwhile, there are people saying AGI in 2027. But reaching the tech doesn't mean the implementation will happen at that moment.
Deployment will be really fast. You can already see the junior software engineering market is nuked. But are companies going to choose to tackle more problems? Yes. So how do those junior engineers get into the market? I've basically doubled the size of my firm in the last year, but how many junior software developers am I going to hire? It's like, wouldn't I rather have a senior person commanding a bunch of AIs rather than a junior person? It's challenging.
My only hope is that it's not just two or three closed-source AIs that dominate human GDP, but that it's more distributed than that.
OpenAI. They're the first to every major breakthrough. Even reasoning, they were the first to it. And I don't think reasoning alone will take us to the next generation, so there's going to be something else. Anthropic is second. They have really good people. Third is a toss-up between Google, XAI, and Meta. I think Meta will get enough good people that they'll actually be competitive, too.
Matthew Berman:
Dylan, thank you so much for chatting with me.
Dylan Patel:
Appreciate it. Super fun conversation.
Key Takeaways
The AI race is a game of power, not just money. The ultimate prize is control over the path to superintelligence.
Leadership and "taste" are the biggest bottlenecks in AI. Having the best researchers is useless if you choose the wrong ideas to pursue.
Corporate history matters. Apple's historical feud with NVIDIA ("Bumpgate") is a major handicap in its efforts to catch up in AI.
The future of work is here, and it's brutal. The value of many white-collar skills is plummeting, with entry-level software jobs already "nuked."
Closed-source models will win. Despite the momentum of open-source, the most powerful AI will likely remain behind corporate walls.
OpenAI is still the frontrunner. Despite intense competition from Meta, Google, and xAI, OpenAI's track record of breakthroughs makes them the one to beat.
Enjoyed this conversation?
For more in-depth interviews with the people shaping AI, follow us on X and subscribe to our YouTube channel.

