A new test for AI labs: Are you even trying to make money?


We’re in a unique moment for AI companies building their own foundation models.

First, there is a whole generation of industry veterans who made their name at major tech companies and are now going solo. You also have legendary researchers with immense experience but ambiguous commercial aspirations. There’s a clear chance that at least some of these new labs will become OpenAI-sized behemoths, but there’s also room for them to putter around doing interesting research without worrying too much about commercialization.

The end result? It’s getting hard to tell who is actually trying to make money.

To make things simpler, I’m proposing a kind of sliding scale for any company making a foundation model. It’s a five-level scale where it doesn’t matter if you’re actually making money — only that you’re trying to. The idea here is to measure ambition, not success.

Think of it in these terms:

  • Level 5: We are already making millions of dollars every day, thank you very much.
  • Level 4: We have a detailed multistage plan to become the richest human beings on earth.
  • Level 3: We have many promising product ideas, which will be revealed in the fullness of time.
  • Level 2: We have the outlines of a concept of a plan.
  • Level 1: True wealth is when you love yourself.

The big names are all at Level 5: OpenAI, Anthropic, Gemini, and so on. The scale gets more interesting with the new generation of labs launching now, with big dreams but ambitions that can be harder to read.

Crucially, the people involved in these labs can generally choose whatever level they want. There’s so much money in AI right now that no one is going to interrogate them for a business plan. Even if the lab is just a research project, investors will count themselves happy to be involved. If you aren’t particularly motivated to become a billionaire, you might well live a happier life at Level 2 than at Level 5.


The problems arise because it isn’t always clear where an AI lab lands on the scale — and a lot of the AI industry’s current drama comes from that confusion. Much of the anxiety over OpenAI’s conversion from a nonprofit came because the lab spent years at Level 1, then jumped to Level 5 almost overnight. On the other side, you might argue that Meta’s early AI research was firmly at Level 2, when what the company really wanted was Level 4.

With that in mind, here’s a quick rundown of four of the biggest contemporary AI labs, and how they measure up on the scale.

humans&

Humans& was the big AI news this week, and part of the inspiration for coming up with this whole scale. The founders have a compelling pitch for the next generation of AI models, with scaling laws giving way to an emphasis on communication and coordination tools.

But for all the glowing press, humans& has been coy about how that would translate into actual monetizable products. It seems it does want to build products; the team just won’t commit to anything specific. The most they’ve said is that they will be building some kind of AI workplace tool, replacing products like Slack, Jira, and Google Docs but also redefining how these other tools work at a fundamental level. Workplace software for a post-software workplace!

It’s my job to know what this stuff means, and I’m still pretty confused about that last part. But it is just specific enough that I think we can put them at Level 3.

Thinking Machines Lab

This is a very hard one to rate! Generally, if you have a former CTO and project lead for ChatGPT raising a $2 billion seed round, you have to assume there is a pretty specific roadmap. Mira Murati does not strike me as someone who jumps in without a plan, so coming into 2026, I would have felt good putting TML at Level 4.

But then the last two weeks happened. The departure of CTO and co-founder Barret Zoph has gotten most of the headlines, due in part to the special circumstances involved. But at least five other employees left with Zoph, many citing concerns about the direction of the company. Just one year in, nearly half the executives on TML’s founding team are no longer working there. One way to read events is that they thought they had a solid plan to become a world-class AI lab, only to find the plan wasn’t as solid as they thought. Or in terms of the scale, they wanted a Level 4 lab but realized they were at Level 2 or 3.

There still isn’t quite enough evidence to justify a downgrade, but it’s getting close.

World Labs

Fei-Fei Li is one of the most respected names in AI research, best known for establishing the ImageNet challenge that kickstarted contemporary deep learning techniques. She currently holds a Sequoia-endowed chair at Stanford, where she co-directs two different AI labs. I won’t bore you by going through all the different honors and academy positions, but it’s enough to say that if she wanted, she could spend the rest of her life just receiving awards and being told how great she is. Her book is pretty good too!

So in 2024, when Li announced she had raised $230 million for a spatial AI company called World Labs, you might have thought we were operating at Level 2 or lower.

But that was over a year ago, which is a long time in the AI world. Since then, World Labs has shipped both a full world-generating model and a commercialized product built on top of it. Over the same period, we’ve seen real signs of demand for world-modeling from both video game and special effects industries — and none of the major labs have built anything that can compete. The result looks an awful lot like a Level 4 company, perhaps soon to graduate to Level 5.

Safe Superintelligence (SSI)

Founded by former OpenAI chief scientist Ilya Sutskever, Safe Superintelligence (or SSI) seems like a classic example of a Level 1 startup. Sutskever has gone to great lengths to keep SSI insulated from commercial pressures, to the point of turning down an attempted acquisition from Meta. There are no product cycles, and, aside from the still-baking superintelligent foundation model, there doesn’t seem to be any product at all. With this pitch, he raised $3 billion! Sutskever has always been more interested in the science of AI than the business, and every indication is that this is a genuinely scientific project at heart.

That said, the AI world moves fast — and it would be foolish to count SSI out of the commercial realm entirely. On his recent Dwarkesh appearance, Sutskever gave two reasons why SSI might pivot, either “if timelines turned out to be long, which they might” or because “there is a lot of value in the best and most powerful AI being out there impacting the world.” In other words, if the research either goes very well or very badly, we might see SSI jump up a few levels in a hurry.
