$730B pre-money for a company where each model is roughly 2x profitable on its own, but each next model costs 10x the last. The whole thing only works if scaling keeps delivering. Research (Sara Hooker et al.) is not encouraging on that front: compact models already outperform massive predecessors on downstream tasks, while scaling laws only reliably predict pre-training loss.
Wrote about both the per-model math and the scaling question:
> each model is roughly 2x profitable on its own, but each next model costs 10x the last. The whole thing only works if scaling keeps delivering.
This is a decent argument, but it's not the death knell you think.
Models are getting 99% more efficient every three years: combined with hardware and (mostly) software upgrades, you can get the same amount of output using 99% less power.
The number of applications where AI is already "good enough" keeps growing every day. If the cost goes down 99% every three years, it doesn't take long until you can make a ton of money on those applications.
If AI stopped progressing today, it would probably take a decade or longer for us to take full advantage of it. So there is tons of forward-looking revenue that isn't counted yet.
For the foreseeable future, there are MANY MANY uses of models where a company would not want to host its own models and would be GLAD to pay a 4-5x cost for someone else to host the model and hardware for them.
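The compounding claim above can be sanity-checked with a quick sketch. To be clear, the "99% cheaper every three years" figure is the parent comment's extrapolation, not a measured law:

```python
# Sketch: if serving the same output gets 99% cheaper every 3 years,
# what fraction of today's cost remains after N years?
# (Illustrative only -- the 99%/3yr rate is the comment's claim.)

def cost_multiplier(years: float, drop: float = 0.99, period: float = 3.0) -> float:
    """Fraction of today's cost remaining after `years` years."""
    return (1.0 - drop) ** (years / period)

for y in [0, 3, 6, 9]:
    print(f"year {y}: {cost_multiplier(y):.6f}x of today's cost")
```

If the trend held, nine years out a unit of inference would cost about a millionth of what it does today, which is the basis for the "good enough applications become cheap to serve" argument.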
I'm as bullish on OpenAI being "worth" $730B as I was on Snap being worth what it IPO'd for - and Snap is still down about 80% from that (after inflation, or roughly 95% adjusted for gold).
But guess what - these are MINIMUM valuations based on 50-80% margins - i.e. they're really getting about ~$30B - the rest is market value of hardware and hosting. OpenAI could be worth 80% less, and they could still make a metric fuck-ton of money selling at IPO with a $1T+ market cap to speculative morons easily...
Realistically, very rich people with high risk tolerance are saying that they think OpenAI has a MINIMUM value of ~$100B. That seems very reasonable given the risk tolerance and wealth.
We said all the same shit about VR, dude. Even had a global pandemic show up to boost everyone's interest in the key market of telepresence. Turns out the merry go round can stop abruptly.
Someone please explain how OpenAI is not Netscape 2026. They had first mover advantage but no network effect, no moat, and are racing to stay ahead of infinitely resourced incumbents.
I can’t. I think they are one viral TikTok away from the pendulum swinging to Gemini, whose no-cost version is perfectly adequate for most people.
Not GP, and not saying I agree with them, but it may be worth remembering that Netscape had 90% market share at one point. Active user count may not be the moat you imagine.
Adoption of web browsers was also much lower when Netscape was dominant. 90% market share is less meaningful if you're only 1% of the way to the potential market size. Peeling away users who talk to ChatGPT every day is very possible, but harder than getting someone who's never used an LLM before (but does use your OS, browser, phone...) to try yours first.
I think the even better analogy than browsers is search engines. There aren't any network effects or platform lock-in, but there is potential for a data flywheel, building a brand, and just getting users in the habit of using you. The results won't necessarily turn out the same - I think OpenAI's edge on results quality is a lot less than early Google over its competitors - but the shape of the competition is similar.
Maybe! Switching search engines is also very easy, and the top story on the front page is someone no longer using Google, but we know in practice almost nobody does that. As technologists we're much more likely to switch and know people who would switch.
How many of those users are paying? Where is the profit? How many users would be willing to use ChatGPT if they had to pay? Might have to pull out the questions like it's 2026.
> This plan may include ads. Learn more
> When will ads be available in ChatGPT?
We’re beginning in the US on February 9, 2026
> Starting in February, if ads personalization is turned on, ads will be personalized based on your chats and any context ChatGPT uses to respond to you. If memory is on, ChatGPT may save and use memories and reference recent chats when selecting an ad.
You pay 8 USD/month and get higher limits, and ads.
IMO this looks largely like another circular investment. Amazon's investment is tied to OpenAI using AWS for their Frontier product, and I assume NVIDIA's condition is that OpenAI continue buying hardware from them. Then there's SoftBank, though given that those are the same guys who invested heavily in WeWork, I assume this is just very brash bullishness on their part.
From my perspective, I hope that OpenAI survives and can pull off their IPO, but I have that nagging feeling in my gut that their IPO will be rejected in much the same way the WeWork IPO was.
On the one hand you can look at these companies investing and take it as a signal that there is something there (in OpenAI) that's worth investing in. On the other hand all these companies that are investing are basically getting that investment back through spending commitments and such and are just using OpenAI as a proxy for what is essentially buying more revenue for themselves.
When their IPO hits later this year I hope that it's the former case and there's actually some good underlying fundamentals to invest in. But based on everything I've read, my gut is telling me they will eventually implode under the weight of their business model and spending commitments.
The "circular investment" is mostly startup companies using their stock instead of cash to pay for server hardware and cloud computing. There are a few extra steps in between that make things look weird and convoluted, but the end result is really just big companies supplying hardware and getting shares of AI companies in exchange for it.
It’s like Toys R Us not having enough money to pay Mattel for Barbie dolls and telling Mattel they can have partial ownership of the company if they just supply them with some more toys.
But the problem is that Toys R Us is spending $15, $20, or maybe even $50 (who knows?) to sell a $10 toy.
Toys R Us continues selling toys faster and faster despite a lack of profit, making Mattel even more dependent on Toys R Us as a customer. It blows up the bubble where a more natural course of action would be for Toys R Us to go bankrupt or scale back ambitions earlier.
Because it’s circular like this, it sets up a bigger crash and burn. If OpenAI fails, all these investors that are deeply integrated into its supply chain lose both their investment and a customer.
Nvidia is investing assets into OAI - it has to. Because OAI needs to become successful for Nvidia's story in the long-term to play out, to justify its current stock price.
It's not "continue" buying as much as this is NVIDIA fronting the money for (most of) the hardware OpenAI has already ordered from them. It's like borrowing rent money from your drug dealer.
It's like credit cards loaning money to people who are unemployed and will default on payments. It's a risky business that is legal and can be very profitable, but may also be disastrous in the future.
I don't see the problem as long as materially significant transactions by publicly traded companies are properly disclosed to investors. If someone loses money by buying NVDA then they have only themselves to blame.
> On the one hand you can look at these companies investing and take it as a signal that there is something there (in OpenAI) that's worth investing in. On the other hand all these companies that are investing are basically getting that investment back through spending commitments and such and are just using OpenAI as a proxy for what is essentially buying more revenue for themselves.
I don't understand how this is some kind of cheat code. Let's say I give you $100 on the condition that you buy $100 worth of product from me. And let's say that product cost me $80 to produce. Isn't that basically the same as me giving you $80? I don't see at all how that's me "basically getting that investment back".
I give you $100 cash and you give me $100 worth of stock in return. Now you give me $100 cash to buy something from me that cost me $80 to produce. I end up with $100 worth of stock in your company which cost me only $80. No?
NVIDIA gross margins lately are like 75%, so it's more like you give me $100 to buy something from me that cost me $25 to produce, hence I end up with $100 worth of stock in your company and it only cost me $25.
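To make the ledger concrete, here's a toy sketch of the vendor-financing round trip being described. The $100 and ~75% margin figures are the thread's illustrative numbers, not actual deal terms:

```python
# Toy ledger for the circular deal: vendor invests cash for stock,
# customer spends that same cash back on the vendor's product.
# All numbers are the thread's illustrative figures, not real deal terms.

investment = 100   # vendor's cash out, exchanged for stock
revenue    = 100   # customer buys product with that same cash
cogs       = 25    # vendor's cost to produce the product (~75% margin)

stock_held    = investment             # paper value of the stake
net_cash      = -investment + revenue  # the cash round-trips back
economic_cost = cogs                   # real resources the vendor gave up

print(f"stock on books:   ${stock_held}")
print(f"net cash change:  ${net_cash}")       # $0 -- the cash came back
print(f"real cost borne:  ${economic_cost}")  # $25 of silicon for $100 of paper
print(f"revenue booked:   ${revenue}")        # and it still counts as sales
```

The catch, of course, is opportunity cost: if the vendor is capacity-constrained, that $25 of silicon could have been sold elsewhere for $100 cash, so the vendor has effectively taken stock in lieu of cash.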
> hence I end up with $100 worth of stock in your company and it only cost me $25.
You also lost out on $75 worth of cash revenue (opportunity cost from selling the same thing to a different customer), so really you just took stock in lieu of cash.
It'd be different if Nvidia (TSMC) had excess production capacity, but afaik they're capped out.
So it's really just whether they'd be selling them to OpenAI and getting equity in return or selling to customers and getting cash in return.
If OpenAI thinks their own stock is valued above fundamentals, it's a no brainer to try and buy Nvidia hardware with stock.
> I give you $100 cash and you give me $100 worth of stock in return. Now you give me $100 cash to buy something from me that cost me $80 to produce. I end up with $100 worth of stock in your company which cost me only $80. No?
Sure, but how's that a cheat code? If you normally sell something for $100 that costs $80 to make, and then use that $100 revenue to buy $100 of stock, this is an identical outcome for you.
If they couldn't borrow $100, or get $100 from any other investor, that just puts you in the position of being an investor, and even then the difference between bradfa's version and mine is simply when you became an investor, not that you became one.
Again, this is not a cheat code: if you sell $80 of cost for $100 of stock, the stock you now own can go up or down, and if you overvalued it then down is the more likely direction.
The primary cheat code here would actually seem to be (a) getting preferential access to Nvidia's production through these deals and (b) creating a paper story of increasing OpenAI private valuation.
Aaaannd I get to claim the $100 as revenue to show investors that the company is performing better than if I had not made the deal, which also means demand for the product stays inflated, which means I can keep my margins higher by not needing to discount my product.
They urgently need an IPO so the losers can chip in. If the sandcastle collapses before then, funds and other AI companies lose a lot, so better to bet again and again, even if it's nonsensical.
> Let's say I give you $100 on the condition that you buy $100 worth of product from me. And let's say that product cost me $80 to produce. Isn't that basically the same as me giving you $80?
Why limit myself to $100 for a product that costs $80? I could just as well give you $1,000,000 to buy this same product from me. That way, I have a $1,000,000 share of your company and $1,000,000 in revenue, and it only cost me $80.
This distorts the market for the product we're trading, and distorts the share price for both my company and yours.
> Isn't that basically the same as me giving you $80?
In your accounting, you can claim that you have an investment worth $100 and book $100 worth of revenue. You're juicing your sales numbers to impress shareholders - presumably, without your $100, the investee wouldn't have bought $100 worth of your product. The last thing your shareholders want to see are your sales numbers stop growing, or heaven forbid, start shrinking.
Nvidia is not the first company to "buy" sales of its own product via simple or convoluted incentive schemes. The scheme will work for a while until it doesn't.
Competition laws make this kind of arrangement illegal, so you would have to exert influence and have the investee pretend you just happened to be picked over competitors.
In any case, the SEC will be focused on whether the filings were made up to defraud investors, so they could reject the IPO of the investee. Your own entity is also at risk.
We all know MS gets away with it; they have good legal goons who find ways to make all of it appear fair with regard to the law.
How I see it is the companies want to jack their revenue and in turn jack the price of their stock and please shareholders. Those are the two main goals which this accomplishes, regardless of the underlying fundamentals.
The reason this doesn't make sense is that this is the math of monopoly creation! The government should be making sure companies don't go around throwing money at circular deals that will make them and their friends a fortune while cornering the market, but it seems that capitalism rules don't exist anymore in the US.
I'm not a finance expert, but it may be because investments and purchases are taxed differently (I don't know). You gave $100 away as an investment and got $100 back as revenue. Meanwhile you establish that your product is worth $100 (while costing $80) and you hold $100 worth of shares. Setting side effects aside, you gave away $80 worth of product for $100 of (supposed) share value. But shares are subject to side effects, and those side effects can be quite nice (making the news, establishing price, ...).
The issue is that there's no organic force behind those changes and it makes everything hollow. You could create a market inside a deserted area and make it appear like a metropolis.
> I don't understand how this is some kind of cheat code. Let's say I give you $100 on the condition that you buy $100 worth of product from me. And let's say that product cost me $80 to produce. Isn't that basically the same as me giving you $80? I don't see at all how that's me "basically getting that investment back".
What if the product only costs you $20 to produce?
Comparing OpenAI and WeWork is a nonsensical perspective. OpenAI is shipping the most revolutionary product in a generation, with 800 million monthly active users. It's the fastest revenue ramp ever, at incredible scale -- $20B+ ARR. These are real fundamentals. They matter. And the cost of inference is coming down all the time.
WeWork was a short-term/long-term lease arbitrage business. The two are nothing alike.
It used to be revolutionary, but now there is a huge difference: plenty of competition, and a growing number of high-quality models that can run offline (for free!) or cheaper (Gemini-Flash for example).
They are in some ways the Nokia of AI - "we have the distribution, the product will sell" - but that's not enough if innovation is weak.
They are even lagging behind (GPT-5 is a weaker coder than Claude, Sora is a toy compared to Seedance 2.0, etc).
Once Apple releases the AIPhone, running offline models, with 32 GB of unified memory and optional cloud requests, it's going to be super tough for OpenAI.
How will they make money on their product exactly? To the tune of being worth nearly a trillion dollars? There is no guarantee that inference costs will go down; we’ve seen some improvement with cheap models, but they aren’t what people want, and otherwise models stay expensive to run and use.
So what. In a highly competitive industry they can't keep selling inference unless they continually train better models. It's like saying my airline is profitable if you don't count the cost of buying new airplanes.
OpenAI have made this claim, and maybe it holds for API pay-per-use (though there's good evidence even that isn't profitable if you dive into how much a rack of B200s costs to operate), but I'd be very sceptical that the free, $20, or $200 a month plans are profitable.
Then the questions are whether the market will bear the real cost and, if so, how competitive OpenAI is with Google when Google can do what Microsoft did to Netscape and subsidize inference for far longer than OpenAI can.
The only reason to draw this comparison is to show SoftBank is not as competent as they'd like to appear - so putting their name alongside OpenAI's investors does not strengthen the case for OpenAI.
It’s one of the worst takes I’ve heard. OpenAI creates the fastest growing app ever, spawns a revolution bigger than the internet, and this guy's take is that they are like WeWork…
Both can be true. Just because you've created a revolutionary product doesn't mean it's a viable business, let alone one worth $700+ billion. There is a lot of history of the first movers that created revolutionary products that eventually faded away into nothing, while others capitalized on the innovation.
They don't need to reach AGI. They just need to put all of the engineers on HN out of work.
A year ago I would have said that was crazy. In the last month, I've been using Claude Code to write 20kloc of Rust code every day (and I review all of it).
A week is now a day. If that figure doubles, I have no idea what will happen to us. And I think it's coming.
So you now have 400Kloc of Rust code? Doing what? How much of that is "new"?
I can't get Augment / Opus 4.5 to edit a few C++ files from within VSCode without it going off on a wild goose chase or getting stuck in an infinite loop after I tell it what it should be doing: "oh, you're right, I need to do X", "To do X, I must understand how to do Y", "I see now that to do Y, I should look at Z", "Let me look at Z", followed by: "oh, you're right, I need to do X"...
To do what, exactly, and are people paying you for your output or are you just making things for yourself?
Building things at a mature company with a market is a lot different than hacking together your own tools. There are a lot more people you can let down at scale.
> They just need to put all of the engineers on HN out of work.
I think you've crossed the line from being an AI maxi to just rage baiting. This comment is a pointless anecdote at best, please take your ridiculous FOMO takes elsewhere.
That’s the same definition of reviewing code as saying watching the movie is the same as reading the book it’s based on.
No human has ever reviewed 600k lines of code in a month, ever. It’s hard to find someone who can even read and understand that amount in that time.
I’m convinced these “guys you gotta believe me I’m a seasoned veteran and this shit is the real deal” posts that show up in every AI thread are either coming from Sam Altman or a bot.
I'd be interested in seeing how exactly the lawyers figured out how to define AGI. It must be a fairly mundane set of KPIs that they just arbitrarily call AGI; the term will probably devalue significantly in the coming years.
The actual quote is this though:
> hitting an AGI milestone or pursuing an IPO
So it seems softer than actually achieving AGI or finalising an IPO.
Has OpenAI laid out the specific definition of what an AGI is for this case? The one from their mission is quite vague and the general community has nothing close to a universal common definition... which means they will most likely just define it as what they already have when the timing is right.
At least in their Microsoft contract it means $100 billion in profit, though they don't need to have actually made that money, they just need to show they're on track to do so.
I'd assume the real trigger here is "reaching AGI," which would help OpenAI shrug off some of their Microsoft commitments thus making OpenAI models available on Amazon Bedrock. Which is what Amazon is really after.
Very convenient to put "AGI" in all these agreements because the term is fundamentally undefinable. So throw out whatever numbers you want and fight about it and backtrack later.
The problem with AGI is not that it's undefinable, but that everyone has a different one. Kinda like consciousness in that regard.
Fortunately, OpenAI already wrote theirs down. Well, Microsoft[0] says they did, anyway. Some people claimed it was a secret only a few years ago, and since then LLMs have made it much harder to tell the difference between leaks and hallucinated news, but there's at least a claim of a leak[1].
At least investors like Amazon can afford to lose their investment ($50 billion). That would be like a normal person losing a few thousand dollars. It hurts, but life would go on.
That’s still $100B unaccounted for, and I’m pretty sure Amazon would expect fair treatment if other investors get a bailout. More likely, OpenAI is the one to receive the bailout, likely at the behest of the bigger investors, Amazon included.
So let's see if I understood this one:
They got $110 billion with the promise that either AGI will happen soon (:) or they go public before the end of the year.
Either way you get to double your $110 billion no matter what (and who will be left to pay the full bill afterward - the public, or the public?).
Very interesting. I will follow it closely, mostly to see how you ROI $110 billion in a couple of years.
> Today we’re announcing $110B in new investment at a $730B pre-money valuation. This includes $30B from SoftBank, $30B from NVIDIA, and $50B from Amazon.
e.g. it talks about running NVIDIA's systems (?) on AWS
> NVIDIA has long been one of our most important partners, and their chips are the foundation of AI computing. We are grateful for their continued trust in us, and excited to run their systems in AWS. Their upcoming generations should be great.
Probably something like NVLink Fusion. AWS has been doing deals with suppliers for which the smallest unit of deployable compute is a 44U rack (e.g. Oracle), so this is more of the same.
Use up these freebies/relatively cheap tools 'whilst stocks last'.
I personally managed to create a very high-quality marketing promo vid using Grok, after spending weeks enduring a lot of pain. But I saved myself tens of thousands.
I took advantage of 30 Grok premium subscriptions given to me via a free trial. There's no doubt the cost of the services I used is in the tens of thousands.
But what do I care? I get what I want and then I get out before the freebies disappear.
LOL at the crybabies down-voting. Get mad bruh, get mad.
Without the circular investments and valuations, what would OpenAI be worth? $100B? $300B? On revenue alone it seems like $20B. The current valuation appears to be two orders of magnitude off.
>Without the circular investments and valuations, what would OpenAI be worth? $100B? $300B? On revenue alone it seems like $20B. The current valuation appears to be two orders of magnitude off.
They just passed $20B in revenue; you can't really expect a company with this much hype and traction to have a 1x multiple. That's not to say a 35x multiple makes sense either.
I don't know that OpenAI specifically is the weak link, but this definitely adds to the argument that the entire sector is awash with the same three or four companies passing around the same $50B over and over. OpenAI is just the link that seems most likely to break first.
I've seen this sentiment (OpenAI collapse imminent) a lot on Youtube and Reddit, but it somehow evaded me on here
Comments doubting OpenAI's long-term viability I've seen plenty of here. But that's not the same as people predicting one of the hottest companies right now will suddenly run out of cash all on its own.
Its hottest service by far is completely free, the vast majority of users of its free service aren't converting to users of its paid services (and often stop using the free service too because they were just tourists seeing what all the fuss was about, or they were compelled to use the free service by their employer), and its data center plans are an impossible money pit.
The fact it's become a household name internationally (giving it the appearance of success) can't save it from spending dramatically more money than it makes. It's been coasting on investments, but it's not even close to being actually profitable.
Huge or well-known companies have collapsed before, even though - because people become so used to them existing - it never quite feels like it will actually happen until it does.
If nobody invested in OpenAI how long could they keep the lights on? They're not profitable yet, and a lot of the wealth that Sam Altman seems to be making revolves around strange circular deals.
By comparison, Anthropic is projected to break even in 2028. Google's Gemini is already profitable.
What source do you have that Gemini is profitable? Are you referring only to the chat app, or to Google's AI Ventures division? Or including Google Cloud AI-related revenue?
Not agreeing with the parent, but that hardly matters. Google has a real business, advertising, that brings in $400 billion a year and income around $150B. They can afford to throw away tens of billions every year while still remaining immensely profitable and quite solid as a business. OpenAI has no such income to spend so it's as the above comments reflect, entirely unsustainable while Google's spending on AI is a drop in the bucket for them.
I didn't really realize how big Gemini was until I saw that Qualia was using it; they apparently used 0.01% of Gemini's total tokens (100 billion) in about 3 months. They're in production with the title and escrow industry, so that's a great deal of data going through Gemini. Unlike some chat subscription, this is all API-driven, which I doubt Google is charging for at a loss.
This does not at all tell us Gemini is profitable or driving 15% of Google's profits. The article does not mention profits even once. It then goes on to bizarrely compare Gemini's monthly active users to OpenAI's weekly active ones.
Indeed, that article doesn't support a single part of that claim.
It kinda feels like an LLM-generated article that another LLM picked as a "citation", and then no human bothered to check if it actually said what the LLM said it did.
And, really, advergroup.com? Who cites an advertising agency as if it's a reliable source?
"AdverGroup Web Design and Creative Media Solutions is a full service advertising agency that delivers digital marketing services. We manage Google Ad Word campaigns and/or Meta Ad Campaigns for local clients in Chicago, Las Vegas and their surrounding suburbs."
So credible a resource on Gemini's performance/profitability... /sarc
But yeah, it doesn't actually say anything about profits, let alone attribute any specific percentage of profits to Gemini. It's just vague marketing copy.
The title and escrow industry is using Gemini (via Qualia Clear) enough that Qualia accounts for 100 billion tokens of usage in about 3 months. Just because you don't see who is using it, and how, doesn't mean that when the dust settles, the people actually using AI for real purposes won't keep using it. I'm not sure which AI models big pharma is using, but there's already at least one new pharmaceutical drug in the secondary testing phase showing strong results.
There will definitely be room for AI. OpenAI is just not really showing that they care about a particular business model - probably a strong indicator that Sam Altman is the worst person to lead that company. Anthropic will be profitable before OpenAI ever will be.
Gemini is in the green in terms of spending/income ratio, FYI. I'm not talking about stocks.
Maybe you should get your news from a different source. Personally, I prefer raw sources: I watch every official press briefing to hear it from the horse's mouth. You come to find that regardless of who is president, news orgs put their own spin on it, and you miss things they don't cover. It's all streamed on official government accounts.
Lmao, press briefings from the office of the Führer are such a solid source to base your reality on.
By the way, if Kamala, Biden, or Newsom were in office I'd also call them Führer.
We live in a technocratic authoritarian state: the world's largest prison population, the most police executions, actively sponsoring multiple genocides, and we've killed over a million civilians in the Middle East in two decades.
Our politicians on both sides will go out of their way to protect pedophilic members of the ruling class...
But you want to tell us we're exaggerating or interpreting a reality that doesn't exist. I think you're the one who's been convinced by the regime's doublespeak that everything's alright.
Please reevaluate. The US government is literally the 4th Reich, actively committing holocausts on multiple fronts.
Do you know any history? You dishonor the people who died from horrible atrocities in WWII to make some glib performative political posturing. It's shameful behavior. Do better. Be better.
WWII didn’t start overnight. The Sturmabteilung (SA), also known as “The Brownshirts,” have a strong similarity to what we’re seeing with ICE and CBP. The SA were Hitler’s enforcers before the SS, during the 1920s and early 1930s. They were eventually usurped by the SS during “Night of the Long Knives” where SA leadership were executed by the SS. Largely because Hitler had felt threatened by the power Ernst Röhm had amassed (among other reasons). And the SA, like ICE, was made up largely of untrained sycophants and thugs who enjoy violence. They committed violence, harassed citizens, and had no consequences for doing so. They were also instrumental in laying the foundation for the genocide and atrocities committed by the Nazi party.
It’s not a dishonor to their memories, or the atrocities committed, to call that out. It is not a dishonor to say there are stark and real similarities between the way the US is operating and treating civilians.
I personally find the opposite: IMHO it dishonors their memories to refuse to acknowledge the similarities.
I’ve posted a comment similar to this one here before, and like how I ended it. I strongly encourage you to read about the history of Nazi Germany and how it came to happen. It wasn’t a jump from zero to death camps; it was 15 years in the making. That history is as deeply shocking as it is depressing, because the parallels and timelines between it and the US are too similar for anything besides outright discomfort, sadness, and fear. But without knowing it, we are ever more likely to repeat it.
One final thing to note: the US has a history of extreme violence; slave patrols and the treatment of non-whites in the 19th century were an inspiration for Hitler.
You'll always find someone claiming X or Y are close to collapse at any given time. As even a broken clock is right twice a day, eventually one of these predictions will randomly be proven correct. That person will then be elevated to a genius forecaster and rake in cash for a decade or two.
Actually it is the other way around: every upstart claims that their invention is the mostest revolutionariest thing ever. 99.9% of them are not. The naysayers are right most of the time.
Recent high-profile examples include the Segway, NFTs, crypto as a whole, pre-transformer voice assistants, and various "Design Thinking" projects like those Amazon Dash buttons.
Free ChatGPT chat has made the company a household name, and helped it to persuade investors, but every single one of those free users costs the company money. Most of those free users have proved unwilling to convert to paid users, and adding ads to the free service promises to send it into the same enshittification death spiral so many other companies have fallen into.
Also, how on Earth would your grandma and parents not have heard of crypto? Crypto is frequently front page news, even in print newspapers. There have been crypto superbowl ads. Are they living under a rock?
I don't think they are going to collapse. But it was only a couple of years ago that many people thought OpenAI had a big (some thought insurmountable) lead in a race to dominate a winner-take-all market. Some people did correctly state that OpenAI had no moat in those days, so credit where it's due.
Now it's looking like a competitive bloodbath where ever-increasing levels of investment are needed just to maintain market position. Their frontier models are SOTA for 4 weeks before a competitor comes and takes the crown. They are standing on much shakier ground than they were 2 years ago.
A competitive bloodbath plus OpenAI has investment valuing it like it will achieve agi rather than (merely) being a huge advancement in computing, but not a fundamental rewriting of how all work is done.
The $30B investment from Nvidia is in place of a previously announced $100B investment, so it's not like this is an entirely good-news story for OpenAI.
How much revenue have they generated? How about profit?
If investors keep throwing obscene money at OpenAI, sure, they can stay afloat forever. Can't argue with that. But if we're talking about a sustainable business, I still don't see it.
Selling Shovels is quite lucrative whether there is an actual mining business or just a gold rush.
At some point Jensen Huang will be out (retired, or forced out by stagnating sales) and can definitely look back on a very successful career. That much is certain.
Nobody saw the huge demand for coding agents coming. Not even OpenAI or Anthropic themselves. Those were side projects just a year ago, and now they dominate token demand. And they keep rising.
Oh, I do think they saw it coming; considering how good the agents are, they've probably been a tuning focus for a while.
The signal the agent usage is sending, though, is that Anthropic is way ahead, since all we hear about is Claude these days despite OpenAI spending so much more money. Anthropic is also out trialling vending machines, etc.
ChatGPT, apart from generating text, was a bit of a query/research tool, but now that Google has their AI search augmentation shit somewhat together, I'm not feeling much need for ChatGPT as a research partner.
So now the big question is, with coding and search niches curtailed, where will OpenAI be able to generate profits from to justify their insane spending?
There's this saying that if you owe the bank a million dollars, you have a big problem, but if you owe the bank 100 million dollars, the bank has a big problem.
Is the same thing true for corporations? At some point the numbers are so wild the entire economy must help you succeed? I don't mean "too big to fail" exactly, more like "so big eventual success is guaranteed at all costs"
Those are the same thing. The whole point of saying "too big to fail" is to evoke the moment in the housing crash when governments largely threw most of their citizens under the bus by bailing out banks, rather than homeowners, for the banks' wildly irresponsible decisions. "Too big to fail" means the government steps in and bails you out, and that phrase became popular because, for many, it was the final nail in the coffin for their trust in government.
They would give OpenAI anything they want if they proclaim the current guy the bestest and biggliest president of all time, ever. (edit: I meant, if chatgpt were to consistently claim that the current guy is the greatest president ever)
I wonder if there is "too big for IPO". Saudi Aramco in 2019 sold shares worth $25.6 billion in its IPO. Even offering just 5% of OpenAI to the public would shatter that record. Well, unless the public isn't actually interested in investing such huge amounts.
What would really help is knowing the details of such funding. The hierarchy of who gets paid first in the event of going under is very illuminating, and while I am not a banker, I always wonder if there are caveats too complicated even for the large investors to understand.
> We continue to have a great relationship with Microsoft. Our stateless API will remain exclusive to Azure, and we will build out much more capacity with them.
This sounds a bit like, going forward, (some) OpenAI APIs will also run on platforms other than Azure (e.g. AWS)?
OpenAI desperately needs to be available outside Azure. We are exclusively using Anthropic atm because it is what is available in AWS Bedrock and it works. These things are solidifying fast.
Interesting story for sure (to be clear, I'm not talking about the writing by Reuters), but would you buy or skip the OpenAI IPO?
To me it feels like one of those throw some play money into it and see what happens sort of situations. Expect it will return negative due to the raw financials and outlook, but small chance the brand carries enough weight with the public that it spikes.
Nvidia will get all that money back via GPU purchases, Amazon via cloud rental and SoftBank is being typical SoftBank - a rich but not particularly bright kid in a class :) .
"I give you $30 billion if you use it to buy $30 billion of stuff from me" doesn't sound like a very good investment. Is Nvidia expecting more back than it puts in? Enough more to make the deal profitable?
"I give you 30B$ worth of hardware that costs me <10B$ to make in exchange for 30B$ worth of shares in your company" would be a more accurate description.
Well, I won't pretend I know the answer :) . But I assume that a) they are partially betting on making a normal return on investment (i.e. OAI not crashing), b) they profit from running a huge expense/revenue cycle (a company making, say, a million in profit on a billion in revenue is valued more favorably than the same profit on only ten million in revenue), and c) even if it all goes wrong, it is still better to get back most of the investment with zero profit than to risk losing it all like SoftBank or other investors.
In the end it's exchanging GPUs for OpenAI shares. It's not a free trade: in the current market Nvidia could readily sell the hardware for cash, so the marginal cost is very much positive.
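The round trip described in this subthread can be put into numbers. A minimal sketch, where all figures ($30B invested, less than $10B manufacturing cost) are the commenters' assumptions, not reported financials:

```python
# Vendor-financing round trip, per the thread's (unverified) figures.
investment   = 30e9  # cash Nvidia puts into OpenAI
gpu_purchase = 30e9  # OpenAI spends it on Nvidia hardware
cogs         = 10e9  # assumed cost for Nvidia to build that hardware ("<10B$")

# Nvidia's net cash outlay: the investment comes back as revenue,
# so the real cost of the equity stake is roughly the hardware's build cost.
net_cash_out = investment - gpu_purchase + cogs  # 10e9
```

Under these assumptions Nvidia effectively pays about $10B in real resources for $30B of OpenAI equity, which is why the deal can make sense for Nvidia even if it looks circular.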
Does anyone have any ethical concerns about using OpenAI, given money donated to the current US administration in one way or another? I will search for more accurate details about that situation. I know about several other ethical concerns people have with OpenAI: copyright and other considerations regarding the work being trained on, lack of action regarding users harmed by their use of the product (often involving mental health), environmental concerns, and quite a few others. But I am interested in whether many people think their political donations are an issue or not.
> The Information had previously reported that $35 billion of Amazon’s investment could be contingent on the company either achieving AGI or making its IPO by the end of the year. OpenAI’s announcement confirms the funding split, but says only that the additional $35 billion will arrive “in the coming months when certain conditions are met.”
So basically, Amazon is buying into the IPO at an early price. Maybe this is the time to divest from MSCI world. I don’t want to be the bag holder in the world’s largest pump and dump.
It can both be true at the same time: That AI is going to disrupt our world and that Open AI does not have a business model that supports its valuation.
yea, proving my point that index funds are maybe not the safest place if you want to invest in real value. And soon, Twitter/Grok/SpaceX might be doing an IPO
It's this kind of dynamic that makes me pull back on my otherwise pretty AI-forward stance. There's an entire community of people who passionately believe it's obvious and undeniable that Elon Musk has solved problems that he has not solved and his companies deliver things they don't deliver. Tesla is absolutely unambiguous in their marketing material (https://www.tesla.com/fsd) that they do not have autonomous driving, but you're far from the first person I've encountered who's been tricked into believing otherwise.
I don't think that's my relationship with AI, I'm hardly an uncritical booster. But would I know if it was?
Did it ever occur to you that an entire generation of developers is going to retire in less than 20 years? They are betting that the software industry will be autonomous. Really, think of our industry like the autonomous-vehicle phenomenon: we're the drivers about to be shown the door. That's the bet.
World will still need software, lots of it. Their valuation is based on an entire developer-less future world (no labor costs).
Even the rise of high-level languages did not lead to a "developer-less future". What it did was improve productivity and make software cheaper by orders of magnitude; but compiler vendors did not benefit all that much from the shift.
OpenAI has all the name recognition (which is worth a couple billion in itself), but when it comes to actual business use cases in the here and now Anthropic seems ahead. Even more so if we are talking about software dev. But they are valued at less than half of OpenAI's valuation
What is somewhat justifying OpenAI's valuation is that they are still trying for AGI. They are not just working on models that work here and now, they are still approaching "simulating worlds" from all kinds of angles (vision, image generation, video generation, world generation), presumably in hopes that this will at some point coalesce in a model with much better understanding of our world and its agency in it. If this comes to pass OpenAI's value is near unlimited. If it doesn't, its value is at best half what it is today
> What is somewhat justifying OpenAI's valuation is that they are still trying for AGI.
And that's the dealbreaker for me since they've been so adamant on scaling taking them there, while we're all seeing how it's been diminishing returns for a while.
I was worried a few years back with the overwhelming buzz, but my 2017 blogpost is still holding strong. To be fair, it did point to ASI, where valuation is indeed unlimited; nowadays the definition of AGI is quite weakened in comparison. But does that then convey an unlimited valuation?
Obligatory reminder that today's so called "AGI" has trouble figuring out whether I should walk or drive to the car wash in order to get my dirty car washed. It has to think through the scenario step by step, whereas any human can instantly grok the right answer.
The idea/hope is that a video model would answer the car wash problem correctly. Those are exactly the kinds of issues you have to solve to avoid teleporting objects around in a video, so once we manage more than a couple of seconds of coherent video, we will have something that understands the real world much better than text-based models. Then we "just" have to somehow make a combined model that has this kind of understanding and can also write text and make tool calls.
Yes, this is kind of like Tesla promising full self driving in 2016
That problem went viral weeks ago, so it is no longer a valid test. At the time it consistently tripped up all the SOTA models at least 50% of the time (you also have to use a sample > 1, given the huge variation between attempts even with the exact same wording).
The large hosted model providers always "fix" these issues as best as they can after they become popular. It's a consistent pattern repeated many times now, benefitting from this exact scenario seemingly "debunking" it well after the fact. Often the original behavior can be replicated after finding sufficient distance of modified wording/numbers/etc from the original prompt.
For example, I just asked ChatGPT "The boat wash is 50 meters down the street. Should I drive, sail, or walk there to get my yacht detailed?" and it recommended walking. I'm sure with a tiny bit more effort, OpenAI could patch it to the point where it's a lot harder to confuse with this specific flavor of problem, but it doesn't alter the overall shape.
This question is obviously ambiguous. The context here on HN includes "questions LLMs are stupid about, I mention boat wash, clearly you should take the boat to the boat wash."
But this question posed to humans is plenty ambiguous because it doesn't specify whether you need to get to the boat or not, or whether the boat is already at the wash. ChatGPT Free Tier handles the ambiguity; note the finishing remark:
"If the boat wash is 50 meters down the street…
Drive? By the time you start the engine, you’re already there.
Sail? Unless there’s a canal running down your street, that’s going to be a very short and very awkward voyage.
Walk? You’ll be there in about 40 seconds.
The obvious winner is walk — unless this is a trick question and your yacht is currently parked in your living room.
If your yacht is already in the water and the wash is dock-accessible, then you’d idle it over. But if you’re just going there to arrange detailing, definitely walk."
I don't understand what occasional hiccups prove. The models can pass college acceptance tests in advanced educational topics better than 99% of the human population, and because they occasionally have a shortcoming, it means they're worse than humans somehow? Those edge cases are quickly going from 1% -> 0.01% too...
"any human can instantly grok the right answer."
When asking a human about general world knowledge, they don't have the generality to give good answers for 90% of it. Even on very basic questions like this, humans will trip up far more often than the frontier LLMs.
I just don't know how to engage with these criticisms anymore. Do you not see how increasingly convoluted the "simple question LLMs can't answer" bar has gotten since 2022? Do the human beings you know not have occasional brain farts where they recommend dumb things that don't make much sense?
I should note for epistemic honesty that I expected I would be able to come up with an example of a mistake I made recently that was clearly equally dumb, and now I don't have a response to offer because I can't actually come up with that example.
> If this comes to pass OpenAI's value is near unlimited.
How?
If we have AGI, we have a scenario where human knowledge-based value creation as we know it is suddenly worthless. It's not a stretch to imagine that human labor-based value creation wouldn't be far behind. Altman himself has said that it would break capitalism.
This isn't a value proposition for a business, it's an end of value proposition for society. The only people who find real value in that are people who spend far too much time online doing things like arguing about Roko's Basilisk - which is just Pascal's Wager with GPUs - and people who are so wealthy that they've been disconnected with real-world consequences.
The only reason anyone sees value in this is because the second group of people think it'll serve their self-concept as the best and brightest humanity has ever had to offer. They're confusing ego with ability to create economic value.
"End of human-based value creation" is tantamount to post-scarcity. It "breaks" capitalism because it supposedly obviates the resource allocation problem that the free-market economy is the answer to. It's what Karl Marx actually pointed to as his utopian "fully realized communism". Most people would think of that as a pipe dream, but if you actually think it's viable, why wouldn't you want it?
a) AI is going to replace a Bazillion-Dollar Industry and that
b) being an AI model provider does not allow capturing margins above 5% long-term
I am not saying that this is what will happen, but it's a plausible scenario. Without farmers we would all be dead, but that does not mean they capture monopoly rents on their assets.
Okay, I can understand investment from SoftBank, and maybe somewhat from Amazon (if they plan to use OpenAI's models), but investment from Nvidia, who will then sell OpenAI the GPUs with X% markup, doesn't make sense to me.
"Basically invented AI" by running on principles that Minsky wrote about in the 80s, and improvements Google developed in the early 10s, on bigger and bigger computers. But "Basically invented".
That's a pretty lofty valuation for a company that has yet to demonstrate code generation anywhere near Anthropic's models if they're leaning into the engineering angle.
"Calvinism makes pretty lofty claims for a religion who has yet to demonstrate soul salvation anywhere near Lutheranism if they're leaning into the reformation angle"
And they say it's not a bubble! We saw it with the Oracle deal: big announcement and then nothing. Same with Nvidia, and now the same thing is happening again. I hope this is a cash infusion and not some credit deal.
On a tangent, I remember companies like Slack triggering the unicorn craze. They said that it was just better to aim for a billion than some number like 900M or 1.2B, because psychologically, it meant more to employees, investors, and customers.
OpenAI is in that place where nobody really cares for these mind games. It's not very reliable. But it is useful enough to pay for. It's cheap enough to be an impulse purchase where some guy decides to just subscribe to ChatGPT because they're working on an important slide or sketching a logo.
Remember when it was a huge milestone when gigantic companies like Apple and Microsoft were striving to be the first $1T company backed with decades of building actual businesses with actual profit?
Feels like Nvidia getting in the game here might just put them at more risk. If things don't work out they'll be out their money and future sales and so on.
It is bad enough that AI sucked up so much investment money; an AI bubble collapse hitting the companies that do make profitable things would be even worse...
Our economy has turned into an ouroboros: a circle of snakes shitting in each other's mouths until they get so sick that we the taxpayers get the privilege of bailing them out. I'm really fucking excited to eat shit for the 3rd time in 18 years. Super pumped.
It’s already a joke to call the slop generators “AI”, so giving it another fake name won’t really make much of a difference any more. Nothing short of a miracle will be able to top the “creative marketing” we already have.
There is not a single OpenAI model in the top 10 on openrouter's ranking page. The market is saying something about the comparative value of OpenAI.
Edit: yes, it is true that many people do integrate directly with OpenAI. That doesn't negate the fact that Openrouter users are largely not using OpenAI.
Agreed it's not really good signal (many sampling biases) but user count is not relevant, most money is from heavy API users. 900M users with free or cheap subscription are nothing compared to even 10k heavy API users.
On the other hand, big users don't use openrouter. At $work we have our own routing logic.
1. openrouter is API usage. There is obviously consumer side
2. people often use openrouter for the sole purpose of using a unified chat completions API
3. OpenAI invented chat completions; if you use openrouter for chat completions often you can just switch your endpoint URL to point to the OAI endpoint to avoid the openrouter surcharge!
4. Hence anyone with large enough volume will very likely not use openrouter for OpenAI; there is an active incentive to take the easy route of changing the endpoint URL to OAI’s
The differentiating factor will be access to proprietary training data. Everyone can scrape the public web and use that to train an LLM. The frontier companies are spending a fortune to buy exclusive licenses to private data sources, and even hiring expert humans specifically to create new training data on priority topics.
> At what point are the models going to all be "good enough", with the differentiating factor being everything else, other than model ranking?
It's already come for vast swathes of industries.
Most organizations have already been able to operationalize what are essentially GPT-4 and GPT-5 wrappers for standard enterprise use cases such as network security (e.g. Horizon3) and internal knowledge discovery and synthesis (e.g. GleanAI back in 2024-25).
I agree, and most of my peers do as well. This is why most of us shifted to funding AI Applications startups back in 2023-24. Most of these players are still in stealth or aren't household names, but neither are ServiceNow, Salesforce, Palo Alto Networks, Wiz, or Snowflake.
Foundation models have reached a relative plateau, and much of the recent hype wasn't due to enhanced model performance but smart packaging on top of existing capabilities to solve business outcomes (e.g. OpenClaw, Anthropic's business suite, etc).
Most foundation model rounds are essentially growth equity rounds (not venture capital) to finance infra/DC buildouts to scale out delivery or custom ASICs to enhance operating margins.
This isn't a bad thing - it means AI in the colloquial definition has matured to the point that it has become reality.
- Amazon's $50B is only $15B, with the rest being "after certain conditions are met", whatever that means (probably an IPO, which isn't happening)
- The $30B each from SoftBank and Nvidia is paid in installments
So this is more a $35B fundraise, with a _promise_ of more, maybe, if conditions are met. Not _bad_, but yet more gaslighting from Mr Altman. Anyone reporting this as a closed fundraising deal is being disingenuous at best.
> - Amazon's $50B is only $15B, with the rest being "after certain conditions are met", whatever that means (probably an IPO, which isn't happening)
Startup funding is often given in increments depending on milestones being met. Most startups just don’t announce that it’s conditional.
For large funding rounds, nobody gets a check for the full amount at once.
The funding would not be conditional on an IPO because that wouldn’t make any sense. The IPO is the liquidity event for the investors and there’s no reason for a startup to take private investment money that only enters the company after IPO.
This is pretty standard. Usually the conditions are performance benchmarks, but may also include an IPO. Typically it's done in multiple tranches, e.g. 15B at the start, 5 more if you gain +500M users, 5 more if your profit exceeds X, and the rest at IPO (I'm oversimplifying).
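A milestone-gated tranche schedule like the one sketched above is easy to model. This is purely illustrative; the milestone names and amounts just echo the comment's made-up example (in billions):

```python
def funds_released(tranches, milestones_met):
    """Sum the tranche amounts whose gating milestone has been met."""
    return sum(amount for milestone, amount in tranches if milestone in milestones_met)

# Hypothetical schedule, not the actual deal terms
tranches = [
    ("signing",     15),  # paid upfront
    ("user_growth",  5),  # e.g. +500M users
    ("profit",       5),  # e.g. profit exceeds some threshold X
    ("ipo",          5),  # remainder at IPO
]

funds_released(tranches, {"signing"})                  # 15
funds_released(tranches, {"signing", "user_growth"})   # 20
```

The point is that a headline "$30B round" can mean $15B of committed cash plus a contract full of contingencies, which is why the announced totals and the actual cash flows differ.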
The conditions are either an IPO or achieving AGI. I’d be curious to know how the contract defines AGI. If I recall correctly, the OAI-Microsoft deal just defined it as “AI-shaped tech that can generate $100 billion in annual profits”, which I think is actually close to the correct answer, insofar as we will have AGI when the markets decide we have AGI and not when some set of philosophical criteria seem to be satisfied.
> If I recall correctly, the OAI-Microsoft deal just defined it as “AI-shaped tech that can generate $100 billion in annual profits”, which I think is actually close to the correct answer
So if they hit 100 billion annual then it's AGI but if Kellogg's launches “FrostedFlakes-GPT" and steals 30% of the market it's no longer AGI at 70 billion?
Tbf it's a reasonable question... I think it's a little tricky to pin down the equivalent of "kinetic energy" in purely economic terms, though you might look at the rate of flow of money as some analogy for the speed/energy of particles (speed of individual dollars changing hands). In that sense, the more frequent and larger these deals get, the hotter the market is. This is not a novel analogy.
Two economists were walking down the street when they spotted a giant dog turd on the ground.
One of them wanted to have some fun, so said to the other - "I'll give you $100 if you take a big bite of that turd".
His colleague figured $100 was a good chunk of cash, so did the deed. Feeling thoroughly humiliated, he pocketed the $100 and they carried on.
Further down the street they came upon another turd.
The angry economist now wanted revenge so made the same proposal back to his colleague, who also agreed and took a bite of the turd, earning back his $100.
Later one of them said to the other "you know, I can't help but feel we both ate shit for no reason."
His colleague replied, "What do you mean? We raised the national GDP by $200."
I did upvote, it's witty, but it's a bit of a misrepresentation of how the economy works.
In practice, people don't tend to pay people to eat shit without gain. You are paying people to help you. Money gaslights everyone into helping each other, the most selfish people become the most selfless.
Of course, real capitalism is much more complex and much uglier than this fantasy. When certain people end up with long-term control of large piles of money, the whole thing gets distorted. They get to make lots of money on interest without doing anything, and making other people eat more shit for scraps. That's the "capital" part of capitalism.
But the toy world-model that this joke is making fun of, is actually the one core positive aspect of capitalism and brings all the prosperity we have: tricking people into helping each other.
It’s not a craze. It’s a technology shift. Bitcoin and 3D printing were crazes. It’s like the move from analog photography to digital. I am telling you this as a very conservative person. Even for me it’s helpful.
3D printing is helpful too. The infrastructure created during the dot-com bubble of the late 1990s was also helpful. The UK is still profiting from the railway infrastructure created during the railway craze of the 1840s (https://en.wikipedia.org/wiki/Railway_Mania). The question is just how much of the valuation of AI companies is because they are useful and how much is speculation...
That's certainly a take; industry loves it. Sure, all that "everybody will print widgets at home instead of going to the store" stuff was never going to happen, but 3D printing is nonetheless here to stay.
It can be both a craze and a technology shift. AI isn't going away, it will transform some industries. But right now it's overhyped, overfunded and due a trip back to reality.
It most definitely COULD be a craze from the perspective of scope of investment, societal impact and timing. No one surfing the crest of this wave could be described as "conservative".
Personally at this point my combined AI spend is the most expensive recurring monthly subscription I have, and that’s even with my company also paying for the AI tools I use at work.
If it weren’t subsidized I would pay more. Wouldn’t be happy about it but I would do it.
At this stage in the game I don’t really understand where this skepticism of the value these tools provides comes from.
Actually it is not about this stage; it is about the sustainability of this when training data runs out and there is less and less human-generated content.
When training data runs out, their usefulness will diminish quickly. They will still be useful for searching documents etc., but I guess they are not even good at that now.
What bitcoin gave us essentially? Huge pump and dump schemes coordinated by big hands? Crypto investments which made 95% of investors poorer? What's left?
Maybe 0.01% of it was beneficial.
$730B pre-money for a company where each model is roughly 2x profitable on its own, but each next model costs 10x the last. The whole thing only works if scaling keeps delivering. The research (Sara Hooker et al.) is not encouraging on that front: compact models already outperform massive predecessors on downstream tasks, while scaling laws only reliably predict pre-training loss.
I wrote about both the per-model math and the scaling question:
(1) https://philippdubach.com/posts/ai-models-as-standalone-pls/
(2) https://philippdubach.com/posts/the-most-expensive-assumptio...
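The per-model arithmetic above can be made concrete. A minimal sketch, taking the comment's two stated assumptions at face value (each model earns 2x its own cost; each successive model costs 10x the last) with placeholder cost units:

```python
def model_economics(n_generations, first_cost=1.0, revenue_multiple=2.0, cost_growth=10.0):
    """Cumulative profit across generations vs. the cost of the NEXT model,
    under the comment's assumptions (illustrative, not real financials)."""
    cumulative_profit = 0.0
    cost = first_cost
    for _ in range(n_generations):
        cumulative_profit += cost * (revenue_multiple - 1.0)  # 2x revenue => 1x profit
        cost *= cost_growth
    return cumulative_profit, cost  # cost is now the next generation's price tag

profit, next_cost = model_economics(3)
# profit = 1 + 10 + 100 = 111 units, but the next model costs 1000 units:
# roughly 9x all profit earned to date.
```

Under these assumptions the company can never fund the next generation out of retained profits; the gap must come from outside capital every cycle, which is exactly why "the whole thing only works if scaling keeps delivering."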
> each model is roughly 2x profitable on its own, but each next model costs 10x the last. The whole thing only works if scaling keeps delivering.
This is a decent argument, but it's not the death knell you think.
Models are getting 99% more efficient every 3 years: combined with hardware and (mostly) software upgrades, you can get the same amount of output for 99% less power.
The number of applications where AI is already "good enough" keeps growing every day. If the cost goes down 99% every three years, it doesn't take long until you can make a ton of money on those applications.
If AI stopped progressing today, it would take probably a decade or longer for us to take full advantage of it. So there is tons of forward looking revenue that isn't counted yet.
For the foreseeable future, there are MANY MANY uses of models where a company would not want to host its own models and would be GLAD to pay a 4-5x cost for someone else to host the model and hardware for them.
I'm as bullish on OpenAI being "worth" $730B as I was on Snap being worth what it IPO'd for - which it's still down about 80% (AFTER inflation, or about ~95% adjusting for gold inflation).
But guess what - these are MINIMUM valuations based on 50-80% margins - i.e. they're really getting about ~$30B - the rest is market value of hardware and hosting. OpenAI could be worth 80% less, and they could still make a metric fuck-ton of money selling at IPO with a $1T+ market cap to speculative morons easily...
Realistically, very rich people with high risk tolerance are saying that they think OpenAI has a MINIMUM value of ~$100B. That seems very reasonable given the risk tolerance and wealth.
> 99% more efficient every 3 years
It's 2x efficiency, so I'd say 50% less power rather than a ridiculous 99% less.
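The two claims in this exchange diverge fast once compounded. A quick sketch, where both rates are the ones asserted in the thread rather than measured figures:

```python
def cost_fraction(years, three_year_factor):
    """Fraction of today's cost remaining after `years`, assuming cost per
    unit of output shrinks by a fixed factor every 3 years."""
    return three_year_factor ** (years / 3.0)

# Parent's claim: 99% cheaper every 3 years -> remaining factor 0.01
# Reply's claim:  2x efficiency per 3 years -> remaining factor 0.5
after_6y_parent = cost_fraction(6, 0.01)  # ~0.0001: 99.99% cheaper after 6 years
after_6y_reply  = cost_fraction(6, 0.5)   # 0.25:    75% cheaper after 6 years
```

Over 6 years the two assumptions differ by a factor of about 2500x in remaining cost, so which one is right matters enormously for the "good enough applications become profitable" argument.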
"If AI stopped progressing today, it would take probably a decade or longer for us to take full advantage of it."
AI stopped progressing, or LLMs? I really dislike people throwing the term AI around.
For the purposes of their argument, I don’t think the distinction matters.
> Models are getting 99% more efficient every 3 years
The LLM industry has only been around for about 4 years. Extrapolating trends from that is pretty naive.
We said all the same shit about VR, dude. Even had a global pandemic show up to boost everyone's interest in the key market of telepresence. Turns out the merry go round can stop abruptly.
Someone please explain how OpenAI is not Netscape 2026. They had first mover advantage but no network effect, no moat, and are racing to stay ahead of infinitely resourced incumbents.
I can’t. I think they are one viral TikTok away from the pendulum swinging to Chat Gemini, which for most people, the no cost version is perfectly adequate
How are ~1B active users not "moat"? Might have to pull out the "Haters gonna hate" like it's 2007
Are those users locked in, or are they treating the service like a commodity, easily swapped out when the price goes up to stop the hemorrhaging of money?
Google worked as a free service because their backend was cheap. AI models lack that same benefit. The business model seems to be missing a step 2.
Not GP, and not saying I agree with them, but it may be worth remembering that Netscape had 90% market share at one point. Active user count may not be the moat you imagine.
Adoption of web browsers was also much lower when Netscape was dominant; 90% market share is less meaningful if you're only 1% of the way to the potential market size. Peeling away users who talk to ChatGPT every day is very possible, but harder than getting someone who's never used an LLM before (but does use your OS, browser, phone...) to try yours first.
I think the even better analogy than browsers is search engines. There aren't any network effects or platform lock-in, but there is potential for a data flywheel, building a brand, and just getting users in the habit of using you. The results won't necessarily turn out the same - I think OpenAI's edge on results quality is a lot less than early Google over its competitors - but the shape of the competition is similar.
Switching is super easy and people are doing it.
There is no moat
Maybe! Switching search engines is also very easy, and the top story on the front page is someone no longer using Google, but we know in practice almost nobody does that. As technologists we're much more likely to switch and know people who would switch.
Same strategy as for search. Gemini is going to be shoveled down users' throats, and they just won't change the default.
On iOS with the Apple agreement, and on Android (though the question of hardware remains beyond Pixel phones).
How many of those users are paying? Where is the profit? How many users would be willing to use ChatGPT if they had to pay? Might have to pull out the questions like it's 2026.
But why are these users sticking with ChatGPT specifically, if not for the quality of its answers?
They'll stay as long as it's cheap. The moment any attempt is made to raise the price, the number will crater.
Maybe: “ok I’m lazy, the app is preinstalled on my phone and it’s free, there are some ads but ok”
Isn't that the 'bull case' for Gemini?
I have the same feeling. They have Gemma 3 being prepared for on-device use, and it's getting deployed on iPhone if I understand it right.
Then it can be something along the lines of "subscribe to Google XXX or Apple +++ and have 'unlimited' cloud requests"
Also when they start seeing real ads.
It started to get deployed: https://chatgpt.com/pricing/ it's called "ChatGPT Go"
You pay 8 USD/month and get higher limits and ads.
Remember when everyone said Facebook would be dead if they started running ads?
for 99% of normies ChatGPT is the only LLM provider they know or have heard of.
yeah, ~1B active users + when non-tech people think of AI, they think of "ChatGPT" not many of the competitors.
How do you think this compares to Google and the AI search?
Users are not a moat because there is no network effect here.
700 million and declining, with no clear story for leveraging either the attention economy or paying users.
Netscape didn't have ridiculously high overheads?
They are in bed with Microsoft, not against them. And Nadella is not the sharpest knife in the drawer, unlike Bill Gates.
IMO this looks largely like another circular investment. Amazon's investment is tied to OpenAI using AWS for their Frontier product and I assume Nvidia's conditions are that OpenAI continue buying hardware from them. Then there's SoftBank though given that those are the same guys that invested heavily in WeWork, I assume this is just very brash bullishness on their part.
From my perspective, I hope that OpenAI survives and can pull off their IPO, but I just have that nagging feeling in my gut that their IPO will be rejected in much the same way the WeWork IPO was.
On the one hand you can look at these companies investing and take it as a signal that there is something there (in OpenAI) that's worth investing in. On the other hand all these companies that are investing are basically getting that investment back through spending commitments and such and are just using OpenAI as a proxy for what is essentially buying more revenue for themselves.
When their IPO hits later this year I hope that it's the former case and there's actually some good underlying fundamentals to invest in. But based on everything I've read, my gut is telling me they will eventually implode under the weight of their business model and spending commitments.
This piece that was on HN yesterday corroborates your gut: https://www.ben-evans.com/benedictevans/2026/2/19/how-will-o...
The "circular investment" is mostly startup companies using their stock instead of cash to pay for server hardware and cloud computing. There are a few extra steps in between that make things look weird and convoluted, but the end result is really just big companies giving hardware and getting shares of AI companies in exchange for it.
I think you’re just describing how it’s circular.
It’s like Toys R Us not having enough money to pay Mattel for Barbie dolls and telling Mattel they can have partial ownership of the company if they just supply them with some more toys.
But the problem is that Toys R Us is spending $15, 20, or maybe even $50 (who knows?) to sell a $10 toy.
Toys R Us continues selling toys faster and faster despite a lack of profit, making Mattel even more dependent on Toys R Us as a customer. It blows up the bubble where a more natural course of action would be for Toys R Us to go bankrupt or scale back ambitions earlier.
Because it’s circular like this, it lends toward bigger crashing and burning. If OpenAI fails, all these investors that are deeply integrated into their supply chains lose both their investment and customer.
OK, so absolutely good faith here what is the end game?
Obviously, there’s a scenario of super power AI and then it’s a matter of continuing course. Electricity and silicon.
What if you are right, and the scaling doesn’t work. It is too much power, time, hardware to improve… does openAI fold?
Do they just actual use the models they have?
Does everyone just decide that AI didn’t work and go back 5 years like it didn’t happen?
Does the price change so that they have to be profitable making AI services expensive and rare instead of today where they are everywhere pointlessly?
Or does this insane valuation only make sense with information you don’t have like insider scaling or efficiency news?
Does China’s strategy of undercutting US value of models pay off bigly?
Why so extreme? Most likely just an AI winter for a while; then, when tech and society have caught up, the advancements begin again.
It's not like we threw away the dotcom advances, they were just put on hold for a while.
Nope wrong framing.
Nvidia is investing assets into OAI - it has to. Because OAI needs to become successful for Nvidia's story in the long-term to play out, to justify its current stock price.
You say calling it circular is the wrong framing and then immediately proceed to describe a circle.
It's not "continue" buying as much as this is NVIDIA fronting the money for (most of) the hardware OpenAI has already ordered from them. It's like borrowing rent money from your drug dealer.
Great analogy. ;-)
Doubt Jensen sees himself as a “dealer” but considering the vendor lock-in and margins, he pretty much is the Tony Montana of Ai Chips.
It’s nuts that this type of financing is legal.
It's like credit cards loaning money to people who are unemployed and will default on payments. It's a risky business that is legal and can be very profitable, but may also be disastrous in the future.
I don't see the problem as long as materially significant transactions by publicly traded companies are properly disclosed to investors. If someone loses money by buying NVDA then they have only themselves to blame.
It is legal because Jensen isn't selling drugs, payday loans are legal too!
It’s legal because both sides have armies of lawyers and are voluntarily entering into contracts where each party gets consideration.
How someone can compare the above situation to a person getting a payday loan to put a roof over their head or food on their plate is beyond me.
The “it’s like <insert wild and inappropriate analogy to stoke emotion>” is a tired trope.
Conversely it’s equity for an in-kind investment. Dave Choe taking the Facebook shares writ large.
> On the one hand you can look at these companies investing and take it as a signal that there is something there (in OpenAI) that's worth investing in. On the other hand all these companies that are investing are basically getting that investment back through spending commitments and such and are just using OpenAI as a proxy for what is essentially buying more revenue for themselves.
I don't understand how this is some kind of cheat code. Let's say I give you $100 on the condition that you buy $100 worth of product from me. And let's say that product cost me $80 to produce. Isn't that basically the same as me giving you $80? I don't see at all how that's me "basically getting that investment back".
I give you $100 cash and you give me $100 worth of stock in return. Now you give me $100 cash to buy something from me that cost me $80 to produce. I end up with $100 worth of stock in your company which cost me only $80. No?
NVIDIA gross margins lately are like 75%, so it's more like you give me $100 to buy something from me that cost me $25 to produce, hence I end up with $100 worth of stock in your company and it only cost me $25.
> hence I end up with $100 worth of stock in your company and it only cost me $25.
You also lost out on $75 worth of cash revenue (opportunity cost from selling the same thing to a different customer), so really you just took stock in lieu of cash.
It'd be different if Nvidia (TSMC) had excess production capacity, but afaik they're capped out.
So it's really just whether they'd be selling them to OpenAI and getting equity in return or selling to customers and getting cash in return.
If OpenAI thinks their own stock is valued above fundamentals, it's a no brainer to try and buy Nvidia hardware with stock.
> I give you $100 cash and you give me $100 worth of stock in return. Now you give me $100 cash to buy something from me that cost me $80 to produce. I end up with $100 worth of stock in your company which cost me only $80. No?
Sure, but how's that a cheat code? If you normally sell something for $100 that costs $80 to make, and then use that $100 revenue to buy $100 of stock, this is an identical outcome for you.
They wouldn’t have bought $100 worth of product if the deal weren’t offered, because they didn’t have $100 to spend.
If they couldn't borrow $100, or get $100 from any other investor, that just puts you in the position of being an investor, and even then the difference between bradfa's version and mine is simply when you became an investor, not that you became one.
Again, this is not a cheat code: if you sell $80 of cost for $100 of stock, the stock you now own can go up or down, and if you overvalued it then down is the more likely direction.
The primary cheat code here would actually seem to be (a) getting preferential access to Nvidia's production through these deals and (b) creating a paper story of increasing OpenAI private valuation.
Aaaannd I get to claim the $100 as revenue, to show investors the company is performing better than if I had not made the deal. That also means demand for the product stays inflated, which means I can keep my margins higher by not needing to discount my product.
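The back-and-forth in this subthread can be sketched in a few lines, using the thread's own (illustrative) numbers: a $100 stock-for-hardware deal at the ~75% gross margin quoted for NVIDIA. This is a hypothetical sketch of the accounting, not a claim about any actual filing.

```python
# Hypothetical sketch of the circular deal described above, using the
# thread's numbers: $100 invested for stock, spent back on product that
# costs $25 to produce (75% gross margin). All figures are illustrative.
investment = 100        # cash out, returned as $100 of (paper) stock
revenue_booked = 100    # investee spends that cash on your product
cogs = 25               # cost of goods sold at a 75% gross margin

net_cash = revenue_booked - investment - cogs  # real cash cost of the deal
stock_held = 100                               # paper value of equity received

print(f"net cash: {net_cash}, stock held: {stock_held}, revenue booked: {revenue_booked}")
# net cash: -25 -> $100 of paper stock (and $100 of booked revenue) for $25 of real cost
```

As the replies note, this ignores the opportunity cost of selling the same unit to a cash customer, which is where the "cheat code" framing gets contested.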
They urgently need an IPO so losers can chip in. If the sandcastle collapses before then, funds and other AI companies lose a lot, so better to bet again and again, even if this is nonsensical.
The problem is here:
> Let's say I give you $100 on the condition that you buy $100 worth of product from me. And let's say that product cost me $80 to produce. Isn't that basically the same as me giving you $80?
Why limit myself to $100 for a product that costs $80? I could just as well give you $1,000,000 to buy this same product from me. That way, I have a $1,000,000 share of your company and $1,000,000 in revenue, and it only cost me $80.
This distorts the market for the product we're trading, and distorts the share price for both my company and yours.
> Isn't that basically the same as me giving you $80?
In your accounting, you can claim that you have an investment worth $100 and book $100 worth of revenue. You're juicing your sales numbers to impress shareholders - presumably, without your $100, the investee wouldn't have bought $100 worth of your product. The last thing your shareholders want to see are your sales numbers stop growing, or heaven forbid, start shrinking.
Nvidia is not the first company to "buy" sales of its own product via simple or convoluted incentive schemes. The scheme will work for a while until it doesn't.
That's like giving them* $20.
And inflate your revenue by $80.
Competition law makes this kind of arrangement illegal, so you would have to exert influence and have the investee pretend you just happened to be picked among competitors.
In any case, the SEC will be focused on whether the filings are made up to defraud investors, so they could reject the investee's IPO. Your own entity is also at risk.
We all know MS gets away with it; they have good legal goons who find ways to make all of it appear fair with regards to the law.
In exchange for $100 of your stock AND making your revenue numbers look insane for the next cycle?
Also Nvidia margins are waaay higher than 20%
How I see it is the companies want to jack their revenue and in turn jack the price of their stock and please shareholders. Those are the two main goals which this accomplishes, regardless of the underlying fundamentals.
For both Amazon and Nvidia, their marginal costs are probably much lower than their fixed costs.
The reason this doesn't make sense is that this is the math of monopoly creation! The government should be making sure companies don't go around throwing money at circular deals that will make them and their friends a fortune while cornering the market, but it seems that capitalism rules don't exist anymore in the US.
I'm not a finance expert, but it may be because investments and purchases are taxed differently (I don't know). You gave $100 away as an investment and got $100 back as revenue. Meanwhile you establish that your product is worth $100 (while costing $80) and you hold $100 worth of shares. Without considering side effects, you gave away $80 worth of product for $100 of (supposed) share value. But shares are subject to side effects, and those side effects can be quite nice (making the news, establishing a price, ...).
The issue is that there's no organic force behind those changes, which makes everything hollow. You could create a market inside a deserted area and make it appear like a metropolis.
> I don't understand how this is some kind of cheat code. Let's say I give you $100 on the condition that you buy $100 worth of product from me. And let's say that product cost me $80 to produce. Isn't that basically the same as me giving you $80? I don't see at all how that's me "basically getting that investment back".
What if the product only costs you $20 to produce?
Comparing OpenAI and WeWork is a nonsensical perspective. OpenAI is shipping the most revolutionary product in a generation, with 800 million monthly active users. It's the fastest revenue ramp ever, at incredible scale -- $20B+ ARR. These are real fundamentals. They matter. And the cost of inference is coming down all the time.
WeWork was a short-term/long-term lease arbitrage business. The two are nothing alike.
They had a first-mover advantage for sure.
It used to be revolutionary, but now there is a huge difference: plenty of competition, and a growing number of high-quality models that can run offline (for free!) or cheaper (Gemini-Flash for example).
They are in some way the Nokia of AI, "we have the distribution, product will sell", but this is not enough if innovation is weak.
They are even lagging behind (GPT-5 is a weaker coder than Claude, Sora is a toy compared to Seedance 2.0, etc).
Once Apple releases the AIPhone, running offline models, with 32 GB of unified memory and optional cloud requests, then it's going to be super tough for OpenAI.
How will they make money on their product exactly? To the tune of being worth nearly a trillion dollars? There is no guarantee that inference will go down, we’ve seen some improvement with cheap models, but they aren’t what people want, and otherwise models stay expensive to run and use
Inference is already profitable (training is not)
So what. In a highly competitive industry they can't keep selling inference unless they continually train better models. It's like saying my airline is profitable if you don't count the cost of buying new airplanes.
[citation needed]
OpenAI have made this claim, and maybe it is true for API pay-per-use (there's also good evidence even that is not, if you dive into how much a rack of B200s costs to operate), but I'd be very sceptical that the free, $20, or $200 a month plans are profitable.
Then the questions are if the market will bear the real cost and if so how competitive OpenAI are with Google when Google can do what Microsoft did to Netscape and subsidize inference for far longer than OpenAI can.
The only reason to draw this comparison is to show SoftBank are not as competent as they'd like to appear. So putting their name among OAI's investors does not strengthen the case for OAI's prospects.
It’s one of the worst takes I’ve heard. OpenAI created the fastest-growing app ever, spawned a revolution bigger than the internet, and this guy's take is that they are like WeWork…
Both can be true. Just because you've created a revolutionary product doesn't mean it's a viable business, let alone one worth $700+ billion. There is a lot of history of the first movers that created revolutionary products that eventually faded away into nothing, while others capitalized on the innovation.
Being the first doesn’t mean you’ll win. They have no product, only a commodity that you can find at other companies or even for free (DeepSeek).
They have a product but it’s a commodity now.
They are in the business of selling compute / datacenter rack spaces. A server where you pay per GBs transferred in/out.
If it’s Gemini or GPT behind, for most use cases users wouldn’t care.
Will they maintain an edge over other AI companies long term? With so many market participants will it become a race to the bottom?
This valuation puts their price-to-revenue multiple around 40 (there are no earnings for a P/E).
Anthropic: $380B valuation on $13B ARR, a multiple around 30.
5 years ago Uber was in similar territory. Tesla... Well we won't mention Tesla.
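For what it's worth, the multiples quoted above check out against the thread's own figures — valuation divided by ARR, i.e. a revenue multiple rather than a true P/E:

```python
# Recomputing the multiples from the figures quoted in this thread
# ($840B post-money / $20B ARR for OpenAI; $380B / $13B for Anthropic).
def revenue_multiple(valuation_bn: float, arr_bn: float) -> float:
    """Valuation divided by annualized revenue, both in billions."""
    return valuation_bn / arr_bn

openai = revenue_multiple(840, 20)     # 42.0
anthropic = revenue_multiple(380, 13)  # ~29.2
print(round(openai, 1), round(anthropic, 1))
```

Neither company has positive earnings, so any "P/E" here is really a price-to-sales comparison.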
Nvidia sells the picks, AWS rents the mine, OpenAI digs, and the money just loops around the table...
I am expecting OpenAI stock to be the most volatile in history. The first 3-6 months will be fun.
How far the volatility ripples out will give us a real look into just how self-reinforced the financials truly are.
> Amazon will start with an initial $15 billion investment, followed by another $35 billion in the coming months when certain conditions are met.
Those conditions are an IPO or reaching AGI [1].
Nvidia and SoftBank will pay in installments.
Also very interesting that Microsoft decided to not invest in this round. A PR statement was made though [2].
[1] https://americanbazaaronline.com/2026/02/26/amazon-to-invest...
[2] https://openai.com/index/continuing-microsoft-partnership/
Once they "reach AGI", will they have a big party on a carrier with a "Mission Accomplished" banner?
They don't need to reach AGI. They just need to put all of the engineers on HN out of work.
A year ago I would have said that was crazy. In the last month, I've been using Claude Code to write 20kloc of Rust code every day (and I review all of it).
A week is now a day. If that figure doubles, I have no idea what will happen to us. And I think it's coming.
> write 20kloc of Rust code every day (and I review all of it)
Only one of these can be true. It's no shame to say you don't bother reviewing it; in the future that may well be the norm.
So you now have 400Kloc of Rust code? Doing what? How much of that is "new"?
I can't get Augment / Opus 4.5 to edit a few C++ files from within VSCode without going off on a wild goose chase or getting stuck in an infinite loop after I tell that it should be doing this: "oh, you're right, I need to do X", "To do X, I must understand how to do Y", "I see now that to do Y, I should look at at Z". "Let me look at Z", followed by: "oh, you're right, I need to do X"..
To do what, exactly, and are people paying you for your output or are you just making things for yourself?
Building things at a mature company with a market is a lot different than hacking together your own tools. There are a lot more people you can let down at scale.
> 20kloc of Rust code every day (and I review all of it).
Reviewing 1k lines of code an hour is a breakneck pace, are you spending 20 hours a day reviewing code?
It’s clearly code so flawless you can tell at a glance that it’s correct.
What does all this code do? What software are you writing?
> They just need to put all of the engineers on HN out of work.
I think you've crossed the line from being an AI maxi to just rage baiting. This comment is a pointless anecdote at best, please take your ridiculous FOMO takes elsewhere.
That’s the same definition of reviewing code as saying watching the movie is the same as reading the book it’s based on. No human has ever reviewed 600k lines of code in a month, ever. It’s hard to find someone who can even read and understand that amount in that time.
I’m convinced these “guys you gotta believe me I’m a seasoned veteran and this shit is the real deal” posts that show up in every AI thread are either coming from Sam Altman or a bot.
I'd be interested in seeing how exactly the lawyers figured out how to define AGI. It must be a fairly mundane set of KPIs that they just arbitrarily call AGI; the term will probably devalue significantly in the coming years.
The actual quote is this though:
> hitting an AGI milestone or pursuing an IPO
So it seems softer than actually achieving AGI or finalising an IPO.
Has OpenAI laid out the specific definition of what an AGI is for this case? The one from their mission is quite vague and the general community has nothing close to a universal common definition... which means they will most likely just define it as what they already have when the timing is right.
At least in their Microsoft contract it means $100 billion in profit, though they don't need to have actually made that money, they just need to show they're on track to do so.
I'd assume the real trigger here is "reaching AGI," which would help OpenAI shrug off some of their Microsoft commitments thus making OpenAI models available on Amazon Bedrock. Which is what Amazon is really after.
All the major investments in these big rounds have come in tranches, right?
Very convenient to put "AGI" in all these agreements because the term is fundamentally undefinable. So throw out whatever numbers you want and fight about it and backtrack later.
The definition used to be "passes the Turing test" .. until LLMs passed it.
The problem with AGI is not that it's undefinable, but that everyone has a different one. Kinda like consciousness in that regard.
Fortunately, OpenAI already wrote theirs down. Well, Microsoft[0] says they did, anyway. Some people claimed it was a secret only a few years ago, and since then LLMs have made it so much harder to tell the difference between leaks and hallucinated news saying this, but I can say there's at least a claim of a leak[1].
[0] https://blogs.microsoft.com/blog/2026/02/27/microsoft-and-op...
[1] It talks about it, but links to a paywalled site, so I still don't know what it is: https://techcrunch.com/2024/12/26/microsoft-and-openai-have-...
> fundamentally undefinable
Incredible, how an entire religion has sprung up around AGI.
So they’re getting in on the IPO.
Are they going to get stock for it or is it a PIPE?
Personally, I don’t think I want to get in on this at retail prices.
It can both be true at the same time that AI going to disrupt our world and that being an AI lab is a terrible business.
Hard not to hear the word “bailout” in my head when I see this many billions being tossed around.
At least investors like Amazon can afford to lose their investment ($50 billion). That would be like a normal person losing a few thousand dollars. It hurts, but life would go on.
That’s still $100B unaccounted for, and I’m pretty sure Amazon would expect fair treatment if other investors get a bailout. More likely, OpenAI is the one to receive the bailout, likely at the behest of the bigger investors, Amazon included.
So let's see if I understood this one: they got $110 billion with the promise that either AGI will happen soon (:) or they go public before the end of the year. Either way you get to double your $110 billion no matter what (who will be left to pay the full bill after it, the public or the public)?
Very interesting. I will follow it closely, mostly to see how you ROI $110 billion in a couple of years.
Depressing to see trillions sloshing around, and yet no jobs to be found.
Feels like my cheese has been moved
$110B at $840B post-money valuation for OpenAI vs
$30B at $380B post-money for Anthropic announced two weeks ago
This does not increase my confidence in OpenAI's future
Well Anthropic has said (a fairly weak but clear) no to DoW, I wonder who will say yes?
https://www.axios.com/2026/02/27/altman-openai-anthropic-pen...
> Sam Altman says OpenAI shares Anthropic's red lines in Pentagon fight
90% chance it's all PR but who knows
Never believe anything sam says
Never believe anything <insert tech ceo here> says
Sam is a very particular case here. This guy will say whatever it takes for status and "power".
This should probably change to https://openai.com/index/scaling-ai-for-everyone/ which has more details.
> Today we’re announcing $110B in new investment at a $730B pre-money valuation. This includes $30B from SoftBank, $30B from NVIDIA, and $50B from Amazon.
Thanks, I've added that link to the toptext as part of merging a bunch of these threads.
We try to avoid having corporate press releases as the top-level link, though of course there are exceptions sometimes.
The tweet storm has a bit more substance
e.g. it talks about running NVIDIA's systems (?) on AWS
> NVIDIA has long been one of our most important partners, and their chips are the foundation of AI computing. We are grateful for their continued trust in us, and excited to run their systems in AWS. Their upcoming generations should be great.
Probably something like NVLink Fusion. AWS has been doing deals with suppliers for which the smallest unit of deployable compute is a 44U rack (e.g. Oracle), so this is more of the same.
https://www.nvidia.com/en-us/data-center/nvlink-fusion/
Hopefully this will allow them to continue to provide me unreasonable amounts of compute for €20/month. Enjoying it while it lasts…
This right here is the right attitude.
Use these freebies/relatively cheap tools up 'whilst stocks last'.
I personally managed to create a very high quality marketing promo vid using grok. After spending weeks of enduring a lot of pain. But I saved myself tens of thousands.
I took advantage of 30 Grok premium subscriptions that were given to me via a free trial. There's no doubt the cost of services I took advantage of is in the tens of thousands.
But what do I care? I get what I want and then I get out before the freebies disappear.
LOL at the crybabies downvoting. Get mad bruh, get mad.
I feel the same. I can't believe the amount of shit I am throwing at Codex for a measly 20€.
Have you tried to cancel recently?
Might save you €20 next month.
Without circular investments and valuations, what would OpenAI be worth? $100B? $300B? On revenue alone it seems like $20B. The current valuation appears to be two orders of magnitude off.
Let the retail investors decide this year at IPO! The heavy bags must be carried by someone.
Is this really the con? Be part of the in-group and buy pre-IPO shares to dump them on joe-six-pack shortly afterwards?
Always has been? (Well, with pyramid schemes anyway.)
At least they don’t SPAC their way into the public market but the answer to your question is yes!
>Without circular investments and valuations what would Open AI be worth? 100B? 300B? Entirely on revenue alone it seems like 20B. Current valuation appears to be two orders of magnitude off.
They just passed $20B in revenue, you can't really expect a company with this much hype and traction to have a 1x multiple.. that's not to say a 35x multiple makes sense either.
HN told me OpenAI was on the verge of collapse.
I don't know that OpenAI specifically is the weak link, but this definitely adds to the argument that the entire sector is awash with the same three or four companies passing around the same $50B over and over. OpenAI is just the link that seems most likely to break first.
I've subscribed to a few AI tools for the last 3 years now. I'm someone who almost never subscribes to anything.
I'm sure that $50b has my money in there somewhere.
Yep same, I'd sooner starve than cut my Anthropic sub
I've seen this sentiment (OpenAI collapse imminent) a lot on Youtube and Reddit, but it somehow evaded me on here
Comments doubting OpenAI's long-term viability I've seen plenty of here. But that's not the same as predicting that one of the hottest companies right now will somehow suddenly run out of cash all on its own.
Its hottest service by far is completely free. The vast majority of users of its free service aren't converting to paid plans (and often stop using the free service too, because they were just tourists seeing what all the fuss was about, or were compelled to use it by their employer). And its data center plans are an impossible money pit.
The fact it's become a household name internationally (giving it the appearance of success) can't save it from spending dramatically more money than it makes. It's been coasting on investments, but it's not even close to being actually profitable.
Huge or well-known companies have collapsed before, even though - because people become so used to them existing - it never quite feels like it will actually happen until it does.
If nobody invested in OpenAI how long could they keep the lights on? They're not profitable yet, and a lot of the wealth that Sam Altman seems to be making revolves around strange circular deals.
By comparison, Anthropic is projected to break even in 2028. Google's Gemini is already profitable.
What source do you have that Gemini is profitable? Are you referring only to the chat app, or to Google's AI Ventures division? Or including Google Cloud AI-related revenue?
Not agreeing with the parent, but that hardly matters. Google has a real business, advertising, that brings in $400 billion a year and income around $150B. They can afford to throw away tens of billions every year while still remaining immensely profitable and quite solid as a business. OpenAI has no such income to spend so it's as the above comments reflect, entirely unsustainable while Google's spending on AI is a drop in the bucket for them.
That's not saying that Gemini is profitable though.
Interesting. I’m having trouble finding anything on Gemini being profitable though; do you happen to have a source?
Here's one, basically AI is driving 15% of Google's profits at the end of 2025.
https://advergroup.com/gemini-hits-650-million-users/
I didn't really realize how big Gemini was until I saw that Qualia was using it. They apparently used 0.01% of Gemini's total tokens (100 billion) in about 3 months, and they're in production in the title and escrow industry, so that's a great deal of data going through Gemini. Unlike some chat subscription, this is all API-driven, which I doubt Google is charging at a loss for.
https://www.qualia.com/qualia-clear/
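Taking the commenter's figures at face value (they are not verified here), the implied scale of Gemini's token volume is easy to back out:

```python
# Back-of-envelope check on the token figures quoted above: the
# commenter's numbers, taken at face value, not independently verified.
qualia_tokens = 100e9     # ~100 billion tokens in ~3 months
share_of_total = 0.0001   # 0.01% of Gemini's total over the same period

implied_total = qualia_tokens / share_of_total
print(f"{implied_total:.1e} tokens")  # 1.0e+15 tokens over ~3 months
```

That is, if both numbers are right, Gemini would have served on the order of a quadrillion tokens in the same window.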
Unlike OpenAI, Google has an actual business model, not just strange circular deals.
Edit: I miswrote "majority of" instead of 15% of Google's profits.
> Here's one, basically AI is driving 15% of Google's profits at the end of 2025. https://advergroup.com/gemini-hits-650-million-users/
This does not at all tell us Gemini is profitable or driving 15% of Google's profits. The article does not mention profits even once. It then goes on to bizarrely compare Gemini's monthly active users to OpenAI's weekly active ones.
Indeed, that article doesn't support a single part of that claim.
It kinda feels like an LLM-generated article that another LLM picked as a "citation", and then no human bothered to check if it actually said what the LLM said it did.
And, really, advergroup.com? Who cites an advertising agency as if it's a reliable source?
https://advergroup.com/digital-marketing/
"AdverGroup Web Design and Creative Media Solutions is a full service advertising agency that delivers digital marketing services. We manage Google Ad Word campaigns and/or Meta Ad Campaigns for local clients in Chicago, Las Vegas and their surrounding suburbs."
So credible a resource on Gemini's performance/profitability... /sarc
But yeah, it doesn't even actually say anything about profits, let alone attribute any specific percentage of profits to Gemini. It's just vague marketing copy.
[flagged]
The title and escrow industry is using Gemini (via Qualia Clear) enough that Qualia alone accounts for 100 billion tokens of usage in about 3 months. Just because you don't see who is using it, and how, doesn't mean that when the dust settles the people actually using AI for real purposes won't keep using it. I'm not sure which AI models big pharma is using, but there's already at least one new pharmaceutical drug in the second phase of testing, showing strong results.
There will definitely be room for AI. OpenAI is just not really showing that they care about any particular business model, which is probably a strong indicator that Sam Altman is the worst person to lead that company. Anthropic will be profitable before OpenAI ever will be.
Gemini is in the green in terms of its spending/income ratio, FYI. I'm not talking about stocks.
> Especially when that military is the reincarnation of Nazi Germany , and a fourth Reich (The USA)
I can't believe people who think this actually exist.
Have you watched the news recently?
Maybe you should get your news from a different source. Personally I prefer raw sources. I watch every official press briefing to hear it from the horse's mouth. You come to find that, regardless of who is president, news orgs put their own spin on it and you miss things they don't cover. It's all streamed on official government accounts.
Lmao, press briefings from the office of the führer are such a solid source to base your reality on.
By the way, if Kamala, Biden, or Newsom were in office I'd also call them führer.
We live in a technocratic authoritarian state with the world's largest prison population and the most police executions; we are actively sponsoring multiple genocides, and we've killed over one million civilians in the Middle East in two decades.
Our politicians on both sides will go out of their way to protect pedophilic members of the ruling class...
But you want to tell us we're exaggerating or interpreting a reality that doesn't exist. I think you're the one who's been convinced through the regime's doublespeak that everything's alright.
Please reevaluate. The US government is literally the 4th Reich and is actively committing holocausts on multiple fronts.
Do you know any history? You dishonor the people who died from horrible atrocities in WWII to make some glib performative political posturing. It's shameful behavior. Do better. Be better.
WWII didn’t start overnight. The Sturmabteilung (SA), also known as “The Brownshirts,” have a strong similarity to what we’re seeing with ICE and CBP. The SA were Hitler’s enforcers before the SS, during the 1920s and early 1930s. They were eventually usurped by the SS during “Night of the Long Knives” where SA leadership were executed by the SS. Largely because Hitler had felt threatened by the power Ernst Röhm had amassed (among other reasons). And the SA, like ICE, was made up largely of untrained sycophants and thugs who enjoy violence. They committed violence, harassed citizens, and had no consequences for doing so. They were also instrumental in laying the foundation for the genocide and atrocities committed by the Nazi party.
It’s not a dishonor to their memories, or the atrocities committed, to call that out. It is not a dishonor to say there are stark and real similarities between the way the US is operating and treating civilians.
I personally find the opposite; IMHO it dishonors their memories to refuse to acknowledge the similarities.
I’ve posted a comment similar to this one here before, and like how I ended it. I strongly encourage you to read about the history of Nazi Germany and how it came to happen. It wasn’t zero to death camps overnight; it was 15 years in the making. That history is deeply shocking, as it is depressing, because the parallels and timelines between it and the US are too similar for anything besides outright discomfort, sadness, and fear. But without knowing it, we are ever more likely to repeat it.
One final thing to note: the US has its own history of extreme violence; slave patrols and the 19th-century treatment of non-whites were an inspiration for Hitler.
You'll always find someone claiming X or Y are close to collapse at any given time. As even a broken clock is right twice a day, eventually one of these predictions will randomly be proven correct. That person will then be elevated to a genius forecaster and rake in cash for a decade or two.
Actually it is the other way around: every upstart claims that their invention is the mostest revolutionariest thing ever. 99.9% of them are not. The naysayers are right most of the time.
Recent high-profile examples include the Segway, NFTs, crypto as a whole, pre-transformer voice assistants, and various "Design Thinking" projects like those Amazon Dash buttons.
My grandma (and my parents, by the way) have never heard of Segway, NFTs, or crypto, but they use ChatGPT all the time.
They would use Claude or Grok for day-to-day if they had heard of it.
And your grandma and parents pay for it, do they?
Free ChatGPT chat has made the company a household name, and helped it to persuade investors, but every single one of those free users costs the company money. Most of those free users have proved unwilling to convert to paid users, and adding ads to the free service promises to send it into the same enshittification death spiral so many other companies have fallen into.
Also, how on Earth would your grandma and parents not have heard of crypto? Crypto is frequently front page news, even in print newspapers. There have been crypto superbowl ads. Are they living under a rock?
Thiel said around last autumn that AI is a bubble and exited Nvidia. Nvidia is now falling despite good earnings.
If OpenAI keeps getting circular financing, of course they will not collapse yet.
> Nvidia is now falling despite good earnings.
I think it's still too early to tell. By what measure did you even determine that Nvidia is falling?
I don't think they are going to collapse. But it was only a couple of years ago that many people thought OpenAI had a big (some thought insurmountable) lead in a race to dominate a winner-take-all market. Some people did correctly state that OpenAI had no moat in those days, so credit where it's due.
Now it's looking like a competitive bloodbath where ever-increasing levels of investment are needed just to maintain market position. Their frontier models are SOTA for 4 weeks before a competitor comes and takes the crown. They are standing on much shakier ground than they were 2 years ago.
A competitive bloodbath, plus OpenAI has investment valuing it like it will achieve AGI rather than (merely) being a huge advancement in computing but not a fundamental rewriting of how all work is done.
The $30B investment from Nvidia is instead of a previously announced $100B investment from Nvidia, so it's not like this is an entirely good-news story for OpenAI.
How much revenue have they generated? How about profit?
If investors keep throwing obscene money at OpenAI, sure, they can stay afloat forever. Can't argue with that. But if we're talking about a sustainable business, I still don't see it.
For Nvidia's part they're just giving money to one of their largest customers. They make money back even if they "lose" the bet
It's like government XX giving "help" or "grants" to countries at war so they can purchase weapons from XX.
Selling Shovels is quite lucrative whether there is an actual mining business or just a gold rush.
At some point Jensen Huang will be out (retired or forced out by stagnating sales) and can definitely look back on a very successful career. That much is certain.
Nobody saw coming the huge demand for coding agents. Not even OpenAI or Anthropic themselves. Those were side projects just a year ago and now dominate token demand. And they keep rising.
Does anyone see demand for coding agents that aren't subsidized 90% by the AI company?
Oh, I do think they saw it; considering how good they are, they've probably been a tuning focus for a while.
The signal the agent usage is sending, though, is that Anthropic is way ahead, since all we hear about is Claude these days despite OpenAI spending so much more money. Anthropic is also out trialling vending machines, etc.
ChatGPT, apart from generating text, was a bit of a query/research tool, but now that Google has their AI search augmentation shit somewhat together, I'm not feeling much need for ChatGPT as a research partner.
So now the big question is: with the coding and search niches curtailed, where will OpenAI be able to generate profits to justify their insane spending?
Well, $110B a year doesn't last long if you are losing $40B a quarter.
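The mismatch is easy to sanity-check with the figures as stated above (both are this thread's claimed numbers, not audited ones):

```python
# Back-of-envelope on the burn-rate claim: $110B/year of funding
# vs. a $40B quarterly loss. Figures are as claimed upthread.
annual_funding = 110   # $B per year, claimed
quarterly_loss = 40    # $B per quarter, claimed

annual_burn = quarterly_loss * 4       # annualized loss
runway_years = annual_funding / annual_burn

print(annual_burn)                     # $B burned per year
print(round(runway_years, 2))          # years the funding covers
```

At that rate the funding covers well under a year of losses, which is the commenter's point.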
Also Softbank invested, which is never a great signal.
That's mean.
They also invested in Uber
[dead]
Is OpenAI giving employees RSUs? What good are those under these astronomical valuations?
Presumably it’s all relative. Apple gives me RSUs with a much higher valuation (although at least it’s on the public market already).
they have PPUs
No, it's RSUs now. But idk if anyone would want to join OpenAI at these levels. Are they really a $1T business?
At least Anthropic has some runway in terms of valuation and isn't bleeding all over some free tier.
There's this saying that if you owe the bank a million dollars, you have a big problem, but if you owe the bank 100 million dollars, the bank has a big problem.
Is the same thing true for corporations? At some point the numbers are so wild the entire economy must help you succeed? I don't mean "too big to fail" exactly, more like "so big eventual success is guaranteed at all costs"
Those are the same thing. The whole point of saying "too big to fail" is to evoke the moment in the housing crash where governments largely threw most of their citizens under the bus by bailing out banks rather than homeowners for the banks' wildly irresponsible decisions. "Too big to fail" means the government steps in and bails you out, and that phrase became popular because for many it was the final nail in the coffin for their trust in government
Would the current administration bail out OpenAI?
They would give OpenAI anything they want if they proclaim the current guy the bestest and biggliest president of all time, ever. (edit: I meant, if chatgpt were to consistently claim that the current guy is the greatest president ever)
I wonder if there is such a thing as "too big for IPO". Saudi Aramco sold $25.6 billion of shares in its 2019 IPO. Even offering just 5% of OpenAI to the public would shatter that record. Well, unless the public isn't actually interested in investing such huge amounts.
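The record comparison checks out on paper (a sketch only, using the valuation reported in this story and the Aramco figure cited above):

```python
# Would a 5% OpenAI float beat Aramco's 2019 IPO raise?
valuation = 730.0    # $B, reported pre-money valuation
aramco_ipo = 25.6    # $B raised by Saudi Aramco in 2019

float_value = 0.05 * valuation   # value of a hypothetical 5% float
print(round(float_value, 1))     # $B
print(float_value > aramco_ipo)  # does it beat the record?
```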
And if you owe the bank a hundred billion dollars, the entire economy has a big problem.
What would really help is knowing the details of such funding. The hierarchy of who gets paid first in the event of going under is very illuminating, and while I am not a banker, I always wonder if there are caveats too complicated even for the large investors to understand.
SoftBank? The music must be stopping soon, hold onto your butts.
What's the meme with SoftBank? Just that they're super bad at investments?
Ya. The WeWork debacle, and investing $300 million into Wag, an imploded Uber-for-dog-walking, surely didn't help.
What? SoftBank has been investing in them repeatedly for years now.
Less than a decade ago, a company reaching $1 trillion was still very much "out there". Now we have an IPO at almost $1 trillion.
It's clear that the stock market cannot be considered normal anymore; it's held up on hopes and prayers at best.
Sure it can. The value of the dollar coincides with stock market valuations.
Exactly. A devalued dollar means higher number without adjustment
Well, it's still a VC market right now, and all the investors have a vested interest in the music not stopping.
Puff puff until it pops!
The round is still open, Amazon is funding in tranches, and Sam doesn't get all the cash until he hits unknown metrics. Sounds like a down round.
> We continue to have a great relationship with Microsoft. Our stateless API will remain exclusive to Azure, and we will build out much more capacity with them.
This sounds a bit like going forward (some) OpenAI APIs will also run on platforms other than Azure (AWS)?
Anyone knows more?
Curious what is meant by "stateless".
OpenAI desperately needs to be available outside Azure. We are exclusively using Anthropic atm because it is what is available in AWS Bedrock and it works. These things are solidifying fast.
I guess Amazon would have a hard time justifying their investment if OpenAI remained Azure-exclusive...
https://openai.com/index/amazon-partnership/
Unless I'm mistaken, wasn't someone at Microsoft suggesting they would just develop their own models soon?
Wow! This is circular financing. Sharknado, Altnado….
Interesting story for sure (to be clear, I'm not talking about the writing by Reuters), but would you buy or skip the OpenAI IPO?
To me it feels like one of those throw-some-play-money-at-it-and-see-what-happens situations. I expect it will return negative given the raw financials and outlook, but there's a small chance the brand carries enough weight with the public that it spikes.
I'd love to hear other thoughts though
If the IPO were at $20B, maybe I could throw $1,000 at it.
But at these numbers it's nonsense.
I don't see any moat. LLMs are commodities.
Enterprise is on Gemini/NotebookLM and Copilot as it's a natural extension of the Google and Office suite they use.
Devs are in the Anthropic camp, but they will jump as soon as they can save 90% of the money for 99% of the output.
Does this mean they won't IPO this year?
Ok, I'm getting out.
Nvidia will get all that money back via GPU purchases, Amazon via cloud rental, and SoftBank is being typical SoftBank: a rich but not particularly bright kid in the class :) .
"I give you $30 billion if you use it to buy $30 billion of stuff from me" doesn't sound like a very good investment. Is Nvidia expecting more back than it puts in? Enough more to make the deal profitable?
Or is it just to keep Nvidia from crashing?
"I give you $30B worth of hardware that costs me <$10B to make, in exchange for $30B worth of shares in your company" would be a more accurate description.
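The asymmetry in that framing is worth spelling out (the sub-$10B manufacturing cost is the commenter's assumption, not a disclosed Nvidia figure):

```python
# Hardware-for-equity sketch: GPUs priced at list, traded for shares at face value.
gpus_list_price = 30     # $B of hardware at list price
assumed_cogs = 10        # $B assumed cost to manufacture (commenter's estimate)
equity_face_value = 30   # $B of OpenAI shares received

# Paper equity booked per dollar of actual production cost forgone.
leverage = equity_face_value / assumed_cogs
print(leverage)
```

On those assumptions Nvidia books roughly $3 of paper equity per $1 of real cost, which is why the deal can make sense for them even if the stake later loses much of its value.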
Well, I won't pretend I know the answer :) . But I assume that a) they are partially betting on making a normal return on investment (i.e. OAI not crashing), b) they profit from running a huge expense/revenue cycle (a company making, say, a million in profit on a billion in revenue is valued more favorably than the same profit on only ten million in revenue), and c) even if all goes wrong, it is still better to get back most of the investment with zero profit than to just lose it all like SoftBank or other investors might.
In the end it's exchanging GPUs for OpenAI shares. It's not a non-trade, and in the current market Nvidia could really sell the stuff for cash. The marginal cost is very much sharply positive.
$30B in sales is worth more than $30B in stock appreciation...
Does anyone have any ethical concerns using OpenAI regarding money donated to the current US administration in one way or another? I will search for more accurate details about that situation. I know about several other ethical concerns people have with OpenAI: copyright and other considerations regarding the work being trained on, lack of action regarding users harmed by their usage of the product (often regarding mental health), environmental concerns, and quite a few others. But I am interested in whether many people think their political donations are an issue or not.
> The Information had previously reported that $35 billion of Amazon’s investment could be contingent on the company either achieving AGI or making its IPO by the end of the year. OpenAI’s announcement confirms the funding split, but says only that the additional $35 billion will arrive “in the coming months when certain conditions are met.”
Incredible.
So basically, Amazon is buying into the IPO at an early price. Maybe this is the time to divest from MSCI world. I don’t want to be the bag holder in the world’s largest pump and dump.
It can both be true at the same time: that AI is going to disrupt our world and that OpenAI does not have a business model that supports its valuation.
Tesla is a car company with relatively small, and shrinking, sales, that is worth $1.5T on the promise of [Elons_Promise_of_the_Month]
Yeah, proving my point that the index funds are maybe not the safest place if you want to invest in real value. And soon, Twitter/Grok/SpaceX might be doing an IPO.
You forgot to mention they solved vision-based autonomous driving, but I guess that doesn't matter if Elon = bad.
SAE level 2 driver assistance is explicitly not autonomous driving.
It's this kind of dynamic that makes me pull back on my otherwise pretty AI-forward stance. There's an entire community of people who passionately believe it's obvious and undeniable that Elon Musk has solved problems that he has not solved and his companies deliver things they don't deliver. Tesla is absolutely unambiguous in their marketing material (https://www.tesla.com/fsd) that they do not have autonomous driving, but you're far from the first person I've encountered who's been tricked into believing otherwise.
I don't think that's my relationship with AI, I'm hardly an uncritical booster. But would I know if it was?
Gonna need a citation for that, buck.
Seems not solved:
https://fortune.com/2026/02/26/tesla-robotaxis-4x-8x-worse-t...
Huh? They did not "solve" vision based driving.
Of course, but if Elon=great you can ignore that
Hope daddy sees this and gives you that lollipop.
Did it ever occur to you that an entire generation of developers is going to retire in less than 20 years? They are betting that the software industry will be autonomous. Really, think of our industry like the AV phenomenon: we're the drivers who are about to be shown the door. That's the bet.
The world will still need software, lots of it. Their valuation is based on an entirely developer-less future world (no labor costs).
Even the rise of high-level languages did not lead to a "developer-less future". What it did was improve productivity and make software cheaper by orders of magnitude; but compiler vendors did not benefit all that much from the shift.
A high-level language or a compiler wasn't automating end-to-end reasoning for a programming task.
[dead]
OpenAI has all the name recognition (which is worth a couple billion in itself), but when it comes to actual business use cases in the here and now Anthropic seems ahead. Even more so if we are talking about software dev. But they are valued at less than half of OpenAI's valuation
What is somewhat justifying OpenAI's valuation is that they are still trying for AGI. They are not just working on models that work here and now, they are still approaching "simulating worlds" from all kinds of angles (vision, image generation, video generation, world generation), presumably in hopes that this will at some point coalesce in a model with much better understanding of our world and its agency in it. If this comes to pass OpenAI's value is near unlimited. If it doesn't, its value is at best half what it is today
> What is somewhat justifying OpenAI's valuation is that they are still trying for AGI.
And that's the dealbreaker for me, since they've been so adamant that scaling will take them there, while we're all seeing how it's been diminishing returns for a while.
I was worried a few years back with the overwhelming buzz, but my 2017 blog post is still holding strong. To be fair, it pointed to ASI, where valuation is indeed unlimited; nowadays the definition of AGI is quite weakened in comparison. But does that then convey an unlimited valuation?
Obligatory reminder that today's so-called "AGI" has trouble figuring out whether I should walk or drive to the car wash in order to get my dirty car washed. It has to think through the scenario step by step, whereas any human can instantly grok the right answer.
The idea/hope is that a video model would answer the car wash problem correctly. Those are exactly the kinds of issues you have to solve to avoid teleporting objects around in a video, so whenever we manage more than a couple of seconds of coherent video, we will have something that understands the real world much better than text-based models do. Then we "just" have to somehow make a combined model that has this kind of understanding and can write text and make tool calls.
Yes, this is kind of like Tesla promising full self driving in 2016
What are you talking about? OpenAI's ChatGPT free tier (that everyone uses) answers this in the first sentence within a couple seconds.
"If your goal is to get your dirty car washed… you should probably drive it to the car wash "
That problem went viral weeks ago so is no longer a valid test. At the time it was consistently tripping up all the SOTA models at least 50% of the time (you also have to use a sample > 1 given huge variation from even the exact same wording for each attempt).
The large hosted model providers always "fix" these issues as best as they can after they become popular. It's a consistent pattern repeated many times now, benefitting from this exact scenario seemingly "debunking" it well after the fact. Often the original behavior can be replicated after finding sufficient distance of modified wording/numbers/etc from the original prompt.
For example, I just asked ChatGPT "The boat wash is 50 meters down the street. Should I drive, sail, or walk there to get my yacht detailed?" and it recommended walking. I'm sure with a tiny bit more effort, OpenAI could patch it to the point where it's a lot harder to confuse with this specific flavor of problem, but it doesn't alter the overall shape.
This question is obviously ambiguous. The context here on HN includes "questions LLMs are stupid about, I mention boat wash, clearly you should take the boat to the boat wash."
But this question, posed to humans, is plenty ambiguous because it doesn't specify whether you need to get to the boat or not, and whether or not the boat is already at the wash. ChatGPT's free tier handles the ambiguity; note the finishing remark:
"If the boat wash is 50 meters down the street…
Drive? By the time you start the engine, you’re already there.
Sail? Unless there’s a canal running down your street, that’s going to be a very short and very awkward voyage.
Walk? You’ll be there in about 40 seconds.
The obvious winner is walk — unless this is a trick question and your yacht is currently parked in your living room.
If your yacht is already in the water and the wash is dock-accessible, then you’d idle it over. But if you’re just going there to arrange detailing, definitely walk."
I don't understand what occasional hiccups prove. The models can pass college acceptance tests in advanced educational topics better than 99% of the human population, and because they occasionally have a shortcoming, it means they're worse than humans somehow? Those edge cases are quickly going from 1% -> 0.01% too...
"any human can instantly grok the right answer."
When asking a human about general world knowledge, they don't have the generality to give good answers for 90% of it. Even on very basic questions like this, humans will trip up on many, many more of them than the frontier LLMs do.
I just don't know how to engage with these criticisms anymore. Do you not see how increasingly convoluted the "simple question LLMs can't answer" bar has gotten since 2022? Do the human beings you know not have occasional brain farts where they recommend dumb things that don't make much sense?
> Do the human beings you know not have occasional brain farts where they recommend dumb things that don't make much sense?
Not that dumb, no. That's why it's laughable to claim that LLMs are intelligent.
I should note for epistemic honesty that I expected I would be able to come up with an example of a mistake I made recently that was clearly equally dumb, and now I don't have a response to offer because I can't actually come up with that example.
> What is somewhat justifying OpenAI's valuation is that they are still trying for AGI.
"AGI" is the IPO.
> If this comes to pass OpenAI's value is near unlimited.
How?
If we have AGI, we have a scenario where human knowledge-based value creation as we know it is suddenly worthless. It's not a stretch to imagine that human labor-based value creation wouldn't be far behind. Altman himself has said that it would break capitalism.
This isn't a value proposition for a business, it's an end of value proposition for society. The only people who find real value in that are people who spend far too much time online doing things like arguing about Roko's Basilisk - which is just Pascal's Wager with GPUs - and people who are so wealthy that they've been disconnected with real-world consequences.
The only reason anyone sees value in this is because the second group of people think it'll serve their self-concept as the best and brightest humanity has ever had to offer. They're confusing ego with ability to create economic value.
"End of human-based value creation" is tantamount to post-scarcity. It "breaks" capitalism because it supposedly obviates the resource allocation problem that the free-market economy is the answer to. It's what Karl Marx actually pointed to as his utopian "fully realized communism". Most people would think of that as a pipe dream, but if you actually think it's viable, why wouldn't you want it?
It can both be true that
a) AI is going to replace a Bazillion-Dollar Industry and that
b) being an AI model provider does not let you capture margins above 5% long-term.
I am not saying that this is what will happen, but it's a plausible scenario. Without farmers we would all be dead, but that does not mean they capture monopoly rents on their assets.
But Anthropic is the one that is disrupting software development? So why are we not piling into that?
Exactly, the dot com bubble didn't mean that the internet was just a fad.
I'm curious how they define AGI technically. Seems like you would want that to be a tight definition.
Didn't they already define it as "a system capable of generating at least $100 billion in profit"?
It just needs to be anything that will force OpenAI to IPO.
I'd love to know how they define AGI.
They've previously defined AGI as an AI that can directly create $100B in economic value.
Hmm interesting, thanks. I wonder how much value it's already created.
That number is probably negative
Altman Gets Investment?
Obviously in a way they get the $35B.
Hopefully Microsoft is selling parts of their share of this trash into these funding rounds...
Okay, I can understand investment from SoftBank, and maybe somewhat from Amazon (if they plan to use OpenAI's models), but investment from Nvidia, who will then sell OpenAI the GPUs at an X% markup, doesn't make sense to me.
I love how people think the company that basically invented AI is going out of business. Clearly OpenAI is a massive success and will continue to be.
That’s what my Uber driver told me last night; not sure how he was able to get his hands on some stock!
"Basically invented AI" by running on principles that Minsky wrote about in the 80s, and improvements Google developed in the early 10s, on bigger and bigger computers. But "Basically invented".
That's a pretty lofty valuation for a company that has yet to demonstrate code generation anywhere near Anthropic's models if they're leaning into the engineering angle.
By what measure do you think they're not anywhere near Anthropic's models?
I don't see much of a difference between Claude, Codex, and GLM with OpenCode. Any of them nowadays works really, really well.
"Calvinism makes pretty lofty claims for a religion who has yet to demonstrate soul salvation anywhere near Lutheranism if they're leaning into the reformation angle"
- Someone in the 16th century, probably
Many engineers use Codex 5.3 and find it better, including HashiCorp's Mitchell Hashimoto.
I find Codex 5.3 roughly on par with (though tbh still not quite as capable as) the Sonnet models, which are not even Anthropic's flagship model family.
And the OpenClaw guy
My guy, it's a tradeoff of autonomy vs. thoroughness. You might not enjoy using the Codex models, but to say they're way worse than Claude is an error.
And they say it's not a bubble! We saw it with the Oracle deal: big announcement and then nothing. Same with Nvidia, and now the same thing is happening again. I hope this is a cash infusion and not some credit deal.
It’s Tesla, only this time big tech are the suckers.
Is this $110 billion more, or just $110 billion historically?
OpenAI's just trading equity for GPU credits at this point?
$730 billion is certainly a bubble that will pop sooner or later.
Only $730B? Why stop there? As long as we're making stuff up, let's go big. What about $10T?
They have to save the big T for IPO.
On a tangent, I remember companies like Slack triggering the unicorn craze. They said that it was just better to aim for a billion than some number like 900M or 1.2B, because psychologically, it meant more to employees, investors, and customers.
OpenAI is in that place where nobody really cares for these mind games. It's not very reliable. But it is useful enough to pay for. It's cheap enough to be an impulse purchase where some guy decides to just subscribe to ChatGPT because they're working on an important slide or sketching a logo.
Rookie numbers, I say $100T. Go big or go home.
https://paintraincomic.com/comic/first-date/
You have to make it look semi-realistic
Remember when it was a huge milestone when gigantic companies like Apple and Microsoft were striving to be the first $1T company, backed by decades of building actual businesses with actual profit?
Good times.
I guess still no GPT on Bedrock, it seems.
They announced more OpenAI models coming to Bedrock.
What are they going to do with it?
Burn baby, burn!
BTW, real money or credits?
Feels like Nvidia getting in the game here might just put them at more risk. If things don't work out they'll be out their money and future sales and so on.
It is bad enough AI sucked up so much investment money, hitting companies that do make profitable things hard if AI bubble collapses would be bad...
Source: https://openai.com/index/scaling-ai-for-everyone/ (https://news.ycombinator.com/item?id=47180302)
I thought with OpenClaw they'd get more than a 3.67x multiplier of what Anthropic raised.
Our economy has turned into an ouroboros: a circle of snakes shitting in each other's mouths until they get so sick that we the taxpayers get the privilege of bailing them out. I'm really fucking excited to eat shit for the 3rd time in 18 years. Super pumped.
$30B from Nvidia… so the investments are locked in a circular dependency. Great for the economy.
This implies any actual investment took place, which would be an innovative break from the typical scenario with AI firms.
Oh the "investment" is definitely taking place on paper. Whether any money actually changes hands... doubtful.
This time, does the $100B actually exist?
https://www.inc.com/leila-sheridan/nvidia-is-wavering-on-its...
What's the statute of limitations for securities fraud? The current administration won't last forever.
Circular economy money
Normally, there's at least a locked suitcase full of newspapers racking up frequent flier miles...
1,000 metric tons does not fit in an airplane…
1,000 metric tons of hot air is a lot of volume.
Fits evenly on a blockchain…
Definitely with regard to Nvidia.
If you make a billion but only pay $2M for a pardon it might be worth it: https://www.independent.co.uk/news/world/americas/us-politic...
> This time, does the $100B actually exist?
Nope. That $100B is "promises" spread over several years in total.
They have $15B out of the $50B from Amazon right now.
> The current administration won't last forever.
This is why OpenAI must IPO, and when it does, I won't be surprised if a crash follows before 2030.
By then, they will "announce" "AGI" (which actually means an IPO)
Oh; good point. The great economic crash of 2029+ will be caused by the democrats cleaning up Trump's mess. (Sort of like "Biden's" inflation.)
> By then, they will "announce" "AGI"
It’s already a joke to call the slop generators “AI”, so giving it another fake name won’t really make much of a difference any more. Nothing short of a miracle will be able to top the “creative marketing” we already have.
Taking the circular deals up another magnitude?
There is not a single OpenAI model in the top 10 on openrouter's ranking page. The market is saying something about the comparative value of OpenAI.
Edit: yes, it is true that many people do integrate directly with OpenAI. That doesn't negate the fact that Openrouter users are largely not using OpenAI.
Methodology problems aside, do we have any idea how big OpenRouter is as compared to the big providers?
OpenRouter claims "5M+" users; OpenAI is claiming >900M weekly active users.
I don't really think it's possible to learn anything about the broader market by looking at the OpenRouter model rankings.
Agreed it's not really a good signal (many sampling biases), but user count is not relevant; most money is from heavy API users. 900M users on free or cheap subscriptions are nothing compared to even 10k heavy API users.
On the other hand, big users don't use openrouter. At $work we have our own routing logic.
1. openrouter is API usage. There is obviously consumer side
2. people often use openrouter for the sole purpose of using a unified chat completions API
3. OpenAI invented chat completions; if you use openrouter for chat completions, you can often just switch your endpoint URL to point to the OAI endpoint to avoid the openrouter surcharge!
4. Hence anyone with large enough volume will very likely not use openrouter for OpenAI; there is an active incentive to take the easy route of changing the endpoint URL to OAI’s
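Point 3 in practice: because OpenRouter mirrors OpenAI's chat-completions request shape, "migrating" off the router is essentially a base-URL (and API-key) swap. A minimal stdlib sketch, where the key and model names are placeholders, not real credentials:

```python
import json
import urllib.request

# Both endpoints accept the same chat-completions payload shape,
# so switching providers is just a different base URL and API key.
OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"
OPENAI_URL = "https://api.openai.com/v1/chat/completions"

def build_request(base_url: str, api_key: str, model: str, prompt: str):
    """Build an HTTP request; the body is identical for both providers."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        base_url,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

# Same call, different endpoint -- no other code changes needed.
via_router = build_request(OPENROUTER_URL, "placeholder-key", "openai/gpt-4o", "hi")
direct = build_request(OPENAI_URL, "placeholder-key", "gpt-4o", "hi")
```

That near-zero switching cost is exactly why high-volume OpenAI users have little reason to keep paying the router surcharge.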
This could just be because everyone is using direct OpenAI api keys when using OpenAI.
> The market is saying something about the comparative value of OpenAI.
Is it?
At what point are the models going to all be "good enough", with the differentiating factor being everything else, other than model ranking?
That day will come. Not everyone needs a Ferrari.
Edit: I misread the parent, I think they're saying the same thing.
Model rankings are irrelevant. No one cares.
The differentiating factor will be access to proprietary training data. Everyone can scrape the public web and use that to train an LLM. The frontier companies are spending a fortune to buy exclusive licenses to private data sources, and even hiring expert humans specifically to create new training data on priority topics.
Including paying poets and other experts by the hour to improve the models https://conversationswithtyler.com/episodes/brendan-foody/
> At what point are the models going to all be "good enough", with the differentiating factor being everything else, other than model ranking?
It's already come for vast swathes of industries.
Most organizations have already been able to operationalize what are essentially GPT4 and GPT5 wrappers for standard enterprise use cases such as network security (eg. Horizon3) and internal knowledge discovery and synthesis (eg. GleanAI back in 2024-25).
Yes, and that is why I used the phrase comparative value. The concept of winning business based on being #1 on the benchmarks is dead.
I agree, and most of my peers do as well. This is why most of us shifted to funding AI Applications startups back in 2023-24. Most of these players are still in stealth or aren't household names, but neither are ServiceNow, Salesforce, Palo Alto Networks, Wiz, or Snowflake.
Foundation Models have reached a relative plateau, and much of the recent hype wasn't due to enhanced model performance but to smart packaging on top of existing capabilities to solve business outcomes (eg. OpenClaw, Anthropic's business suite, etc).
Most foundation model rounds are essentially growth equity rounds (not venture capital) to finance infra/DC buildouts to scale out delivery or custom ASICs to enhance operating margins.
This isn't a bad thing - it means AI in the colloquial definition has matured to the point that it has become reality.
Or, their customers integrate with them directly.
Sample bias.
Big number gets bigger
Kind of leaving out a lot of detail there:
- Amazon's $50B is only $15B up front, with the rest coming "after certain conditions are met", whatever that means (probably an IPO, which isn't happening)
- The $30B each from softbank and NVIDIA is paid in installments
So this is more a $35B fundraise, with a _promise_ of more, maybe, if conditions are met. Not _bad_, but yet more gaslighting from Mr Altman. Anyone reporting this as a closed fundraising deal is being disingenuous at best.
> - Amazon's $50B is only $15B, with the rest being "after certain conditions are met", whatever that means (probably an IPO, which isn't happening)
Startup funding is often given in increments depending on milestones being met. Most startups just don’t announce that it’s conditional.
For large funding rounds, nobody gets a check for the full amount at once.
The funding would not be conditional on an IPO because that wouldn’t make any sense. The IPO is the liquidity event for the investors and there’s no reason for a startup to take private investment money that only enters the company after IPO.
This is pretty standard. Usually the conditions are performance benchmarks, but they may also include an IPO. Typically it's done in multiple tranches, e.g. $15B at the start, $5B more if you gain +500M users, $5B more if your profit exceeds X, and the rest at IPO (I'm oversimplifying).
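A tranche schedule like the one sketched above is easy to make concrete. All numbers here are the illustrative figures from the comment, not the actual OpenAI terms, and the profit threshold X is assumed to be $10B purely for the example:

```python
# Illustrative tranche model: capital unlocks as milestones are hit.
# Figures are the commenter's made-up example, not real deal terms.
def funds_released(users_m: int, profit_b: float, ipo_done: bool) -> int:
    """Total capital released ($B) for a hypothetical $50B tranched round."""
    total = 15  # upfront tranche, paid at signing
    if users_m >= 500:   # milestone: +500M users
        total += 5
    if profit_b >= 10:   # milestone: profit exceeds X (assumed $10B here)
        total += 5
    if ipo_done:         # remainder unlocks at IPO
        total += 25
    return total
```

So a headline "$50B round" might deliver only $15B on day one, with the balance contingent on growth, profitability, and a liquidity event.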
The conditions are either an IPO or achieving AGI. I’d be curious to know how the contract defines AGI. If I recall correctly, the OAI-Microsoft deal just defined it as “AI-shaped tech that can generate $100 billion in annual profits”, which I think is actually close to the correct answer, insofar as we will have AGI when the markets decide we have AGI and not when some set of philosophical criteria seem to be satisfied.
> If I recall correctly, the OAI-Microsoft deal just defined it as “AI-shaped tech that can generate $100 billion in annual profits”, which I think is actually close to the correct answer
So if they hit 100 billion annual then it's AGI, but if Kellogg's launches “FrostedFlakes-GPT” and steals 30% of the market, it's no longer AGI at 70 billion?
Not to nitpick but to expand, many funding deals (pretty much all above 100M) are structured like that.
You'll never get a billion dollar check from anyone.
I've even seen startups raise like 500k pre-seed with tranches in it, lmao!
nit: I think you mean tranches
Whoops, typo. Thanks!
*tranche
Circular breathing causes the air to heat up, causing expansion. This is how a balloon can expand even when someone is breathing air from inside it.
s/breathing/investment/g s/balloon/bubble/g s/air/money/g
I performed the suggested substitution. What is the heating up of money in that analogy?
Sarcastically, it's "the vibes intensifying".
(Vibes ~ Vibrations ~ Heat)
Tbf it's a reasonable question... I think it's a little tricky to pin down the equivalent of "kinetic energy" in purely economic terms, though you might look at the rate of flow of money as some analogy for the speed/energy of particles (speed of individual dollars changing hands). In that sense, the more frequent and larger these deals get, the hotter the market is. This is not a novel analogy.
[dead]
Two economists were walking down the street when they spotted a giant dog turd on the ground.
One of them wanted to have some fun, so said to the other - "I'll give you $100 if you take a big bite of that turd".
His colleague figured $100 was a good chunk of cash, so did the deed. Feeling thoroughly humiliated, he pocketed the $100 and they carried on.
Further down the street they came upon another turd.
The angry economist now wanted revenge so made the same proposal back to his colleague, who also agreed and took a bite of the turd, earning back his $100.
Later one of them said to the other "you know, I can't help but feel we both ate shit for no reason."
His colleague replied, "What do you mean? We raised the national GDP by $200."
The number is irrelevant. The fact is that work was done and was repaid with work.
Money was just the means of the transaction.
work good even if work literally eating shit
surely that behavior leads to a good society and doesn't encourage nefarious behaviors
[dead]
> We raised the national GDP by $200.
Seeing this phenomenon, a Silicon Valley entrepreneur gets an idea with the following sales pitch:
"Turd-bars that will make you the fittest version of yourself, answer all your deepest questions, and take you to the promised land (Mars)."
Surprisingly, the turd-bars sell well, and GDP rockets up. Meanwhile, VCs with FOMO are funding its competitor: the shit-sandwich.
I did upvote, it's witty, but it's a bit of a misrepresentation of how the economy works.
In practice, people don't tend to pay people to eat shit without gain. You are paying people to help you. Money gaslights everyone into helping each other; the most selfish people become the most selfless.
Of course, real capitalism is much more complex and much uglier than this fantasy. When certain people end up with long-term control of large piles of money, the whole thing gets distorted. They get to make lots of money on interest without doing anything, and making other people eat more shit for scraps. That's the "capital" part of capitalism.
But the toy world-model that this joke is making fun of, is actually the one core positive aspect of capitalism and brings all the prosperity we have: tricking people into helping each other.
> the most selfish people become the most selfless
You reminded me of this Stewart Brand quote:
> Computers suppress our animal presence. When you communicate through a computer, you communicate like an angel.
I scratch your back for a $10M IOU.
You scratch my back for a $10M IOU.
The debts cancel out.
How is the economic gain calculated?
If OpenAI is Pied Piper, who is Russ Hanneman in all this?
I would vote for Once-CEO-Then-not-then-CEO-Again Hypeman Sam Altman
I like to point out that he was fired for egregious dishonesty.
[flagged]
[flagged]
"Don't be curmudgeonly. Thoughtful criticism is fine, but please don't be rigidly or generically negative."
https://news.ycombinator.com/newsguidelines.html
It's not a craze. It's a technology shift. Bitcoin and 3D printing were crazes. It's like the move from analog photography to digital. I am telling you this as a very conservative person. Even for me it's helpful.
3D printing is helpful too. The infrastructure created during the dot-com bubble of the late 1990s was also helpful. The UK is still profiting from the railway infrastructure created during the railway craze of the 1840s (https://en.wikipedia.org/wiki/Railway_Mania). The question is just how much of the valuation of AI companies is because they are useful and how much is speculation...
> 3D printing was a craze
That's certainly a take, industry loves it. Sure, all that "everybody will print widgets at home instead of going to the store" stuff was never going to happen, but 3d printing is nonetheless here to stay.
I don't long for the days of reused boxes and duct tape as homes for my electronics. It's here to stay.
It's a manufacturing technique that is helpful for prototyping and makes sense for some types of small-scale manufacturing.
But it's not magical, and not much different to injection moulding or something in concept.
Almost everything created with home level 3d printers is plastic junk you can buy for a few dollars on aliexpress (without weird rough edges).
It can be both a craze and a technology shift. AI isn't going away, it will transform some industries. But right now it's overhyped, overfunded and due a trip back to reality.
3D printing has a CAGR of 18-25%, not exactly a 'craze'
It most definitely COULD be a craze from the perspective of scope of investment, societal impact and timing. No one surfing the crest of this wave could be described as "conservative".
So how much are you willing to pay for it?
Personally at this point my combined AI spend is the most expensive recurring monthly subscription I have, and that’s even with my company also paying for the AI tools I use at work.
If it weren’t subsidized I would pay more. Wouldn’t be happy about it but I would do it.
At this stage in the game I don’t really understand where this skepticism of the value these tools provides comes from.
> At this stage in the game I don’t really understand where this skepticism of the value these tools provides comes from.
Fear
I get it. I’m scared too. I’d be lying if I said I wasn’t.
Actually it is not about this stage. It is about the sustainability of this when training data runs out and there is less and less human-generated content.
An echo cannot go on forever!
> Actually it is not about this stage. It is about the sustainability of this when training data runs out
This is an argument from 2024. Somehow, the models have continued to improve.
If they stopped improving today they are good enough as they already are to generate profound change.
The wave front is already visible, we’re just on the shore waiting for the impact.
When training data runs out, their usefulness will diminish quickly. They will still be useful for searching documents etc., but I guess they are not good at that even now.
When training data runs out, their usefulness will stop growing quickly. Why should their usefulness diminish?
Because they would not be up to date with programming languages, tools, best practices, etc.
Maybe there is some way to keep the models up to date in less dramatic ways, but I think something's gotta give...
I mean, even now the vibe coded stuff is reprehensible.
20 bucks a month
> It’s not craze. It’s technology shift.
It is a bubble, with extreme levels of debt plus funding built on too many promises from the companies in these sorts of rounds.
People being consumed by the hype will also be completely consumed by the crash.
Comments like this are exactly how a 2000- or 2008-style crash will happen.
"This time it's different!"
> Bitcoin and 3D printing were crazes.
What did Bitcoin essentially give us? Huge pump-and-dump schemes coordinated by big hands? Crypto investments which made 95% of investors poorer? What's left? Maybe 0.01% of it was beneficial.
Freedom from overregulated and antiquated retail banking especially wrt cross-border transfers.
I guess it isn't that noticeable from inside US, but the rest of the world is grateful.
> the rest of the world is grateful.
Maybe speak for yourself? As part of the rest of the world, I am not grateful.
And I do. Speak, that is.