To answer a few people at once: I did mention compensation as a factor in the post, but I didn't elaborate details, so easy to miss. Comp is important of course, but so are the other factors. It feels like I can't go for a day without reading about the cost of AI datacenters in the news, and I can do something about it.
Again, many comments here saying I only care about the money, and while comp is an important factor I think it characterizes me as someone I'm not, and forgets what I've been doing for the past two decades. I've spent thousands of hours of my life writing textbooks for roughly minimum wage, as I want to help others like me (I came from nothing, with no access tech meetups or conferences, and books were the gateway to a better job). I've published technologies as open source that have allowed others to make millions and are the basis for many startups. I'm also helping pioneer remote work and hoping to set a good example for others to follow (as I've published about before). So I think I'm well known for caring about a lot of things during the past couple of decades.
The issue is that you're doing a lot, but not saving the planet.
What do you think is happening with the efficiency gains? You're making rich people richer and helping AI to become an integral (i.e. positive ROI from a business perspective) part of our lives. And that's perfectly fine if it aligns with your philosophy. It's not for quite a few others, and you not owning up to it leads to all kinds of negativity in the comments.
I mean, I don't know you well, but I see your posts on here from time to time, and from what I gather you are very, very exceptional at what you do.
Reality is, these AI giants are here and they are using a massive amount of resources. Love them or hate them, that is where we are. Whether or not you accept the job with them, OpenAI is gonna OpenAI.
Given how much the detractors scream about resource use, you'd think they'd welcome the fact that someone of your calibre is going in and attempting to make a difference.
Which leads me to believe you're encountering a lot of projection from people who perhaps can't land the highest of comp roles, and shield their egos by ascribing to the concept of it being selling out, which they would of course never do.
It's probably impossible to prove I'm not projecting..
However. I am putting my curious foot forward here:
What were the toughest ethical quandaries you faced when deciding to join OpenAI? To give a purely hypothetical example which is probably not relevant to your case: if I had to choose between DeepSeek and OpenAI, I think I would struggle with the openness of the weights..

Brendan, your work has been transformative. I own all your books and have probably read every technical blog post twice.
I hope there will be harder problems waiting for you than using flamegraphs to optimize GenAI Porn.
https://www.axios.com/2025/10/14/openai-chatgpt-erotica-ment...
Ignore the haters (who sadly have become extremely common on HN now).
I loved your work back when I was an IC, and I'm sure this is a common sentiment across the industry amongst those of us who started systems adjacent! I still refer to your BPF tools and Systems Performance books despite having not written professional code for years now.
Can't wait to read content similar to what you wrote about when at Netflix and Intel, albeit about the newer generation of GPUs and ASICs and the newer generation of performance problems!
> Ignore the haters (who sadly have become extremely common on HN now).
Nobody is hating anyone here, I don’t know where you got that from.
They are asking Brendan to be honest.
It is fine to say it is about the money.
But it is silly, borderline patronising to tell readers that you’re “saving the world” because you got a job at OpenAI.
Ah, so you see into the future, got it!
[flagged]
I feel like I can do something about something too but no one is picking me to do anything about anything.
It would be good if the performance improvements could be applied across the industry so everyone benefits. But it doesn't sound unbelievable that OpenAI may want to keep some of it secret to keep an advantage over others?
Thanks for taking the risk in this environment and posting about your experience from a personal standpoint. [environment: people will come at you from all angles with very passionate opinions]
I’m replying to your comment in the hopes of getting a response. In the blog post, you said:
> There's so many interesting things to work on, things I have done before and things I haven't.
What are the things you haven’t done before, if you could mention them?
>Did fixing it from the inside work for any of those other issues?
No, it never does. Those people somehow delude themselves into thinking it might, but...it might just work for us.
Turn them off!
> Comp is important of course,
The string "compens" appears exactly once in your post:
> But there are other factors to consider beyond a well-known product: what's my role, who am I doing it with, and what is the COMPENSation?
You did it for the money; don't try to rationalize it, because no one believes you. For that amount of cash, I'd probably jump on Altman's bubble for a year or two.
I believe him. I don’t know him personally but his blog posts pop up here from time to time and this feels genuine to me.
You believe someone taking a fat paycheck isn’t doing it for the fat paycheck?
Wanna buy a bridge?
Humans are complex and have multiple sources of motivation. You don't know whether he took the offer with the highest pay. He's likely wealthy enough that he can pay less attention to his income and focus on his other sources of motivation if he wants to. That's not to say pay is not a factor in his choice, but it need not be the only or primary one. This is a luxury of the privileged for sure, which can make it difficult to relate to.
Interesting. Out of curiosity, how long do you think OpenAI can survive as a company? Put another way, what would be your guesses for probability of failure on 1yr, 3yr, and 5yr horizons?
EDIT: possibly a corollary--does Mia pay money for ChatGPT or use a free plan?
> I stood on the street after my haircut and let sink in how big this was, how this technology has become an essential aide for so many, how I could lead performance efforts and help save the planet.
Brendan.
First of all, congratulations on your new job. However,
It is easier to just say to everyone it is about the money, compensation and the stock options.
You're not joining a charity, or to save the planet; this company is about to unload on the public markets at an unfathomable $1TN valuation.
Don't insult your readers.
They're Making the world a better place™
For the Benefit of Humanity®
[dead]
You gonna open source it?
> ...it's not just about saving costs – it's about saving the planet
There's something that doesn't sit right with me about this statement, and I'm not sure what it is. Are you sure you didn't just join for the money? (edit: cool problems, too)
Reminds me of when I was younger and thought of companies like Google and Tesla as a force for good that will create and use technology to make people's lives better. Surely OpenAI and these LLM companies will change the world for the better, right? They wouldn't burn down our planet for short-term monetary gain, right?
I've learned over the years that I was naive and it's a coincidence if the tech giants make people's lives better. That's not their goal.
The AI train is going with or without you; if you can be part of it and improve the situation, why not.
Right? Like what an incredibly naive thing to think, that BG is going to contain power consumption lmao. OpenAI is always going to run their hardware hot. If BG frees up compute, a new workload will just fill it.
Sure you might argue "well if they can do more with less they won't need as many data centers." But who is going to believe that a company that can squeeze more money from their investment won't grow?
Tangentially, I am looking forward to learning about the new innovations that come from this problem space. [Self-righteous] BG certainly is exceptional at presenting hard topics in an approachable and digestible manner. And now it seems he has unlimited funds to get creative.
They're going to grow either way. Those new workloads are going to be run regardless.
Ya, we know. Just humbling the author ;)
Even a 25% reduction in resource usage will probably not be enough; AI datacenters are still a huge resource sink after all.
I imagine there's a lot more to be gained than that via algorithmic improvements. But at least in the short term, the more you cut costs (and prices), the more usage will increase.
The blog author is the same guy who wrote this when leaving his previous company https://www.brendangregg.com/blog/2025-12-05/leaving-intel.h... :
> I also supported cloud computing, participating in 110 customer meetings, and created a company-wide strategy to win back the cloud with 33 specific recommendations, in collaboration with others across 6 organizations.
> My next few years at Intel would have focused on execution of those 33 recommendations, which Intel can continue to do in my absence. Most of my recommendations aren't easy, however, and require accepting change, ELT/CEO approval, and multiple quarters of investment. I won't be there to push them, but other employees can (my CloudTeams strategy is in the inbox of various ELT, and in a shared folder with all my presentations, code, and weekly status reports). This work will hopefully live on and keep making Intel stronger. Good luck.
OpenAI deserves these big shots.
Firstly, you would do well to read the guidelines about avoiding snark, and then actually say whatever it is you’re trying to say rather than make insinuations. As is, this response comes across as a very shallow read. It’s hard to get to the root of what you’re actually saying in your post other than it quotes two paragraphs about how it’s not fun to push through the bureaucracy of a large organisation, which - I would agree. Probably most people who’ve worked at a big company would.
So why does that make him a “big shot”? Are you perhaps envious of him?
Why does openAI deserve him or anyone? Hard to say.
I stopped reading just after that. “I joined Philip Morris to make cigarette smoking safer…”
The problems are interesting and the pay is exceptional. Just fucking own it.
He interviewed everywhere and took the biggest offer. Good! Don’t piss on my face and tell me it’s raining.
It's raining anyway. If I piss on your face, I can at least try to make the experience as positive as possible for you.
[flagged]
"Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith."
https://news.ycombinator.com/newsguidelines.html
Brendan, I'm a big fan of your book, and work. I don't have a problem with you joining OpenAI; best of luck there!
However, I'm not sure your analysis is quite correct, in this case.
If OpenAI can mobilize X (giga)dollars to buy Y amounts of energy, your work there will not reduce X or Y, it will simply help them produce more "tokens" (or whatever "unit of AI") for a given amount of energy.
So in a sense you're helping make OpenAI tools better, more effective, but it's not helping reduce resource usage.
https://en.wikipedia.org/wiki/Jevons_paradox
And the consequence of burning more tokens, of course, is more widespread adoption, weaving AI more deeply into the fabric of our reality.
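The Jevons-paradox argument above can be made concrete with a toy elasticity calculation. Every number here is an illustrative assumption (not an OpenAI figure): the point is only that a 25% efficiency gain can coexist with higher total energy use if cheaper tokens stimulate enough extra demand.

```python
# Toy Jevons-paradox arithmetic: a 25% efficiency gain can still
# increase total energy use if demand is sufficiently price-elastic.
# All numbers are illustrative assumptions.

def total_energy(tokens_demanded: float, joules_per_token: float) -> float:
    """Total energy (joules) to serve a given token demand."""
    return tokens_demanded * joules_per_token

# Baseline: assume 1.0 J/token and 1e12 tokens served.
base_j_per_tok = 1.0
base_tokens = 1e12
base_energy = total_energy(base_tokens, base_j_per_tok)

# After optimization: 25% fewer joules per token, so the marginal cost
# per token falls 25%. Assume a demand elasticity of -2: the 25% price
# cut grows demand by a factor of 0.75 ** -2 ≈ 1.78.
elasticity = -2.0
new_j_per_tok = 0.75 * base_j_per_tok
new_tokens = base_tokens * (0.75 ** elasticity)
new_energy = total_energy(new_tokens, new_j_per_tok)

print(new_energy / base_energy)  # ≈ 1.33: total energy use went UP
```

With an assumed elasticity below -1, the efficiency gain is more than offset by induced demand; with elasticity above -1, total energy would fall. Which regime AI inference is actually in is the empirical question the thread is arguing about.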
Strong LinkedIn vibes in this entry.
This article is so full of itself I can hardly stand to read it. I had to just sort of skim it instead. Sorry! This style just doesn't do it for me.
Not the first time either. See this person's previous blog post when leaving his earlier company. Lots of Kim Kardashian vibes of self-inflated self-worth.
It's a blog post, not an article. A narrative of events, not an interesting write-up on a topic.
Was it because ChatGPT helped you to write this post?
> Mia the hairstylist got to work, and casually asked what I do for a living. "I'm an Intel fellow, I work on datacenter performance." Silence.
How could she not know?
For people whose main computing devices are phones, this isn’t hard to believe at all.
Interacting outside of the tech bubble is eye opening. Conversely, the hair stylist might have mentioned the brand of a super popular scissor supplier/other equipment you’d have never heard of.
You missed the sarcasm.
Lol, I did. Needed a /s!
> She was worried about a friend who was travelling in a far-away city, with little timezone overlap when they could chat, but she could talk to ChatGPT anytime about what the city was like and what tourist activities her friend might be doing, which helped her feel connected. She liked the memory feature too, saying it was like talking to a person who was living there.
This seems rather sad. Is this really what AI is for?
And we do not need gigawatts and gigawatts for this use case anyway. A small local model or batched inference of a small model should do just fine.
> A small local model or batched inference of a small model should do just fine.
Or, you know, Signal/Matrix/WhatsApp/{your_preferred_chat_app}. If you're already texting things, might as well do that.
> And we do not need gigawatts and gigawatts for this use case anyway. A small local model or batched inference of a small model should do just fine.
I guess I'm a dinosaur but I think emailing the friend to ask what they are actually up to would be even better than involving an LLM to imagine it.
Asynchronous human to human communication is a pretty solved problem.
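The "small local model should do just fine" claim above can be sketched with a hedged back-of-envelope energy estimate. All hardware numbers below are assumptions for illustration (laptop-class hardware running a small, ~3B-parameter model), not measurements:

```python
# Back-of-envelope: joules per token for an assumed small local model.
# Every number is an assumption chosen for illustration.

laptop_power_watts = 30.0   # assumed extra draw while generating
tokens_per_second = 20.0    # assumed decode speed for a small model

joules_per_token = laptop_power_watts / tokens_per_second  # 1.5 J/token

# A heavy chat habit of 50,000 tokens per day:
daily_joules = joules_per_token * 50_000
daily_wh = daily_joules / 3600.0

print(f"{joules_per_token} J/token, {daily_wh:.1f} Wh/day")
```

Under these assumed numbers the use case lands around 20 Wh per day per user, i.e. phone-charger territory rather than gigawatt territory, which is the commenter's point.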
A common cited use case of LLMs is scheduling travel, so being able to pretend it’s somebody somewhere else is for sure important to incentivize going somewhere!
I use it as something to talk to about incredibly nerdy and/or obscure things no one else would be willing to talk about.
That’s honestly just sad. Not the fact that you’re doing it, but rather the fact that you have nobody to talk to about those things.
I feel like this kind of response is a good example of why someone wouldn't talk to others about things.
It’s super dope, and you can have it talk to people for you in the local language when you go there. I’ve busted it out to explain what I’m thinking for me. Watching travel shows on TV or reading travel magazines is sadder.
> it's not just about saving costs – it's about saving the planet.
You're in for a surprise buddy.
Performance and efficiency are important, but we need you to invent the monitoring tools and visualisations that will underpin alignment!
> save the planet
> I'd been missing that human connection
At OpenAI.
Brendan can do whatever he wants. He's that good. If anybody seriously needed to interview him 20+ times to figure it out, then the burden is now on them to not fuck it up.
The article says "I ended up having 26 interviews and meetings (of course I kept a log) with various AI tech giants."
I don't think that indicates that any one company interviewed him 20+ times.
Seriously. I would expect him to be more of an offer-only scenario.
He's summing interviews across all AI giants. But the ones about to IPO can interview someone almost infinitely many times, because everyone wants on the bandwagon.
Mia was right. Listen to Mia
Apparently, there's this guy who's really good at optimizing computer performance and makes a lot of money doing it. At the same time, he writes mediocre school essays that are actually a bit embarrassing. Guys, if you have the opportunity to land a very well-paid job, then do it. Take the money. Live your life. But please spare us the public self-castration.
If it's in your power, make sure user prompts and LLM responses are never read, never analyzed, and never used for training - not anonymized, not derived, not at all.
No single person other than Sam Altman can stop them from using anonymized interactions for training and metrics. At least in the consumer tiers.
It's a little too late for that, all the models train on prompts and responses.
TL;DR: $$$$$
[flagged]
TLDR: Money, Fame and A Glorious IPO (AGI)
Just say you joined for the money, and that Intel's stock didn't do a 10,000x run like Nvidia's did, which you completely missed.
So the best chance at something like that again is OpenAI when they achieve a 1TN valuation with AGI.
Unless OpenAI goes with a very liberal definition of AGI, he's going to wait decades for AGI.
They’re already trying to redefine the AGI playing field by doing so.
I think OpenAI will IPO at $1T. I don’t want to say bubble, but it could be one of those super-hyped stocks that never goes anywhere after the IPO (i.e. Airbnb during Covid).
I believe that OpenAI wants to IPO at that valuation. I don’t think it can IPO.
[dead]