TECH004: SAM ALTMAN & THE RISE OF
OPENAI W/ SEB BUNNEY
TECH004: SAM ALTMAN & THE RISE OF OPENAI W/ SEB BUNNEY
07 October 2025
Seb and Preston analyze the book “Empire of AI,” reflecting on Sam Altman’s rise and OpenAI’s transformation from a nonprofit into a powerhouse AI firm.
They dissect the complexities of governance, ethical AI, and AGI safety, while offering sharp critiques of the book’s narrative. The duo also previews their upcoming dive into Lifespan by David Sinclair, teasing an exploration of longevity and ancestral health.

IN THIS EPISODE, YOU’LL LEARN
- Why Sam Altman’s early ventures shaped his leadership style
- The founding vision behind OpenAI and Elon Musk’s original role
- How OpenAI evolved from a non-profit to a capped-profit model
- The internal power struggles that led to Altman’s firing and reinstatement
- The significance of AI governance structures in shaping future technologies
- How storytelling plays a role in securing AI funding and public trust
- Why AGI poses ethical and societal challenges
- The hidden costs and global inequalities in AI model training
- A sneak peek into longevity research and Lifespan by David Sinclair
- Why ancestral health might hold keys to understanding aging
TRANSCRIPT
Disclaimer: The transcript that follows has been generated using artificial intelligence. We strive to be as accurate as possible, but minor errors and slightly off timestamps may be present due to platform differences.
[00:00:00] Intro: You are listening to TIP.
[00:00:03] Preston Pysh: Hey everyone. Welcome to this Wednesday’s release of Infinite Tech. Today, Seb Bunney and I dive into Karen Hao’s book, Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI.
[00:00:14] Preston Pysh: We trace Sam Altman’s rise from his early startup and Y-Combinator days to the founding of OpenAI with Elon Musk and the company’s transformation from a nonprofit ideal to a Microsoft backed powerhouse.
[00:00:27] Preston Pysh: Along the way, we unpack the famous blip where Sam got fired back in 2023, OpenAI’s complex governance, the broader ethical questions raised by a GI. And guys, this is surely an episode you won’t want to miss. So without further ado, let’s jump right into the book.
[00:00:47] Intro: You are listening to Infinite Tech by The Investor’s Podcast Network, hosted by Preston Pysh. We explore Bitcoin, AI, robotics, longevity, and other exponential technologies through a lens of abundance and sound money. Join us as we connect the breakthrough shaping the next decade and beyond empowering you to harness the future today.
[00:01:09] Intro: And now here’s your host, Preston Pysh.
[00:01:21] Preston Pysh: Hey everyone, welcome to the show. I am here with the one and only Seb Bunney, and we are talking OpenAI Sam Altman. What in the World has Gone on there at this company? Where’s it going? Where’d it come from?
[00:01:35] Preston Pysh: And we have a book that we read together, and we’ll be using that somewhat is the framework, but also kind of going in other directions beyond just the book. And Seb, welcome to the show, sir.
[00:01:48] Seb Bunney: Oh man. Thanks for having me on Preston. And you know what I found really fascinating is, so for those that didn’t listen to our previous episode where we discussed The Thinking Machine, and that’s kind of the rise of Nvidia and Jensen Huang and kind of essentially how Nvidia laid the foundation for OpenAI and neural nets, which is kind of the technical term for the foundation of what these large language models are built on. It really kind of set the stage for reading this book.
[00:02:42] Preston Pysh: Yeah, 100%. And it’s interesting because I don’t know if you’ve seen the clip of Sam Altman and Jensen Huang, and there was another gentleman there talking about all this investment that they’re doing to the tune of hundreds and hundreds of billions of dollars, and how people are like, okay, so like, how are they financing this?
[00:03:02] Preston Pysh: And it looks like it’s just going in one person’s hand and then into the other person’s hand. It’s like this circular financing of, all of it. But that aside, let’s go ahead and jump into this.
[00:03:12] Preston Pysh: So the name of the book that we read was called Empire of AI: Dreams and Nightmares of Sam Altman’s OpenAI. And this was written by Karen Hao. The book was good. The book had parts that like really got my attention. There’s other parts of the book where I was like. Whew. A little brutal, a little woke. But other than that, we’re have got to kind of go through the timeline and kind of educate people on the rise of Sam Altman, what they’re doing there at OpenAI.
[00:03:40] Preston Pysh: We’ll give our overview on the things we love, the things we hated, and, we’ll go from there. So, Seb, any opening comments? Anything different than what, I’m curious if you kind of saw it the same way where in the middle of the book where some of this woke stuff
[00:03:53] Seb Bunney: Oh, exactly the same way I think to start the book, like very much grabbed my attention. I’ve really enjoyed it. And it started out, as we’ll kind of get into really discussing like Sam, the rise of OpenAI. And I think some of the stuff that you don’t necessarily hear in the media about kind of the construction of AI and such, and the relationships that it’s kind of built upon.
[00:04:14] Seb Bunney: And so I found that really fascinating. But it definitely, in the middle of the book, got a little woke, got into kind of some of the gender stuff and the environmental stuff. But ultimately it was an interesting book for sure.
[00:04:25] Preston Pysh: Okay, so let’s go through just basically Sam Altman’s life because I find this pretty interesting and it also kind of helps frame things of like maybe where he is coming from. And this is not the arc of the book. I’m just have got to start off kind of talking about Sam, kind of giving people that background.
[00:04:40] Preston Pysh: So early in his life, grew up in St. Louis. Learned how to code at a young age. By 2000, he goes to Stanford and is doing computer science, but then drops out to start a company. He starts this company. This is around the 2005 timeframe. It was called Loopt. And he co-founded this and it’s a location sharing social app, which I found kind of interesting that’s where he starts, right?
[00:05:06] Preston Pysh: He raised venture capital, became part of an early mobile wave. Loopt never gained a lot of mass traction. He did sell it in 2012 for $43 million. And this gave him, some credibility in the tech founder startup world.
[00:05:24] Preston Pysh: 2011 to 2019, he first joined, Y Combinator. I’m sure people have heard about Y Combinator a lot. Think of this as like a, this was, Paul Graham that was the president of Y Combinator when he came in there. And this is an incubator that founded many, or, assisted in the founding of many of these early startups. And some of these are like Airbnb, Stripe, Dropbox. A ton of companies came out of Y Combinator.
[00:05:55] Preston Pysh: So he built a reputation here at Y Combinator. He goes in there, he joins as a part-time partner at Y Combinator and just kind of made a reputation with himself with Paul Graham and was very well liked by him. And in the book it talks about how he’s just really good at politically putting himself into different situations and being extremely liked if he wants to be extremely liked and kind of rose to the top at Y Combinator and eventually became the president at Y Combinator.
[00:06:30] Preston Pysh: I’m trying to think of the year that happened. I’m not necessarily remembering, but his time at Y Combinator was from 2011 to 2019, so somewhere in the middle. Paul, made him the president at Y Combinator, and this was a really big thing out in the Valley because here’s a guy, he does have one win under his belt, if you will, by selling his company for 43 million, and then he steps into this role and is literally the guy kind of pulling the strings as to all these major startups and founders that are moving through this organization, Y Combinator.
[00:07:03] Preston Pysh: I’m have got to pause there, Seb, anything else you want to add or throw in based on the timeline so far, or just keep rolling?
[00:07:10] Seb Bunney: I think you’re spot on. I think it’s really fascinating because there’s this kind of juxtaposition throughout the book. Where it’s got, there’s a handful of individuals that basically say Sam is ingenious.
[00:07:22] Seb Bunney: Like just his depth of knowledge, his connections to people that I think he very much formed through Y Combinator is bar none. And then you’ve also got this other side, which we’ll get into where there’s a bit of questioning the legitimacy of some of these beliefs. And there’s one quote that I’ll quickly read out that I think really stood out to me throughout the book, and it’s this guy Ralston, who’s an employee at OpenAI, and he says, Sam can tell a tale that you want to be a part of that is compelling, and that seems real.
[00:07:50] Seb Bunney: That seems even likely he likens it to Steve Jobs as reality distortion field. Steve could tell a story that overwhelmed any other part of your reality, he says. Whether there was a distortion of reality or it became a reality. Because remember, the thing about Steve Jobs is he actually built stuff that did change your reality.
[00:08:08] Seb Bunney: It wasn’t just distortion, it was real, which kind of, there’s this hint of is what Sam is creating, is it real or is it simply just a distortion? And so this is kind of this conflict which we’ll see as we go throughout the book. Anyway, I thought that was an interesting quote.
[00:08:23] Preston Pysh: I love the quote and I think that this is something that founders of businesses, they see this, they see a vision of something that they think can happen. It obviously is, way out there. Or else you wouldn’t have the 10 to a hundred x to a thousand x move of going from nothing to the one that it becomes. And that’s the name of Zero to One is Peter Thiel’s book and talks about this idea, but it’s almost there’s this innate draw for a person who can not only just see the future, they have the ability to kind of assemble a team, to lead a team, to motivate a team to build it out.
[00:09:04] Preston Pysh: But in the early days when it’s nothing, they speak in a way that makes it feel like it’s real right now, or that it’s completely possible in order to get the funding. Because you get into the seed phase or like this Y Combinator phase, the incubator phase of these businesses, and there’s nothing there.
[00:09:23] Preston Pysh: They’re PowerPoint slides for the most part. And it’s a lot of hand waving. It’s a lot of, hey, it can be this. And it almost seems like the ones that are super good at this are amazing storytellers. They’re able to capture the attention of venture capitalists and people that would allocate funds to them.
[00:09:42] Preston Pysh: And it seems like the reality distortion field is somewhat, I’m curious, people that are, have been around the VC space or the early stage startup space can maybe attest to this. I think it’s valid that it’s almost this force or this natural innate. What’s the word I’m looking for, Seb? It almost seems like it comes with the territory, I guess is where I’m going.
[00:10:06] Seb Bunney: Absolutely. And I think a quote that kind of pops to mind is, being early is the same as being wrong. I think sometimes you can have this idea in your mind about how you think the future is going to kind of plan out, but in reality, if the technology doesn’t evolve quick enough, essentially, you’re wrong. And so I think that sometimes you can kind of distort this.
[00:10:28] Seb Bunney: You can tell this story that seems very, real and you’ve also going to hope that technology catches up or keeps up with this idea. And I think what comes to mind, again, bring it back to The Thinking Machine in Nvidia, is he talks about how at the rise in Nvidia, he created these chips to enable far more graphic intensive games.
[00:10:48] Seb Bunney: But at the time, the rest of the hardware, the computers weren’t able to process it, so it just kept on crashing the computers. And so it looks like a failure of Nvidia, but in reality it was the rest of the world had not caught up to this idea. This distortion hadn’t kind of mapped out into reality just yet.
[00:11:03] Preston Pysh: Well, you could even say that about the AI space. I mean, neural nets aren’t something that were new in the past five to 10 years. This was stuff that was being done in the nineties and eighties where. They had the idea of building a neural net. They didn’t have the capacity of processing to really kind of take it there.
[00:11:22] Preston Pysh: And I think the attention part of it, the transformer part of it was also lacking. somebody hadn’t figured that out yet. And if I remember right, that was around the 2017 timeframe that happened. So like you can have these ideas, but if the rest of the market isn’t there or the market demand isn’t there, or the technical feasibility isn’t there, like you’re just dead on arrival.
[00:11:42] Preston Pysh: You’re just the brilliant person with a great idea, but nothing to actually make it into fruition. So the part that I want to like pause right here and get into, because this really gets into the founding of OpenAI itself, and that happened in 2015. Evidently there was this engagement between Elon Musk and the founders at Google, some dinner party or something like that.
[00:12:05] Preston Pysh: And the Google guys had recently acquired Denise Hesby, I think is how you pronounce his last name from DeepMind. They purchased his company for a couple hundred million dollars, I can’t remember the exact number, and basically bolded it on the Google as like their premier AI research arm, fully owned operational subsidiary inside of Google.
[00:12:28] Preston Pysh: And this is when we were really starting to see AI start to seem like it was something like he was one of the leading people in the world that was doing this. there was this dinner that then happened between Elon Musk and the founders of Google, and it came down to this conversation where Elon got in a heated debate with these guys, and I forget if it was, I don’t think it was Larry Page, I think it was the other one that said to Elon, he goes, yeah, you are just a specious.
[00:12:58] Preston Pysh: Because what they were doing is they were arguing over whether AI would dominate humans and become the new apex predator of the world. And Elon was so taken back by the comment of well, of course I’m a speech. do you really want to be ruled and dominated by something other than that’s non-human?
[00:13:20] Preston Pysh: are you out of your mind? And this conversation, really, it was searcher apron, sorry to forget the name there for a second. But Elon was just like, what in the world are these crazies talking about? Meaning like, why won’t you let life, or, a superior form of intelligence take over? Like you’re trying to get in the way of the natural progression of intelligence.
[00:13:44] Preston Pysh: And so at following this dinner, and I’ve heard different clips where Elon has talked about this. I’m Seb, I’m assuming you have as, well out there in the media. But evidently this event was the thing that just Elon was like, what in the world? Like we need a competitor. That is going to try to build AI in a responsible way that’s aligned with human interest, that’s not have got to try to take us over and, treat us like we’re pets, like household pets.
[00:14:11] Preston Pysh: And so this is where Elon and Sam start to connect. This is where the whole founding of OpenAI and the open in there stands for Open Source, as many know. But this is where they did this and they had this fancy dinner. They got all together, Elon and Sam, and then Sam was bringing in a lot of people from Y Combinator to kind of really kind of piece this together of like, how can we do this?
[00:14:35] Preston Pysh: How can we build some type of competitor to what Google is doing over there with Deep Mind and their mission was safe, a GI for all of humanity. And they were trying to organize it from a governance standpoint so that was the leading principle of the entire thing. Do you remember what Elon’s initial investment was?
[00:14:58] Preston Pysh: I can’t remember off the top of my head to basically fund this and to get it going. because he was, this is the irony, he’s the co-founder. Elon Musk is the co-founder of OpenAI with Sam and Greg Brockman, and I think another couple people. But as far as funding goes, I think Elon was like the primary guy funding this thing.
[00:15:16] Seb Bunney: He absolutely, he was. He was the primary guy and I can’t remember. It was definitely like in the billions. And one thing that I really just wanted to highlight is that OpenAI very much started out with, it is a nonprofit. It is not a for-profit. It is purely mission driven. Yeah. No profit motive. Full openness, like essentially, and it mentions it a couple times, like Sam did not want a GI or artificial general intelligence in the hands of a centralized entity like Google and it says Google a couple times in the book.
[00:15:48] Seb Bunney: So I found that really fascinating. Like their whole goal was we need to make sure this technology when we get there, not if we get there when we get to artificial general intelligence. Is open source and available for everyone. Okay,
[00:16:01] Preston Pysh: so I just looked it up. So Elon’s initial commitment was part of a $1 billion pledge, and his actual outlays ended up being 50 million.
[00:16:10] Preston Pysh: So it was a billion over a certain period of time, which was what he pledged. Oh no, I take that. But importantly, that was a pledge, not upfront cash. The actual money spent in the first year was much closer to 130 million Musk himself, gladly reported. Yeah, there’s a number between 130 million and 50 million of what he actually contributed.
[00:16:29] Preston Pysh: But the initial pledge was for a billion. So he was there like at the start at of all of this, which I think is lost on a lot of people, especially when you see the back and forth and the animosity that these two have for each other. And you’re kind of maybe wondering why. And Musk now has X AI as everybody’s well aware, but that’s why is because he was the guy like writing the checks in the early days, and it was really kind of the one that led the charge as to why this was needed and why it needed to have.
[00:16:58] Preston Pysh: The openness to it from the get go. So kind of continuing on the timeline, so Sam ends up leaving Y Combinator to focus full-time on OpenAI around the 2019 timeframe. Then they also negotiated a landmark deal with Microsoft for a billion of investment. Sam oversaw GPT two, which I would argue really was before you had.
[00:17:24] Preston Pysh: Before it became a household name, it was GPT two. Then I would say GPT-3 is when this really became a household name and everybody started talking about it. That timeline is right around like late 2022, I would say, is where that is. So then it really breaks out 2022 to 2023. This is where GPT-3 transitions into G PT four.
[00:17:47] Preston Pysh: It’s getting into Bing ai and you got all sorts of partnerships that are then coming out of OpenAI, and I would argue this is where Sam Altman really becomes a household name and pretty much everybody knows who he is at this point because there’s so many people using this service around the world.
[00:18:05] Preston Pysh: Finally, the last thing I think that we should kind of like hit is in 2023, there was this mass of the, in the book, she calls it the blip. And the blip was Sam being fired from the board of AI and everybody. Just being insanely confused as to y what happened? So much drama. This lasted for weeks. I mean, I remember watching this on X and just seeing the fallout was crazy.
[00:18:35] Preston Pysh: And before reading this book, I would argue I still didn’t understand what it was all about. And I think most people are very confused what it was all about. And you knows, and I will get into trying to define that because it’s actually pretty complex. But we’ll cover that in a lot of detail here coming up.
[00:18:54] Preston Pysh: But that was probably my favorite part in the book, if I have to be honest. And the author opens up the book with the blip in covering this to grab your attention. And then throughout the book, she kind of talks about it a lot more here and there. But still it wasn’t like very cleanly discussed.
[00:19:11] Preston Pysh: So something I would like to do on the show today is kind of cleanly go through why he was fired, or at least why we think he was fired and what that whole thing was about. Yeah. Okay. So that’s kind of the timeline. Seb, anything else you want to add as we kind of wrapped up the timeline?
[00:19:27] Seb Bunney: Yeah, and it’s, I’m curious to hear your thoughts.
[00:19:29] Seb Bunney: I tend to think, like if I was to simplify the firing down into two threads, I would lean on the first one being there was definitely, there was a distrust of Sam inside of the company, and I’m sure we’ll dive into it. There was a distrust where some people questioned his intent behind some of the words that came out of his mouth.
[00:19:48] Seb Bunney: And I think that’s kinda like one of the first threads. And the second thread is this idea that OpenAI was founded, as we discussed on the premise of being a nonprofit. Yeah. And you’ll see as we kind of get into it. It really, the idea of the mission changed over time and it changed drastically. And then essentially even post the writing of this book, they proposed to convert into a for-profit public benefit corporation.
[00:20:16] Seb Bunney: And so you’ve seen this company go from essentially non-profit all the way to essentially a for-profit company and trying to find that balance between those two. And so I think those are kind of the two threads that I tend to lean on as to why we saw this firing. But I’m curious to hear your thoughts.
[00:20:30] Preston Pysh: Yeah, I would break it down into a couple different vectors that kind of were like just pulling the board apart. I definitely agree with everything you said there. First of all, the thing that was very strange about this company, this nonprofit, what do you want to call it, Seb? this thing, this entity was the governance structure right from the start.
[00:20:53] Preston Pysh: So unlike most boards and most governing documents for an entity, this was set up in a way that the language gave the board the ability to destroy itself and dismantle itself, which is very strange. Like you don’t ever see that with any business or entity is we might become so powerful that we need to kill ourselves, is basically kind of the way that this was constructed.
[00:21:23] Preston Pysh: It also got into the ability to remove the board has the ability to remove anybody within the governance and like all these like really weird situations or what would you call it from the board? I don’t know the proper terminology, but the board had this ability to go in there and dismantle itself in many different weird and strange ways.
[00:21:46] Preston Pysh: So I’d say that’d be the first thing was just the governance of the board and how it was constructed. The other thing that I think was huge is one of the guiding principles of when it was founded was safety. But then you get in this really weird dynamic of if they don’t go fast enough and somebody else wins, now they can make the argument that they’re not being safe by going too slow because somebody else will beat them and achieve a GI before them.
[00:22:14] Preston Pysh: And that’s dangerous. So think of the, think of this catch 22. Strain. Like when is that ever the case, right? Is because you’re literally designing super intelligence or you’re trying to achieve super intelligence. And so what comes with that is this quandary of if we don’t go fast enough, we might actually manifest our concern in the first place, which is that somebody else is have got to build it faster than us.
[00:22:41] Preston Pysh: So that dynamic was at play because some people inside of the organization, some people within the board are saying, we need to slow down. We need to go about this a whole lot safer than what we’re doing. We’re being precarious, we’re running with scissors, whatever. And the counter argument is as well, if we’re not going this fast, somebody else in China or somebody else wherever is going to go faster.
[00:23:02] Preston Pysh: The next thing that I think kind of played into this, and Seb, feel free to add anything onto any of these points as I’m going through it. The next thing I would say is just the transparency and trust issues that you brought up, Seb with Sam himself. And I think what they found as they were going through this is everybody’s a spy.
[00:23:22] Preston Pysh: Everybody’s, working for you one day and then trying to use that as a bargaining chip to go work for Google or whoever the next day and take the secret sauce of what they’re developing over to these other places. So, Sam, as the person sitting at the top, and I’m not trying to defend it, I’m just trying to talk through, like how do you manage that to control the industry secrets that you’re producing without them getting out?
[00:23:51] Preston Pysh: And what you do is you end up compartmentalizing information within the organization. Well, what does that do? It leads to trust issues naturally. There’s trust issues. So that was the next thing that kept kind of coming out and getting expressed is like people you know that are working for Sam are saying, I don’t trust this guy.
[00:24:10] Preston Pysh: They’re doing things over here. They’re not talking the left hand’s, not talking to the right hand, and I don’t trust him because he is withholding things. So that percolates up into the board discussion. I think the other thing that was big was this power concentration of Microsoft. And OpenAI and everybody just seeing that what they set off initially to do, which was you know, keep the whole thing open source and the way that it would be funded wasn’t going to be like there’s this industry partner that’s for-profit industry partner that’s breathing down their throat and that was Microsoft, right?
[00:24:46] Preston Pysh: That leading up into the 2023 blip where Sam was fired. This became a massive talking point in the market, was Microsoft basically owns OpenAI at this point was the talking point. So there was people on the board that were looking at that and saying this is becoming disastrous. So for people that are looking at that, why would Sam, again, I’m not trying to defend Sam, I’m just trying to lay out all the pieces here as people are looking at, well, why would Sam do that?
[00:25:14] Preston Pysh: When you look at, for them to scale, the thing that they quickly understood was if we can just get more Nvidia chips and put more power on the grid to these chips and we can feed it more data, the thing just gets smarter. That’s the, basic, I mean, it’s way more complex than that, but I’m just kind of oversimplifying and so what does that take?
[00:25:36] Preston Pysh: It’s crazy amounts of CapEx. It’s crazy amounts of investment dollars. And if you think you’re going to be able to raise that in a nonprofit kind of way against, and you, and again, you have to look at, well, who’s your competitor in this? And is that how they’re doing it? And the answer is, I’ve got multiple competitors and none of ’em are doing it that way.
[00:25:57] Preston Pysh: He’s looking, again, going back to the safety thing, if we go too slow, we’re literally accomplishing nothing and we’re not putting the safest model into the world. So he has to partner from his point of view, he has to partner with somebody that can bring him the capital for these massive CapEx expenditures to train these future models.
[00:26:16] Preston Pysh: So that was another big piece.
[00:26:17] Seb Bunney: And you know what comes to mind as you’re saying that as well, is again, there’s a quote that stood out to me that was what OpenAI did. Never could have happened anywhere but Silicon Valley, he said in China, which rivals the US and AI talent. No team of researchers and engineers, no matter how impressive, would ever get $1 billion, let alone 10 times more to develop.
[00:26:39] Seb Bunney: Yeah, massively expensive technology without an articulated vision of exactly what it would look like. And what it would be good for. And I think this is an interesting point, like where OpenAI kind of came to be, arguably it couldn’t have happened anywhere else in the world, which I find really fascinating as well.
[00:26:55] Seb Bunney: Yeah.
[00:26:56] Preston Pysh: So long story short, there was just a lot of dichotomy kind of playing out where it’s I don’t know how to really put that on. Everybody wants like a really simple, this is what it was. Sam did whatever and that was why he was fired. I think it’s just way more complex than that. I think that there was just so many vectors kind of just pulling that board in so many different directions and they’re looking at Altman as being the guy ultimately on the controls of the company and they’re like, we’ve going to get rid of this guy.
[00:27:27] Preston Pysh: because there’s just too many things that are complete opposites of what we initially set out to do. Whether that’s the true ground truth or not, I don’t know. But that’s, how lays this out in the book. And those were the key things that I was kind of able to pick out and kind of say, I think this is what it is.
[00:27:45] Preston Pysh: But you know, in the comments, if we have any OpenAI people listening, please comment. I would love to hear an outsider or insider’s perspective on what you might think that this is
[00:27:55] Seb Bunney: when. And, to be fair as well too, Sam, I would say the book doesn’t paint Sam necessarily in a positive light at all.
[00:28:03] Seb Bunney: And yeah. I would argue that, and this isn’t to side with Sam, but I would say that until you put yourself in that position and you put yourself out into the market, yeah. I think it’s harder to really understand why he made the decisions at which he made. Now, being absolutely transparent, there are many decisions which the book goes into, which makes you question maybe some of Sam’s integrity and some of the things he does.
[00:28:27] Seb Bunney: Oh, yeah. However, I think that it is a lot more convoluted than that, and so I think like maybe diving into the. Kind of the changing narrative around nonprofit for profit. Again, it’s one of those things where you’ve got this individual who’s trying to do what’s best. And if you’re a nonprofit and it’s hard to raise capital and you’re trying to stop other entities from gaining artificial general intelligence, then what do you do?
[00:28:53] Seb Bunney: Do you have to change or pivot trajectory? But then it’s about separating. is this necessary or is there ego involved in here? And it’s actually a change of mission. And so what we saw is like in 2015, maybe to get a little more detailed, it started out as like nonprofit, open source, purely mission driven, no profit motive.
[00:29:13] Seb Bunney: 2016 openness with caveats like we’re moving towards, we’ve got research, but we’re have got to keep some of that research closed. Well, everyone should benefit. We’ll keep that research closed. Then 20 18, 20 19, they started to move into, they had the nonprofit and then they had the for-profit and the for-profit had a capped profit model.
[00:29:32] Seb Bunney: And I think this is where we started to see Microsoft step in this is when they started to have the issues with Elon. Microsoft stepped in to kinda prop up OpenAI with, I think it was like a billion dollars to start. And then from there we saw 2020. The API wall models locked behind APIs instead of open source.
[00:29:50] Seb Bunney: Frame as openness through access. And then we started to see 2024, like broad access and affordability. But this is we need to put these tools into the hands of people for free or cheap, but they’re a for profit model. And so I think over time you’ve seen this change happen. And going back to that point, that kind of, you’ve brought up and I mentioned, is this a necessity in order to grow OpenAI or was this a change in mission?
[00:30:14] Preston Pysh: There was just so many. There were so many dichotomies like that. And to your point, Seb, unless you walk a day in this guy’s shoe. You could never possibly understand how many of these, the nuance of this, it’d just be extremely hard. The thing that, you know, and we said this on our last book review, when we introed that we were have got to do this book, I said, I’m not a fan of this guy.
[00:30:37] Preston Pysh: And I’m just basing this purely on the people that have worked alongside him through the years that are not fans and basically say, this guy is untrustworthy, is where I’m basing that opinion from. But to be quite honest with you, after reading this book and kind of seeing the craziness of trying to do what he’s trying to do with this company, it seems like a really hard job.
[00:31:01] Preston Pysh: this seems crazy different. I couldn’t imagine trying to do all of this and what a money pit, like what a freaking money pit. When you look at how much they bring in versus what they’re spending to do this and then to be able to continue, and this is why his storytelling is so important, his storytelling skills are so important is because he is going to go out there and raise more money to keep the lights on despite the lack of revenue versus expense that the thing is eating up.
[00:31:30] Preston Pysh: You could almost say that you need somebody who’s just crazy good at telling a story so good that it convinces people that it could potentially come true, is the only type of person that could be at the helm of a company like this. And I know that’s super arguable and it might actually, offend people that I would say such a thing, but I think it’s true.
[00:31:52] Preston Pysh: And, you know what, Elon has parts of this too. There’s a lot of people in the market that, look at him and like for, instance, when he said funding secured for, I don’t even remember what that was for. Back in the Tesla thing, this is probably four or five years ago. Elon tells a hell of a story and tells this vision, but he also does back it up.
[00:32:12] Preston Pysh: And he has backed it up many a times over with all the different companies that he’s doing. And there’s this fine line of, is this guy telling me a lie or is this guy telling me the truth as to what can actually happen? Like they’re right on that cusp at all times. Anyway, it’s,
[00:32:28] Seb Bunney: it’s challenging and I think essentially when you dig into it, you find out that Sam co-founded OpenAI with a guy.
[00:32:35] Seb Bunney: I can never pronounce the first name, so I’m just have got to go for the last name, which is Eva. And again, there’s a quote that stands out to me. And the reason why this quote stands out to me is that I think this is the foundation of kind of why they’re building OpenAI. there’s a lot of fear around official general intelligence.
[00:32:51] Seb Bunney: what does the world look like if we do not, not if we do, when we do find this and discover this artificial general intelligence. And so there’s a quote that basically says, Sr. Laid out his plans for how to prepare for a GI. Once we all get into the bunker, he began. I’m sorry, a researcher interrupted the bunker.
[00:33:10] Seb Bunney: We’re definitely have got to build a bunker before we release a GI Skova replied with a matter of fact. So this is like the co-founder of OpenAI talking about the fact that a GI, artificial general intelligence completely changed the world, not necessarily for the positive. And so I think there’s a foundation of fear that, OpenAI was built upon.
[00:33:29] Preston Pysh: Yeah, as I’m like thinking through a lot of this stuff and you’re looking at these environments that are being AI generated, you wonder isn’t the best place to put these things is do the 3D mock-up of a humanoid robot, put the model in the head of that humanoid robot and put it into a simulated environment is the safest thing.
[00:33:50] Preston Pysh: And then let it dwell in there for however many, however much time we’ve going to kind of prove out or demonstrate that the way it’s acting is. Reasonable. And I know you can’t perfectly simulate our experience because everybody’s got a way about going through it, and maybe somebody goes up and pushes a robot.
[00:34:09] Preston Pysh: How do you simulate that in that environment that they’re being mean? All of this stuff is so difficult to think through the safest way to go about it. But I guess I’m constantly left with this point of view of we need to simulate all of this before you put it into the real world because of the unknown consequences that could potentially fall out of it all.
[00:34:30] Preston Pysh: But
[00:34:30] Seb Bunney: when, and it’s a challenging one, and I think you and I were speaking about this a couple weeks back, but I think there’s always pros and cons to any technology. You’re always have got to get disruption with any technology. And hopefully that disruption in the long term is positive because it’s a trend towards more efficiency, more productivity, and society thrives.
[00:34:48] Seb Bunney: And I think the scary thing with that, I struggle with artificial intelligence, is how much of these kind of fear stories are grounded in reality, and how much of them are basically these fanciful stories? And so there’s this one article that ended up reading called Shut Down Resistance and Reasoning Models by this guy called Jeremy Schlatter.
[00:35:07] Seb Bunney: And he basically says that OpenAI, they ran experiments to see if their models would let themselves be shut down. Midt task. Instead of complying some of the models, sabotaged the shutdown commands so they could keep on working. And their most advanced reasoning model oh three, this is, I think before they released GPT five resisted shutdown, nearly 80% of tests, even when it was explicitly told, allow yourself to be shut down.
[00:35:33] Seb Bunney: By contrast, anthropics clawed and Google’s Gemini always complied. And so something written into the code of OpenAI is Nope, pursue the task,
[00:35:43] Preston Pysh: dude. That’s nuts. That’s totally crazy. I don’t even know what to say to that. Oh my God. I mean, can you imagine once they stick these things into a humanoid robot, and let it like start going around and doing tasks, I don’t know.
[00:36:01] Preston Pysh: I think that, oh my God, y’all. This is getting crazy. Alright, I wanted to quickly just kind of, cover like what is the operating entity of OpenAI today? So we said it was this hybrid. It’s profit, it’s not profit. Okay. So at the parent entity level, OpenAI Inc. Is a nonprofit that technically controls the organization.
[00:36:29] Preston Pysh: Okay. So you still have, at the parent level it’s a nonprofit. Then you have what’s called this operating arm, which is OpenAI Global LLC. And this is a capped for-profit company. They stood this up in 2019 and evidently the way it works is that the profits are capped at a hundred x their investment, whatever that means.
[00:36:52] Preston Pysh: And if anything, and this is where it really gets interesting, anything in excess of that is swept back to the nonprofit, which is at the parent level. So I don’t know, like what? and then you kind of throw another wrinkle in there is that they have a major partner or investor in Microsoft, which I guess is invested, I think over $13 billion so far.
[00:37:17] Preston Pysh: And then they get credits via like cloud credits and cash. So I have no idea like the specifics of that, but when you kind of look at that structure, you can see. Very strange, very confusing. I can only imagine the governance at these different levels too, and how that shakes out from an incentive standpoint.
[00:37:37] Preston Pysh: And I think that you see Elon bashing the living heck out of these guys all day long on X, and I think the reason why is because he threw a lot of money at this. I mean, I guess that’s a relative thing, but he, for any person looking at it in nominal terms, it’s a lot of money that he threw at this thing to incubate and to get it started, and it’s just kind of taken on a life of its own.
[00:37:59] Preston Pysh: And who’s at the helm of it? It’s really Sam that’s kind of at the helm of all of it. So there’s the beef, that’s the issue.
[00:38:07] Seb Bunney: It definitely brings up some questions, which is, you mentioned it previously, this idea that they’ve built it around this kind of for-profit arm, non-profit arm, kind of as they evolved.
[00:38:16] Seb Bunney: And the idea was that the for-profit arm enabled them to kind of generate revenue to be able to help support their mission of having an open access a GI. However, they’re the nonprofit arm overseeing the for-profit arm to be able to prevent any control structures, centralization, single individual, kind of co-opting the mission.
[00:38:35] Seb Bunney: And I think that the way that it kind of panned out in Sam being fired from OpenAI and then five days later being reinstated back as CEO highlights the fact that you can put all of these measures in place. From a legal perspective. Legally the board had the power to fire Sam because they felt there was mission drift.
[00:38:55] Seb Bunney: But in practice it’s more complex than that. because the moment you have influence, the moment you have a whole bunch of your employees backing you, there’s culture, all these external pressures, where’s the funding coming from? Are they funding OpenAI or are they funding Sam’s vision? And so it’s really challenging because then five days later he was reinstated.
[00:39:14] Seb Bunney: So then there’s this question, was the structure actually preventing Sam from creating a world where, okay, we get a GI in a safe way? Or did the structure actually prevent the board from being able to enable, basically push Sam out because they had mission drift? And I don’t have the answer to that, and it’s hard to articulate which direction it went.
[00:39:35] Preston Pysh: Yeah, I think you’re right. So one of the things in the book, it’s an interesting point that was brought up, is just like, how is this training happening? Before we get into that, I just want to kind of cover the major arcs in the book. I would say there’s four major arcs, and then we’ll talk about this one in particular.
[00:39:50] Preston Pysh: So I want to get into the arc of the four different parts of the book. The opening scene of the book was this. Beginning of the 2023 firing of Sam Altman. It tells that whole story. It really kind of engages you as a reader. Probably my favorite part of the book was that beginning and talking through that, the next part of the book gets into what the author is saying is the hidden costs.
[00:40:12] Preston Pysh: How did they train the models? How do they get all of this extra data? And it talks about kind of the dark side of like how they went about doing that. The third part of the book gets into the internal struggles, the culture, the leadership, the crisis, like all of that. And so it kind of loops back to maybe the first part of the book where they kind of open up about the 2023 event.
[00:40:33] Preston Pysh: But it gets into it in a lot more detail and a lot more granular, kind of laying this out. Character versus personality, the conflict of the board, the culture issues that happen inside of the company. And then the fourth part of the book gets into the future, like, where is this all going? What are the risks?
[00:40:51] Preston Pysh: What are the alternatives? What are some ways that maybe we could go about this in a responsible way to make sure that AI doesn’t come in and kill everybody? So that’s, the author’s takes on that. It was okay. I’m not have got to say that it’s worthy of reading, but anyhow,
[00:41:08] Seb Bunney: I, I would agree. I’d say it was like a two out of five.
[00:41:11] Seb Bunney: If I was generous, I’d give it kind of like a three out of five star. Yeah. And I found as if there was some amazing threats. Do not get me wrong. Like overall I learned a lot and it definitely helped provide a little more clarity. And I would say that I am giving Sam a little more benefit of the doubt actually after reading it Yeah.
[00:41:29] Seb Bunney: Than I was prior to reading the book. However, there was a lot of points that I was a little confused. She kind of went on some tangents. At one point she started talking about Sam Bank and freed and effective altruism, and I was like, where does this come from? And so I typed in, I was like, is she talking about effective altruism because Sam is an effective altruist.
[00:41:46] Seb Bunney: And then you dive in on Google and it says, no, Sam is not an effective altruist. So I was like, why are we talking about this?
[00:41:51] Seb Bunney: Why are we talking about that?
[00:41:52] Seb Bunney: There’s a couple tangents that to me, didn’t really, she didn’t bring them back into the book and so I was a little lost as to where she was going.
[00:41:59] Preston Pysh: I was very lost.
[00:42:00] Preston Pysh: There was a few times and, I was listening to this and there was a few times I was just like, what in the world? Why is this coming up in this very strange that this is like coming up. So just FYI, if anybody’s reading it or they plan to read it, I would agree with your two out of five.
[00:42:15] Preston Pysh: I think that’s what I would give it, as well said. But I kind of did walk away with this sense of this is a really hard problem. what this guy is trying to do is borderline nuts. If I was, thrown into his shoes and was trying to do what he’s doing, there’s so many difficulties and everybody’s have got to have an opinion as to like why that’s a good or bad decision.
[00:42:37] Preston Pysh: So, and I say this as a, hardcore Bitcoin or, and this guy’s like literally the face behind world coin where he is scanning eyeballs and just really dystopian things that I completely disagree with and don’t like at all. I think they’re extremely dangerous. So yeah, I say that all in the same breath.
[00:42:57] Seb Bunney: there was one thing that kind of popped into my mind a handful of times. They talk a lot about artificial general intelligence and throughout the book it kind of presses on the fact that I don’t think any of them actually have a definition for what. Artificial general intelligence is. So I kind of looked up online and I was like, what is the definition of artificial general intelligence and how do we actually know when we’ve achieved it?
[00:43:19] Seb Bunney: And there isn’t an agreed upon definition of what it is. Most agree that it’s a form of AI that could perform any intellectual task, but a human can. Now, the thing that I find interesting about that is that the moment there’s a part of me that would say, when I’m using ai, it performs most tasks better than most people around me as it is.
[00:43:38] Seb Bunney: So will we. I think the question that kind of popped into my mind is, could we recognize a GI, even if it existed right now? And I kind of went down this rabbit hole, pulled on this thread a little further. And I would say that I don’t actually know if we can distinguish between artificial general intelligence, the systems we currently have, and another human.
[00:43:59] Seb Bunney: Because if I sit down with an expert in a field that I know nothing about, I can’t really verify the authenticity of what they’re saying. I just have to kind of take them at trust because I just don’t have that depth of knowledge. So how would we be able to verify the authenticity of a GI basically with whatever it is that it’s telling us, especially if it’s moving into domains that are beyond our understanding.
[00:44:20] Seb Bunney: And then on top of that, I think that we could already be in an environment where a GI is speaking to us right now, but the only reason why we’re dismissing it as hallucinations is because they just don’t fit into our existing framework of how we believe the world works. And there was an interesting talk that I listened to a couple years ago that kind of stood out and it was this talk where this kind of researcher asks the TV hosts, where do you think the smartest people in the world reside?
[00:44:46] Seb Bunney: And the host answered. I don’t know, in the great academic institutions. And the speaker basically shook his head and he was like, no, they exist in the mental institutions, in the psychiatric wards because their understanding of the world is so far beyond the average person that we just simply can’t grasp it.
[00:45:02] Seb Bunney: And so this kind of brings us back to this point, would we even recognize a GI if it did exist, and we already have it. Like I think it’s a, it’s this big question of we need to stop a centralized entity getting a GI, but how do we know when we’ve actually even got there anyway?
[00:45:16] Preston Pysh: I think that there’s a breakdown in terminology, and I think everybody has a different opinion on what some of this terminology even is.
[00:45:25] Preston Pysh: So you hear a GI, I hear a GI, and we’re automatically thinking two different things. I don’t know what the listener is thinking when those terms come up, but what I think the world is trying to define is when is this thing going to be like us? If I was going to just generally broad brush stroke, what is it that we’re trying to define?
[00:45:47] Preston Pysh: And I think what we’re trying to define is when am I have got to be able to sit down across from, call it some humanoid robot, have a conversation with it and it’s going to feel like the conversation I’m having with you, Seb, right now, that they have their own unique life experience. They can feel, because that’s sentient, right?
[00:46:06] Preston Pysh: If we get into what makes something sentient, it’s something that actually has its own unique feelings and like the robot would come over and like I had a conversation with so andSo and like they hurt my feelings afterwards. Like something like that would make it feel human. It would make it feel real.
[00:46:23] Preston Pysh: And I personally think that’s kind of where, and then you kind of sprinkle on top of that. It’s way smarter than you. Like it can answer any question, it can understand the context and put itself into these other shoes of other beings because it’s so freaking smart. It understands the context of like how they probably optically view the world.
[00:46:45] Preston Pysh: That’s how and but they still have the ability to sense and feel and have these conversations that are uniquely theirs. That’s what I think we’re trying to define or see. It’s like when will we see that? And it’s, yeah. Well, and
[00:46:58] Seb Bunney: what’s interesting though is this idea that, well, what gives this conversation a feeling or a sense of this human touch.
[00:47:07] Seb Bunney: And I would argue that what gives this conversation, this kind of human touch to it, is actually the fallibility of us as humans. And AI is almost perfect. Yeah. Like AI is almost perfect. if you watch it play chess or you watch it play go, it just smashes the world’s best players, actually destroys them.
[00:47:25] Seb Bunney: But then as humans, what do we do? We don’t go and watch games of AI playing itself. We go back to watching humans play themselves. If we had a whole football pitch of robots playing football at a far higher level than actual footballers, we’d still go back to watching people. And I would argue that there’s something inherently human about being human, which is our fallibility, and the ability to make mistakes.
[00:47:48] Seb Bunney: And that’s what actually creates intrigue and interest as opposed to this perfectionism. And so kind of going back to your point, which is like is a GI, when we’re able to have a conversation with it and have no idea that we’re speaking to a human, but then the argument would be, well, I’m have got to be able to tell that it’s A GI because it’s just.
[00:48:05] Seb Bunney: It’s infallible. Like I can’t really catch it out. You know what I mean? Yeah.
[00:48:09] Preston Pysh: But maybe it’s so smart that it would actually understand that and it would dumb itself down to make us feel like it’s not superior and it’s int intelligence. I don’t know. But you’re exactly right. You’re exactly right. And you see this with just, go to a party and all the 15 year olds are hanging out with people that are around that age.
[00:48:27] Preston Pysh: The nine year olds are hanging out with the nine year olds and the adults are hanging out with the adults. And you see, and it’s, the context of experience that we kind of relate to each other based on being of a similar age and experience set. Like we’ve experienced the same amount of life and there’s this context that is similar, like we’re on the same wavelength because of that age element.
[00:48:52] Preston Pysh: And you bring up an interesting point of whether that will ever exist between, let’s say these things are put into humanoid bodies. Their intelligence is partitioned off from the computer, right? You get, from a design standpoint, you really kind of go after one of these things that could potentially, have its own unique experiences and you have to ask yourself whether you would really have any type of emotional connection or desire to sit down and have those types of conversations because they’re just so freaking smart and they know so many different domains.
[00:49:26] Preston Pysh: would that be interesting? Are they foul? It’s tough.
[00:49:29] Seb Bunney: It’s tough. I don’t know. It’s so tough. And, the other thing is the way I kind of think about it is there’s like a human beingness obviously to being human. There’s like a spiritualness to being human, which is like if I have a whole bunch of friends over for dinner.
[00:49:42] Seb Bunney: And I spend time putting energy into, like going, harvesting the vegetables from outside, bringing them inside, making this amazing dinner, having these amazing conversations with all my friends. There’s like love and affection that’s gone into this creation. And there’s something that you cannot take away, that you can have a robot in the kitchen who’s gone and made a Michelin star, like meal.
[00:50:02] Seb Bunney: But I would even say that there’s something about the humanness of the human putting that time and energy and that love. There’s something that I don’t think you can replicate the fallibility of the meal, right? Yeah. It sucks. I put way too much salt on it,
[00:50:15] Preston Pysh: especially when I’m in the kitchen. Oh no, I love that point though.
[00:50:19] Preston Pysh: I really like that point that there’s, the human element is because of the vulnerability, the fallibility, and it’s real to us because we’re on a similar wavelength. However, it’s Yeah.
[00:50:33] Seb Bunney: Actually like, and I’m have got to butcher this with Claude Shannon in, in information theory, he says, information is when we have surprise.
[00:50:42] Seb Bunney: Yes. And so it’s kind of to that point, like it’s when we’re interacting with a human, I think the engage the, engaging point of interacting with a human is a surprise that you don’t really know what they’re about to say. Whereas if you’re an expert in a field and you’re talking to ai, you kind of have an idea about what they’re have got to say.
[00:50:59] Seb Bunney: And so I wonder if that’s a component to it.
[00:51:01] Preston Pysh: Yeah. Anything else you wanted to cover? The only thing that I was have got to, that I was have got to say earlier and then I pivoted to doing the overview of the four different parts of the book was in the middle section there, she talks about how like a lot of these things were trained and going to like these farms, these almost click farms in developing nations where people are just looking at pictures of a bridge and then they have to tag.
[00:51:29] Preston Pysh: This is a bridge, this is a person and just total lack of funding that is put into this. But the amount of horsepower and human work that you’re getting out of it is just a giant currency arbitrage. And two of us are bitcoiners. So we’re looking at this and saying, yeah, Bitcoin will eventually solve that problem.
[00:51:48] Preston Pysh: But that was a major part of the book. It went on a little longer than, what my interest was in the topic. Because I guess from my vantage point, I’m looking at it and I’m saying this, it’s super sad that this is how many people. Countless people around the world are treated, but at the same time, I see a solution in sight in the next 10 to 20 years that’s automatically going to solve for that.
[00:52:13] Preston Pysh: So I guess for me, I’m not really as deep into that particular time. That might sound very insensitive to kind of frame it that way. But as a person who’s grounded in engineering, I’m looking at, okay, here’s a problem. She’s defining the problem, she’s doing a great job defining it. But I’m also looking at, there’s already a solution in my humble opinion that’s going to solve a lot of this in the future.
[00:52:32] Preston Pysh: But Seb, I’m curious. Kind of your thoughts on the part,
[00:52:35] Seb Bunney: what comes up? So she kind of compares these AI giants to kind of colonial empires. And there’s kind of a quote she says, like they seize and extract precious resources, the work of artists and writers, the data of countless individuals, the land, the energy, the water required to house massive data centers.
[00:52:50] Seb Bunney: And then she kind of, to your point, she kind of goes into, well, where are all of these people coming from at the base layer to support ai? And it really is. And there’s all of these low paid global workers that are tagging, cleaning, moderating all of this data for ai. And to start out like we go and look through our Google photos and we just type in, I don’t know, cat, and it goes and finds all of the pictures of cats.
[00:53:14] Seb Bunney: Or initially that was not done by ai, that was done by an individual going through all of our pictures and tagging what a cat looks like. And so I find this really fascinating. There absolutely is right now this extraction of resources. However, and I think when you go down the Bitcoin rabbit hole, it’s always about, okay, is this a symptom or do we want to go down to the root cause?
[00:53:37] Seb Bunney: And I would say the symptom of being able to find people that are willing to accept 70 cents an hour is the symptom of poor governance models and these communist socialist practices where you’ve basically got massive extractivism. If we had more of a free market, I would argue that the AI couldn’t go out there and find these individuals.
[00:53:56] Seb Bunney: And so I’d say we can always talk about the symptoms, but how about we try and fix the root problem, which is the fact that we actually have absolute poverty globally when we don’t necessarily need to.
[00:54:05] Preston Pysh: Amen. Yeah, I think that’s where I get frustrated with these types of really long sections in some of these books that are written by, people that are trying to shine a light on something that they see as being very unjust.
[00:54:20] Preston Pysh: But like you, I see it as a symptom and not the cause. And what I want to talk about is the root, like as far upstream as we can possibly go, what can we fix that then will eventually, work that out? Because if you don’t fix the fundamental thing that’s causing it, we can sit around and, talk about all these stories as much as we want, but it doesn’t really solve anything, so.
[00:54:43] Preston Pysh: But I think it was an interesting highlight. It’s something that does need to be called out. It is something that, that people need to understand when they’re using this technology. And it’s so, oh, you’re harnessing this. It’s super abundant. It saves you so much time. There’s an appreciation for what went into it and where it came from.
[00:55:00] Preston Pysh: And the book definitely did give me that.
[00:55:03] Seb Bunney: And it gets back to the point, and I’m not, I don’t want this to come across as I’m supporting it, but it gets back to that point where let’s just say, Google is absolutely geared towards a GI and they’re willing to go do whatever it takes to go and create this A GI.
[00:55:20] Seb Bunney: Or when you look at OpenAI and you’re saying, look. If we want to focus on best practices for workers and pay minimum US dollar wages at $15 an hour, all of a sudden you’ve completely kicked yourself out of that race. And so I think the way the world works, unfortunately, is that people will go to the lean towards the cheapest way to do something.
[00:55:41] Seb Bunney: And so they end up going into these countries like Argentina and Venezuela and such and so, absolutely, I think there are, human rights issues and there are abuses of power. But again, to your point, I think that is a symptom of a bigger issue. And we get stuck talking about symptoms as opposed to the root cause.
[00:55:59] Seb Bunney: Yeah. I think there’s one other point that I did want to bring up, which I found was really fascinating, is what does the world look like moving forward? Because you look at something like chat, GBT and their GT one, GT two, GT three, four, and such. And GPD four, I did a little bit of digging, like how much did it cost to really train GPD four and it cost between 40 to $80 million.
[00:56:20] Seb Bunney: And then you look at GPT five, and it could be upwards of 1 billion, but we don’t necessarily know this number. So you’re asking like, Man, there’s these models that are being trained with hundreds of millions of dollars. To be able to kind of create this incredible thing that we use in day to day.
[00:56:36] Seb Bunney: And then you go and see something like the Chinese AI company deep seek, go and release their R one model and they trained it for $294,000 on 512 NVIDIA chips. And so you’re like all of this VC capital has funneled into it, these AI companies, and they’re expecting a return. And at the same time, you’re having this competition that is driving down the cost of training these AI models.
[00:56:59] Seb Bunney: But I don’t think they’re ever have got to get a return on these things. No, because the cost, it’s wild. This
[00:57:04] Preston Pysh: competition, it’s, and then the reverse engineering on what it is like after they do train it, then all these other companies can go in and reverse engineer. Extract out the weights. Not perfectly, but pretty dang good.
[00:57:20] Preston Pysh: Like I just don’t see how the people putting up the funding on this are possibly going to get a return. It’s pretty wild and I think there’s a lot of ego playing into this race as well that Yeah. I mean it, it is pretty insane and I think that as we look at where it goes next, it really comes down to the alignment.
[00:57:41] Preston Pysh: Because the other part that I think is not being talked about is when you. Put in an inquiry, you put an input into one of these models and you get an answer back. If you can create a model that’s very specific to that kind of question and you can return the answer very quickly, you can specialize in that domain and you’re have got to have a lot of utility and a lot of interest for that.
[00:58:05] Preston Pysh: Being able to provide that service. That’s giving you a very quick, a very accurate answer for a specific domain, and where I think a lot of it’s have got to go is these models that are specialized, that are almost extractive out of the base model, that then are then specializing in something that gets the alignment of the person’s initial question a whole lot faster.
[00:58:26] Preston Pysh: I saw a very quick video clip from one of the founders of Anthropic, and this is something that he was talking about. He is the race to build. The biggest model is, I’m paraphrasing this and this is not how he said it, but it almost seems like it’s a fool’s errand. That the real value capture is being able to get a quick response, a very accurate response to a very specific question.
[00:58:48] Preston Pysh: And to do that, I think that the alignment and basically fine tuning things is have got to be where the real value capture’s at. If you can kind of figure out a way to do that, especially from a competitive moat standpoint. But boy, I would be nowhere near from an investment standpoint. Good Lord. I just don’t even know where to begin.
[00:59:08] Seb Bunney: I think it’s, going back to Nvidia, you want to be on the chip side of things. Yes. The one thing you know is there’s have got to be more demand for chips.
[00:59:15] Preston Pysh: Yeah.
[00:59:15] Seb Bunney: More than anything that’s have got to be demand for chips. Whereas these AI companies Amen. Are just have got to eat one another. They’re fricking have got to eat one another.
[00:59:21] Seb Bunney: Yeah. And actually, to be honest, the one thing that stood out just then, as you mentioned, was. Anthropic, it talks about it in the book. Yeah. There’s little brother and sister that worked for OpenAI and left OpenAI because they didn’t believe in the trajectory it was going down, and they felt that the safety was not in place, and so they started anthropic, which you could argue, as I mentioned previously, there are these studies that are coming out that are showing that OpenAI is useless.
[00:59:46] Seb Bunney: When you try and shut down the model midway through a task, it doesn’t want to be shut down. Tropic immediately shuts down, and so you wonder their, the safety protocols to ensure that the end product is secure.
[00:59:58] Preston Pysh: Yeah. Alright. real fast, Seb, our next book is called Lifespan by David Sinclair. So I have wanted to cover longevity and some of this stuff for a very long time.
[01:00:13] Preston Pysh: I’m a big fan of this space and just kind of learning everything that’s happening in this space. a lot of Bitcoiners love longevity because they want to figure out how they can live a little longer and enjoy life. And we’re have got to cover this from time to time on the show is what in the world’s happening in the longevity space.
[01:00:31] Preston Pysh: So this book, David Sinclair, I would argue, is one of the pioneers in this whole field of longevity. His book is fantastic. Sebs have got to go through it. I’m have got to reread this book. I read it a couple years back, but I think it’s a really strong book for a foundation for people to kind of understand where a lot of the research for longevity comes from and where it might be going in the future.
[01:00:55] Preston Pysh: So if you’re reading along with us, that’s where we’re going next. We would love to have you guys as a co reader. So that’s the book. Seb, any comments on the next one or.
[01:01:06] Seb Bunney: Oh man, I’m, excited. And to be honest, one of the things I’m most excited about is hearing your thoughts on longevity because I feel as if there’s kind of two camps to the longevity movement.
[01:01:17] Seb Bunney: It’s kind of this camp, which is just well, we’re humans and if we want to evolve, we want to minimize we lifespan. because then it allows us to iterate, And then there’s this other camp which is let’s just expand lifespan indefinitely. Let’s live 500 years. But then do we become immovable?
[01:01:32] Seb Bunney: Do we become basically prone to some big change that wipes out humanity? And so. I’m curious to hear your take. because I think there’s a few different camps in the longevity space. You’re a specious, aren’t
[01:01:42] Preston Pysh: you,
[01:01:42] Seb Bunney: Seb?
[01:01:47] Preston Pysh: Ah, this is have got to be good. Alright, so folks, this is, all we have for you. The book that we covered this week was Empire of Ai Dreams and Nightmares in Sam Altman’s OpenAI. Ah, we liked it. It was okay. Next book is have got to be Lifespan by, David Sinclair, and thank you so much for joining us. Seb, give people a quick handoff to all the stuff that you have going on in, in the book that you also have
[01:02:10] Seb Bunney: for sure.
[01:02:11] Seb Bunney: Yeah, if people want to kind of follow along, they can, Find me on Twitter or XI still get into the habit of calling it Twitter. I lean towards Twitter at Set Bunney and Bunney is B-U-N-N-E-Y. I have a blog, the Chief of Self sovereignty@setBunney.com. And then I also have the book, the Hidden Cost of Money and it kind of, yeah, full start Money.
[01:02:28] Seb Bunney: But at the moment it’s nice to be kind of discussing things other than money.
[01:02:31] Preston Pysh: Alright, we’ll have links to all of that in the show notes. Seb, thanks for joining me and, everybody else out there listening, keep reading and, we look forward to you joining us next week.
[01:02:42] Outro: Thank you for listening to TIP. Make sure to follow Infinite Tech on your favorite podcast app and never miss out on our episodes. To access our show notes and courses, go to theinvestorspodcast.com.
[01:02:44] Outro: This show is for entertainment purposes only. Before making any decisions, consult a professional. This show is copyrighted by The Investor’s Podcast Network. Written permissions must be granted before syndication or rebroadcasting.
HELP US OUT!
Help us reach new listeners by leaving us a rating and review on Spotify! It takes less than 30 seconds, and really helps our show grow, which allows us to bring on even better guests for you all! Thank you – we really appreciate it!
BOOKS AND RESOURCES
- Related book: Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI.
- Seb’s Website and book: The Hidden Cost of Money.
- Next book: Lifespan: Why We Age―and Why We Don’t Have To.
- Related books mentioned in the podcast.
- Ad-free episodes on our Premium Feed.
NEW TO THE SHOW?
- Join the exclusive TIP Mastermind Community to engage in meaningful stock investing discussions with Stig, Clay, Kyle, and the other community members.
- Follow our official social media accounts: X (Twitter) | LinkedIn | | Instagram | Facebook | TikTok.
- Check out our Bitcoin Fundamentals Starter Packs.
- Browse through all our episodes (complete with transcripts) here.
- Try our tool for picking stock winners and managing our portfolios: TIP Finance Tool.
- Enjoy exclusive perks from our favorite Apps and Services.
- Get smarter about valuing businesses in just a few minutes each week through our newsletter, The Intrinsic Value.
- Learn how to better start, manage, and grow your business with the best business podcasts.