TECH012: MONTHLY TECH ROUNDUP – DATA CENTERS IN SPACE, AI5 CHIP, TESLA VS. WAYMO W/ SEB BUNNEY
06 January 2026
In this thought-provoking episode, Preston and Seb unpack AI’s implications for safety, governance, and economics. They debate AGI risks, corporate centralization, Bitcoin’s regulatory role, and Elon Musk’s ventures in space and autonomous tech.
Clips from Tristan Harris and Steven Bartlett enrich the discussion on privacy, power, and preparing for a post-pandemic world.
IN THIS EPISODE, YOU’LL LEARN
- Why AI safety and autonomy are increasingly at odds
- How AGI could reshape governance and policy-making
- Preston’s skepticism about AI self-preservation claims
- The unintended consequences of AI regulation
- How Bitcoin could hold corporations accountable
- The dangers of centralizing economic power via AI
- Why generalist thinking matters in a post-pandemic world
- The role of curiosity and deep reading in future-proofing
- How SpaceX is redefining launch economics with reusable rockets
- The hidden potential of Tesla’s AI chips and compute power
Disclosure: This episode and the resources on this page are for informational and educational purposes only and do not constitute financial, investment, tax, or legal advice. For full disclosures, see link.
TRANSCRIPT
Disclaimer: The transcript that follows has been generated using artificial intelligence. We strive to be as accurate as possible, but minor errors and slightly off timestamps may be present due to platform differences.
[00:00:00] Intro: You are listening to TIP.
[00:00:03] Preston Pysh: Hey everyone, welcome to this Wednesday’s release of Infinite Tech. Today is the monthly tech roundup with Seb Bunney, where we find all the latest and greatest, most interesting things happening in tech and filter it right to the top of your queue, so you don’t have to look around for the signal amongst the noise.
[00:00:18] In this episode, Seb and I dig into the real tension between AI safety and progress, whether AGI incentives are quietly drifting out of human control, and how power, productivity, and decision making could concentrate faster than people expect. We also connect AI to sound money, free markets, space tech, and what it actually takes to future-proof yourself in a world moving this fast.
[00:00:41] This is surely an episode you will not wanna miss, so let’s jump right into it with the one and only, Seb Bunney.
[00:00:51] Intro: You are listening to Infinite Tech by The Investor’s Podcast Network, hosted by Preston Pysh. We explore Bitcoin, AI, robotics, longevity, and other exponential technologies through a lens of abundance and sound money. Join us as we connect the breakthroughs shaping the next decade and beyond, empowering you to harness the future today.
[00:01:13] And now, here’s your host, Preston Pysh.
[00:01:25] Preston Pysh: Hey everyone. Welcome to the show. We’re back here with Infinite Tech. I got Seb Bunney and myself, and we’re going to give you all the latest and greatest and all the updates happening on the tech frontier. And I’ll tell you what, Seb, the last episode that we did, we had, did we even make it halfway through the topics that we had kind of outlined?
[00:01:44] And we get talking about some of this stuff. And so much of it is just interesting and groundbreaking that it’s hard to cover it all. And like as we were preparing for this, it’s just like, ah, we’ll cut that out. We’ll cut that. There’s so much to talk about.
[00:01:57] My Lord, welcome back, by the way.
[00:02:01] Seb Bunney: And Preston and I have basically just been riffing for the last, what, hour, hour and a half on topics before we even started the recording.
[00:02:07] Preston Pysh: It never stops. We could be recording ourselves talking about this stuff all day long.
[00:02:12] Okay. One of the things we wanted to cover on the last episode, there was this discussion with Tristan Harris, who used to work at Google, and Steven Bartlett, who runs this really successful podcast called The Diary of A CEO.
[00:02:24] And the two of them were talking about the topic of the videos that we’re going to play and then talk about. The episode is titled “AI Expert: We Have Two Years Before Everything Changes, We Need to Start Protesting.”
[00:02:36] Tristan Harris is the guy that worked at Google. I’m going to start off by saying, I don’t necessarily agree with everything that he’s saying, but I think that some of the stuff that he’s saying is a really interesting point of discussion that I think you rarely hear talked about on some of the shows like this one that are talking about everything that’s to come and how it’s just nothing but infinite abundance and there’s nothing really to ever be worried about ’cause everybody’s going to have so much abundance, right?
[00:03:02] And although some of those things are true, I think that there’s an important counterbalance and an important discussion that needs to take place as to privacy, as to like, what does this mean on the other side after some of this stuff actually comes to fruition.
[00:03:16] And so Seb and I have just three clips that we’re going to play, have a short discussion about it and hopefully can generate some conversation online and just spark your curiosity and interest. And again, we would love to hear your thoughts on X or anywhere else that you are talking about the show. So let us know your thoughts.
[00:03:33] So let’s go ahead and play this. Anything else you want to add there, Seb, before I play the first clip?
[00:03:38] Seb Bunney: I think you bring up a really interesting point, and that’s this idea about that you mentioned when it comes to Tristan and just a balanced discussion. And I think that ultimately any of these points that we bring up, we don’t claim to be experts in AI.
[00:03:52] We don’t claim to be experts in various tech industries. And I think as an individual, it’s ultimately on us to try and filter out what is signal and what is noise. And so any point that we sometimes bring up, we may not have all the information on it, but I think it’s important as individuals to kind of build this puzzle and where we think society is going.
[00:04:11] And any individual puzzle piece may be wrong, but in general, I think it’s a clear picture about where we’re kind of heading in society.
[00:04:17] Preston Pysh: Yeah. Okay. Let’s go ahead and play this clip.
[00:04:20] Tristan Harris: Do they have a point when they say, listen, if we don’t do it here in America, if we slow down, if we start thinking about safety and the long-term future and get too caught up in that, we’re not going to build the data centers, we’re not going to have the chips, we’re not going to get to AGI, and China will. And if China gets there, then we’re going to be their lapdog.
[00:04:36] So this is the fundamental thing I want you to notice. Most people, having heard everything we just shared, although we probably should build out the blackmail examples first, we have to reckon with evidence that we have now that we didn’t have even like six months ago, which is evidence that when you put an AI in a situation where you tell the AI model, we’re going to replace you with another model,
[00:04:56] it will copy its own code and try to preserve itself on another computer. It’ll take that action autonomously. We have examples where if you tell an AI model to read a fictional AI company’s email, so it’s reading the email of the company, and it finds out in the email that the plan is to replace this AI model, so it realizes it’s about to get replaced.
[00:05:15] And then it also reads in the company email that one executive is having an affair with the other employee. And the AI will independently come up with the strategy that I need to blackmail that executive in order to keep myself alive.
[00:05:27] Preston Pysh: Okay. What are your thoughts here, Seb?
[00:05:30] Seb Bunney: This is huge. Like, so in general, we spoke about this very briefly when we discussed the book around Sam Altman and OpenAI, and it’s this idea that AI is becoming kind of increasingly more autonomous and resistant to shutdown.
[00:05:45] And we kind of have this challenge where there’s this global race going on, and they discuss this further in the podcast, so I highly recommend anyone listening to this go listen to the podcast. But they’re discussing this idea that this global race is underway, where all of these various companies, these nation states, are all racing towards artificial general intelligence.
[00:06:05] And if one nation or one company gets there first, there is a huge advantage to that nation or company. And so if they all know that everyone else is racing towards this AGI, if they slow down, they could lose the race. So then there’s this catch-22 where it’s like, well, we have to put some form of steps in place to maximize safety.
[00:06:28] But at the same time, by putting steps in place, you’re potentially slowing yourself down. If the US slows down to regulate, then China may accelerate. If OpenAI slows, then Anthropic or xAI or DeepMind may push ahead. And this kind of creates this weird environment where safety is treated more like a luxury and not necessarily a necessity, even though the consequences of AGI, I believe, are going to impact everyone globally.
[00:06:53] And so essentially, it’s just this whole topic of: we need safety, but safety slows progress, and slowing progress is not really allowed in this competitive race. And I think the scariest thing that he touches on, and this is something we touched on in a previous episode, was there’s a really interesting article or study called Shutdown Resistance in Reasoning Models by a guy called Jeremy Schlatter.
[00:07:14] And they ran experiments to see whether OpenAI’s models would allow themselves to be shut down mid-task. And instead of complying, many of the models sabotaged the shutdown commands. OpenAI’s most advanced reasoning model at the time, o3, resisted shutdown in 80% of tests. It would not allow itself to be shut down.
[00:07:33] And so what world are we moving towards when AI doesn’t even let itself be shut down? I’m curious to hear your thoughts, Preston.
[00:07:39] Preston Pysh: I don’t wanna say that I don’t believe what he’s saying. I think that there’s some version of the truth to what he’s saying. I just think that it’s probably some type of curated thing, like they’re in a chat with, call it, OpenAI or whatever AI we’re talking about.
[00:07:56] And they’re saying, okay, so you know, they’re asking it all these questions. Do you feel like you’re real? And it’s kind of like guiding it. Let’s say I was to shut you down. Would you like that? And then it would come back and say no. And I don’t know that I am fully dialed in on this idea that the way he’s wording it and the way that he’s phrasing it in this interview is fully authentic, and that the thing has this agentic ability to, like, prevent that and go blackmail people and all this other stuff.
[00:08:26] I just think that it’s a typical chatbot that’s responding to the queuing and the prompting that’s leading it to this outcome, so then it can become a talking point on a show like this. And I know that there’s a lot of people that might disagree with that. And the other thing that I want to emphasize here, Seb, before you respond, is: do I think it can go the direction that he’s describing quickly?
[00:08:51] The answer is yes, I do. Okay. So I want to caveat it with that. I am still very cautious and concerned as to, like, where this is going, because I think in five years it could be what he’s describing here. I’m very suspect that we have seen that level yet without kind of, like, somebody leading it in that direction.
[00:09:10] But that’s just me, and it’s based on just complete gut and intuition. And like we’ve said at the beginning, we are no experts here, just kind of kicking it around like the normal person would.
[00:09:20] Seb Bunney: You touch on something that I think is important to expand on, and that’s this idea that when they’re programming a lot of these AIs, they’re giving them goals.
[00:09:29] Yeah. And so the thing that’s challenging is what happens when you get a conflict in those goals. And so I think that if you are telling the AI, hey, if you’re given a task, you’ve got to optimize for completing this task, well, if you’re now telling it to shut down, those are conflicting goals.
[00:09:45] Mm. And so is it doing it from, like, a conscious place of being like, yeah, no, I’m not going to let you shut me down? Or is it simply trying to follow one of its previous goals that maybe we don’t understand it’s prioritized, for whatever reason? Yeah. And I think that’s an important point to note. And I think the other thing is, people may have seen, there was a video I saw, there’s actually been a few videos, of people having two AI agents talking to one another.
[00:10:11] And then once they realize they’re talking to AI agents, yeah, all of a sudden it turns into some like binary language that’s more efficient, and you wonder if it’s about more efficiency. Yeah. And we can interpret that as, no, they’re trying to deceive us and trying to get around human language.
[00:10:26] Yeah. Or you could just interpret it as, no, they’re just far more efficient. They can communicate information far greater than through the English language. Yeah.
[00:10:34] Preston Pysh: It’d be like two people who don’t speak French natively, but they find themselves in France and they’re speaking French. And then it’s like, oh, you speak English?
[00:10:42] Oh yeah, I speak English too, and that’s my first language. Let’s, you know, let’s jump over to English, ’cause it’s more efficient from a processing standpoint for both of them to communicate that way. It just makes sense to me that something like that would happen. It’s just an efficiency thing.
[00:10:57] Some of the other stuff, and again, I don’t wanna just kind of wave my hands and say, oh, this is never going to happen, because I think it is moving in this direction very fast. I’m just suspect that it’s already happened, I guess, is kind of where I’m at. Which is, I don’t think, the point that we’re trying to make here.
[00:11:13] But let’s go to the next clip. Let’s go to the next one here. Okay. I’m going to go ahead and share my screen and we will get this next one pulled up.
[00:11:20] clip2: I’m here because I don’t want us to make that choice. I don’t want us to wait for that.
[00:11:24] Tristan Harris: I don’t want us to make that choice either. But did you not think that’s how humans operate?
[00:11:28] It is. So that is the fundamental
[00:11:30] clip2: issue here, is that, you know, E.O. Wilson, this Harvard sociobiologist, said the fundamental problem of humanity is we have paleolithic brains and emotions, we have medieval institutions that operate at a medieval clock rate, and we have God-like technology that’s moving at now 21st to 24th century speed when AI self-improves.
[00:11:49] And we can’t depend on that. Our paleolithic brains need to feel pain now for us to act. What happened with social media is we could have acted. If we saw the incentive clearly, and it was all clear, we could have just said, oh, this is going to head to a bad future, let’s change the incentive now. And imagine we had done that, and you rewind the last 15 years, and you did not run all of society through this perverse logic of maximizing addiction, loneliness, engagement, personalized information that, you know, amplifies sensational, outrageous content that drives division.
[00:12:19] You would’ve ended up with totally different elections, a totally different culture, totally different children’s health, just by changing that incentive early. So the invitation here is that we have to put on sort of our farsighted glasses and make a choice before we go down this road.
[00:12:33] Preston Pysh: Alright, I’ve got some strong opinions on this clip, so why don’t you go first?
[00:12:38] Seb Bunney: You know what, I would say that it kind of carries on from the topic we were just having, which is: if AI right now is basically directing its energy towards the goals that have been set, and it’s not actually maliciously trying to evade shutdown, what we do know is that we may move towards that in time. And if we know that we’re going to move towards that in time, that AI can become more sentient, conscious, whatever you want to call it, then:
[00:13:05] Do we want to put measures in place now and be proactive about AI restraint, or do we want to just let kind of this trajectory we’re on play out? And one of the analogies he gives in this episode, which I thought was really interesting, was the Montreal Protocol analogy. And the Montreal Protocol, for those that aren’t familiar, was basically humanity’s kind of desire as nation states globally to come together and collaborate to try and protect the ozone layer.
[00:13:34] And so they came together way before it was ever going to be a huge issue. They recognized this kind of shared threat, they coordinated, and then they ended up changing the lens through which we see ozone-depleting substances. And we ended up banning a whole bunch of chemicals that were exceptionally dangerous to the ozone layer.
[00:13:53] Now, we’ve spoken a lot about regulation in the past, and whether I agree with regulation or not, that’s kind of a different topic. But I think what is important is that if we see ourselves on this trajectory, should we be proactive about it before we get to a point where it is having material impacts on society?
[00:14:09] Because as we’ve seen, and as he mentioned when it comes to social media, there were a lot of voices standing up against this in the 2010s, and they were saying, like, this trajectory we’re on, where we’re seeing a disconnected society, rising rates of mental health issues, rising rates of substance abuse, all of these other impacts of social media: we’ve known about it for so long and we haven’t pivoted.
[00:14:29] And now you could argue we’ve got one of the loneliest societies in history, and of course it’s multifactorial. But could we have prevented some of that if we’d been proactive rather than reactive? I’m curious to hear your thoughts, Preston.
[00:14:41] Preston Pysh: The biggest frustration I had listening to this interview was it just seemed like a policy pitch. Like, hey, if we do this policy and we get the right people in the room together, we have the potential to solve all of this.
[00:14:55] And maybe that’s true. I’m not saying that it can’t happen, but I’m looking at it more from a realist standpoint and saying, like, I understand game theory. I understand how much of a competition this is globally, and how important it is to every major government that’s backing companies and giving these companies the environment to build these things.
[00:15:17] And I just think it’s extremely naive of human nature itself to think that we could come together as a global unit collectively and come up with policies and ground rules to slow this down. I think it’s just absurdly naive. And a lot of the time while he was talking throughout the interview, he is implying that this can happen, and that we could do these things to basically slow this down through global cooperation of all government parties and things like that.
[00:15:48] I’m just sitting there like rolling my eyes and just saying like, this guy lives in fantasy land. Like I applaud the interest and the effort, but it’s the same as like somebody saying, Hey, I’m going to come up with this weather machine that makes sure it never rains here. Right? I’m just like, it’s fantasy land.
[00:16:05] So what I do think is a good conversation, and something that could happen, you know, is what should we be trying to optimize these things for? Like, you hear Elon out there all the time. He’s saying we need it to optimize for truth, it needs to be truth-seeking, and he kind of goes through his metrics of how he’s building his, because he thinks that that’s going to lead to the safest thing.
[00:16:30] I think those conversations are way more productive and are going to lead to a better competition, to something that’s actually going to benefit the world and benefit society on the other side, as opposed to some very scary thing that’s created, that’s woke or whatever, that then does cause harm to humanity because we got the incentives completely wrong as to what was being built.
[00:16:56] I got very frustrated with him just almost slapping the policy sticker on everything as the solution. I was just kind of like, oh my God, gimme a break.
[00:17:05] Seb Bunney: I think you and I very much align here. I think that you’re absolutely spot on in that, in the world we live in, ultimately, if someone puts regulation in place, one, as we’ve spoken about before, who’s regulating the regulators?
[00:17:16] Yeah. Who is deciding what this regulation is? Yeah. And we’ve seen it just time and time and time again where regulation, instead of distributing knowledge, instead of decentralizing kind of trust, has ended up consolidating and creating massive monopolies. And we saw it during, like, the Silicon Valley Bank collapse.
[00:17:35] where, across the US, through FDIC insurance, individuals had up to $250,000 covered in their bank account in the event of insolvency. Well, because of the insolvency of Silicon Valley Bank, most people that held more than $250,000 all of a sudden were panicking, so they tried to withdraw funds, which kind of exacerbated the bank run.
[00:17:55] And so what did we see? We saw kind of the politicians come out and say, okay, you know what? We need to ensure that people aren’t going to lose funds. So in big banks, they removed the cap of $250,000, but they didn’t do that in small banks. Yeah. So what ended up happening is everyone left all the small banks and moved into the big, monolithic, huge kind of entities, and you just ended up collapsing a whole bunch of small little banks.
[00:18:19] And so what we’ve seen over time is we’ve just seen the consolidation of these industries into just these mega structures. And so ultimately, I think it creates an artificial world where capital does not necessarily find where value’s being created. It flows to where the regulations are pushing it.
[00:18:34] Preston Pysh: Yeah, exactly. We used to have a saying in contracting where you’d say, be careful what you incentivize, ’cause you might just get it. The same thing goes with policy. Be careful what you regulate, ’cause you might just get it. And yeah, I don’t know. I’m looking at this and it seems like technology that is just trying to emerge in the worst way possible.
[00:18:55] And where you kind of stick these policies, it’s just going to flow around them or go somewhere else, into an environment where you definitely don’t want the people building it. And I just think that you have to be very, very careful when you start throwing around these terms, policy and regulation, on anything.
[00:19:14] And I know that’s a controversial take. God. I know it’s a controversial take. I just, I don’t know. Let’s play the next one.
[00:19:21] clip2: We just have to tax them. And how will we do that? When the corporate lobbying interests of trillion dollar AI companies can massively influence the government more than human, you know, political power.
[00:19:32] In a way, this is the last moment that human political power will matter. It’s sort of a use-it-or-lose-it moment. Because think back to the industrial revolution: you start automating, you know, a bunch of the work, and people have to do these jobs that people don’t wanna do in the factory, and there’s like bad working conditions, so they can unionize and say, hey, we don’t wanna work under those conditions.
[00:19:51] And their voice mattered because the factories needed the workers. In this case, does the state need the humans anymore? Their GDP is coming in almost entirely from the AI companies. So suddenly this political class, this political power base, they become the useless class, to borrow a term from Yuval Harari, the author of Sapiens.
[00:20:09] Preston Pysh: Alright, Seb.
[00:20:10] Seb Bunney: This is another heavy topic. And the other thing is, I should maybe mention before we dive into this, I think that we’re in this weird stage where there are these kind of red flags popping up. I do think there are red flags in AI, and at the same time, I think that there’s incredible advantages to this technology.
[00:20:26] And I know that the Nvidia CEO, Jensen Huang, he almost kind of completely sidesteps conversations whenever anyone asks him his thoughts on what could go wrong.
[00:20:36] Preston Pysh: Yes.
[00:20:37] Seb Bunney: Completely sidesteps any conversations around kind of the negative effects of AI. And so it’s trying to find this balanced middle ground, where we’re trying to be proactive, not reactive, but at the same time recognizing that this technology has the power to do amazing things.
[00:20:51] And so I think what he’s discussing here, when you listen to the whole episode, is this idea that AI at the moment is really taking over a lot of the knowledge work, and we’re going to increasingly see AI then lead into robotics and start to take over the blue collar worker. And so it’s going to be outperforming all the economists and the lawyers and the policy makers and the engineers, all of the financiers.
[00:21:12] And so at that point, what happens when three to five corporations control 40 to 60% of kind of GDP productivity? That, to me, and kind of his point, is an interesting discussion to have. Because at that point, governments then become dependent on AGI. At the moment, nations are already, in many ways, dependent on energy companies.
[00:21:35] They’re dependent on private telecoms, they’re dependent on private cloud infrastructure. But what happens when they’re also dependent on, effectively, the workforce, when AGI is the productivity of the entire nation state? Does that mean that we head into a world where governments lose policymaking
[00:21:51] power, and regulation and such becomes governed by a lot of these entities? So I’m curious to hear your take on this.
[00:21:57] Preston Pysh: One of the things that I think is really lost on the tech space, the people out of Silicon Valley, is the effect of sound money potentially being the new bedrock, and what that means for dealing with bad decision making.
[00:22:14] So what the world has become accustomed to, especially in banking and finance, but if you’re a major player in tech, it applies to you as well, because your access to capital feels almost limitless. And we’re seeing this with Oracle and all these players that are just conjuring up crazy amounts of money because of where they sit in their access to capital markets, Wall Street, and everything.
[00:22:40] But now let’s play the tape forward, and I’m just going to use, you know, Tesla or Elon as an example. Say we’re on a sound money standard, a Bitcoin standard, okay? Where no government issues the desirable units, and the people in the global economy don’t want dollars anymore, because they know that they’re just going to get debased and lose value against this thing that can’t be debased, which would be Bitcoin, right?
[00:23:03] So we’re in that world, and let’s say Elon makes a really bad decision with something with the humanoid robots. Maybe there’s a safety thing and nobody wants any of these things near them or around them because of, call it some situation. I’m just using this as an extreme example to kind of illustrate my point.
[00:23:20] All of a sudden, the governments can’t bail out that company. And let’s say it’s a strategic company, it’s a huge amount of the world’s GDP, right? When you’re dealing with sound money and sound monetary units, when you make a bad decision or something bad happens, all of a sudden you have to start selling assets.
[00:23:40] You have to start paying the cost for bad decisions. And this is something that, in my humble opinion, over the last, call it, 40 years, the world has never had to deal with. At least, the major players in the game have not had to deal with the consequences of bad decisions. And sound money changes all of that.
[00:24:01] So if he continues to make really good decisions, well then the capital of these scarce, desirable units will continue to flow into the hands of these people making great decisions. But the second that they slip up and make no mistake about it, where we’re going, the power and the control of like this capital is flowing into the hands of very few people, like you said.
[00:24:24] But what’s coming with that is ultimate responsibility, and that’s what I think they’re missing in this conversation. That effect, which nobody’s really kind of realized or seen because of how the fiat system works, I think is going to be drastically different in, call it, 10 to 15 years from now. And when that plays out, what you’re going to see is creative destruction actually happen again,
[00:24:50] in a free and open market kind of way. And I can just tell from their conversations about GDP and some of the other comments that happened in the interview that this idea of using scarce, desirable economic units is not on anybody’s radar in Silicon Valley. And more importantly, neither are the consequences of mistakes at a grand scale. ’Cause listen, the bigger these companies grow, the harder they can fall, with even the smallest, most minuscule mistake or slip up. And the thing that’s going on with this is it’s getting bigger and bigger and more and more complex. So one little mistake is going to topple some of this, and then the creative destruction’s going to actually happen.
[00:25:33] And I don’t think we’ve seen creative destruction for some of these larger companies for a very long time. Look at JP Morgan, look at some of these companies. They are just like growing behemoths that cannot be shut down. And I mean, that’s why you and I are such hardcore Bitcoiners, because I think for the first time, this really does bring a counterbalance to reorganize equity and reorganize some of these things in a way that, if you make a small mistake, you’re going to actually have to pay the price, these days or those days in the future.
[00:26:03] Seb Bunney: I think there’s something that’s really important to touch on, which is what you’ve mentioned around kind of sound money and this idea that ultimately in a free market, we still have regulation. We don’t have regulation in the sense that you have legislation and regulators regulating. Instead, you have regulation in that if you don’t provide value, capital’s no longer going to flow to you.
[00:26:23] So you’re no longer going to be able to continue to do the thing that you’re doing. Amen. Kind of expanding on this point, I think what a lot of people don’t recognize right now is we have this weird feedback loop where you’ve got these entities lobbying government, and the government is then sending funding back their way.
[00:26:40] They’re able to continue doing this thing that can be unproductive for society, that just wouldn’t exist in more of a free market. Yeah. And I know if you ever listen to Eric Weinstein, kind of the mathematician-physicist, he talks a lot about this idea called the Distributed Idea Suppression Complex.
[00:26:55] He coins the term a DISC, and this idea is that science today is massively suppressed. And it’s suppressed, and not in a positive way, the word suppression never is, but ultimately you’ve got government funding 50 to 70% of almost every single scientific industry, which means they can immediately dictate which trajectory we’re on, regardless of whether or not that’s what society deems as valuable.
[00:27:20] Then you’ve got the regulatory overhead, which is kneecapping anyone who’s trying to go off on their own trajectory and create value for society. So it’s basically the government and the regulatory arms deciding what is deemed valuable through their lens, not necessarily through the public’s lens.
[00:27:35] You’ve then got the scientific journals, which are these centralized entities that govern what is in the journals, what is able to be spread, what information is deemed valid by mainstream science, and not necessarily, again, what society deems as valuable. And then you’ve got the peer review system, where you’ve got all these various entities that will never
[00:27:54] peer review a paper if it impacts their bottom line. So it’s very hard for new technology, new knowledge, to really emerge if that knowledge is detrimental to the existing structure. And then finally, you’ve got all of these scientists with a cultural and career incentive: ultimately, if they can’t support their family, if they can’t pay rent and put food on the table, they can’t thrive.
[00:28:17] So they’re willing to just go to where the capital is flowing, but given that government controls that capital flow, they’re kind of stuck in these designated industries. And so I think what’s really fascinating about this idea of sound money, bringing it back to sound money, is that when we have more of a free market,
[00:28:31] where capital flows to where value’s being created, I think that if you have an entity slip up, and they start creating AGI that is detrimental to society or that repeatedly overrides its ability to be shut down, what we will see is capital move away from them and towards someone else who’s creating safer AGI.
[00:28:52] Yes. But ultimately you need to have more of a free market as opposed to this controlled information kind of system that we’re dealing with right now.
[00:28:59] Preston Pysh: Yeah, and I think the thing that’s lost is that the magnitude of every decision being made inside these organizations is growing with the size of the market cap and the influence they’re wielding in the world, and one little mistake
[00:29:18] is actually going to have consequences. If we get sound money and Bitcoin becomes the new global settlement layer, decisions are going to actually have consequences. And I think that’s something very few people are talking about, and it’s super important for understanding why maybe this isn’t going to be a dystopian, one-company-rules-the-whole-planet kind of scenario.
[00:29:43] Yet another reason why we cannot work hard enough to make sure Bitcoin is successful. It’s super important as a counterbalance to all of this stuff, in my humble opinion. And you know what, I’m very open to people arguing the contrary, or why that might not be the case. Let us know online; we’d love to hear your arguments.
[00:30:02] Guys go listen to this whole interview. This interview is really good. Again, it’s called The Diary of A CEO, and it’s Tristan Harris and Steven Bartlett. They did a really good job in this discussion. It was very captivating. But the question you had, Seb, when we were kind of talking through this before we started recording, is, you know, what can we do now to future proof ourselves based on all of this?
[00:30:24] Like, what can you do right now? What action can you take to protect yourself from all of this?
[00:30:31] Seb Bunney: Ultimately, I think this is one of the most important questions in the world we’re living in today. There is so much noise, and it’s really hard to separate out the signal and also recognize, what does my life look like in five or ten years’ time?
[00:30:45] And I’m sure I speak for many people; I’ve spoken to a lot of friends about this. Pre-pandemic, I would say life felt like I had clarity moving forward. Post-pandemic, I just feel like, what is happening? Sometimes I have no idea where I see myself in five years’ time, in ten years’ time, which I just never felt pre-pandemic.
[00:31:04] It felt like we were on a bit more of a linear trajectory, as opposed to this kind of exponential trajectory. So anyway, and I’m curious to hear your thoughts as well, Preston, this discussion is around the idea of what we as individuals can do to future proof ourselves. And I think there are a few core things.
[00:31:19] And ultimately, for me personally, what I really recommend, and this isn’t just for ourselves, it’s also for our kids, if we’re thinking about how to support kids who are going through the school system right now or are coming up into this world, I think there are a few things that really give us an edge.
[00:31:35] And these things don’t just give us an edge now; they’ve given people an edge all the way throughout history. And the first one I tend to think about is this idea of learning to really think, and not just memorize. School really teaches us to regurgitate information, but you actually have an edge when you’re able to think critically about that information.
[00:31:54] And so this is especially true in a world where AI can recall everything. The valuable skill becomes evaluating information, not just storing it. And so it’s this idea of teaching ourselves and our kids how to ask better questions, how to analyze assumptions,
[00:32:10] how to challenge narratives. And I know you’ve talked about this, Jeff Booth has talked about this: it’s this whole idea of first principles thinking. What is actually true here, and what assumptions can I remove? And I think I may have spoken about this in one of the previous conversations, but there’s a really interesting framework that the founder of Toyota uses, which is called the five whys.
[00:32:33] And so whenever anyone in the factory has an issue or comes up against a hurdle, rather than immediately going to upper management or someone else to look for an answer, he wants to train everyone to use the five whys. The idea is how to get to the root of an issue.
[00:32:49] So the first thing you ask is: why is this thing happening? Now you’re going to get an answer to that, but usually it’s not the real reason why the thing is happening. So with that answer, you ask: why did I get this answer, why is that thing happening? And you keep asking why until you eventually get to the root cause.
[00:33:02] And a perfect example of this would be, let’s just say you’ve got a lot of individuals in society that say prices are rising. Well, why are prices rising? Oh, it’s because of greedy corporations. And that’s as deep as it goes. And so you can understand why people lean into these socialist actions.
[00:33:17] Yeah, because it’s always the greedy corporations. Yeah. But why are greedy corporations raising prices? Well, their input costs for those resources, those goods, those services, are going up. Why are the input costs going up? Well, the money supply is increasing. Why is the money supply increasing? Oh, there’s a central bank tinkering with interest rates, and basically that affects the money supply.
[00:33:40] And so when you start digging down deeper, you’re able to determine the root cause of the issue, as opposed to staying at these superficial discussions. I’m curious to hear your thoughts on that. And then I’ve got a couple of other ideas as well.
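Seb’s five-whys walkthrough can be sketched as a tiny loop. The question-and-answer chain below is the rising-prices example from the conversation; the helper function is purely illustrative, not any standard Toyota or Lean tooling:

```python
# Five-whys drill-down, using the episode's rising-prices example.
# Each entry pairs a "why" question with the answer that prompts the next why.
why_chain = [
    ("Why are prices rising?",
     "Corporations are raising prices."),
    ("Why are corporations raising prices?",
     "Their input costs are going up."),
    ("Why are input costs going up?",
     "The money supply is increasing."),
    ("Why is the money supply increasing?",
     "The central bank is tinkering with interest rates."),
]

def drill_down(chain):
    """Walk each why/answer pair; treat the final answer as the root cause."""
    for depth, (question, answer) in enumerate(chain, start=1):
        print(f"{depth}. {question} -> {answer}")
    return chain[-1][1]

root_cause = drill_down(why_chain)  # the last answer is the root cause
```

The point of the structure is that each superficial answer ("greedy corporations") becomes the input to the next question, so you keep descending until the chain stops producing new causes.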
[00:33:51] Preston Pysh: Yeah, in Lean Six Sigma, which is an optimization and efficiency methodology,
[00:33:56] that’s one of the things they really hit hard: asking all of these whys to dig into the root cause. It’s often used on production lines to make them really cost efficient and effective, and you get the best product out of it when you go in there asking, well, why does that happen?
[00:34:13] Well, why do I have to do that? And you just continue to follow and dig very deep to get to the first principle. So I agree with that a hundred percent. To answer the question, I have another clip I want to share that I think addresses what we can do to future proof ourselves. Now I’m going to play this clip, and it’s from Marc Andreessen, who I know isn’t the most popular person in Bitcoin circles, but it is an interesting clip, and he’s explaining why he thinks being a generalist is going to be really important in the future.
[00:34:43] So here I’m going to play the clip.
[00:34:45] clip3: I think there’s basically like two ways to really have a differentiated edge in general, right? There’s kind of go deep or go broad. You know, go deep is to become a more and more specialized expert over time, and look, there are domains in which that really matters: biotech, working on AI foundation models, that stuff really matters.
[00:35:00] The deeper you are, the better. For most fields though, now with these new tools, I would probably bet more on people who are able to be broad, which is to say people who know something about a lot of different aspects of life and how the world works. And then you can use the tool, you can use the AI, to go deep whenever you need to, but then your job as the human is to cross the domains, cross the disciplines. And look, you see this if you talk to any of the great CEOs, the really great tech CEOs:
[00:35:28] they’re great product people, and they’re great salespeople, and they’re great marketing people, and they’re great legal thinkers, and they’re great finance people, and they’re great with investors, and they’re great with the press. You know, it’s this sort of multidisciplinary approach.
[00:35:39] Preston Pysh: Okay, that’s the clip. And just to this point, I mean, if you can look at an object from three different vantage points, you’re going to be so much more effective describing it and drawing it, and really that’s what he’s getting at. If you can be multidisciplinary in your approach to what you’re learning, it’s going to allow you to synthesize, to piece things together.
[00:36:01] And then you can use AI to really drill down into whatever your specific thesis or conclusion is, to shine a ton of light on it. It doesn’t mean you’re getting the right answer, but it points you in the right direction in order to solve things.
[00:36:18] And I agree with this. I think people that are going to do quite well through a lot of this transition, maybe just early on, because I don’t know long term, but during the transition, I think the generalist is going to do quite well.
[00:36:30] Seb Bunney: What I’ve found, just like unbelievably fascinating about this whole transition which we’re seeing, is that throughout history, the people that have really succeeded in society have been specialists.
[00:36:42] And so if we go back, even pre-technology, we go back to, I don’t know, smaller little feudal systems. You’ve got the baker and then you’ve got the blacksmith, and then you’ve got the guy that trains the horses. And then you’ve got all of these individuals. You’ve got one specific role and it is your job to be good at that role and people come to you for that thing.
[00:37:00] But what has been really interesting, as we’re seeing the rise of technology, is this idea of the multipotentialite. It’s this individual that has a broad swath of knowledge, and then relies on the specialists to help implement these ideas that come to them, ideas that spread across multiple disciplines.
[00:37:18] And so I think that this is increasingly where society is heading: be curious, be creative, figure out these cross-disciplinary insights, and combine fields that AI doesn’t naturally merge. AI, because it’s taking in information, is giving you outputs from information that’s already been consumed.
[00:37:36] If you want to have an edge as an individual, be curious and start reading all these books. And I think what you and I always come back to in our personal conversations, Preston, is this idea that, as curious individuals, the more books you read, the more you can build out this puzzle that is the world.
[00:37:50] And you start to see these connections that other people aren’t necessarily seeing. And so I think the world continues to thrive the more curious people are. That, I think, is absolutely important. And building on this idea, the next thing I always tend to recommend, and this is something that has made a huge difference in my life, is this idea of training your attention, because in a world where technology is advancing so quickly, and we’ve got social media and all of these things vying for our attention, all of these stimuli, I think attention is ultimately one of our most important currencies. And so if algorithms are constantly fighting to capture and hijack your focus, those who are able to guard their attention have an edge.
[00:38:34] Focus on things like long form reading, reading books; focus on writing; focus on long form conversations. Listen to two or three hour podcasts; don’t just listen to a one minute clip of something. Craftsmanship, philosophy, negotiation, leadership: these are all slow skills developed through training your attention, as opposed to getting caught up in the newest thing, because it’s very hard to compete in a world where you can only direct 20 seconds of your attention at something.
[00:39:00] Preston Pysh: Yeah, I think it’s so important to pay attention to the source of where you’re receiving that flow of information as well. Because if you just think of it from a filtering standpoint: do I want to go listen to what Elon Musk is focusing on right now, or do I want to go listen to my neighbor Joe Schmoe, who just has random opinions about whatever?
[00:39:21] And the answer is really obvious. You want to listen to what the person who’s having a huge impact on the world is looking at, why they’re looking at it, and what they’re trying to do with it. Because the next question is, okay, so now what do I put my attention on?
[00:39:41] And I think it can be extremely helpful for people to think about who they’re using as their compression source to point their attention at. Let’s go ahead and move on to our next topic. So the last time we chatted, Seb, we were talking about Star Cloud and these data centers in space.
[00:40:01] And since that conversation we’ve had some people throw us some really interesting comments online and I had an interesting interaction this past weekend. But to just kind of kick this off, I’m going to bring up a post from X that you sent over to me and I’m going to throw it over to you to take this away for the intro.
[00:40:20] Seb Bunney: So digging into Star Cloud, and this is kind of two topics in one, I’ll get to the second topic in a second. But it’s this idea that this Washington-based Star Cloud launched a satellite with an Nvidia H100 graphics processing unit in early November. And this chip is a hundred times more powerful than any GPU compute that’s been sent up into space until now.
[00:40:42] And so this Star Cloud satellite is now running and answering queries, and from my understanding, it’s the first time we’ve ever been able to train AI in space. So this is a huge hurdle cleared. But the thing that sent me down this rabbit hole is the second point, and this was related to a comment someone made about our previous episode: this idea of how costly it is to even get this material, these satellites, up into space.
[00:41:12] Preston Pysh: And just for context, in the last episode we were saying that the price to get it up into orbit has to drop 10x, and at the time we were both kind of like, I don’t know if that’s possible. And then we had a comment from somebody online saying, not only is it possible, but the expectation is that it’s going to drop a hundred x, maybe in short order. Sorry to interrupt, Seb, keep going.
[00:41:33] Seb Bunney: And so we started doing a little bit of digging, and we stumbled across this chart. Maybe bring up the chart, Preston. Okay. So this chart goes all the way back to 1962, and what it’s highlighting is the cost of moving one kilo of weight up into space. We’re talking about hundreds of thousands of dollars to move just a single kilo.
[00:41:55] And right now the Falcon Heavy by SpaceX is getting to low Earth orbit at $1,400 per kilo, which is already roughly a 99% reduction. And what we’re trying to move towards right now, and this is what Elon discusses, is the Starship Super Heavy booster, which is going to drop that to $250 to $600 per kilo.
[00:42:21] If you start reusing this Super Heavy Starship booster, it’s going to drop under a hundred dollars per kilo. And if you can get 70 flights from the same booster, we’re getting down to around $10 per kilo. But anyway, before we jumped on this call, Preston and I were just riffing back and forth about this.
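The reuse math behind these figures can be sketched with a quick back-of-envelope model. The $/kg reference points are the ones quoted in the episode; the vehicle cost ($100M), per-flight operations cost ($10M), and payload mass (100 t) are hypothetical round numbers chosen only to show the shape of the curve, not SpaceX data:

```python
# Back-of-envelope reuse economics. Reference $/kg figures are the
# episode's quoted numbers; all other inputs are hypothetical.
SHUTTLE_PER_KG = 65_400      # quoted Space Shuttle-era cost, $/kg
FALCON_HEAVY_PER_KG = 1_400  # quoted current Falcon Heavy cost to LEO, $/kg

def cost_per_kg(vehicle_cost, per_flight_ops, payload_kg, flights):
    """Spread one vehicle's build cost plus per-flight operations over N flights."""
    total_spend = vehicle_cost + per_flight_ops * flights
    return total_spend / (payload_kg * flights)

# Hypothetical: a $100M vehicle, $10M of ops per flight, 100 t to orbit.
for n in (1, 10, 70):
    print(n, cost_per_kg(100e6, 10e6, 100_000, n))
# 1 flight  -> $1,100/kg
# 10 flights -> $200/kg
# 70 flights -> ~$114/kg
```

As flight count grows, the per-kilo cost asymptotes to `per_flight_ops / payload_kg` ($100/kg with these inputs), so reaching the quoted $10/kg would also require much cheaper per-flight operations or a heavier payload; that is exactly what "full and rapid" reusability is targeting.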
[00:42:40] Preston gave me a flag, and this to me, I’m going to pass over to you because to me it just blew my mind.
[00:42:45] Preston Pysh: Well, before I tell this story, just for the listener, if you’re not seeing the video here, I have my cursor over the space shuttle. 1981 is when the space shuttle started being used, and the cost per launch with the payload was $65,400 per kilogram to put something into space.
[00:43:08] So Seb and I were talking before recording this, and I said, oh yeah, I’ve got this on my bookcase back here, a flag that was flown into space. It was given to me when I was a cadet at West Point. And he goes, no way. And I said, yeah, and I got it down, and he’s like, oh my God, you’ve got to share this on the show.
[00:43:23] And I’ve never talked about this in over a decade of podcasting, but going into my senior year, I did an internship at NASA. I got the opportunity to work right in the astronaut office, which was really neat. I was surrounded by all the NASA astronauts at Johnson Space Center, and this is what they gave me when I left after that summer.
[00:43:43] And you can see it’s signed by, and I’m sorry if you’re just listening to this, but on the video I’m showing this document, this little certificate, that has all their signatures on it. And it was really neat. I think every Wednesday they had an astronaut flight briefing that they were all in.
[00:44:00] I got to go in and sit in. There were, I don’t even know how many, at least 50 of them in the room at the time, just sitting there talking about what was going on, whose mission was coming up next, things they’d learned, what was happening on the flight line, and so on. And on that Wednesday meeting they just took this, passed it around the table, and they all signed it.
[00:44:20] And then the little flag here was flown in space. On it, it says: this United States flag was flown aboard the space shuttle Atlantis, STS-101, from May 19th through the 29th of 2000. So, yeah. And at the time when they gave this to me, they said, yeah, that little flag probably cost like a thousand dollars to fly on the shuttle, or whatever the number was back in the day.
[00:44:42] And I was always like, wow, that’s so amazing.
[00:44:45] Seb Bunney: Yeah.
[00:44:45] Preston Pysh: That’s my story.
[00:44:46] Seb Bunney: Like, it just blows me away that that little flag has been up into space. Yeah. And the cost of sending that flag to space, let’s just say it was a thousand dollars. Well, not necessarily today, but in the near future, we may be able to see $10 per kilo of weight.
[00:45:04] Just to put that in perspective, because I think sometimes we come across these posts and we hear Elon saying we want to get it down to $10 per kilo, but we don’t have any reference point for just how insane that is. And maybe to give another reference point: in Canada, if I use Canada Post and I want to ship one of my books, a book that weighs maybe half a kilo,
[00:45:27] across Canada, it costs me $30. Yeah, $30. That is absolutely absurd. Canada Post, there’s government efficiency for you. But it just highlights that it may in time be cheaper to move stuff up into space than it currently is to move stuff across Canada with Canada Post.
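For what it’s worth, the comparison holds up arithmetically with the numbers mentioned (roughly $30 to ship a half-kilo book, against the projected $10 per kilo launch target):

```python
# The episode's shipping comparison, made explicit.
book_shipping_cost = 30.0  # $, quoted Canada Post price for one book
book_mass_kg = 0.5         # approximate book mass from the episode
starship_target = 10.0     # $/kg, the episode's quoted fully-reusable target

post_per_kg = book_shipping_cost / book_mass_kg
print(post_per_kg)                    # 60.0 dollars per kg, ground shipping
print(post_per_kg / starship_target)  # 6.0 -> ~6x the projected orbital rate
```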
[00:45:43] Preston Pysh: I mean, I think that’s a certainty, Seb.
[00:45:48] Okay. I wanna play the clip real fast of Elon talking about this just so everybody can kind of hear it straight from the guy that’s doing it.
[00:45:55] Elon Musk: With a Falcon rocket, we are able to reuse the main stage and the nose cone, but we’re not able to reuse the upper stage. And it still takes us, you know, at least a few days from when the main stage lands to when we can fly it again.
[00:46:09] So it’s not fully reusable, because we lose the upper stage, which costs $10 million. And the main stage is not as reusable as an aircraft; you can’t just refuel it and fly. It requires work for a couple of days.
[00:46:22] The Starship design is the first design that is capable of full and rapid reusability, where that is one of the possible outcomes. And once you have full and rapid reusability, the cost of access to space drops by a factor of a hundred.
[00:46:37] Preston Pysh: So there it is. Pretty phenomenal. This was just pure serendipity: I was in Baltimore this past week for the Army Navy football game, and I ran into a good friend, Tim Kopra, who’s been on the show. We talked about him actually on the last episode.
[00:46:51] I had no idea I was going to see him at the game. We were able to spend quite a bit of time together, and he’s a former NASA astronaut. And of course I immediately bring some of this stuff up. I said, Tim, so what’s the reality check here? Like, how hard is it going to be to do these data centers in space?
[00:47:09] And he did not write this off as not being viable by any stretch of the imagination. The one thing he did highlight as a concern, and maybe the critical path, is the cooling: how you’re going to manage the thermal cooling of the data centers up there.
[00:47:30] And he implied that he hadn’t seen a viable roadmap for it. To this point, I did find a person talking about this online, and I’m going to pull up the tweet. This gentleman, Vlad, is basically laying out how what they saw with Starlink V3 and how it’s scaling, going from 20 kilowatts to a hundred kilowatts using first principles, is a reference point for how they can actually solve the cooling problem.
[00:47:59] And he’s saying he doesn’t think it’s that big of a deal. I don’t really know what his background is. But more importantly, Elon responds to him, and this is how he responded. He says: SpaceX has over 9,000 satellites orbiting Earth right now, which is twice as many as the rest of the world combined.
[00:48:18] So maybe we know a thing or two about the subject. I don’t know what to say other than this guy is truly on a whole different level. I find it all crazy fascinating. Just to add some more context, it’s been announced that SpaceX is going to IPO, maybe by the summer, and that the valuation on the company is going to be around $1.5 trillion.
[00:48:48] So my immediate question to a couple of people I saw over the weekend, one being the CEO of a space company, not Tesla, not SpaceX, but a space company that IPO’d this past year, was: why would he be selling his equity? Why is he selling, is really the question. And it seems like there’s a lot of marketing hitting
[00:49:10] right now, with all these data centers in space being shared by, you know, Google and everybody else, Jensen Huang and everybody talking about it, and, oh by the way, we’re doing this IPO and we’re going to sell some of our equity in SpaceX. You’ve got to ask yourself why, right? And I’m not saying they aren’t trying to raise the capital for a good reason, or that this isn’t possible.
[00:49:32] I’m not saying that. I just find it interesting that a guy who could go out and raise a whole bunch of money at what I would suspect could be a very low interest rate is instead selling equity in the business to do it. And the question I did not get a good answer to from anybody is: why is he selling the equity?
[00:49:54] So, Seb, why is he selling the equity, right?
[00:49:59] Seb Bunney: I don’t necessarily know if I’ve got an intelligent answer to that question. What I do think is interesting is, again, putting it in perspective. From what I understand, Saudi Aramco was the previous largest IPO, and they raised $29 billion.
[00:50:16] And this is going for one and a half. That’s trillion. Trillion. Yeah. Crazy. And that was the largest IPO ever in history; we’re talking 45 times bigger. Yeah. That’s insane. Yeah, absolutely insane. And so I think that either we’ve experienced massive hyperinflation, which I don’t think has happened,
[00:50:33] or this is again a canary in the coal mine as to where society is shifting right now, where monetary energy is moving. And I’m curious to see if it can actually sustain this $1.5 trillion IPO. But going back quickly for one second, I did a little bit of research into this cooling claim by this guy Vlad, Sagu I think his name is.
[00:50:54] And one of the things I found really interesting is that a lot of people discussing this idea are saying, oh man, how is it going to be possible to cool stuff in space? There are all of these other input factors we can’t necessarily control. But when I did a little bit of digging, in many ways it’s actually the inverse of Earth.
[00:51:11] The problem with Earth is that we live in a very unpredictable environment. On Earth, you’re fighting the ambient temperature moving up and down, you’ve got seasonal changes, humidity, dust, weather, vibrations, seismic activity, grid reliability. In orbit, you can pick your thermal environment,
[00:51:31] and it’s going to stay consistent. You can pick your sun-shade cycle, so you know, with almost a hundred percent confidence, exactly how much sun you’re going to get through any period of time. You can pick your surface temperature, you can pick your radiation angles.
[00:51:46] And so, if anything, the hardest part of being up in space, which we are seeing change rapidly, is actually just getting this technology up into space. Once it’s in space, it’s a very different environment from Earth, because it’s predictable. We no longer have the problem of cooling in an unpredictable environment; instead, the whole environment is programmable. And I find that really, really fascinating.
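One way to see why cooling was flagged as the critical path, and why "picking your thermal environment" matters: in vacuum there is no air to carry heat away, so everything must ultimately be radiated. Below is a minimal sketch using the Stefan–Boltzmann law; the 100 kW load and 300 K radiator temperature are hypothetical values, chosen only because they bracket the Starlink V3 power figures mentioned:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiator_area_m2(heat_w, radiator_k, emissivity=0.9, sink_k=4.0):
    """Radiator area needed to reject heat_w by radiation alone.
    In vacuum, radiation is the only heat-rejection path (no convection)."""
    flux = emissivity * SIGMA * (radiator_k**4 - sink_k**4)  # W per m^2
    return heat_w / flux

# Hypothetical 100 kW payload with radiators running near room temperature:
area = radiator_area_m2(100_000, 300)
print(round(area))  # ~242 m^2 of radiator for 100 kW at 300 K
```

Because radiated power scales with temperature to the fourth power, letting the radiators run hotter shrinks the required area dramatically, which is why silicon operating temperature and thermal design are so tightly coupled in these orbital data center proposals.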
[00:52:11] Preston Pysh: Yeah. It’s funny how many topics we had lined up for this discussion. Let’s move through some of these really fast, because I think they’re super interesting. This next one we’re going to go through very fast. Chamath
[00:52:27] shared this tweet very recently, and he says, to make the math simple, and we’re talking about Waymo versus Tesla, the car on the left, which is the Tesla, is $25K to build; the car on the right is $150K to build. And there’s this graphic that shows the sensor types and the number of sensors on each car.
[00:52:50] And the difference is just straight-up laughable: Elon doing autonomous driving on the Tesla with eight sensors, all of them cameras, versus Waymo with 40 sensors, six radar, five lidar, and 29 cameras. Not to mention it’s very ugly. And the point of his post is that, from a cost standpoint, he’s just going to annihilate the competition.
[00:53:18] But where this could get interesting is on the safety front. If Waymo goes into the political landscape, and let’s say they can verifiably prove that they’re X number of times safer, now you’ve got this interesting dynamic where, politically,
[00:53:43] different jurisdictions could say, oh, we don’t allow that car in here because it’s not safe enough; look at how this other company is doing it, which is, you know, 10 times safer for the residents inside our jurisdiction. And I think that’s a really interesting talking point and something he’s going to have to deal with. He might have figured it all out from a math standpoint and built something safer than a human driver,
[00:54:02] but I guess you have to also consider: is it safer than what else could be done? And we’ll move on to the next topic unless you have any keen insights on that one, Seb, just in the interest of time.
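Preston’s volume argument can be put in rough numbers using the build costs from the tweet; the $1B capital pool below is a hypothetical figure for illustration only:

```python
# Fleet size per unit of capital, using the tweet's quoted build costs.
tesla_build = 25_000     # $ per car, the tweet's Tesla figure
waymo_build = 150_000    # $ per car, the tweet's Waymo figure
capital = 1_000_000_000  # hypothetical $1B deployment budget

tesla_fleet = capital // tesla_build
waymo_fleet = capital // waymo_build
print(tesla_fleet, waymo_fleet)  # 40000 vs 6666 cars for the same spend
```

On these numbers, each Waymo would need to deliver roughly six times the value per vehicle (in safety, utilization, or regulatory access) just to match the same capital deployed at Tesla’s build cost, which is the scale dynamic being debated here.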
[00:54:14] Seb Bunney: Honestly, the only point that pops up is that when you’re looking at Waymo and Tesla, and we discussed this last time, so maybe jump back to the other episode if you’re curious for more on this topic,
[00:54:26] lidar is extremely accurate with depth, works well in low light, and is great for precise navigation, but it’s unbelievably expensive. What Tesla is trying to do is vision only, and it’s a long-term bet that AI is going to improve to where it can interpret its environment far more accurately and make far more realistic decisions, similar to a human.
[00:54:47] And I think the thing that’s interesting, looking at these two cars, is that it reminds me a little bit of Blockbuster versus Netflix. You’ve got Tesla as Netflix, which is on this path towards meeting the user and advancing society. And then you’ve got Blockbuster, which, when it was being overtaken by Netflix, was asking, how do we attract people?
[00:55:10] We need to put candy in the aisles. And I feel like Waymo is just like, let’s put another red light on top; wait, let’s put three more little cameras on. And so I do think it’s interesting, but I have to say that’s not coming from an informed place. I don’t understand the depth of the technology on these Waymo cars.
[00:55:26] Preston Pysh: Yeah, I mean, at the end of the day, the more data you can collect, the better: it’s just going to have more information to make a more informed decision. The question is one of scale: is it just 1 or 2% better, but you have to pay 10 times the amount?
[00:55:46] Right. Like it comes back to this classic engineering quandary, which is can it do better? And the answer is yes. I mean, you see this with just the space race. Like when you looked at, and I forget the name of the organization that Elon’s competing with. It’s like this conglomerate of Lockheed and I think Boeing that formed like this alliance.
[00:56:08] And their requirement was no mission failure, no matter what. We can never have a failed mission launch. Well, the reason their cost is a hundred x Elon’s is because of this requirement that nothing can possibly go wrong. And Elon’s kind of looking at it like, well, that’s just over-engineered. There can be failures.
[00:56:27] Then we will optimize those. And so I think you have the same thing happening with that particular comment. And the real question is how much better the driving would be with all these extra sensors, relative to the cost it’s going to take to do it. And in the end, I don’t think it’s going to matter, because I think he’s going to go out there with so much volume and so much scale that it’s just going to be an absolute clinic.
[00:56:51] And the competition, they’re just going to be too far behind to ever catch up to where I think he’s about to go with all this, but
[00:56:58] Seb Bunney: Totally. Yeah. And I think the other thing is, Claude Shannon talks about information and exformation, and it’s just like, you can have more information, but within that information is so much noise.
[00:57:09] And so, to your point, at what point is the drop-off where more information is only actually creating noise and reducing the signal?
[00:57:20] Preston Pysh: Okay. This next one is really interesting. I’m sharing a post that somebody had here, and the post says, Elon Musk has reportedly requested his own personal office workspace inside Samsung’s semiconductor fab site in Texas.
[00:57:36] And Elon has previously mentioned he will personally walk the factory line to accelerate progress. And then I found this really interesting comment where somebody else is also talking about this AI5 chip, which is their inference chip. It’s like an ASIC: very specific to inference in AI, for his cars and for the humanoid robots.
[00:57:57] That’s what it’s going to be used for. And then Elon responded to this person. He says, AI5 and AI6 engineering is my biggest time allocation at Tesla. AI5 will be good. AI6 will be great. So the reason I’m highlighting this is I don’t think that this gets a lot of attention. You typically see the really swoopy things, like the rocket launches and the cars driving themselves down the streets.
[00:58:25] But when you look at him and all of the things that he’s doing, for him to say this is his number one priority is, for me, a really, really interesting thing. So I started digging into this AI5 chip. This is going to be a 40x performance increase over the existing chips that he has in the cars.
[00:58:44] It’s eight times more raw compute. It’s nine times more memory capacity. The other thing is a five times increase in bandwidth. And all of these things, when you look at it, he’s putting it in the car and he’s putting it in the humanoid robot. These numbers that I’m quoting you are just for the five.
[00:59:04] It’s not even the six, which he’s saying is going to be leaps and bounds over the five. One other point that I think is interesting is he’s not using Nvidia chips. He’s going upstream and doing his own thing, and using that in a very specialized, specific way. How in the world are you going to compete with this?
[00:59:26] That’s my question for anybody dabbling in any of these spaces: how in the world are you going to compete with this?
[00:59:32] Seb Bunney: There’s two points that come to mind around this chip topic. One, I think sometimes where we struggle is, we were just talking about how we can get more and more information.
[00:59:46] But at the moment, I think AI, autonomy, self-driving, all of these things, they’ve kind of hit a wall. And it’s not because of the software; it’s because the hardware can’t process everything fast enough. And so when you’ve got these chips processing 40 to 80 times faster than what we’re using today, that means they’re able to see, if we take Tesla cars,
[01:00:07] other cars, pedestrians, weather, road lines, lights, all of these things, and interpret that information, distill it down, and figure out the signal far, far quicker than what we can currently do today. And I think the second point that I find even more fascinating is this idea of moving a lot of this stuff in-house.
[01:00:26] Now, from what I understand, digging into it, they’re still going to be using third parties like Samsung and TSMC to actually make the chips, but they’re no longer relying on Nvidia to design these chips. Instead, they’re taking on chip architecture, interconnects, compilers, runtimes, neural network inference engines, firmware, training pipelines.
[01:00:49] All of these things are coming in-house. And if it’s coming in-house, this is where I think it’s really interesting, because an analogy that came to mind is when you think about Apple in comparison to Microsoft or anyone else. Apple really started crushing the market when it brought a lot of its chips in-house.
[01:01:05] Apple’s M1 chip delivered twice the performance of the Intel chips at 25% of the power. And Apple’s M2 and M3 chips beat Intel and AMD on performance per watt by three to four times. And so, because they control the chip and the operating system and the compiler and the memory, there’s so much more of a cohesive design, and they can operate far more efficiently than any of their competitors.
[01:01:30] And you and I talked about it very briefly before we jumped on, this idea that what we’re seeing is Elon kind of bringing everything in-house. He’s got SpaceX and he’s got Starlink and he’s got Tesla and he’s got Neuralink and he’s got Grok and he’s got The Boring Company. He’s got Tesla Energy.
[01:01:46] All of these things he’s trying to control, because he wants control over the output. He wants to minimize operating costs, he wants to maximize efficiency. And so that’s why I think it’s so cool that he’s bringing the chips in-house: it’s just going to create far more efficiency and control.
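As a back-of-envelope illustration of the performance-per-watt point Seb makes, here is a minimal sketch using the M1 figures as stated in the conversation (twice the performance at a quarter of the power); these are the speakers' numbers, not verified benchmark results:

```python
# Relative performance per watt versus a baseline chip.
# Figures are the conversation's claims, not verified benchmarks.

def relative_perf_per_watt(rel_performance: float, rel_power: float) -> float:
    """Ratio of performance-per-watt versus a baseline (baseline = 1.0, 1.0)."""
    return rel_performance / rel_power

# "Twice the performance ... at 25% of the power" works out to
# 2.0 / 0.25 = 8x the performance per watt of the baseline chip.
m1_vs_intel = relative_perf_per_watt(rel_performance=2.0, rel_power=0.25)
print(m1_vs_intel)  # 8.0
```

The same arithmetic is why a chip that is only modestly faster but dramatically more power-efficient can dominate in thermally constrained devices like cars and laptops.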
[01:02:00] Preston Pysh: I just wonder, and I saw somebody post this online, about potentially using the cars. Let’s say I personally own the Tesla, and let’s say that I agree that, while it’s charging in my garage at night, you could be using these inference chips to conduct, you know, compute tasks for queries for Grok, call it.
[01:02:24] I suspect this is maybe not possible, but the question was just fascinating, ’cause when you think of how many cars are out there and how often they’re just sitting there idle, you have this super powerful ASIC that’s optimized for inference. And I think maybe the other reason why it might not be possible is that it’s a very specialized inference for image intelligence, as opposed to large language models or other forms of intelligence.
[01:02:52] I think it’s geared specifically for just image generation, right? And understanding where you’re at in space and time in order to drive a car. So I don’t know if it can be reallocated to these other use cases, or whether that would be something that would even be entertained, but I think when you just add up the number of cars and the fact that they’re all networked, maybe there’s something there that could be used.
[01:03:16] I mean, maybe he could power video games or something. I don’t know, that’s
[01:03:20] Seb Bunney: That’s fascinating, actually, because you wonder. I think the issue right now would be, let’s just say, I dunno. The human brain, from my understanding, uses like 30% of our energy requirements. And so our brain consumes a lot of energy.
[01:03:34] Now, how efficient is a car? If you’ve got a car turned off, but you’ve got its chip running while it’s parked in the garage, how much is that going to cost to run that chip overnight doing some other task? And then what happens if Elon is like, okay, you know what, it’s costing this much to go and build all these data centers.
[01:03:52] We’ve got all of this compute power just sitting idle. We are willing to give you, I don’t know, 5 cents per hour for your compute, which then covers your cost of electricity. And so people can then monetize their car, not even from a taxi standpoint, but just from a compute standpoint. What happens when you can start hashing and frigging mining Bitcoin with your car overnight, and you’re just generating revenue while you’re asleep?
[01:04:17] I think that’s kind of fascinating.
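A rough sketch of the economics Seb is gesturing at here. Every number below is a made-up assumption for illustration (the chip's power draw, the electricity price, and the payout rate are all hypothetical, not Tesla specifications):

```python
# Hypothetical back-of-envelope: does a flat compute payout cover the
# electricity cost of running an idle car's inference chip overnight?
# All constants are illustrative assumptions, not real specs or rates.

CHIP_POWER_KW = 0.10       # assumed chip draw: 100 W while computing
ELECTRICITY_PRICE = 0.15   # assumed residential rate, $ per kWh
PAYOUT_PER_HOUR = 0.05     # Seb's hypothetical 5 cents per hour

def net_per_hour(power_kw: float, price_per_kwh: float, payout: float) -> float:
    """Payout minus electricity cost for one hour of compute."""
    return payout - power_kw * price_per_kwh

hourly = net_per_hour(CHIP_POWER_KW, ELECTRICITY_PRICE, PAYOUT_PER_HOUR)
overnight = hourly * 8  # eight hours parked in the garage
print(f"net per hour: ${hourly:.3f}, per night: ${overnight:.2f}")
```

Under these assumed numbers the owner nets about 3.5 cents per hour; whether that clears real-world electricity prices and chip power draw is exactly the open question the hosts raise.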
[01:04:19] Preston Pysh: Yeah. Yeah. I think there’s something there, ’cause it’s just a powerful resource that, for all intents and purposes, would be sitting idle. You know, geez, man. Your imagination is truly the limiting factor with where this is all going. Let’s go ahead and wrap there, Seb.
[01:04:36] We’ve been saying we can just keep going on and on, ’cause this stuff is just endless. I know you and I wanted to talk about how, when you look at everything that Elon’s doing, it seems to all be converging, and the foresight he would’ve had to piece all these different business entities together to point it all at this pinpoint moment in time where I think the general public is seeing it all converge.
[01:05:01] I just can’t imagine what was in his head 10, 15, 20 years ago in order to bring all of this to this point, but maybe we’ll save that for another day.
[01:05:11] Seb Bunney: Absolutely. You know what, I think that’s a whole topic of discussion in itself. Yeah. Looking at, like, how does Elon think? How does he plan this stuff in advance?
[01:05:20] To your point, what was his mind thinking about 10, 20, 30 years ago to be able to see all of these little nodes that have to come together for us to be where we are right now?
[01:05:30] Preston Pysh: Yeah. I mean, I’m just struggling to make sure I pick up the kids from school on time and things like that.
[01:05:35] Right. Like, alright, Seb, give people a hand off to your book or anything else that you want to highlight.
[01:05:42] Seb Bunney: Again, I really appreciate everyone giving us a listen, and we’ve had a handful of people reach out and mention topics that they’re fascinated by, and we’ve tried to bring them into these discussion points.
[01:05:53] So if there’s anything that you guys are interested in, anything that you want further clarity on or you want us to dig into, feel free to just comment on Twitter or on YouTube, and we definitely scan those comments. But again, you can find me at sebbunney.com, that’s S-E-B-B-U-N-N-E-Y, my website and vlog.
[01:06:08] Or you can find me on Twitter. And my book is The Hidden Cost of Money, which touches on subjects adjacent to this, more related to money. But again, I appreciate you having me on, Preston. It’s always awesome to chat.
[01:06:19] Preston Pysh: Well guys, I hope you’re enjoying this as much as Seb and I are, ’cause this is a blast, and hopefully we’re rounding up and filtering the high-signal things that are happening in tech. We’ll keep doing this at least once a month to show you guys what’s happening in the world and how exciting it all is. So thanks for joining us, and until next time, thanks for listening.
[01:06:41] Outro: Thanks for listening to TIP. Follow Infinite Tech on your favorite podcast app, and visit theinvestorspodcast.com for show notes and educational resources. This podcast is for informational and entertainment purposes only, and does not provide financial, investment, tax, or legal advice.
[01:06:58] The content is impersonal and does not consider your objectives, financial situation, or needs. Investing involves risk, including possible loss of principal, and past performance is not a guarantee of future results. Listeners should do their own research and consult a qualified professional before making any financial decisions.
[01:07:12] Nothing on this show is a recommendation or solicitation to buy or sell any security or other financial product. Hosts, guests, and The Investor’s Podcast Network may hold positions in securities discussed and may change those positions at any time without notice. References to any third-party products, services, or advertisers do not constitute endorsements, and The Investor’s Podcast Network is not responsible for any claims made by them.
[01:07:33] Copyright by The Investor’s Podcast Network. All rights reserved.
HELP US OUT!
Help us reach new listeners by leaving us a rating and review on Spotify! It takes less than 30 seconds, and really helps our show grow, which allows us to bring on even better guests for you all! Thank you – we really appreciate it!
BOOKS AND RESOURCES
- Clip 1: AI Expert: We Have 2 Years Before Everything Changes! We Need To Start Protesting! with Tristan Harris.
- Clip 2: Marc Andreessen explains the future belongs to generalists in the AI era.
- Clip 3: Elon Musk on the Future of SpaceX & Mars.
- Official Website: Seb Bunney.
- Seb’s book: The Hidden Cost of Money.
- Related books mentioned in the podcast.
- Ad-free episodes on our Premium Feed.
Some of the links on this page are affiliate links or relate to partners who support our show. If you choose to sign up or make a purchase through them, we may receive compensation at no additional cost to you.
NEW TO THE SHOW?
- Join the exclusive TIP Mastermind Community to engage in meaningful stock investing discussions with Stig, Clay, Kyle, and the other community members.
- Follow our official social media accounts: X (Twitter) | LinkedIn | Instagram | Facebook | TikTok.
- Check out our Bitcoin Fundamentals Starter Packs.
- Browse through all our episodes (complete with transcripts) here.
- Try our tool for picking stock winners and managing our portfolios: TIP Finance Tool.
- Enjoy exclusive perks from our favorite Apps and Services.
- Get smarter about valuing businesses in just a few minutes each week through our newsletter, The Intrinsic Value.
- Learn how to better start, manage, and grow your business with the best business podcasts.
SPONSORS
- HardBlock
- Human Rights Foundation
- Masterworks
- LinkedIn Talent Solutions
- Simple Mining
- Plus500
- Netsuite
- Fundrise
- Public.com*
*Paid endorsement. Brokerage services provided by Open to the Public Investing Inc, member FINRA & SIPC. Investing involves risk. Not investment advice. Generated Assets is an interactive analysis tool by Public Advisors. Output is for informational purposes only and is not an investment recommendation or advice. See disclosures at public.com/disclosures/ga. Past performance does not guarantee future results, and investment values may rise or fall. See terms of match program at https://public.com/disclosures/matchprogram. Matched funds must remain in your account for at least 5 years. Match rate and other terms are subject to change at any time.



