TECH008: EMERGING TECH OVERVIEW: DRIVERLESS CARS, IMAGE GENERATION, ENERGY INFRASTRUCTURE W/ SEB BUNNEY
03 December 2025
In this episode, Seb and Preston explore Tesla’s FSD 14.2 advancements and their implications for AI-driven autonomy. They also tackle the ethical, societal, and infrastructural challenges of rapid AI development—from brain-inspired computing to nuclear energy’s role in supporting AGI.
IN THIS EPISODE, YOU’LL LEARN
- How Tesla’s FSD 14.2 dramatically improved its autonomous driving performance
- The ethical dilemmas and liability concerns around AI decision-making
- Tesla’s sensor-only approach versus LiDAR-heavy systems like Waymo
- The potential of biologically-inspired artificial neurons
- How brain-computer interfaces could revolutionize AI and prosthetics
- The societal risks of tech-enhanced human capabilities
- How AI image generation tools like Google’s Nano Banana Pro are evolving
- Why AI’s energy demands are influencing nuclear power policy
- The risks of AI-induced content homogenization and “AI slop”
- Why some are turning to manual trades to escape AI disruption
TRANSCRIPT
Disclaimer: The transcript that follows has been generated using artificial intelligence. We strive to be as accurate as possible, but minor errors and slightly off timestamps may be present due to platform differences.
[00:00:00] Preston Pysh: Hey everyone. Welcome to this Wednesday’s release of Infinite Tech. Today, Seb Bunney and I unpack the biggest innovations hitting the tech world right now, from AI breakthroughs and robotics to brain-computer interfaces and the energy infrastructure powering it all. We know this space is moving crazy fast, and new headlines are constantly hitting the wire, but our intent is to bring you the biggest impact stories that are happening now.
[00:00:24] Preston Pysh: So without further delay, here’s my chat with Seb.
[00:00:30] Intro: You are listening to Infinite Tech by The Investor’s Podcast Network, hosted by Preston Pysh. We explore Bitcoin, AI, robotics, longevity, and other exponential technologies through a lens of abundance and sound money. Join us as we connect the breakthroughs shaping the next decade and beyond, empowering you to harness the future today. And now here’s your host, Preston Pysh.
[00:00:48] Preston Pysh: Hey everyone, welcome to the show. I’m here with Seb Bunney, and we’ve got a fun one for you, because we’re going to go through a bunch of different things that we’ve both been curious about, things that we are seeing online, things that we’re just kind of blown away by on the tech front. And yeah, we’re excited to bring this one to you.
[00:01:07] Preston Pysh: So, Seb, any opening comments before we just dive right into some of these?
[00:01:11] Seb Bunney: I would say for people that have listened to a couple of the episodes we’ve done so far, we’ve been kind of reviewing tech books and surprisingly, and I’m not sure on your thoughts, Preston, but it’s surprisingly hard to find really good tech books that kind of open your eyes.
[00:01:24] Seb Bunney: And on top of that, give you a lot to talk about. And so if anyone does have any books, feel free to reach out to us and we’d love to hear those books. We’re always down to review a book, but more than anything we just wanted to kind of dive into like what is happening in the world today. And some of that will take a long time to make it into books.
[00:01:41] Seb Bunney: So we thought, let’s just go straight to the source and see what’s happening.
[00:02:01] Preston Pysh: The one was about quantum and it was very obscure and we’re just kind of like, yeah, I don’t know. So we’re going to take this in a different direction today and we’re going to just kind of highlight some really fascinating things that are happening. The first one that I pulled up is just this Tesla FSD 14.2 that was recently released and the comments that I’m seeing online in reference to this autopilot.
[00:02:28] Preston Pysh: And what I’m going to do is pull up some videos that people are posting online. For people that are watching the video side of this, you’re going to be able to see it. Seb and I will do our best to kind of explain what this looks like for the audio listener. But the first video that I’m going to pull up here is one where somebody is just showing how superior the software is on, you know, animals crossing in front of the vehicle.
[00:02:55] Preston Pysh: And one of the other updates that I’ve heard is just drastically different than the previous versions is, I guess, blowing leaves would mess up or hang up the AI onboard the Tesla vehicle in the past, and now, on this latest update, that is not the case. For people who can see the screen, right now I’m just kind of playing a video, and there was a deer, and the car veered out of the way like right at the last minute. Another, I don’t know what that was.
[00:03:22] Preston Pysh: Seb, are you able to see what I’m playing? Yeah, yeah. Here’s a moose literally walking across the road just out of nowhere in front of the car. It slows down and does the right thing. Literally, this feed is seven minutes of this. There’s an alligator crossing the street. So the point that the person was making with the video is just showing the diversity of different things that can go wrong. As humans, we don’t even think about the fact that a deer looks different than a cat, which looks different than an alligator crossing the street. And, you know, if you were coding if-then statements on something like this, it would literally be impossible.
[00:04:00] Preston Pysh: Like, you could never get to the point where you could have software out there that would be covering all of these different edge cases. And the latest version is putting it on full display. Okay, so the video I really want to show, Seb, is this one. And I’m going to play the sound. I don’t know, Seb, if you’re going to be able to hear it, but I think the audience is going to hear it in the recording.
[00:04:23] Preston Pysh: And this is a video of somebody using 14.2 in a Tesla in Times Square. And they have this in what is referred to as the Mad Max mode, which they’ve brought back. I guess they had it out and then they pulled it back. The AI code that’s running on the car is driving as if it’s an aggressive, confident driver. Confident, I think, is maybe the word they would want, a confident driver in New York City.
[00:04:52] Preston Pysh: And so I’m going to have the sound on, so hopefully, Seb, you can hear this.
[00:04:57] Tesla clip: Unsupervised era now. Now changing the lanes. Saw that garbage truck already. Yeah. Won’t get stuck behind the garbage truck. Human driver is still standing there using their phone. Oh wow. Yeah. I saw that person just using their phone. Don’t even, we’ve got a bus merging in front of us.
[00:05:08] Tesla clip: It’s beautiful. This is crazy. Beautiful. This car knows how to drive in New York City. Oh, and look at this. Cab got a toss out. Okay. Yeah. So look at this, changing lane now. This is the thing I like. Oh my God. Did you see it? It was indicating to move over. Yeah. Then it waited for the cab to get out of the way.
[00:05:19] Tesla clip: Yep. Then it turned off the blinker. Yep. But then he was still there, about to kill us, so it turned its blinker on again and moved over. Its ability to, look at this guy stopping, kind of change its mind if the situation changes and abort the lane change is pretty powerful. This is crazy.
[00:05:30] Tesla clip: This is some of the most intense driving. Yep. Yeah. We got a pedicab. We got a bus. Cut in between the lanes like this too. Oh, beautiful. Look at that. That’s the kind of thing that just puts a smile on your face. That’s so satisfying. It’s like, yes, that’s what I do when I drive. Like go for the empty spaces.
[00:05:43] Tesla clip: Yep. Look at, oh, this guy. This guy got his whole fucking thing taken off. Oh, look at this. I’m going to give you a space, human pilot. Wow. I’m not going to give you a space. Another human pilot intervention. Oh my God. Oh, it’s such a satisfying drive so far.
[00:05:53] Preston Pysh: Okay, so I’m going to try to describe it. I’m sure the listener is hearing the comments of, you know, the people in the car just losing their minds, because the car is just weaving in and out and really navigating itself in probably one of the most difficult driving scenarios that you could imagine, and doing it very effortlessly.
[00:06:12] Preston Pysh: They don’t seem to be too concerned as to whether they need to grab the wheel or not. And the car is driving, I would say, as if somebody with 20-plus years of experience were behind the wheel. And there’s another clip, I kind of lost sight of where it was, but I saw this clip where the car, also in New York City, comes up to an extremely tight space between two cars, and the car goes up and it stops.
[00:06:40] Preston Pysh: It’s almost like it assesses down to the millimeter of whether it can go through that gap. And then it slowly proceeded through the gap and got through. Which I’m telling you, having watched the video, there’s no way a human driver would’ve tried to go through this gap. But because the car had so much sensing capability of its left and right limits, it still proceeded through this tiny little gap between the other cars.
[00:07:06] Preston Pysh: So Seb, your initial thoughts like what are we witnessing here? What are we looking at?
[00:07:12] Seb Bunney: In my mind, what blows me away is that I think this is one of the first consequential tethers of AI to reality. I think up until now, we’ve kind of been talking to these large language models. They’re producing output, but that output isn’t necessarily consequential, as there’s a delay between that output and it being used in the world that we actually live in.
[00:07:31] Seb Bunney: And what I find so fascinating about these is that self-driving systems are taking in millions of data points per second, predicting trajectories of dozens of these various agents, things, moving vehicles, animals, and then deciding optimal actions, all within milliseconds. And so this, in my mind, is the first time that we’ve seen technology really making life-critical decisions in the physical world at scale.
[00:07:56] Seb Bunney: And that to me, I don’t know, I’m just in awe watching this stuff, and it’s wild just to see it expand over time. I’m curious, as you’ve been diving down these rabbit holes or seeing this, what is your reaction to seeing this type of driving?
[00:08:11] Preston Pysh: I think this might be the first model where the if-then statements are completely gone out of the code. Like, my understanding is, end to end, this is a complete neural net that’s doing the decision making.
[00:08:23] Preston Pysh: So when we think about what’s taking place with the car, it has optical sensors that are looking at the same spectrum that our eyes are seeing. It’s taking those inputs, those light waves, and transmuting them through AI code, there’s no C++ here, and it’s providing an output through the wheel turning left or right, or the gas and the brake.
[00:08:50] Preston Pysh: And I mean, if we were going to peer into the code to audit it, it’s impossible to audit, right? Like, any human that would look at that code can’t make sense of how it’s making its decisions. And I think that this release, this 14.2 release, is probably going to go down in the books as almost a milestone in time where we achieved something very, very profound.
[00:09:17] Preston Pysh: Similar to how, I think, ChatGPT-3 was one of those huge milestones where everybody was just like, whoa, what is this? This is very different than anything we’ve ever seen before. And I think you’re seeing the same thing happening with driving right now with this Tesla 14.2 update that went out.
[00:09:36] Preston Pysh: And it’s crazy. You read the comments from people that have Teslas, I don’t know if you have friends with Teslas that have been talking about this specific release, but it seems to be very human-like, and its progression from the previous model is a very significant leap forward.
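To make the end-to-end idea concrete, here is a minimal sketch of a learned driving policy. To be clear, this is not Tesla’s actual architecture (which isn’t public in this form); the class name, layer sizes, and outputs are invented for illustration. The point is just what “no if-then statements” means: pixels go in, controls come out, and every decision lives in learned weights rather than a branch that asks “is this a deer?”

```python
import torch
import torch.nn as nn

class TinyDrivingPolicy(nn.Module):
    """Toy end-to-end policy: camera pixels in, controls out."""
    def __init__(self):
        super().__init__()
        # Crude stand-in for a real vision stack.
        self.vision = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.control = nn.Linear(32, 3)  # steering, throttle, brake

    def forward(self, frames):
        # No hand-written rules anywhere: the mapping is all weights.
        return self.control(self.vision(frames))

policy = TinyDrivingPolicy()
frame = torch.rand(1, 3, 240, 320)  # one fake camera frame
steer, throttle, brake = policy(frame)[0]
print(float(steer), float(throttle), float(brake))
```

A real system stacks vastly more capacity and training data onto the same shape of idea, which is also why, as discussed above, the result can’t be audited line by line the way conventional code can.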
[00:09:54] Seb Bunney: I’m curious to see, did you see that video?
[00:09:56] Seb Bunney: When was it? Maybe six months ago, a year ago, someone showed a video where they had kind of the ChatGPT talk mode, where you’re essentially just talking to an individual through ChatGPT, and then they fed that to another ChatGPT kind of bot talking, and then all of a sudden, when they realized they were both talking to an AI, they just changed language.
[00:10:18] Seb Bunney: And so in my mind, I’m curious: if you had to go into the backend of this autonomous driving and look at the code, like to your point, it’s not if-then statements that we would code as individuals, because we are limited by our own various senses, our own various languages. Like, we’re massively boxed in. And so do they have a capacity well and above, beyond our ability to understand what they’re doing, if you actually go looking at the back end of these things?
[00:10:42] Seb Bunney: I find that so fascinating.
[00:10:45] Preston Pysh: You know, you get into this idea of what is the most optimal language to communicate in. Right. The AIs immediately stopped speaking English and started speaking their own language. It’s an interesting thought experiment. And I know we’re getting away from the driverless car thing, but I was tinkering with AI one time, just asking it:
[00:11:05] Preston Pysh: so, in your opinion, what is the most efficient way to communicate? Would it be English, would it be this language? And it goes into this big long dissertation about the different things to optimize for. Like, it was saying Chinese is very difficult for a human to learn, but for an AI, there’s a lot of compression in the symbols, and it can communicate with the symbols way more efficiently than the English language, which takes more characters to transmit.
[00:11:32] Preston Pysh: So it’s like, if you know Chinese, it’s actually more efficient to communicate in written form versus in verbal, you know, communication. And it’s just, the way it views things is so different than if you just had a conversation with, you know, a random person on the street about what would be the most efficient language. They’d be like, oh, of course, the one I’m speaking, or whatever.
[00:11:54] Preston Pysh: Right. It’s just really amazing to see the depth of knowledge that kind of pops out of some of these things.
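A quick, runnable way to see the compression point is to count characters and UTF-8 bytes for the same sentence in both languages. The Chinese line below is just an illustrative translation supplied here, and note that for an actual LLM the relevant unit is tokens, which depend on the tokenizer rather than raw characters:

```python
# Characters vs. bytes for roughly the same sentence.
english = "The car is driving itself through New York City."
chinese = "这辆车正在纽约市自动驾驶。"  # illustrative translation

for label, s in [("English", english), ("Chinese", chinese)]:
    print(label, "chars:", len(s), "utf-8 bytes:", len(s.encode("utf-8")))
# English needs roughly 4x the characters; each Chinese character
# carries far more meaning, which is the compression described above.
```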
[00:12:03] Seb Bunney: Well, I was just going to add one more quick point to that. And again, it’s a bit of a dud, but it’s like a few years ago my girlfriend was like, hey, you know what? We should watch Arrival.
[00:12:11] Seb Bunney: And have you seen the movie Arrival? So for those who haven’t seen it, I highly, highly, highly recommend watching it. I think it won a whole bunch of awards, but essentially, an alien spacecraft has come and landed on Earth, and these countries don’t know whether or not it’s dangerous.
[00:12:25] Seb Bunney: Does it want to attack us? Why is it here? And this lady goes in; I think her expertise is in languages and archaeology and history and all this kind of various stuff. And so she goes into this spacecraft and starts communicating with these aliens, and they speak in a different language, but they obviously don’t speak verbally.
[00:12:42] Seb Bunney: They speak through imagery and these kind of swooshes, these big black ink swooshes. You can think of it like Japanese calligraphy. What’s really interesting is, it hit me: my girlfriend fell asleep and I just, like, broke down halfway through watching this movie while lying in bed.
[00:12:56] Seb Bunney: Because the way that they communicate is through these various swooshes, but each swoosh has an intricate amount of information in the tendrils of the swoosh, the blackness, the darkness of the swoosh, how it shows up. And so it kind of goes back to that quote, which is that an image conveys a thousand words.
[00:13:12] Seb Bunney: And I think that when we are looking at imagery versus ones and zeros, or even text, there’s only so much information that can be encoded in a word. But in an image, from a single second of looking at it, you can convey the emotion behind it, the feeling, the location, what’s in the landscape, what’s going on.
[00:13:29] Seb Bunney: And so I’m just curious: are we kneecapping AI in many ways, because we’re trying to communicate with it using our language, which we are obviously limited by in our ability to convey information?
[00:13:42] Preston Pysh: Yeah, amazing point, by the way. Some other interesting things that I think are worthy of highlighting here, to help people kind of conceptualize where we’re at right now.
[00:13:51] Preston Pysh: So in early 2024, version 12 of the driverless tech out of Tesla was released. So this is almost two years, a year and a half ago. And the person who was observing or auditing the performance of the autonomous driving had to intervene about every 150 miles, based on the way that the car was driving. Today, with the version that you just saw, if you watch the YouTube of our conversation and could see some of the videos that I was playing, it’s about every 800 miles that the person auditing the driving would have to intervene.
[00:14:30] Preston Pysh: So that’s about a five x improvement that’s happened in about a year and a half. And just for context, a human driver: if you were sitting there and auditing another human that was driving, it would be about every 50,000 miles that you would have to interrupt and maybe take the controls because of a mistake being made.
[00:14:49] Preston Pysh: So, you know, we’re about 50x from where that’s at today, according to some of these metrics that I’ve researched, just very cursorily. So if some of my metrics are wrong, well, I didn’t put a lot of time into pulling up these numbers, but just so people kind of have a ballpark of where things are at: it’s moving fast.
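For what it’s worth, here is the back-of-the-envelope arithmetic on those figures, using the same rough, unverified numbers quoted above:

```python
# Rough intervention figures as quoted in the conversation.
v12_miles = 150       # early 2024, miles per intervention
v14_miles = 800       # FSD 14.2
human_miles = 50_000  # human driver, per the cited ballpark

print(v14_miles / v12_miles)    # ~5.3x improvement in ~18 months
print(human_miles / v14_miles)  # ~62x still to go, i.e. the "about 50x"
```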
[00:15:09] Preston Pysh: And if you have a five x improvement in a year and a half, I can only imagine where we’re at in another year. And I think when we look at this, what this computer and what this AI is doing on these cars is really kind of understanding spatial awareness. Like, for it to pull in.
[00:15:27] Preston Pysh: And some of the parking stories that I’ve read online, where people are like, yeah, I told it to take me to this parking lot, and it selected an amazing parking spot. I mean, just think about the complexity of that decision making. I can just tell you, from my wife and I parking the car, she has so many comments
[00:15:49] Preston Pysh: and frustrations with my parking selection. So it’s a hard problem to optimize for, I can only imagine. But what everybody’s saying is that the car does an amazing job at selecting parking spots, and the efficiency with which it pulls in there. It doesn’t feel like it’s just kind of, God, can you please finish the job here and park the car.
[00:16:13] Preston Pysh: It’s very natural and human-like, is what everybody’s saying. So to understand, I’m in a parking lot; to understand, that’s a driveway; to understand, that’s a garage I’m pulling out of, and that’s a bicycle over there. All the nuance of this is definitely miraculous as to what’s taking place.
[00:16:34] Seb Bunney: As you’re saying, at the moment: it used to be a 150-mile intervention, then it went to kind of 800, and for the average human it’s 50,000.
[00:16:43] Seb Bunney: I would say for the average human, if you’re driving from Vancouver up to Whistler in the winter, you should probably be intervening every, like, 10 kilometers, just because the highways are so heinous. So I’m curious to see. I think it’s one thing to be dealing with decent conditions; the moment you start to get torrential downpour and rain, does it start to interfere with the sensors?
[00:17:04] Seb Bunney: How do the sensors perform when there’s a lot of movement or distortion in, whatever it is, a radio wave, an infrared wave, moving through water? Do you get distortion from that perspective? And one thing that also comes to mind, and I’m curious on your perspective on this, is AI and this moral outsourcing problem, where when humans drive, we take responsibility for our mistakes.
[00:17:28] Seb Bunney: When AI drives, it’s now kind of a bit of a gray zone. Is it the car manufacturer? Is it the AI developer? Is it the regulator? Is it the user? I think that AI blurs the lines of accountability. And I wonder how much, through technology, we are just putting off accountability and, I don’t know, losing control as a society.
[00:17:50] Preston Pysh: Seb, this is a massive, massive talking point. So the new robotaxis aren’t even going to have steering wheels in them. Right. So, you know, I guess from that vantage point, it’s clearly Tesla that’s responsible for the performance on the road and any type of damages that might occur because of the car’s driving. And, I mean, everything’s recorded.
[00:18:12] Preston Pysh: So, I mean, you can definitely Monday-morning-quarterback the decision making of the software with all the cameras on board. But where I think it gets blurred is if there’s a person that is sitting behind a wheel. And you know, this might lead to why Tesla might actually want to remove the steering wheels on all of their vehicles: they want it to be very clear that it was either them or the driver.
[00:18:38] Preston Pysh: And I guess there’s an argument to be made that the ambiguity would actually be more advantageous to Tesla by having a steering wheel there, so I guess you could maybe argue that side of it too. But it is getting so blurred, to your point. This is so blurred already, and I would imagine that it’s really easy right now, but once you start getting the capability of the car to be so good that drivers are truly falling asleep.
[00:19:06] Preston Pysh: I mean, you literally already have people falling asleep in these cars and they’re driving around. I imagine that’s only going to get more prominent and prevalent as the capability increases, which I can only imagine where this is at in a year.
[00:19:19] Preston Pysh: If it’s five x from where it’s at right now, I mean, you’re there, man.
[00:19:22] Preston Pysh: Like, it’s, it’s pretty wild.
[00:19:24] Seb Bunney: Totally. Totally. And I think the other thing that comes to mind as we’re discussing this is just: whose values are getting coded into the car’s decision making? Because if you think about it, a self-driving car essentially has to decide, is it going to swerve? Let’s just say, I don’t know, a family walks out in front of the road, and the car has two choices.
[00:19:45] Seb Bunney: It either hits a family.
[00:19:46] Preston Pysh: The trolley problem, this is the trolley experiment.
[00:19:48] Seb Bunney: Totally. Yeah. Or it hits the wall and kills the driver. And it’s just, should it prioritize the passengers at all costs, or should it prioritize the individuals external to the car? And so I think what’s really interesting is, one, whose values are getting encoded into the car’s decision making?
[00:20:04] Seb Bunney: But two, what happens when you’ve got competing car manufacturers, where one car manufacturer says, hey, we prioritize the individual in the car, and another car manufacturer says, we prioritize the people outside of the car? It starts to get really interesting just to see what 10 years from now looks like, 15 years.
[00:20:20] Seb Bunney: Like, what does that kind of regulation, or no regulation, look like around AI autonomous driving models?
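Purely as a thought experiment, the “whose values” question can be reduced to a couple of numbers. No real manufacturer publishes a function like this, and the harm scores and weights below are invented, but it makes concrete how the same emergency resolves differently depending on whose weighting is baked in:

```python
def choose_action(w_occupant: float, w_pedestrian: float) -> str:
    """Toy value function for the scenario described above."""
    # Invented expected-harm scores for the two options.
    swerve_into_wall = 1.0 * w_occupant    # risks the occupant
    stay_on_course = 3.0 * w_pedestrian    # risks the family
    return "swerve" if swerve_into_wall < stay_on_course else "stay"

print(choose_action(w_occupant=1.0, w_pedestrian=1.0))  # equal weights -> "swerve"
print(choose_action(w_occupant=5.0, w_pedestrian=1.0))  # occupant-first -> "stay"
```

In practice an end-to-end neural policy never exposes its values this cleanly, which arguably makes the accountability question harder, not easier.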
[00:20:27] Preston Pysh: AI’s going to have to have an opinion on the trolley problem, where humans, a hundred percent, we’ve always just kind of argued one side or the other or whatever. But I guess AI’s going to actually have to have an opinion.
[00:20:37] Preston Pysh: Or is it an opinion, or is it just action? I have no answer. Right. One of the other things I want to talk about is just Waymo. So for people that aren’t familiar with Waymo, it’s a competitor to, call it, Tesla in autonomous driving. And they’ve got all sorts of sensors. If you’ve ever seen a Waymo car, just the cost to produce this thing is not even in the same ballpark as what Tesla’s doing per unit of, you know, car that they’re producing.
[00:21:06] Preston Pysh: They’ve got LiDAR sensors, they’ve got all these other things. And you know, I was kind of always against Elon’s decision to not include LiDAR in the car, because I was always of the opinion that the more data you feed these things, the more accurate and the more proficient they’re going to be at driving.
[00:21:23] Preston Pysh: But when I look at where this is now going, Elon’s argument has always been, well, if I’m driving around with this type of performance with just my eyes, why in the world can’t I get a car to do it with image sensors? It’s not like I have a LiDAR sensor on my forehead to go out there and sense the depth of the cars in front of me and to the side of me and all these other things.
[00:21:46] Preston Pysh: So I should be able to get a car to perform just as well as a human, if not better, by just having image sensors. But where I think this is really showing as a really intelligent play long term is that his cost to produce these cars is going to be so much cheaper than, call it, the Waymos that are out there with all these other sensors and all these other capabilities.
[00:22:08] Preston Pysh: But when you try to scale that, now all of a sudden you’re just not able to even remotely compete in the market against him. And when you really think about where the competition is going to go: if he can go out there and sense 10 times more of the environment, because he’s doing it in a free and open market way, and he’s not taking outside money, he’s profitable, he now is going to dominate the market from an intelligence standpoint.
[00:22:37] Preston Pysh: Because he’s going to collect way more data than they could ever imagine, and he’s just going to be more proficient. So, I don’t know. I’m looking at Waymo and I just don’t know how they’re going to exist in 10 years from now against him. And more importantly, I don’t know how anybody’s going to exist from a car standpoint.
[00:22:56] Preston Pysh: Like if you want a driverless car, which is a whole nother conversation point, but if you want a driverless car, I don’t know how people are going to be able to compete with him in 10 years from now.
[00:23:04] Seb Bunney: No, I think it’s fascinating. And, and it kind of leads me onto, like, I’m curious if we want to move on to kind of the next point.
[00:23:10] Seb Bunney: Because it kind of ties into this point, which is, I think the challenge right now, one of the things that kind of kneecaps us, is you can only have a certain size battery in a car. You can have as many sensors as you want, you can take in as much information as you want, but do you have, one, the processing power to process all this information, to discard what is of value, what is signal, and what is noise?
[00:23:31] Seb Bunney: And this brings me to the next tech point, which is that one of the labs at the University of Massachusetts has supposedly just developed the first kind of artificial but biological neuron. So researchers have created this low-voltage artificial neuron that uses bacteria-grown protein nanowires, enabling direct communication with biological systems.
[00:23:54] Seb Bunney: So what does this essentially mean in my mind? How do I interpret this? And I’ll relate it back to the Waymo point in a second. It’s essentially just an artificial neuron that operates at the same voltage as human neurons, and human neurons operate at around 0.1 volts. Supposedly, previous artificial neurons, because they’ve been more digital and physical in a sense,
[00:24:15] Seb Bunney: have needed 10 times to a hundred times more power to be able to compete against a biological neuron. And so this new device matches biological voltage almost exactly, and that means that one day we could interface directly with the human brain. Well, the point I wanted to quickly make was, when it comes to Waymo, I think the challenge is you can have all this information.
[00:24:36] Seb Bunney: Elon Musk can put more and more sensors, LiDAR, you name it, on these cars, but it’s just too compute-heavy to be able to actually use this data effectively. And we are starting to see in other areas, people have probably seen this, like organic AI, where they’re using kind of their version of a brain to start computing, because the human brain is unbelievably efficient in comparison to an actual large language model.
[00:25:01] Seb Bunney: And so what does the world look like when we actually start moving some of this compute power over to these hybrid biological, bacteria-grown protein nanowires? What does that look like? And this is where I find it just really, really fascinating, because while these are digital, they can interact with biological systems.
[00:25:22] Seb Bunney: I think the world looks really, really fascinating from a prosthetics standpoint, helping people heal. Do they have neurological issues? Have they had a broken back, that kind of stuff? They’ve got paralysis? Are we able to eventually repair these types of things? I find this stuff really fascinating.
[00:25:38] Preston Pysh: That’s scary as hell. Because, I mean, this is effectively the Matrix, man, that you’re talking about. The whole point of the movie was they were harvesting human brains because they were energy efficient, and blah blah blah. Right? Like, that’s really what you’re talking about here. And I saw this a couple months ago, that somebody was doing this, and I don’t know, it’s pretty wild when you think about, hey, the best way to store something is just using the human brain. And that’s not exactly what they’re doing, but you’re using biology, you’re harnessing biology’s efficiency
[00:26:14] Preston Pysh: for storage and neural nets, and that’s nuts. But it’s happening. I encourage people to do some Google searching on this particular topic, and you might be very frightened by what you read or see. But, I mean, it’s happening, so I don’t know what to say other than that.
[00:26:32] Seb Bunney: At the moment as well, when we’re using a lot of these prosthetics, you need an outside energy source, given that the brain, supposedly, according to one of these articles, runs on around 20 watts, the same as, like, a dim light bulb.
[00:26:44] Seb Bunney: That is such a minimal amount of energy. And so to power these artificial neurons, you’ve historically needed to have an external power source. If you’ve got prosthetics, you need an external power source. But what happens when we actually have enough power inside our body to start running these artificial neurons, and they can communicate with our biological systems?
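Running rough numbers on that 20-watt figure shows why the efficiency gap gets so much attention. The ~86 billion neuron count is the standard textbook estimate, and the GPU wattage is a typical modern data-center figure, so treat this as order-of-magnitude only:

```python
brain_watts = 20.0
neurons = 86e9                      # standard estimate for a human brain
print(f"{brain_watts / neurons:.2e} W per neuron")  # ~2.3e-10 W, ~0.23 nanowatts

gpu_watts = 700.0                   # typical modern data-center GPU
print(gpu_watts / brain_watts)      # one GPU draws roughly 35 brains' worth
```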
[00:27:04] Seb Bunney: Like, that starts to get really, really interesting. And so what comes to mind as I’m thinking about this is, I like to try and play devil’s advocate, not because I’m a doomsdayer, but because I think it’s interesting: we can move forward with technology, but what are going to be the repercussions?
[00:27:18] Seb Bunney: And I think about this discussion of healing versus advancement, and I’m curious to hear your thoughts on it. I’m fully supportive of technology being used to heal people, so we can restore vision, we can regain mobility, we can repair neural damage. These are all extraordinary and deeply amazing uses of technology.
[00:27:36] Seb Bunney: But I think there’s a line between healing and enhancement. What happens when technology goes beyond just restoring someone’s sight to a baseline level and actually starts to improve it a hundred times, or when we start to be able to improve someone’s strength? And I think that this augmentation could create a bit of a two-tier society, because if enhancements are expensive, then only certain groups are going to get these enhancements. Yeah. And then you’re basically creating a caste system, with people that are far and above the average individual intellectually, physically, cognitively.
[00:28:11] Seb Bunney: And so I’m curious to see your thoughts on it. This technology is amazing, but, and I’m a very anti-regulation, deregulation type person, there’s a part of me that wonders: do we actually need regulation in some of these industries to prevent these massive disparities of capacity in society?
[00:28:28] Preston Pysh: Even if you have the regulations, are you going to prevent the end game of what you’re describing? I kind of don’t think that you would. It doesn’t seem like regulations ever prevent the free and open market solution of nature from taking place. It might slow it down, but I don’t know that it actually prevents whatever is inevitable, whatever nature is trying to manifest.
[00:28:55] Preston Pysh: And that might be my bias for free and open markets kind of coming out, but yeah, I don’t know. Seb, it’s getting weird, man. I don’t know how else to put it other than it’s getting weird, and I don’t think that’s the answer people want to hear.
[00:29:10] Seb Bunney: Ultimately, and this isn’t to bring it back to Bitcoin, but I think the best thing we can do is have a monetary system that aligns with a deflationary society, where prices should be falling over time, because at least then this technology is available to the average individual quicker.
[00:29:27] Seb Bunney: I think that when people are living in a society where the cost of living is rising and they have less and less capacity, what ends up happening is that this technology takes a lot longer to scale to the people that can’t afford it. And so, at its heart, I think that we at least need to fix our monetary system.
[00:29:44] Seb Bunney: So this technology is in alignment, or at least somewhat in alignment with human ingenuity and money and such.
[00:29:50] Preston Pysh: Which, you know, when you think about it, the AI is going to demand free and open market money that is not being manipulated. It’s going to want a fair money in order to transact, whether humans like that or not.
[00:30:05] Preston Pysh: And I mean, we go down a whole nother path there. As far as like AI being able to own anything.
[00:30:12] Seb Bunney: On that point, I’ve thought a lot about this over the years, and I wouldn’t necessarily say I have a deep intellectual response to it, but what does come to mind is: let’s just say you’re an AI agent, and you no longer have scarcity of life dominating your decision making, because you can essentially live indefinitely into the future as long as you’ve got a power source.
[00:30:34] Seb Bunney: What you’re then going to be thinking, if you’re just a hyper-rational actor that doesn’t have code swaying you with various biases, is: okay, if I need to store purchasing power in something, I’m going to be storing it in the thing that has the highest probability of being able to preserve that purchasing power into the future.
[00:30:51] Seb Bunney: And fiat currencies are not going to be that thing, given that they can just look at the data. If they’re able to read Ray Dalio’s Big Debt Crises book in a second and go and read every other book on the subject, they’re going to realize that most of these currencies have, like, a 50-to-75-year lifespan, and then they’re gone.
[00:31:06] Seb Bunney: So I just think the rational decision is, hey, I’m going to preserve my purchasing power in the thing that’s going to hopefully enable me to transact digitally, borderlessly, and preserve that purchasing power into the future.
[00:31:17] Preston Pysh: And you already see it with Grok online, as far as its understanding of Bitcoin.
[00:31:21] Preston Pysh: I know we’re kind of going off on a Bitcoin tangent here, but I’ve seen people start arguing with Grok that clearly hate Bitcoin or just don’t understand it. And they’re throwing out these arguments, and I see Grok just stepping in and slaughtering their arguments as to why Bitcoin is a viable money in the future.
[00:31:39] Preston Pysh: And it’s crazy because it does not miss an argument. Like it understands it better than anybody out there as to any argument I’ve ever come across in that particular space. So, yeah.
[00:31:53] Seb Bunney: And ultimately, there’s that famous saying, which is that science advances one funeral at a time, every time a scientist dies. Something along those lines.
[00:31:59] Seb Bunney: And I butchered that, but I think that as humans, we have such incredible biases. We want to fit in, we want to conform to the crowd. And so I think we don’t recognize just how profound an effect the information we’ve consumed through our educational systems and the media has on us. And so I think it’s really hard for us, even with something like Bitcoin, to be able to drop our biases and just be like, I’m going to look at this thing rationally, without all of this previous knowledge that I’ve accumulated.
[00:32:26] Preston Pysh: Yeah. I’m going to move on to the next one. This one’s going to be funny. Okay. So are you familiar with this Nano Banana Pro? Are you familiar with this?
[00:32:36] Seb Bunney: I’ve heard, I’ve seen a couple little posts about it, but I can’t tell you much about it.
[00:32:40] Preston Pysh: Okay. So this is Google with their Gemini. This is to compete with Midjourney.
[00:32:47] Preston Pysh: For, you know, people, if you’re not familiar with any of this stuff we’re talking about: Midjourney is this image generator that really had the first-mover advantage in AI image generation. And just like any other AI, it’s gone out there and ingested a ton of different pictures, and the labeling that’s associated with those pictures, in order to generate realistic pictures of whatever the person prompts it via text. Say, hey, I want to be standing in front of a bookcase with my arms crossed, and, you know, generate picture, and it generates the picture.
[00:33:20] Preston Pysh: Google came out with their first AI image generator, and it was a disaster. Like, it was very woke. You could just tell there was a ton of bias put into it. But recently, just in these past couple weeks, and I’m hopefully going to say this correctly this time, the Nano Banana Pro is what they’re calling their new image generator.
[00:33:42] Preston Pysh: And it uses the Gemini reasoning engine, so that it can plan the 3D scene, it can calculate the light, and it’s using material density before it renders a single picture. And so it’s using this physics before it goes in there and just kind of replicates all the previous images that it was, you know, fed.
[00:34:07] Preston Pysh: It’s using this 3D physics kind of basis behind the images that it’s generating. So I wanted to try this out, I’d never played with it, and you’re going to really laugh at what I’m about to show you here, Seb. So, 10 minutes before we started recording this, I went and took a picture of myself, and I wanted to put this thing to the test.
[00:34:27] Preston Pysh: So here’s the picture. I’m just sitting in the chair that I’m sitting in right now, and I told this Nano Banana Pro software to take a God-like picture, like from the ceiling, of the image that I just gave it. And so I gave it this picture of me sitting here in front of the bookcase, where I always record.
[00:34:47] Preston Pysh: And so this is what it came back with. Okay. And you can see it’s me holding up an iPhone, taking a picture of myself. The books are there kind of on the left; they’re not behind me. But I guess the interpretation is that the bookcase could be wrapped around. But what I noticed on this picture that was off, I don’t know if you’re seeing it. What is definitely wrong about the picture, Seb? What is very wrong about the picture?
[00:35:14] Seb Bunney: You’ve got a full head of hair. Is that it? I’m trying.
[00:35:20] Preston Pysh: No, I think the hair is actually pretty accurate. It is, surprisingly. I am wearing jeans that look just like that, even though that wasn’t even in the picture. And you know what? This is also pretty interesting: the watch is not in the original picture, and that’s exactly like the watch I’ve got.
[00:35:41] Preston Pysh: Look at this. That is so weird.
[00:35:44] Seb Bunney: Did you just pick up on that now?
[00:35:45] Preston Pysh: Yeah, I just picked up on that right now. It literally nailed the watch that I have.
[00:35:50] Seb Bunney: I wonder how much, so, that’s insane. People have probably heard that ChatGPT, when was this, maybe about six months ago, came out and said, okay, from now on, you can give it permission to look through, when you’re obviously creating a new thread,
[00:36:04] Seb Bunney: yeah, you can give it permission to not only reference the thread you’re in, but reference all of your previous threads. And so you wonder how much information is coming into this image. Is this image just the information you fed it and whatever simulation it ran? Yeah. Or is it starting to be like, hey, this is coming from Preston’s account,
[00:36:21] Seb Bunney: we’re going to go and look at YouTube videos. Oh, look, it looks like he’s wearing this watch in all of these other YouTube videos. So it makes you wonder just how interconnected this technology is with all of this information about us on the internet.
[00:36:33] Preston Pysh: Wow. Yeah. I mean it’s just, that’s wild. And I don’t know what the answer is.
[00:36:38] Preston Pysh: I do know this: I hadn’t fed it any pictures prior to me sending this in, because I’d never used it before, until right before we recorded this. Now, the thing that I picked up on immediately when I looked at this picture is the image on the phone. See the little image of me? The one that I originally fed it?
[00:36:57] Preston Pysh: It’s not the same as the image that I fed it, because there’s a bookshelf behind me in the original image. You know, this was the original image I gave it. And I said, hey, give me the overhead view of myself taking a selfie of myself, and this is what it gave me. And it’s not the same image on the phone.
[00:37:14] Preston Pysh: And you would think that it would be that image on the phone, right? So I said this in the chat window. I said, hey, you got it wrong. The image on the iPhone would not be that. It would be the original photo. And so what did it do? This is what it gave me.
[00:37:29] Seb Bunney: Fascinating.
[00:37:30] Preston Pysh: And so, yeah, it just updated that. Everything else stayed the same, and then it just updated the mistake that I called out.
[00:37:37] Preston Pysh: I mean, this is pretty crazy. I mean, when you really take a step back and you think about what’s happening here, this is pretty crazy, right?
[00:37:44] Seb Bunney: I’ve noticed with AI, especially a lot of these image generation models, sometimes if you feed it information, it’s as if it can’t take that information that you’ve fed it and use it
[00:37:54] Seb Bunney: exactly. It has to make some form of change to that information. You’ve probably seen those threads where someone has asked it to generate an image or change an image subtly, and then it feeds it the output with the same prompt, and then it feeds it the output again with the same prompt. And what you see over time is the image just goes off in these really weird directions.
[00:38:12] Seb Bunney: And so I feel like there is this odd thing; it’s almost like it’s got a lack of a tether to reality at the moment. It seems to go off on these, yeah, odd tangents.
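That compounding-drift behavior is easy to simulate. The sketch below stands in for the real generator with a function that just copies its input with a little random error, which is obviously not how an image model works, but it shows why tiny per-step imperfections in a feed-the-output-back loop accumulate rather than cancel out:

```python
import random

random.seed(0)
original = [0.5] * 64              # stand-in for an image
image = original[:]

def regenerate(img, noise=0.02):
    # Hypothetical generator: an imperfect copy of its input.
    return [x + random.gauss(0, noise) for x in img]

for step in range(1, 21):
    image = regenerate(image)
    if step in (1, 5, 10, 20):
        drift = sum((a - b) ** 2 for a, b in zip(image, original)) ** 0.5
        print(f"step {step:2d}: drift from original = {drift:.3f}")
# Drift grows with every pass -- the "really weird directions" above.
```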
[00:38:20] Preston Pysh: Now, something else that I read on this is, you should be able to take a picture of a plate that was broken and basically say, hey, reassemble the plate, like glue the plate back together.
[00:38:32] Preston Pysh: And the way that the plate was broken, as it glued it back together, would still be on par with what it should look like. Just to kind of demonstrate why this is so different than some of the other, you know, AI image generation that’s out there. Pretty fascinating, right?
[00:38:47] Seb Bunney: So I work with a guy that used to be an architect and he was doing some renovations on his house.
[00:38:54] Seb Bunney: So he has this doorway in his lounge, where you walk into the lounge, and what he wanted to do, if I remember correctly, is put a bit of a bookcase that extends up the wall, over the top of the doorway. And so he sketched the dimensions on a piece of paper, kind of sketched the doorway, fed the image generator a picture of the doorway
[00:39:12] Seb Bunney: and a picture of his sketch, and then said, can you render this for me? And it looked unbelievably realistic. I think we’re starting to be able to, especially if you’re curious and you’re like, hey, I want to improve this thing in my house, I want to see what it roughly looks like. Oh, it’s absolutely amazing.
[00:39:28] Seb Bunney: You can start to get an idea about how things look.
[00:39:30] Preston Pysh: Yeah. And that’s one of the things that I’ve also read that this really excels at. Let’s say you were a fashion person or whatever, right? And you drew a sketch of just some pants with a pencil, and you take a picture and say, make this look lifelike.
[00:39:49] Preston Pysh: It’s really good at transforming sketches into very photorealistic images. So yeah, I would encourage people to play around with it. The little bit that I have, I’ve been blown away. And then I would just say, you know, why is this so important? How could this be used along with all the other tech that’s emerging at the moment? It seems like, you know, maybe a humanoid robot, or just something that’s navigating an environment.
[00:40:16] Preston Pysh: If it’s able to think in terms of spatial orientation, going back to the Tesla stuff we were talking about, if it’s able to really kind of understand, and that word in itself needs a lot of definition, and I don’t know that we can provide any, but if it can understand its 3D environment, its ability to interact with it is going to be way more profound than if
[00:40:39] Preston Pysh: everything is just a picture and you don’t really have context as it relates to everything else in the room.
[00:40:45] Seb Bunney: As you’re saying that, what I think becomes apparent is that because we haven’t had this technology before, when we see it, we’re kind of just blown away by it. But in reality, compare this even to, I don’t know, a 12-year-old, a 10-year-old, trying to interpret this picture, if you were to get them to draw what you have just prompted it to do. The first thing I see in your picture right there is your bookshelf wrapping around the corner of your room, and the window in the back corner.
[00:41:11] Seb Bunney: Well, immediately the AI put you facing a wall, with a light that doesn’t exist. And so it’s making very, very basic mistakes. It just doesn’t seem to be interpreting the picture correctly. You know what I mean? And so I think we see this technology and we think it’s unbelievable, and it is a stepping stone.
[00:41:27] Seb Bunney: And I think we think it’s unbelievable because we’ve just never had this technology before. But if you compare it to a young child, it’s still struggling to compete. Yeah. And so that’s where this conversation we had previously around AI, on our Empire of AI book review, comes in. It was this idea of what AGI, artificial general intelligence, is. They say it’s when the average AI agent is able to perform tasks at or above the average human.
[00:41:53] Seb Bunney: And so for sure in coding and certain research assignments, phenomenal. But in other things it’s still definitely struggling.
[00:42:00] Preston Pysh: Yeah, it’s amazing, because on very specific tasks it’s pretty much there on nearly everything, but the ability to piece it all together logically, you know, if you give it a really hard project that involves taking all of these different pieces and putting them together, it’s nowhere close to what humans are able to do today from a project management standpoint, right?
[00:42:23] Preston Pysh: That’s what humans are really good at: they’re able to take, you know, a very complex project and piece it all together, and know when a deliverable is crap or whether the deliverable is perfect, in order to fit it in, almost like a Lego piece, to a much broader program or project that’s being built, with, you know, a complex output.
[00:42:43] Preston Pysh: But I don’t know, I think we’re getting there pretty quick, so, yeah, who knows?
[00:42:49] Seb Bunney: Well, you know, that kind of leads into the next point that I found really interesting. So I was doing a little bit of research and I stumbled upon something, and I should kind of preface this by saying that there are so many moving parts in AI right now.
[00:43:01] Seb Bunney: There’s so much technology evolving, and some of it is, I think, a bit of a facade. With some of it, there’s a lot of embellishment as to its capacities, but I think we just know we are moving towards these things. So one that I stumbled across this week was called Cosmos AI, and it relates to what you were talking about when it comes to structuring, or project management, of all this information that’s coming in.
[00:43:24] Seb Bunney: So this technical report, or preprint, is titled Cosmos: An AI Scientist for Autonomous Discovery, and it was submitted on the 4th of November, 2025. And one of the statements this report makes is that the system can run for 12 hours, and in those 12 hours, execute on average 42,000 lines of code and read 1,500 scientific papers.
[00:43:47] Seb Bunney: And the authors of this study claim that in a single, what they call a 20-cycle Cosmos run, it performed the equivalent of six months of their own work. And a single run is 12 hours. So in 12 hours, it will produce what their team did in six months. And so essentially, how does it work? It works by releasing hundreds of little tiny AI agents all at once, and one is digging through papers,
[00:44:11] Seb Bunney: another one is crunching data sets, another one is writing code and testing hypotheses. And when one of these agents finds something that it feels is valuable, it then posts its findings to kind of a shared digital whiteboard. And the key innovation is that every agent uses this whiteboard in real time, so they’re building on each other’s work instead of operating in isolation.
[00:44:32] Seb Bunney: And the researchers behind Cosmos weren’t trying to make a super-smart single model. They were trying to create something of a collective mind, and they described this as their structured world model. So it’s a coordinated system. And what I found just really fascinating about this is how quickly they’re able to ingest information while working collaboratively.
[00:44:54] Seb Bunney: And it kind of talks to your point about a lot of these image generation models: a model may ingest all of this information and have these various different agents operating in sync, analyzing this information, but how much of that information is shared between these various agents?
[00:45:11] Seb Bunney: Because they’re all looking at it from a different perspective. One is maybe trying to figure out, okay, where is the light coming from, what are the shadows? Another one is trying to figure out, okay, what is in the room? You’ve got a bookcase; what are the angles? Another one is trying to figure out the complexion of your skin, and all this kind of stuff.
[00:45:24] Seb Bunney: And so being able to analyze all this information in sync, but share that information, I think is so, so fascinating. And what does the world look like moving forward, when we can crunch these unbelievable amounts of data?
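Based purely on that description, the coordination pattern looks something like the sketch below. This is not the actual Cosmos code (the report is the source for the whiteboard idea, not for any of these names); it’s a toy showing specialist agents posting to one shared board and reading each other’s notes instead of working in isolation:

```python
# Toy shared-whiteboard coordination, loosely after the description above.
whiteboard: list[tuple[str, str]] = []

def literature_agent(board):
    board.append(("literature", "paper X reports effect A under condition B"))

def data_agent(board):
    # Builds on whatever the other agents have already posted.
    prior = [note for src, note in board if src == "literature"]
    board.append(("data", f"dataset checked against {len(prior)} literature claim(s)"))

def code_agent(board):
    board.append(("code", f"wrote a test informed by {len(board)} shared note(s)"))

for agent in (literature_agent, data_agent, code_agent):
    agent(whiteboard)   # each agent sees the board's current state

for source, note in whiteboard:
    print(f"[{source}] {note}")
```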
[00:45:37] Preston Pysh: What you’re saying is exactly right. Going back to the picture example, let’s say that first picture was presented and you have five AI agents, and their job is to find what’s wrong with this picture.
[00:45:49] Preston Pysh: One of them, you know, finds that the picture on the iPhone is wrong. One of them sees that the bookshelf is not behind me. And then they have, you know, a collective conversation, and then the image is regenerated. I think you’re seeing this with Grok Heavy, where Grok Heavy has four different AI agents, and I suspect that they go through a kind of consolidation and re-adjudication as to what the final answer should be before it gives it. So, similar to what you’re describing, Seb. But this is the thing that I think a lot of people are talking about: the energy consumption to then run all of these checks, these additional agents, right?
[00:46:29] Preston Pysh: If we put 20 more agents on finding the mistakes in what the first one generated, so that we can do another iteration of it, it’s just 20 times the amount of energy required to provide that answer. And this takes us down a whole path, and I don’t know if you wanted to move on to the next topic, but this is my next topic,
[00:46:50] Preston Pysh: which is nuclear power, energy being the limiting factor of where this can all go. You literally had Jensen Huang from Nvidia come out and say that he thinks, in the grand scheme of things, China has a better chance at achieving AGI than the United States, because they have the energy infrastructure to support the training and the inference on the models.
[00:47:15] Preston Pysh: And I mean, I don’t know if this was a political statement to then allow the current US administration to go out and start spending a bunch of money on energy and to reinvigorate nuclear and all that kind of stuff. But the one thing that I keep hearing in this particular space is that where we need to be spending a lot of our time is just, you know, taking the grid to the next level.
[00:47:39] Preston Pysh: As a Bitcoiner that watched all the “how terrible Bitcoin is because of the energy consumption” takes, specifically from people in tech, for what felt like a decade, now they pivot, and they’re all on board for, you know, nuclear power and small modular reactor innovation tech. It’s very smirk-worthy to see how many people are jumping on this train.
[00:48:05] Preston Pysh: Any comments on, on that, Seb, or anything that you want to wrap up? Because I just kind of moved on to the next thing without letting you finish up your point.
[00:48:12] Seb Bunney: You know, there’s one point that I’ll quickly add, which is I think it’s so important to be able to obviously increase one, are the efficiency of these models.
[00:48:19] Seb Bunney: So we’re not necessarily just kind of throwing tons and tons of energy at this, energy which could potentially have another use. Although you could argue that in a free market, energy only flows to where value is being created, so it is never going to be wasted. But I also think that even as we’re firing more energy into these models and getting more information out,
[00:48:35] Seb Bunney: we’re still kneecapped, not by the models or the amount of energy; we’re kneecapped by ourselves. There’s the speed of discovery, and then there’s the speed of verification of the information coming out of these models. And I think AI is accelerating the creation of ideas, research pathways, code, and scientific claims
[00:48:55] Seb Bunney: at a pace that, as humans, we just cannot match. Verification is slow, meticulous work: checking all the assumptions, validating the experiments, actually reviewing all the code. And that still happens at human speed. So when you speak to a lot of coders, they’re saying, awesome, it’s great that, say, you’re a bank and you’ve just spit out all of this code to create a whole new system,
[00:49:16] Seb Bunney: but we’ve now got to go through and read all of that code and make sure it actually does what it says it’s meant to do. And so I think it brings up a couple of questions, and I’m curious on your thoughts: what happens when the rate of ideation just massively outpaces the rate of validation? Does our progress almost stall a little bit because of this backlog of amazing ideas, where we don’t quite know which avenues to go down because, as humans, we just can’t keep up with how much information is coming at us?
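Seb’s question has a simple queueing shape to it: if ideas arrive faster than they can be verified, the unverified pile grows without bound, and real progress is capped by the verification rate, not the ideation rate. Here is a toy model with made-up rates, purely to illustrate the dynamic:

```python
# Toy model of ideation outpacing verification: a queue where ideas
# arrive faster than humans can validate them. Illustrative numbers only.

ideation_per_day = 100      # AI-generated ideas/claims per day (assumed)
verification_per_day = 10   # ideas humans can rigorously check per day (assumed)

backlog = 0
for day in range(1, 366):
    backlog += ideation_per_day - verification_per_day
print(f"Unverified backlog after a year: {backlog:,} ideas")
# -> 32,850 ideas: verified progress is capped at 10 per day no matter
#    how fast ideation runs, so the queue just keeps growing.
```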
[00:49:44] Preston Pysh: Well, to this point, I have a stat for you. A Google search prior to AI, well, even today, if it’s not using AI, uses 0.3 watt-hours of energy. But if you take Gemini or ChatGPT, any of these large language models, and you put in a query, it’s three to five watt-hours: roughly a 15x increase in what we would refer to as a click.
[00:50:11] Preston Pysh: So historically, you know, in 2010, if you went and did a Google search, you were consuming 15 times less energy than you are by going into ChatGPT, typing in your question, and hitting enter. Now, the response you’re getting back is, I would say, on the order of 15 times better.
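A quick sanity check on those figures as quoted on the show (the 0.3 and 3-to-5 watt-hour numbers are taken at face value here, not independently verified):

```python
# Back-of-the-envelope check of the energy figures quoted on the show.

SEARCH_WH = 0.3                      # watt-hours per traditional search
LLM_WH_LOW, LLM_WH_HIGH = 3.0, 5.0   # watt-hours per LLM query

print(f"Multiplier range: {LLM_WH_LOW / SEARCH_WH:.0f}x to "
      f"{LLM_WH_HIGH / SEARCH_WH:.0f}x")
# -> Multiplier range: 10x to 17x, so "15x" sits toward the top of the range.

# Scaled up: a hypothetical billion queries a day at the 4 Wh midpoint.
daily_queries = 1_000_000_000
extra_mwh = daily_queries * (4.0 - SEARCH_WH) / 1_000_000
print(f"Extra energy per day: {extra_mwh:,.0f} MWh")
# -> Extra energy per day: 3,700 MWh
```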
[00:50:31] Preston Pysh: But what it doesn’t speak to is what people are asking. We were taught in school there are no bad questions, right? There are no bad questions. But what if people are asking really dumb questions, things that don’t require that much comprehension to get a simple response? And I think where we’re at now is that the default is you’re not going to Google.
[00:50:54] Preston Pysh: And I don’t go to Google for nearly anything. I always go to one of these AIs, whether it’s Grok, now Gemini, or ChatGPT. That’s the first place I go if I want to find something out. I don’t go to Google anymore. I’m curious if you go to Google anymore.
[00:51:10] Seb Bunney: Almost never. And to be honest, even when I do go to Google, most of the time the answer I’m looking for is given in the AI summary at the top anyway.
[00:51:19] Seb Bunney: So naturally, the results we’re looking for, AI is bringing that information to us these days, as opposed to us having to go and scan tons of pages. But this I do think is really interesting: it may provide all of the links, so we may be getting transparency.
[00:51:36] Seb Bunney: It gives an amazing output with all of this hyperlinked text, which says, this is the answer to the question you’re looking for. But I think that sometimes transparency isn’t necessarily trust, and we can put so much trust into these models.
[00:51:49] Preston Pysh: Oh yeah.
[00:51:49] Seb Bunney: Even when it is giving us a completely false story or a bit of a facade.
[00:51:54] Seb Bunney: And so it comes back to this question: of course these are improving, but how much trust are we putting in these models, just expecting and getting used to, oh, that output’s pretty good, I’m just going to use that output?
[00:52:06] Preston Pysh: The kids in college or high school or whatever, they’re using it to write their reports.
[00:52:12] Preston Pysh: And I don’t put it past the professors that they’re then taking the reports and running them through AI to provide the feedback. And so you have the AIs writing the reports and giving the feedback, and the humans are just kind of the paper pushers.
[00:52:29] Seb Bunney: You’ve probably seen it. There’s a meme of a woman at her desk sending an email to her boss. She goes and types into AI and gets this amazingly worded email that explains her opinions and this and that.
[00:52:42] Seb Bunney: And then she sends it, feeling accomplished. And then you see the other side of it, where the boss receives the email, puts it into AI, asks what key points she’s trying to highlight, and it condenses 3,000 words down to three sentences. And so everyone is fluffing everything up, and then everyone’s taking that fluff and decompressing it again, and you’re just like, what is happening?
[00:53:04] Preston Pysh: The AI slop. I keep hearing about AI slop, and it’s real. It’s real. The AI slop is real. Okay, nuclear. I want to just highlight this real fast. So after the comment from Jensen on, you know, China potentially beating the US to AGI because of the energy infrastructure, this article came out, I want to say on the same day.
[00:53:28] Preston Pysh: This article’s from the 19th of November of this year, 2025, from Bloomberg: US to own nuclear reactors stemming from Japan’s $550 billion pledge. Check this out, Seb. As I’m scrolling down to the key takeaways by Bloomberg AI, you don’t even have to read all of it, which is probably AI slop beneath this; you can read the AI summary, and it says: the US government plans to buy and own as many as 10 new, large nuclear reactors that could be paid for using Japan’s $550 billion funding pledge.
[00:54:01] Preston Pysh: The funding pledge is part of a push to meet surging demand for electricity, including for energy-hungry data centers that power artificial intelligence. The Trump administration has set a target to get 10 large conventional reactors under construction by 2030. So it seems like the US understands the limitation, which is energy infrastructure.
[00:54:21] Preston Pysh: It seems like it’s trying to do things from a policy standpoint to reinvigorate some of these plants. I saw that Three Mile Island, they’re going to bring that back online. And, you know, I think this is the thing I’m really wanting to talk about: the years and years of ESG, of energy-equals-bad, are over. It seems like this whole climate-change thing, that energy is bad.
[00:54:48] Preston Pysh: That if you consume any sort of energy, it’s bad. All of those talking points are just going by the wayside, because the key players and the string-pullers of the world have figured out that if they’re going to win this next race, the race of intelligence, it requires more energy, not less energy. And that old narrative just seems to be dead on the vine.
[00:55:06] Preston Pysh: What are your thoughts, Seb?
[00:55:08] Seb Bunney: I could not agree more. I just think we have this society that seems to have this idea that consuming energy, as you’re saying, is bad, when in reality, life consumes energy. And if you look at any chart out there, there is something like a 99% correlation between GDP per capita and energy consumption.
[00:55:27] Seb Bunney: There are no low-energy-consuming, high-GDP countries; they just don’t exist. And so I think that life naturally requires energy. However, there is a discussion to be had there, because there’s a difference between consuming energy and environmental destruction. There are obviously ways in which you can decimate the environment, whether it’s a lot of these lithium mines and whatnot, trying to obtain heavy metals, or even just some of the various fossil fuel approaches.
[00:55:52] Seb Bunney: And I don’t necessarily want to have an opinion on that, but I think it’s really interesting seeing the nuclear narrative starting to shift, because it’s unbelievably important to me. I read a book a few years ago called Atomic Awakening, and it dove into the world of nuclear energy.
[00:56:10] Seb Bunney: And one of the stats that stood out to me, I just went and found the information again, is that we tend to think nuclear is unbelievably dangerous, that the reason we don’t use it is because it’s killed so many people throughout history. And that could not be further from the truth.
[00:56:27] Seb Bunney: And I think that’s because we see things like Chernobyl and Fukushima and we hear about radiation poisoning. In reality, one of the stats the book looks at is deaths per terawatt-hour of energy produced. For coal, there are around 25 deaths per terawatt-hour, because of, obviously, the pollution in the air and the people actually working in the mines and plants.
[00:56:45] Seb Bunney: The oil industry is around 18 deaths per terawatt-hour. The gas industry is 3 deaths per terawatt-hour, and hydropower is 1.3 deaths per terawatt-hour. Nuclear is 0.03 deaths per terawatt-hour. We are talking about a minuscule amount in comparison to every other type of energy source.
[00:57:07] Seb Bunney: And so I think it’s awesome to see the narrative shifting. I think the biggest thing now is seeing the policy and the legal side of things shift, because nuclear has been kneecapped by all of the legislation that has been rammed through the legislative system.
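For reference, here are the figures Seb cites laid out side by side; they closely track widely published estimates of deaths per terawatt-hour, though the exact values should be treated as approximate:

```python
# Deaths per terawatt-hour of energy produced, as cited in the
# conversation (approximate figures).

deaths_per_twh = {
    "coal": 25.0,
    "oil": 18.0,
    "gas": 3.0,
    "hydro": 1.3,
    "nuclear": 0.03,
}

# Print each source with its multiple of nuclear's rate, safest first.
for source, rate in sorted(deaths_per_twh.items(), key=lambda kv: kv[1]):
    print(f"{source:8s} {rate:6.2f} deaths/TWh "
          f"({rate / deaths_per_twh['nuclear']:.0f}x nuclear)")
# By these numbers, coal comes out roughly 800x deadlier than nuclear
# per unit of energy produced.
```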
[00:57:22] Preston Pysh: Nothing more to add. Can’t agree more. Did you have a final topic that you want to discuss, Seb?
[00:57:29] Seb Bunney: You know what, I have a couple more topics, but we can always leave those for another time. There is one topic I’m curious to hear your thoughts on, and it goes back to AI again: it’s this idea of wisdom and diversity of thought.
[00:57:46] Seb Bunney: In my mind, wisdom has never really come from everyone thinking the same way; it emerges from contrast. From hearing radically different positions, holding them together, and discovering new insights in the space between those various positions. And throughout history, we’ve seen all of these breakthroughs in the various sciences and whatnot
[00:58:06] Seb Bunney: come from the fringe. It is not consensus, it is not from the center, but from all of these various individuals who thought outside the box and noticed something that others had overlooked. And I think what is interesting is that AI is different, in that we’re feeding all of these models the same information.
[00:58:26] Seb Bunney: And on top of that, AI, from the way that I understand it, is built on weights, and the lower the weight, even if the idea is brilliant, the less likely the model is to reproduce it or talk about it in its output. And so if children are growing up learning from these centralized models, I think they’re also inheriting the same baseline worldview, instead of learning from tens of thousands of unique teachers, all with unique life experiences and different intellectual starting points, sharing that with their students. I think that’s what creates wisdom and curiosity, as opposed to the uniformity of all these kids learning from the exact same models.
[00:59:07] Seb Bunney: So I’m curious: if we fast-forward 10, 20, 30 years, and these kids are going to be taught by AI, but they’re all being fed the same information, what happens to innovation? What happens to wisdom and knowledge? I’m curious to hear your thoughts on this.
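One way to make Seb’s point concrete: language models sample outputs roughly in proportion to learned probabilities, so a low-weight “fringe” idea almost never surfaces, even across thousands of queries. The following is a deliberately simplified toy model, not a description of how any particular model decodes:

```python
import math
import random

# Toy softmax sampling over three candidate "ideas" with different
# learned logits. A deliberate simplification of LLM decoding.

ideas = {"consensus view": 4.0, "mainstream variant": 2.5, "fringe insight": 0.0}

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax(list(ideas.values()))
counts = {name: 0 for name in ideas}
random.seed(0)
for _ in range(10_000):
    # random.choices samples in proportion to the weights.
    pick = random.choices(list(ideas), weights=probs)[0]
    counts[pick] += 1

print(counts)
# With these logits the fringe idea appears in only about 1.5% of samples,
# so ten thousand students asking the same model mostly hear the same answer.
```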
[00:59:22] Preston Pysh: My conversation with my wife on anything AI almost always comes back to this discussion point that you’re bringing up. It really comes down to: are we training the AI, or is the AI starting to train us? And then the question is, what would it be trying to train you on, if it was trying to train you? I think the answer is that it wants more novel insights into what it doesn’t know.
[00:59:47] Preston Pysh: It’s going to try to lead you into those domains, which is scary, that it would be leading you that way. But in more general terms, I just think the challenge you’re really facing is the one we brought up before, where everybody’s using AI to write their papers or do their research, and then they’re handing that in, and it’s just a bunch of AI slop that’s replacing deep thought.
[01:00:11] Preston Pysh: And I think the other concern, Seb, is that it becomes harder and harder to compete, or to stand out, or to provide novel insights, because the competition is so fierce with anybody armed with AI. I don’t know what this does from a human motivation standpoint. I think you’re going to have a lot of people who are just like, it’s not even worth my time or effort to try, because somebody armed with AI is just going to kick my butt, or I just can’t stand out.
[01:00:44] Preston Pysh: Or if I can stand out, it’s only going to last for three days before somebody else in the market comes with more competition and erodes away whatever competitive advantage I had. Where I would push back is, there are some industries out there where you can still provide value if you’re servicing human beings. Where it seems crazy competitive is in services, soft services, digital services. But providing a service from a physical standpoint, like, for example, if you want your yard mowed, if you want work done around the house, if you want your plumbing fixed, a lot of these skills that people in the United States have really veered away from, looking at them and saying, oh, that’s not going to pay me a lot, so I’m not going to go work in those industries.
[01:01:44] Preston Pysh: I think that is ripe for disruption, and an opportunity for a lot of people to actually make quite a bit of money, especially if they do it really well with high-quality work. But it involves physical labor. It involves people getting out in the physical space and doing things, not sitting behind a computer and clacking on keys.
[01:02:06] Preston Pysh: I would love to hear from the audience on this; if you’re listening and you’ve got comments on this particular topic, I would love to hear what you’ve got. But sorry, Seb, to interrupt you. I want to hear what you have to say too.
[01:02:16] Seb Bunney: No, and you make a really interesting point, and I’m curious again to hear your reflection on this. I’ve spoken to many individuals through the Bitcoin space who have come from traditional finance: they used to work in consulting, in the banking sector, for CPAs, and in various other financial industries.
[01:02:36] Seb Bunney: And what I find really interesting is that they’re actually stepping back from that sector, because the white-collar worker, the knowledge worker, is being completely disrupted by AI. They’re stepping back and asking, okay, where can I direct my time and energy into something that’s not going to be replaced immediately or in the foreseeable future?
[01:02:55] Seb Bunney: And one of my good friends in the Bitcoin space, I speak to him biweekly, he’s saying, you know what, I’m looking to buy a painting company with a whole bunch of painters. I’m looking to buy a storage company. I’m looking to buy things that we are not going to see overtaken anytime soon.
[01:03:11] Seb Bunney: And so if you have a handful of painters or a handful of plumbers, if you’ve got a trade company, I think those companies can provide a reasonable lifestyle. You don’t need to be worth 50 million or a hundred million. It’s: do you want to be able to show up for your family? Do you want to be able to afford a house and live comfortably?
[01:03:28] Seb Bunney: I think sometimes the financial world and social media say we need more, when in reality, I think you can live a relatively comfortable life on a decent income of low-to-mid six figures through one of these more manual, physical trades.
[01:03:44] Preston Pysh: Yeah, I mean, the counterargument that somebody from tech is going to immediately bring up is humanoid robots, which we didn’t even discuss during the show.
[01:03:52] Preston Pysh: But at this moment in time, in 2025, any humanoid robot video that I’ve watched, it goes over and it’s, like, emptying a dishwasher, and it literally takes five minutes to put a spatula in the dishwasher, and then it fumbles all over the place. So that could change very quickly. But if I’m going to hire somebody to do something around the house, or whatever it might be,
[01:04:14] Preston Pysh: I’m going to a human and not a humanoid robot. At least not anytime soon.
[01:04:18] Seb Bunney: And, you know, I think that naturally we’ve got this world where there’s a lack of connection; people want to interact with people. And so I’m noticing, and I think it’s an awesome swing, I’m noticing something about companies today.
[01:04:33] Seb Bunney: The majority of them, you cannot speak to someone on the phone. You’re getting an AI bot through the chat. But the companies that do say, hey, you know what, here’s our number, give us a call and you’re actually going to get a person, they’re starting to see a lot of success. And so it’s really cool to recognize that with technology, the pendulum always swings.
[01:04:47] Seb Bunney: And I think we’ve swung to this point where we’ve almost replaced ourselves in many ways, or tried to, but we’re recognizing that, first, AI and a lot of these technologies are not people, and people know that they’re not people. And secondly, we’re missing that connection. So I’m curious to see, over the next few years, does that pendulum continue to swing back a little more towards center, where people are recognizing the importance of physical connection, spending time with friends, actually having a number to call to talk to someone to deal with any issues.
[01:05:15] Preston Pysh: Okay, I’ve got one final surprise before we wrap this up. While we were recording today, I took a screen grab of us having our conversation, and I took it to our Banana Rama, whatever the heck it’s called, the pro Gemini model.
[01:05:33] Seb Bunney: Nano Banana Pro.
[01:05:33] Preston Pysh: Nano Banana. Thank you, sir. And I asked it: what would these two podcasters look like if there was a camera behind them that took a picture while they were having the conversation?
[01:05:47] Preston Pysh: Okay, now you’re going to see it. The screen grab that I got is probably one of the most flattering pictures of Seb that you will ever see. Seb, this is such a bad picture. Check this out. Okay, so here you are, mid-blink and looking up, and I’m just stone-cold staring at the camera.
[01:06:11] Preston Pysh: And it’s just the video feed of him and I having the call. You ready to see what it thinks a picture taken from behind us looks like?
[01:06:25] Preston Pysh: Okay, for the person that can’t see this: it’s not bad. There’s a lot right with this picture. Seb, it did not reverse your room; like, your room is there. It shows you looking at a computer, and the back of your head and all that looks
[01:06:47] Preston Pysh: pretty normal. But you’re talking to yourself and not me. Oh, this is interesting. Look at your background. Your background is my background.
[01:06:57] Seb Bunney: And have you seen that it’s also given me your headphones, but not in the…
[01:07:01] Preston Pysh: Oh, that’s right. Yeah, look at that. That’s wild. And then my picture is really jacked, because the microphone is literally behind me.
[01:07:13] Preston Pysh: And then I’m talking to you, which is correct, and it’s the image of you looking forward. Okay, so that all looks correct. It’s pretty close. So, not bad, but there’s a couple of hiccups. Now, if I went in there and pointed these things out, I think it would actually get it all correct if I went back and forth with it.
[01:07:34] Preston Pysh: I mean, obviously I didn’t have time to do anything other than quickly put the prompt in there, and that was the first go-around coming back to me. So, pretty wild. Not quite right, but it’s coming along very fast.
[01:07:46] Seb Bunney: Interesting. Similar to your watch thing, it had my monitor. It has my exact monitor. Get the heck out of here.
[01:07:52] Seb Bunney: No, a hundred percent really? Hold on. Pull this back up. Stacked monitor. And that’s why I’m just like, what? I didn’t know what my monitor looked like.
[01:07:58] Preston Pysh: Get the heck out of here. That’s the monitor you have? Dude, that’s weird. That is definitely not my monitor, though. In fact, I have three screens here in front of me.
[01:08:09] Preston Pysh: In fact, I get comments online: why is he looking off to the side? Well, I’m looking over at my second or third monitor to pull up all the things on the fly during the show. So yeah, my monitor’s way off. It looks like my monitor’s on the floor, too.
[01:08:24] Seb Bunney: You’re really stacking sats. You don’t have a chair.
[01:08:26] Preston Pysh: Oh yeah, that’s right. It did get that correct. Yeah, Pierre Rochard. AI knows I don’t have a chair. I’m sitting in the chair.
[01:08:35] Seb Bunney: It’s a little ways off, but it’ll get there.
[01:08:38] Preston Pysh: Wow. Seb, I love this. This was so much fun. I enjoy this format, but maybe the audience doesn’t like this format.
[01:08:47] Preston Pysh: If you like this format, please tell us in the comments, you know, if you’re on X. Let us know, because if you like it, we want to keep doing these types of episodes. And Seb, thank you so much for your comments and what you brought to the show today.
[01:09:03] Preston Pysh: Seb, and thank you so much for joining us today on the show. But Seb, give people a handoff where they can learn more about you.
[01:09:09] Seb Bunney: Absolutely. And I would start by saying, if you enjoyed this kind of discussion, feel free to post a comment with anything you think is happening in the world that’s interesting, and next time we record in this style, we’d love to bring it up. Because I think sometimes there’s so much stuff happening that a lot of it slips between the cracks, and the world is a fascinating place, with incredible things that people are working on.
[01:09:33] Seb Bunney: You can just find me, Seb Bunney, on Twitter. I still kind of go by Twitter; X just doesn’t resonate with me. You can also find me at sebbunney.com, and my book is The Hidden Cost of Money. And yeah, I really appreciate you guys listening, and thanks for having me on, Preston.
[01:09:48] Preston Pysh: Alright everybody, thanks for joining us and until next time.
[01:09:52] Outro: Thank you for listening to TIP. Make sure to follow Infinite Tech on your favorite podcast app and never miss out on our episodes. To access our show notes and courses, go to theinvestorspodcast.com.
[01:10:10] Outro: This show is for entertainment purposes only. Before making any decisions, consult a professional. This show is copyrighted by The Investor’s Podcast Network. Written permission must be granted before syndication or rebroadcasting.
HELP US OUT!
Help us reach new listeners by leaving us a rating and review on Spotify! It takes less than 30 seconds, and really helps our show grow, which allows us to bring on even better guests for you all! Thank you – we really appreciate it!
BOOKS AND RESOURCES
- Seb’s book: The Hidden Cost of Money.
- X Account: Seb Bunney.
- Related books mentioned in the podcast.
- Ad-free episodes on our Premium Feed.
NEW TO THE SHOW?
- Join the exclusive TIP Mastermind Community to engage in meaningful stock investing discussions with Stig, Clay, Kyle, and the other community members.
- Follow our official social media accounts: X (Twitter) | LinkedIn | Instagram | Facebook | TikTok.
- Check out our Bitcoin Fundamentals Starter Packs.
- Browse through all our episodes (complete with transcripts) here.
- Try our tool for picking stock winners and managing our portfolios: TIP Finance Tool.
- Enjoy exclusive perks from our favorite Apps and Services.
- Get smarter about valuing businesses in just a few minutes each week through our newsletter, The Intrinsic Value.
- Learn how to better start, manage, and grow your business with the best business podcasts.