TIP348: WILL ARTIFICIAL INTELLIGENCE TAKE OVER THE WORLD?

W/ CADE METZ

8 May 2021

In 2020, there were six companies that made up 25% of the S&P 500 and you know which ones they are: Facebook, Amazon, Apple, Netflix, Google, and Microsoft. The common denominator driving the growth for all of these companies is Artificial Intelligence. Google’s CEO, Sundar Pichai has described the development of AI as “more profound than fire or electricity,” but it is still very misunderstood. Everyone has a sci-fi image that appears in their head when they hear about AI, but what actually is it? Where did it come from? How fast is it growing? To answer these questions, Trey Lockerbie sits down with New York Times writer and author Cade Metz, to discuss his new book, Genius Makers, which lays out the story of how AI came to be.

Subscribe through iTunes
Subscribe through Castbox
Subscribe through Spotify
Subscribe through YouTube

IN THIS EPISODE, YOU’LL LEARN:

  • The history of AI
  • How it works and why it is used
  • The challenges for AI ahead
  • Some ethical dilemmas surrounding it and most importantly;
  • Which companies will benefit from it the most

TRANSCRIPT

Disclaimer: The transcript that follows has been generated using artificial intelligence. We strive to be as accurate as possible, but minor errors and slightly off timestamps may be present due to platform differences.

Trey Lockerbie (00:01):
In 2020, there were six companies that made up 25% of the S&P 500, and you know which ones they are: Facebook, Amazon, Apple, Netflix, Google and Microsoft. The common denominator driving the growth for all of these companies is artificial intelligence. Google's CEO, Sundar Pichai, once described the development of AI as more profound than fire or electricity, but it is still very much misunderstood.

Trey Lockerbie (00:29):
I’m sure everyone has a sci-fi image that appears in their head when they hear about AI. But what actually is it? Where did it come from? How fast is it growing? To answer these questions, I sit down with New York Times writer and author, Cade Metz to describe his new book, Genius Makers, which lays out the story of how AI came to be.

Trey Lockerbie (00:50):
In this episode, we cover the history of AI, how it works and why it is used, the challenges ahead. Some ethical dilemmas surrounding it, and most importantly, which companies will benefit from it the most. I learned a ton from Cade’s new book and this discussion, so I hope you enjoy it as much as I did. Without further ado, let’s learn about AI with Cade Metz.

Intro (01:17):
You are listening to The Investor’s Podcast, where we study the financial markets and read the books that influence self-made billionaires the most. We keep you informed and prepared for the unexpected.

Trey Lockerbie (01:37):
All right, everybody, I am sitting here with Cade Metz, the author of the new book, Genius Makers, as well as other publications. Cade, I’m so happy to have you on this show, because artificial intelligence is a really fascinating topic, and I think it’s kind of underserved, and it is the driving force behind so many amazing companies that we talk about all the time. I’m happy to dig in on it a little bit more with you. But my first question, I guess, for you is what drove you to write this book about artificial intelligence?

Cade Metz (02:09):
It first started, actually, when I came back from Seoul, South Korea in 2016. I'd flown to Seoul to see this event, which I ended up writing about for my employer at the time, Wired magazine. This lab in London, DeepMind, had built a system to play the ancient game of Go, which is like the Eastern version of chess, except it's exponentially more complicated.

Cade Metz (02:34):
Most people in the AI field, as well as the world’s top Go players thought that a machine that could beat the top players at the game was still decades away. But at the Four Seasons Hotel in downtown Seoul, this machine built by DeepMind, ended up winning this match, beating Lee Sedol, who was the best Go player of the past 10 years.

Cade Metz (02:58):
There was this incredible moment when I was watching the creators of this system, the researchers at DeepMind, watch this machine that they had created play this match. Even they were surprised by how it was performing and what it was doing. It was playing the game at a level that they never could, and that was a fascinating phenomenon.

Cade Metz (03:23):
When I came back, I resolved to write a book about the people building this type of technology, including Demis Hassabis, who was the leader of the DeepMind lab. But as I dug into the book, I found so many other fascinating characters whose stories wove in and out of Demis’s, and I found there was a great narrative story to tell about the creation of what we call AI. In using that narrative, and those characters, I could build on top of that, and really explore all the big ideas that you alluded to, all the ways that this technology is changing and how it will change our world.

Trey Lockerbie (04:04):
Well, there are a lot of characters in this book, right, and one in particular I'd never heard of before, and that was Geoff Hinton. My goodness, what a career Geoff Hinton has had. It was just mind-boggling to hear about his journey. Why don't you talk to us a little bit about Geoff and his importance to this evolution?

Read More

Cade Metz (04:23):
What I found as I built the book, and this wasn't necessarily what I expected as I set out to write it, was that Geoff Hinton was the central character in this story. It was inevitable that the book begin with him and end with him, in a way. It's really a 50-year journey of this one person, and he's fascinating in so many different and unexpected ways.

Cade Metz (04:44):
You learn in the first sentence of the book that he literally does not sit down. When he was a teenager, he was lifting a space heater for his mother in England, where he grew up, and he slipped a disc. By his late 50s, his disc would slip so often that it could lay him up for weeks and often months at a time. He literally got to the point where he realized that he could no longer sit down. What this means is, he can't drive, and he can't fly in an airplane, because the commercial airlines make him sit during takeoff and landing. He is someone who has faced these enormous personal obstacles as he's been trying to realize this one idea that drives so much of the AI that we experience today, and will experience in the future.

Cade Metz (05:30):
As he's trying to realize this idea, he's facing these very personal obstacles, and it became a great metaphor for this 50-year-long effort to take this one idea and realize it. That became the thrust of the book.

Trey Lockerbie (05:47):
Well, you mentioned this 50-year journey, and I'm not sure people are actually familiar with how long this has been going on and how long it's taken to get us to where we are today. You talk a lot in the book about the founding fathers of AI, going all the way back to 1956, when they predicted a machine would be intelligent enough to beat a world chess champion, or prove its own mathematical theorem, within a decade, and of course, it took much, much longer than that. Maybe give us a quick synopsis of the timeline from the Mark I Perceptron to AlphaGo.

Cade Metz (06:20):
That one idea that I talked about is called a neural network, and the easiest way to understand this idea is that a neural network is a mathematical system that can learn a skill by analyzing data. If you've got thousands of cat photos and you feed them into a neural network, it analyzes those photos and it learns to recognize a cat. It identifies the patterns that define what a cat looks like.

Cade Metz (06:46):
That's an idea, as you alluded to, that dates back to the '50s. In the late '50s and early '60s, a guy named Frank Rosenblatt, a professor at Cornell University who worked at a lab in Buffalo, New York, built what he called the Mark I Perceptron, and it was an early version of a neural network. It worked in very simple ways. Basically, if you gave it large printed letters, like a printed letter A, or a B, or a C, if you gave it many examples of those letters, it could learn to recognize them, which is an impressive task for a machine, particularly in the early 1960s. But it couldn't do much more than that.
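The learning rule Metz describes can be sketched in a few lines of Python. This is an illustration only, not the Mark I's actual hardware or data: the two 3x3 "printed letters" and the weight-update rule are a minimal textbook perceptron, assuming made-up pixel grids.

```python
# A tiny perceptron, in the spirit of Rosenblatt's Mark I: one weight per
# "pixel", nudged whenever the machine misclassifies an example.
# The two 3x3 letter grids below are invented for illustration.

T = [1, 1, 1,
     0, 1, 0,
     0, 1, 0]   # a crude printed "T"
L = [1, 0, 0,
     1, 0, 0,
     1, 1, 1]   # a crude printed "L"

def predict(w, b, x):
    """Fire (+1) if the weighted pixel sum crosses the threshold, else -1."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

def train(samples, epochs=20):
    """Perceptron learning rule: on each mistake, shift the weights toward
    the correct answer (add the input for +1 targets, subtract for -1)."""
    w, b = [0.0] * len(samples[0][0]), 0.0
    for _ in range(epochs):
        for x, target in samples:
            if predict(w, b, x) != target:
                w = [wi + target * xi for wi, xi in zip(w, x)]
                b += target
    return w, b

w, b = train([(T, 1), (L, -1)])
```

After training, `predict(w, b, T)` returns +1 and `predict(w, b, L)` returns -1. As the conversation goes on to note, this simple rule breaks down on inputs that vary too much from the training examples, which is exactly the flaw Minsky pinpointed.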

Cade Metz (07:26):
As he hyped the field, including in the pages of my current publication, The New York Times, there was this groundswell of belief that this system would do all sorts of other things, but it didn't quite pan out. By the late '60s and early '70s, the idea was practically dead. What was so fascinating to me is that at that moment, when this idea was at its lowest point, that's when Geoff Hinton embraced it.

Cade Metz (07:53):
He was a graduate student at the University of Edinburgh in 1971, and that’s when he took hold of this idea and never let go, even as the idea ebbed and flowed in the estimation of his colleagues, at times, his advisor, or the people who were working right alongside him. Even as their extreme skepticism was standing right in front of him, he continued to work on this idea.

Cade Metz (08:22):
That's the kernel of any great story: someone who believes in something, even in the face of that type of skepticism. What you see is him eventually being vindicated. About 10 years ago, in 2010, that single idea starts to grow.

Trey Lockerbie (08:38):
Let's touch on why it died in those early days. It seems to go hand in hand with Marvin Minsky and Seymour Papert, who published this book called Perceptrons, which basically hindered the belief that neural networks had any merit. It just became this belief that it would never work, which perpetuated throughout the whole community. I'm just curious where you think we might be today if they had never published that book.

Cade Metz (09:06):
It's such an interesting story, right? The neural network, as built by Frank Rosenblatt, did have a flaw. It was good at recognizing those printed letters, but it couldn't recognize handwritten letters. If there was any sort of variation in how the letter was put together, it didn't work. It certainly couldn't recognize a cat photo, and it couldn't recognize the spoken word, or do all the other extravagant things that Rosenblatt had promised.

Cade Metz (09:36):
It had a mathematical flaw, and that's what Minsky pinpointed. But what ended up happening, you're right, is that because of that book, people quit working on the idea. There was still hope that that flaw could be patched, and that's what Geoff Hinton ended up doing. What you might have had was more people working on it, and maybe finding the solution to that problem quicker.

Cade Metz (10:04):
But what's interesting is that Geoff did not quit, among others; there's a handful of others who continued to work on this idea. In Geoff's lab at the University of Toronto, where he ended up, his students liked to say that the theme was that old ideas are new. What that meant was that until an idea had been completely disproven, you kept working on it until you found a solution, and that's what ended up happening with the neural network.

Cade Metz (10:31):
In the mid-'80s, Geoff, along with a couple of other researchers, found the solution to that flaw and gave the neural network that missing mathematical piece. What they described in the '80s, with that piece in place, is pretty much what we have today, and it's driving all sorts of things in our daily lives when it comes to recognizing objects in photos, a technology, by the way, that can also be applied to self-driving cars. That's how self-driving cars see the world around them, how they recognize pedestrians, and street signs, and the like. It's what Siri uses on your iPhone. It's how Siri recognizes the words that you say, the commands that you speak, when you're asking it for something. The list goes on.

Trey Lockerbie (11:15):
I’m curious, part of it is that mathematical revolution, but how much of it was also just processing power and just waiting for the processing power almost to catch up to what is needed today to run an AI algorithm?

Cade Metz (11:27):
That's exactly what was needed. By the mid-'80s, you had the math in place, in part because of the work of Geoff Hinton, and a neural network could do some interesting things in those days, but it couldn't reach the levels that we have today, because it lacked two things. You needed the data, enormous amounts of data, to train these systems; you needed the photos and the sounds and the text. And then you needed the computer processing power to crunch all that data, to analyze all that data.

Cade Metz (11:58):
By 2010, we had both. The internet gave us the data; that's what gave us all the photos and the sounds needed to train this stuff. Then Moore's Law, as they call it, had progressed to the point, and we can talk about this at length later, perhaps, where we had the chips we needed to process all that data and pinpoint those patterns that can recognize spoken words or identify faces and other objects.

Trey Lockerbie (12:29):
You highlighted self-driving cars just now, and I wanted to talk about that, because in your book, one thing I learned is that self-driving cars have actually been around since 1989, when Dean Pomerleau built ALVINN, A-L-V-I-N-N, which used these neural networks. How have self-driving cars evolved since the late '80s?

Cade Metz (12:50):
Well, that's such a great example of where the neural network started to work and show that sort of promise. Basically, what Dean Pomerleau and his fellow graduate students at Carnegie Mellon University did is they built a truck with a giant camera on top of it, and it moved very slowly. But what it would do is capture images of the world around it. Once you had all those images, you could feed them into a neural network, along with the way human drivers would respond to what was around the car.

Cade Metz (13:24):
As the car is seeing this particular scene, the driver is behaving a certain way, when it comes to turning the wheel or pressing the gas. All that gets fed into a neural network, and essentially, the neural network learns to drive the car.

Cade Metz (13:41):
Now, there are real limitations there; the car would move very, very slowly when it was driving on its own, and it couldn't do much more than navigate a highway, a relatively straight shot. But they could drive this car across Pennsylvania in this way. What was needed, again, was far greater amounts of data to train that car, and the processing power, and they had neither of those things. But you can see the seed of this idea working.
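The training scheme described here, recording what a human driver does and fitting a model to imitate it, can be sketched in miniature. In this hedged example, the "camera" is reduced to a single invented feature (the car's lateral offset from the lane center), and a one-variable least-squares fit stands in for ALVINN's small neural network over camera pixels; the numbers are made up.

```python
# Behavior cloning in miniature: fit steering = a * offset + b from logged
# human demonstrations. ALVINN used a small neural network over camera
# images; this one-feature line fit is only a stand-in for the same idea.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Logged demonstrations: (lateral offset in meters, human steering angle).
# This driver steers back toward the lane center, twice as hard as the offset.
offsets  = [-0.4, -0.2, 0.0, 0.2, 0.4]
steering = [ 0.8,  0.4, 0.0, -0.4, -0.8]

a, b = fit_line(offsets, steering)
steer_for = lambda offset: a * offset + b   # the "cloned" driving policy
```

The fit recovers the demonstrator's rule (steer = -2 x offset), so `steer_for(0.3)` gives -0.6. The limitation the conversation raises applies here too: the cloned policy only knows situations that resemble the demonstrations.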

Cade Metz (14:12):
It’s really a lesson in often how long it can take to realize a technology. Just because something isn’t working at the moment, doesn’t mean it will never work. There’s a very, very long runway for a lot of these big technological ideas.

Trey Lockerbie (14:29):
I found it interesting that Tesla, for example, in their self-driving cars, are only relying on cameras, whereas Baidu and even Google are relying on lidar and radar, as well as these cameras. I'm just curious to hear your thoughts about Tesla's approach and moving fully away from these other technologies. Is that something inherent in their IP that should be noticed in their valuation?

Cade Metz (14:54):
It's a big difference from the way others are building their cars, and it illustrates this very thing we're talking about: they want to build a self-driving car like Dean Pomerleau's car in the '80s. Dean Pomerleau's car solely used a neural network to learn to drive. That's fundamentally the way it worked. They would gather the data, and you do this with human drivers behind the wheel. You feed that data into a neural network, and it learns to drive.

Cade Metz (15:20):
Elon Musk and Tesla want to do that with modern technology. Its cars are always driving the roads and gathering that data through their cameras. As you collect more and more data, you can feed more and more of it into this giant neural network, which can learn the behavior a car needs to really navigate the roads. Now, that's an enormous task; we're not to the point where you can do that. We don't have enough data, and we don't have systems that can learn every scenario a car is going to have to deal with, all the uncertainty and all the chaos on the road.

Cade Metz (15:57):
But that's their goal. You're right, that's different from the way self-driving cars work today, and what others are trying to do. What they do, in part, is use lidar to map the world. They give the car a map of where it's going to go. This is why these cars often have to roll out city by city: you have to map San Francisco first to help the car navigate.

Cade Metz (16:20):
Tesla wants to do away with that. They want to gather enough data and feed it into this giant neural network to learn everything, so that you can take that learning and deploy it anywhere in the world. That's an enormous task. But that's their goal, and it really shows you the two philosophies that are at work today. We'll see who's able to get there first.

Trey Lockerbie (16:44):
I once heard that Google had developed this thing called reCAPTCHA. When you're trying to log in somewhere, and it's trying to check your identity or that you have the correct password, it'll basically prove you're not a robot by asking you, hey, in this photograph, identify the stop sign, or identify the stoplights, and you click on the little squares and identify what you see.

Trey Lockerbie (17:06):
That is actually programming their AI to see these things out on the road. You’re actually helping, all these people signing into their accounts are helping the AI learn what those images are, and how to identify them in the real world.

Cade Metz (17:21):
That's exactly right, and it demonstrates, again, a neural network. It shows you how fundamental this idea is. In addition to the Tesla example we talked about, a neural network is also used for perception. Whether it's a Tesla car, or a Google car, or a car from Toyota, it's a way for a self-driving car to identify objects on the road.

Cade Metz (17:42):
A street sign, or a pedestrian. The way you train that system is you need thousands of examples of a stop sign, and you need to feed them into a neural network. But you have to label the data; as you say, you have to identify the stop sign for the neural network. That's what you're doing when you're signing into those services: you're saying, this is a stop sign. Once you do that, the system can learn the task on its own, as long as it has a sufficient number of stop signs that have been labeled.
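The labeling step can be made concrete with a toy supervised classifier. Every training example below pairs a feature vector with a human-provided label, which is exactly what a reCAPTCHA click supplies; the two-number "features" (red fraction, shape score) are invented stand-ins for what a real vision system would compute, and the nearest-neighbor rule is a deliberate simplification of a neural network.

```python
# Toy supervised classification from labeled examples. Each example is
# (feature_vector, human_label); the features are invented stand-ins for
# real image features. A 1-nearest-neighbor rule classifies new inputs.

labeled = [
    ([0.90, 0.80], "stop sign"),      # mostly red, octagon-like shape
    ([0.85, 0.90], "stop sign"),
    ([0.10, 0.20], "speed limit"),    # mostly white, rectangular shape
    ([0.15, 0.10], "speed limit"),
]

def classify(x):
    """Return the label of the closest labeled example (1-NN)."""
    def dist2(example):
        return sum((a - b) ** 2 for a, b in zip(example[0], x))
    return min(labeled, key=dist2)[1]
```

A new red, octagon-like input such as `[0.8, 0.85]` comes back as "stop sign". The point is the same one made above: without the human labels, the system has nothing to learn from.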

Trey Lockerbie (18:17):
Well, it highlights, perhaps, one of the challenges ahead for AI and its evolution, because clean data is very important; not all data is considered equal, I guess. I've read that 25% of a machine learning task is involved in just cleaning up the data. I'm curious how much that matters to you.

Trey Lockerbie (18:40):
For example, Tesla is collecting just raw data from the streets. Whereas something like Google’s Waymo is mainly doing simulated miles, and not necessarily having cars out there on the road. Does either one produce an advantage in your mind?

Cade Metz (18:57):
It's a mix of that. Google has cars on the road as well; they also do simulation. [inaudible 00:19:03] is going to use simulation as well. They're not to the point where they're just relying on real-world data. It's always a mix, and the mix can vary. It's about finding the right balance between those two things. You can do a lot in simulation. Cars can learn tasks in the same way, through a neural network, through simulated scenarios.

Cade Metz (19:25):
Basically, it's like a video game: you can create a city for a car to navigate, and you can train the car in that environment. But you're going to miss those edge cases that you might encounter in the real world; you're not going to have every situation defined in the simulation. That's why you have to do real-world driving as well. It's a balance of the two. Ideally, you would have a simulation that represented everything in the world, and you could feed that into a giant neural network and train it that way. That's the goal, but that is still a pipe dream; we're not to the point where you can simulate the universe and then have enough computing power to train your system using that simulation. We're by no means there, but that is the goal.

Trey Lockerbie (20:15):
Microsoft appears to have had challenges getting its AI department up and running the same way as some of these other companies. Of course, if you think about these neural networks, it's easy to understand how they apply to things like ad spend, being able to target people correctly, predict what they might buy, and serve this up in the ad space. Maybe that's where it wasn't so applicable to Microsoft. But I'm curious; they've had a very long journey with AI that has only recently, I think, gotten up to par with some of the other companies. They're sort of the tortoise in this race, I think. What do you think Microsoft's future looks like as it compares to some of the other players in the space?

Cade Metz (20:54):
Microsoft responded to the situation very differently than some of its rivals. It's amazing to me how these companies develop particular personalities. Because of those personalities, so to speak, they respond to what's happening in such different ways. A neural network, largely because of Geoff Hinton and some of his students, starts to work around 2010.

Cade Metz (21:18):
The area where it starts to work first is speech recognition, that Siri example we talked about, where you can speak a word into your phone and it can recognize it. That starts to work in a Microsoft lab outside of Seattle, where Geoff Hinton and two of his students, who had traveled from the University of Toronto, got this working.

Cade Metz (21:39):
That type of speech recognition is pervasive now; it would become enormously important to our daily lives. But Microsoft not only was slow to embrace that, it didn't really have a place to put it. Let's not forget that. What happens over the next couple of years is that Google deploys it on Android phones. Google had a platform that it could use to exploit this speech recognition technology. They could get it onto a phone that was already in the hands of millions of people, and they could start to use that technology.

Cade Metz (22:16):
Microsoft was behind in the smartphone race; they had already lost that race, in some ways. What that meant was they didn't have a place for that speech recognition technology. But what I will also add is that Microsoft was slow to realize that that same idea, a neural network, which was working so well in their lab with speech recognition, could also be used in all these other areas.

Cade Metz (22:39):
It looked, to many people at the moment, like it was only a speech recognition technology. They didn't realize it was also a way of recognizing objects in images, and faces in images, of driving the types of robotics we talked about, whether it's self-driving cars, or robots in a warehouse, or manufacturing robots. Now, it's starting to work with text, with chat bots, as they call them. You can train a neural network to carry on a conversation. There are so many other areas where this has started to work, and Microsoft wasn't alone in failing to realize that would happen.

Cade Metz (23:14):
Most of the tech industry, most of the AI field, didn't realize this would happen. It was such a weird idea at the time. There were only a handful of people who really believed in it over the decades. There was such skepticism that it was hard to break out of it, and even as the idea started to work in one area, and even two areas, the tech industry was slow to respond.

Cade Metz (23:39):
But the industry is now catching up, and Microsoft has caught up to this and other ideas. Still, it doesn't have some of the infrastructure that it needs to compete in some of these areas. Microsoft is not building a self-driving car, and they still don't have a smartphone. But they can compete in other ways.

Trey Lockerbie (24:00):
Am I right, reflecting on the book, that Andrew Yang, I think, was at Microsoft, wanting to build a self-driving car solely to develop the technology, not even to release the self-driving car, and then ultimately ended up at Baidu?

Cade Metz (24:13):
That is actually a guy named Qi Lu.

Trey Lockerbie (24:16):
Qi Lu.

Cade Metz (24:17):
A fascinating guy, and we can get into this amazing story of how he tries to change the direction of Microsoft. He was one of the top Microsoft executives and started to realize that this idea was working. You're right, one of the things he wanted to do was try to convince the company to build a self-driving car, not just to put that sort of car out on the market, but because it's a way of learning where the industry is going technologically, a way of seeing the new technologies that are coming to the fore.

Cade Metz (24:53):
His analogy was that Google had learned this through its search engine. There are so many technologies you need to make that work, and by the way, nowadays that includes a neural network. But the search engine was a way not only of serving a market for Google, but of learning so many of the other technologies that would become important in the years to come.

Cade Metz (25:16):
What he advocated was going all in on a self-driving car, if only for the future of the company in general. What also fascinates me about him, and this is a story that, as I heard it, I couldn't believe: what he wants to do is change Microsoft's direction in an even more fundamental way. He realizes that Microsoft is so set in its ways. Over the course of three decades, as it was rising into one of the most powerful companies on Earth, it became set in its ways; this happens to companies. They develop these personalities, as I said, and they see the world in a certain way.

Cade Metz (26:00):
As the world starts to change, it's hard for them to change course. What he did was, together with a couple of friends of his, fellow technologists and engineers, he built what he calls a backwards bicycle. It's a bicycle where, when you turn the handlebars left, it goes right, and when you turn them right, it goes left. He resolved to learn to ride this bike, which is incredibly hard, by the way. It takes weeks or months to learn to do this, and you essentially have to forget everything you've learned about riding a bicycle. But he resolved to do this, because he felt it would show the company and his fellow executives that if one person could change his way of thinking, then a corporation could change its way of thinking.

Cade Metz (26:47):
He resolves to do this, and eventually to get all his fellow executives on this bicycle; this is going to be his way of moving Microsoft into the future. I don't want to give away the punchline, because it's too good. But that's a key moment in the book, where you see the way these giant companies operate, and how difficult it can be to change direction.

Trey Lockerbie (27:11):
Trying to teach an old dog new tricks, basically.

Cade Metz (27:14):
You got it.

Trey Lockerbie (27:14):
You touched on Google; I want to talk about them, because from your book, you just get a sense of how much further along they are, in so many ways, than some of these other companies, and how much of an advantage the data pool they have is. Walk us through the deep learning that allowed Google to go from just punching keywords into a search bar to now being able to ask questions in the search engine, and what that meant for the company.

Cade Metz (27:41):
This is the big area of progress right now. We talked about a neural network working with speech recognition, then with image recognition; now, it's what's called natural language understanding, the ability for a machine to understand the way we humans piece language together. This works in the same basic way. You now have what they call universal language models. That's essentially a giant neural network where you just feed text into it.

Cade Metz (28:10):
This includes thousands of digital books, Wikipedia articles, and all sorts of other content from the internet, including conversations like the ones you and I might have on chat services. You feed all that into a neural network, and it learns to recognize the vagaries of English, how you and I piece those words together.

Cade Metz (28:32):
The remarkable thing is that you can then take that model, which trains, by the way, over months; it learns language after months of analyzing all that data. You can take that and apply it to all sorts of tasks. That includes question and answer: you can apply it to a system like a search engine, where you and I ask a question and it gives a response.

Cade Metz (28:58):
You can apply it to chat bots; it helps these systems literally carry on a turn-by-turn conversation. That is something that has always fascinated the AI field. For 50-plus years, researchers have been trying to build a system that could carry on a conversation the way you and I do, and there's real progress there. It's also a way for these systems to generate their own books, their own articles, tweets, blog posts. It's another area where we're seeing huge progress, which is very promising in a lot of ways, and also very scary in other ways.
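The recipe described above, feed raw text in and learn how words tend to follow one another, can be illustrated with a bigram count model. This is a drastic simplification: real universal language models are giant neural networks trained over months, but the flavor of learning language statistics from a text corpus is the same, and the tiny corpus here is invented.

```python
# Learn word-to-next-word statistics from raw text. A bigram counter is a
# drastic simplification of the neural language models discussed here, but
# it shows the same recipe: feed text in, learn what tends to follow what.
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, how often each other word follows it."""
    model = defaultdict(Counter)
    words = text.split()
    for cur, nxt in zip(words, words[1:]):
        model[cur][nxt] += 1
    return model

def most_likely_next(model, word):
    """The word most frequently observed after `word` in training."""
    return model[word].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat chased the dog"
model = train_bigrams(corpus)
```

On this corpus, `most_likely_next(model, "the")` is "cat", because "cat" followed "the" more often than any other word. Scaled up to thousands of books and billions of parameters, this same learn-from-text idea is what powers the question answering, chat, and text generation discussed here.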

Trey Lockerbie (29:39):
What is the point of that? Ultimately, logically you would think or just assume that okay, yeah, they’re developing all this to replace human beings, to make the company more efficient, not have to rely on human error or human judgment. But that’s not really the case. Facebook has 15,000 employees, just working on monitoring the data coming in, and what not. It’s not like it’s really eliminating jobs. What is the ultimate end goal for this, in your opinion?

Cade Metz (30:09):
You’re right, and that’s something we could talk about at length is the jobs question. There’s so much progress in all these areas, and let’s rope robotics in as well. The robots, the self-driving cars, robots in the manufacturing facilities and in the warehouse are getting better and better and better. They’re not necessarily eliminating jobs. We don’t see those self-driving cars on the road now, there are limitations to them. We’re still trying to figure that out.

Cade Metz (30:36):
The progress has accelerated, but it still hasn't accelerated to the point where it's just eliminating jobs and replacing humans; the technology is not there yet. You're right. One of the places where it's not there yet, where we can really see the limitations, is Facebook. Facebook likes to talk about AI as a way of dealing with all the harmful and toxic content on its service, whether it's hate speech or fake news. Identifying hate speech and fake news is a very difficult thing, even for human beings. It's a judgment call.

Cade Metz (31:15):
Some hate speech is obvious, other hate speech is not. If you and I have difficulty pinpointing what should and should not be on Facebook, a machine is certainly going to have the same difficulty. It was an example of where neural networks can help.

Cade Metz (31:33):
If you want to, say, prevent people from selling illegal drugs on Facebook, you can feed the neural network thousands of examples of marijuana ads, teach it to recognize a marijuana ad, and eliminate that from the service, and there's progress there. But that's different from a lot of the other things that Mark Zuckerberg [inaudible 00:31:55], as you see in the book, has told Congress that these systems will do: identify hate speech or remove fake news. Those are enormously difficult problems, just as putting a car on the road that can deal with all the chaos and uncertainty that we human drivers deal with is very, very hard. Even as we're seeing progress, even as those chat bots get better, that doesn't mean they can carry on a conversation as easily as you and I are doing now, as nimbly as you and I are doing.

Trey Lockerbie (32:30):
Yeah, there’s definitely an ethical concern around the development of this technology. You mention in the book that when DeepMind sold to Google, they demanded that an ethics board be put in place to oversee the progress and ensure that the technology was not going to be developed with any kind of malicious intent. Elon Musk, as you mentioned, is famous for saying that AI is more dangerous than nukes. It’s very much a concern. Furthermore, Google is working with the Department of Defense, and some of its employees were protesting against one of its projects at one point, because who knows how it could be used.

Trey Lockerbie (33:09):
Should an investor have concerns over owning pieces of companies like this, knowing that their projects, which are sometimes quite secretive, might have ethical dilemmas? Whether it’s technology that’s being used for drone strikes, or simply extracting your health data and profiting from that. What are your thoughts on that?

Cade Metz (33:29):
Well, if you’re an investor, there may be concerns. But I think it really depends on what kind of company we’re talking about. What happened at Google was that it started working on a project with the DoD to identify objects in drone footage. That’s something that could eventually be used with weapons; it’s a path towards autonomous weapons.

Cade Metz (33:51):
But what you had was a consumer company, a consumer internet company, doing this, and that really surprised a lot of their employees. That’s why you had that protest against what was called Project Maven, this DoD project to do that. Google ended up pulling out of the project, because the protests grew to such a level. Now, the situation is going to be different at other companies. You had some smaller protests at Microsoft and Amazon, and both those companies, by the way, worked on that same project. But they didn’t have the same effect.

Cade Metz (34:26):
Companies are built in different ways. Google employees, over the years, were encouraged to voice their opinions and often push back at management, and you had that there, and you’ve seen it in other places. Even Google, though, is starting to push back against that type of attitude. If you step outside those consumer giants, and you have companies that are built specifically, say, for working with the military, the dynamic is completely different. If an employee goes to work at a startup that is designed to work with the military, they’re not going to have those same issues.

Cade Metz (35:03):
Now, there are still going to be ethical questions. Autonomous weapons are the big, big issue, and we have startups, as well as traditional defense contractors, who are working to build them, and there is concern about the path that we’re taking. But we’re not there yet, and what people are realizing more and more is that if you clamp down on those efforts here in the US, it’s just going to happen abroad with our rivals. It’s a big, complicated issue that we, as a society, a global society, will have to deal with.

Cade Metz (35:36):
But you certainly have companies that are well-funded who are working on this sort of thing. I just wrote a piece about it in the New York Times; a lot of these companies are outside of Silicon Valley. They’re more in Southern California, for instance, because the attitudes towards this type of thing are different there. If you’re an investor, it really depends on the dynamics within the company.

Trey Lockerbie (35:59):
I’ve heard Google’s ex-chief executive Eric Schmidt talk a lot about how important it is for these platforms to ultimately be built in America, because you’re right, they could fall into other hands. It brings up a question around advantage. Jensen Huang, Nvidia’s founder, recently stated that “Moore’s Law isn’t possible anymore.” Basically, due to more and more complexity, teaching these machines is becoming more and more expensive, mainly due to electricity needs. Do you see this potentially slowing down the progress of AI, certainly for smaller businesses? Because it ultimately just concentrates the technology further into the larger big tech companies that can actually afford it.

Cade Metz (36:46):
There are two things going on there. One, what Nvidia says is true, but they’re talking their own book, as they say. They are showing where they have an advantage. The best way to think about this is: for years and years and years, Intel built the chips at the heart of our computers. They’re called CPUs, the brain of a computer.

Cade Metz (37:08):
These were in our laptops, in our desktops, and they’re in the computer servers in these giant data centers that run Google and Facebook and Amazon, the servers that, by the way, end up driving these neural networks, training all these systems by analyzing all that data. But what Nvidia is saying is that the slowdown of Moore’s Law has prevented those Intel chips from improving at the rate they did. Moore’s Law said that every 18 months or so, you could pack twice as many transistors into the same size of package.

Cade Metz (37:42):
What that meant was, you were essentially getting more and more computing power out of these Intel chips. But that has started to slow, so you’re not getting as much performance gain, year by year, as you had in the past.

Cade Metz (37:55):
But this has not hindered AI development, because what worked when it came to training these neural networks, oddly enough, was gaming chips: chips that would work in concert with Intel’s chips. They were built to drive video games and other graphics-heavy software applications. As it turns out, they were ideally suited to the math that’s used to train a neural network.

Cade Metz (38:22):
Basically, you offloaded that work from the Intel chips onto these graphics chips. That’s what we’re seeing now. We’re seeing specialized chips built by companies like Nvidia, used to train the AI, and there’s huge progress there, to the point where many companies are now specifically building chips to train these neural networks.
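The offloading described here comes down to the kind of arithmetic involved: training a neural network is dominated by large matrix multiplications, which graphics chips execute as thousands of parallel multiply-adds. A rough sketch of the two views of that same computation, using NumPy’s bulk matmul as a stand-in for the optimized, accelerator-friendly path (the sizes below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
activations = rng.standard_normal((64, 128))   # a batch of layer inputs
weights = rng.standard_normal((128, 32))       # one layer's parameters

# The "one scalar at a time" view of the work: the kind of serial
# arithmetic a general-purpose CPU core steps through.
slow = np.zeros((64, 32))
for i in range(64):
    for j in range(32):
        for k in range(128):
            slow[i, j] += activations[i, k] * weights[k, j]

# The same computation expressed as a single bulk matrix multiply,
# the form that maps naturally onto GPUs and, later, chips like the TPU.
fast = activations @ weights

# Same numbers, very different fit to the hardware.
print(np.allclose(slow, fast))
```

The numerical result is identical either way; what changed the economics of AI was that the second form hands the whole block of multiply-adds to hardware built to run them in parallel.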

Cade Metz (38:48):
This is something that’s happening in startups, both here and in China, and in the UK and other places. But it’s also happening inside some of these giant internet companies. Google has built its own chip to do this. It’s called the TPU. Amazon has done the same thing. Microsoft is moving down a similar road. What you see is all sorts of companies building new chips specifically for this type of AI. Like in any market, those big companies are going to have an advantage. They have the infrastructure to run these chips. The way they’re served up to the world is through cloud computing services, and they’ve got the money to do this as well. There is advantage there, in this area as in other areas.

Trey Lockerbie (39:33):
Now, is that why Demis and DeepMind approached games to begin with, to develop their AI? I want to talk a little bit about AlphaGo, which you mentioned earlier. Is that the theory behind starting with games?

Cade Metz (39:46):
That’s a separate thing, but it’s a great thing to bring up. Demis was a games player, in the extreme. This is someone who was a chess prodigy. He was the second-ranked under-14 player in the world when he was young. He ended up participating in a competition in Europe that was essentially a games-playing championship of the world, where games players would come from all over the globe to compete in a variety of games, whether it was Go, or chess, or poker; the list goes on. Demis won this competition four out of its first five years, and the one year he didn’t win, he didn’t enter.

Cade Metz (40:27):
This is part of who he is. It illustrates his interests, and it also shows how competitive he is, how ambitious he is, and that plays into DeepMind as well. Because of this, he realized that games were a great proving ground for AI. He wanted to build the technology and give people a real idea of where it was moving.

Cade Metz (40:52):
You want benchmarks to show the progress, and games are a great way of doing that. You saw this with that Go match in Seoul, South Korea. That was an inflection point for the industry, because that system, with a neural network at the heart of it, won a match that captured the attention of Asia, certainly. When I was in Korea, you could feel this entire country concentrated on this match. You could feel their emotions sway back and forth as the match swayed back and forth.

Cade Metz (41:25):
It was a way of really getting people to understand what was happening. Games are easy for us to understand. That happened here in the US as well, even though we’re not Go players in the way that the average person is a Go player in Japan, or China, or Korea. It was a moment that people could really understand, and that’s part of what had been missing.

Trey Lockerbie (41:50):
Well, I’m not surprised that after being at that event, you got inspired to write an entire book on artificial intelligence, because that was just one of the most fascinating sporting events, to some degree. More people watched that than the Super Bowl. It was very different to see AlphaGo versus Lee Sedol than it was to see something like Kasparov versus Deep Blue in 1996.

Trey Lockerbie (42:13):
Tell us a little bit of the story around move 37 in game two and move 78 in game four of that match, and what they represent.

Cade Metz (42:22):
You’re right, it was very different than Kasparov. I happened to be at both of these events, so I can speak with firsthand knowledge. The difference is, chess is a game where someone like Kasparov plays several moves into the future. He can map out where the game is going, step by step. That’s how Deep Blue, the IBM machine built to play chess, was built: to look forward into the future of the game and solve the problem that way.

Cade Metz (42:50):
You can’t do that with Go, there are too many possibilities. You can’t go through them all. You see this in the way the top players play. They play by intuition, by feel. They often move a piece just because it feels like the right thing to do. If you’re going to build a system that can beat the world’s top players, you’re going to have to mimic that sort of intuition. Fundamentally, you’re going to have to do that.

Cade Metz (43:14):
That’s why that event was so amazing, because the system did mimic that, and not just mimic it but exceed that human intuition, and play in ways that surprised even the seasoned commentators, who were qualified, accomplished Go players themselves. They couldn’t understand what the machine was doing.

Cade Metz (43:34):
That’s what happened with move 37 in game two. It was this transcendent move. After the fact, the DeepMind researchers went into the system and pinpointed that move, and told me that the odds of a human player making that move were one in 10,000, and the machine made it anyway, because it had trained, basically playing game after game against itself, to a level where it could outperform us, and it could decide to make that move even though a human wouldn’t do it.
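The self-play idea described here, a system improving by playing game after game against itself with no human examples, can be illustrated at toy scale. The sketch below applies it to the game of Nim (21 stones, take 1 to 3 per turn, taking the last stone wins) with a simple table of position values in place of AlphaGo’s deep networks and tree search; it is an analogy for the training loop, not DeepMind’s method:

```python
import random

random.seed(1)

values = {}   # stones remaining (for the player to move) -> learned value estimate

def choose(stones, eps):
    """Mostly greedy: leave the opponent the position we currently value worst."""
    moves = [m for m in (1, 2, 3) if m <= stones]
    if random.random() < eps:                      # occasional exploration
        return random.choice(moves)
    return min(moves, key=lambda m: values.get(stones - m, 0.5))

for _ in range(20000):                             # games of pure self-play
    stones, history = 21, []
    while stones > 0:
        stones -= choose(stones, eps=0.2)
        history.append(stones)                     # position handed to the opponent
    # The player who took the last stone won. Walking back through the game,
    # win/loss credit alternates between the two sides of the self-play.
    result = 0.0                                   # value of the final position: a loss
    for s in reversed(history):
        v = values.get(s, 0.5)
        values[s] = v + 0.1 * (result - v)         # nudge estimate toward the outcome
        result = 1.0 - result

# In Nim, leaving the opponent a multiple of 4 is the winning strategy, so
# well-trained values should steer greedy play from 21 toward taking 1.
best_first = min((1, 2, 3), key=lambda m: values.get(21 - m, 0.5))
print("greedy first move from 21 stones:", best_first)
```

The analogy to AlphaGo is only in the shape of the loop: no human games are fed in, the two "players" share one improving evaluator, and strong play, including moves a naive player wouldn’t choose, falls out of the accumulated self-play experience.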

Cade Metz (44:07):
That’s a fascinating moment. I often say that it was one of the most amazing weeks of my life, and I wasn’t even a participant, I was just an observer. What you saw in game four, after Lee Sedol had lost the first three games, which meant he’d lost the best-of-five match to the system, was that he kept playing. In game four, he himself had an equally transcendent move. The odds of a human making move 78, as you mentioned, were the same, one in 10,000. He had his own moment, and what he said afterwards was that the machine was teaching him new ways of playing the game. In that match, you could see multiple examples of this phenomenon, and he wasn’t the only one who talked about it.

Cade Metz (44:59):
Then, a year later, I went to China, to a little town south of Shanghai, to see this machine play its next match against who was then the top player in the world, a 19-year-old from China. The system had improved to the point where the human players couldn’t compete, for one. But also, a year after it had first made its debut, you could see so many of the world’s top players changing the way they played the game, because they had analyzed its games. That phenomenon is very, very real.

Trey Lockerbie (45:35):
That’s like the silver lining in all of this, to some degree, because it’s a little intimidating to think about human beings just being replaced by robots. But you point out that Lee Sedol went on to win, I think, seven of his next games against grandmasters after he had played AlphaGo. It shows that it actually is improving human beings.

Trey Lockerbie (45:56):
Speaking of improving human beings, what are your thoughts on Elon Musk and his ideas around putting chips in the heads of human beings to compete with the AI that is to come?

Cade Metz (46:09):
Yes, he has not only said that that is a path forward, he has built a company to do this very thing. It’s called Neuralink, and quite literally, they want to put a chip in people’s heads to provide an interface between your brain and machines. There’s a moment in my book as well where he talks about the time lag between having a thought and having to key it into your phone. He wants to reduce that to nothing.

Cade Metz (46:39):
Now, it’s quite an idea, and it brings up all sorts of ethical questions, certainly. But before we even start to think about those, let’s realize that surgery of that kind, opening up the skull to put something inside it, is a very, very dangerous thing. At this point, it’s not something doctors want to do unless there’s a real reason to do it.

Cade Metz (47:04):
If someone has a life-threatening injury, or some other medical condition that needs dealing with, you’re going to open up the skull. But you’re not going to do that with a healthy person. There are so many obstacles to doing that sort of thing, but Musk is intent on doing it.

Trey Lockerbie (47:25):
Well, the reason we’re talking about AI to begin with is because in 2020, last year, the S&P 500 ended the year with a 16.25% return. At the end of the year, Facebook, Apple, Amazon, Netflix, Google, and Microsoft, just these six companies, made up 25% of the S&P 500. It just speaks to how valuable this AI is and how it’s driving these valuations in today’s market. I want to touch on how AI might continue to contribute to and compound the growth of these companies in particular, versus other startups that we might want to look at.

Cade Metz (48:07):
Well, I think it’s fundamental, and it gets back to what we were talking about before. You brought this up: what these neural networks needed, after five decades of research, was data and processing power, and it’s those companies that have those two things. They have these giant data centers that are filled with the machines that provide the computing power, and that store all that data, whether it’s images, or sounds, or text.

Cade Metz (48:32):
When it comes, for instance, to those language models that could drive everything from the Google search engine to chat bots and so many other things, those companies have the advantage. It’s just fundamental. If you’re a startup or an academic lab, you just can’t compete with that. Now, what ends up happening is a lot of the technology ends up trickling down, so to speak, to other parts of the industry and into academia. What might seem like an unattainable technology now goes down in price; things get open sourced and shared. It eventually makes its way to the academic labs and the startups.

Cade Metz (49:13):
But by that time, these giant companies have moved on to something else. There’s a gap there, and that’s very real, and it concerns a lot of people. It’s just the way it is. We’ll see how it plays out in the future.

Trey Lockerbie (49:28):
It sounds like what you’re saying is somewhat related to the internet bubble, where at the time these were all “internet companies.” Now they’re just companies, because the internet is so disseminated around the world. Are you saying that all companies will ultimately be AI companies, to some degree? That is an argument I’ve heard.

Cade Metz (49:49):
Yeah, it’s funny. AI is a weird term, coined in the ’50s, at a time, and you alluded to this too, when these scientists were sure that they would build a system that could behave like the human brain in a matter of years. That didn’t happen, and it still hasn’t happened, but we still call it AI. Each step of the way, we’re making these small gains.

Cade Metz (50:12):
What was AI in the past just becomes technology. We’re continuing to see that. What we might call AI now is, in the future, just going to be part of our daily lives. On this long road towards systems that can behave like the brain, we keep making progress, and then it gets disseminated. What you say is true: these systems that are so unusual, and are solely the domain of these very large companies, will end up everywhere, and then those companies will move on to another step.

Trey Lockerbie (50:47):
Quantum computing, perhaps, is your next book.

Cade Metz (50:51):
Exactly.

Trey Lockerbie (50:52):
Interestingly enough, PwC predicted that AI will add $16 trillion to the global economy by 2030, and McKinsey was predicting $13 trillion, roughly the size of China’s entire economy in 2018. Which companies, maybe of the six I mentioned before, stand out to you as the ones that will benefit the most from these advances?

Cade Metz (51:16):
I think all those big companies will benefit. One thing we haven’t talked about is that this isn’t just a US phenomenon. Baidu, which is often called the Google of China, was there from the beginning. There’s a moment at the beginning of the book where Geoff Hinton auctions off his services, and those of his two students, to the highest bidder. Baidu is there at that auction, realizing what is happening.

Cade Metz (51:42):
China is a huge player here, not only because they have their own internet giants, but because the government is behind these companies in a way that the government isn’t behind the American tech companies. This is a global thing. The gap is between the big companies and the smaller ones, and a lot of those big ones are in China. I think that’s the point.

Trey Lockerbie (52:05):
It’s a good point. I actually ended up taking a position in Baidu after my conversation with Cathie Wood and learning a little bit about how they’ve approached it, and there are interesting financials there to check out, for sure. It also raises a question around chip suppliers. Are those the kinds of companies we should be looking at as well, the companies providing the tools needed to evolve this technology further?

Cade Metz (52:29):
Yes, but it’s interesting, a lot of it goes back to these internet giants. You have Nvidia, a very important player in this field; we talked about them. Intel’s trying to get into the AI chip game. They’ve tried multiple times, and they’ve been slow for various reasons. It’s like that phenomenon we talked about with Microsoft; it’s hard for these big companies to change direction. Intel’s trying.

Cade Metz (52:52):
There are all sorts of startups that are building this new breed of AI chip. Many are here in the US, others are in China, and there’s a big player in the UK. But again, some of the central players, if not the central players, are the big internet companies. It’s Google, it’s Amazon. Again, they are ahead of the game here. There are really two AI chips that are used a lot at this point: Nvidia’s chips, and the TPU built by Google. We’ll see others get into that game; Amazon is one of them. But it’s interesting how the power is still centered on the big internet companies.

Trey Lockerbie (53:34):
I want to touch on one last thing: what about companies that are going to benefit from AI’s ability to predict things like cancer or eye disease? Are there other health-related companies we should be taking a look at as this progress develops?

Cade Metz (53:50):
That’s a really important area for many reasons. It’s somewhere the technology is really needed, and it’s a place where the technology can be really effective. A neural network, just as it can recognize a stop sign, can recognize signs of illness and disease in medical scans, whether they’re X-rays, or CAT scans, or the like.

Cade Metz (54:13):
I visited India at one point, where diabetic blindness is a real problem, and they don’t have enough doctors to screen everyone in the country. If you have AI systems, which are already starting to be tested, that can identify the signs of diabetic blindness in eye scans, you can do a lot of good. Google is a player here. They have tested that type of technology at two hospitals in southern India, and I visited one of them.

Cade Metz (54:43):
DeepMind, which we talked about, is also an early player in this area; a lot of what they were doing in the medical field has now been moved back into Google. You’re also seeing a pretty healthy startup ecosystem, not only here but, again, in China, working on this very thing, and it can be applied to so many different types of disease: cancer detection, as well as diabetic blindness.

Cade Metz (55:12):
It’s a really hard thing to test and get approval for and deploy. You need to make sure this stuff works, and you need to make sure that we have the regulatory framework to deal with it. But that’s a big, big area.

Trey Lockerbie (55:26):
Apple also comes to mind with the Apple Watch, because I think that thing is tracking your blood pressure or your heartbeat or whatever. At some point, it could even notify you if it detects something that’s off, or even predict something, based on all the data collected from all the other people wearing these Apple Watches. They might be a player as well.

Cade Metz (55:47):
That kind of prediction, whenever people talk about predicting things with these algorithms, that is a hard, hard thing, and I think there’s good reason to be skeptical of that. Whether it’s predicting what the stock market’s going to do, or predicting something in the healthcare field, there are specific areas where that works. But outside of those areas, it’s really, really difficult. In many cases, these types of algorithms we’ve been talking about don’t work as well.

Cade Metz (56:15):
But in those specific areas, it can work. In an eye scan, there are certain physical, telltale signs that diabetic blindness is on the way. The way it works today is that a human doctor looks for those telltale signs. Now, we have machines that can do that. Again, it’s something that can be identified and labeled by people, and then the systems learn to do it. Prediction is hard for humans, and anything that’s hard for humans is going to be that much harder for machines. That kind of prediction is something we should be a little bit wary of.

Trey Lockerbie (56:54):
This has just been an incredible conversation, really wide-ranging and fascinating. Before I let you go, I’ll give you an opportunity to tell our audience where they can learn more about you, your new book, your other publications, and anything else you want to share.

Cade Metz (57:08):
The new book is out now, both in the US and in the UK. It’s available from Amazon and independent sellers, in audio and digital versions. And I’m on staff at the New York Times, where I cover this stuff full-time. You can follow my work there or on Twitter @CadeMetz.

Trey Lockerbie (57:28):
Really enjoyed it, Cade. Hope to have you again soon.

Cade Metz (57:31):
Love to. Thank you.

Trey Lockerbie (57:33):
All right, everybody. That’s all we had for you this week. Be sure to subscribe to the feed so that these podcasts appear in your app automatically. Definitely leave us a review. We always love hearing from you, and while you’re at it, go ahead and ping me on Twitter @treylockerbie and say hello. If you haven’t already done so, be sure to check out the dream tool we built at TIP Finance. Just Google TIP Finance, it’ll pop right up. With that, we’ll see you again next week.

Outro (57:58):
Thank you for listening to TIP. Make sure to subscribe to Millennial Investing by The Investor’s Podcast Network and learn how to achieve financial independence. To access our show notes, transcripts or courses, go to theinvestorspodcast.com. This show is for entertainment purposes only. Before making any decision, consult a professional. This show is copyrighted by The Investor’s Podcast Network. Written permission must be granted before syndication or rebroadcasting.

HELP US OUT!

Help us reach new listeners by leaving us a rating and review on Apple Podcasts! It takes less than 30 seconds and really helps our show grow, which allows us to bring on even better guests for you all! Thank you – we really appreciate it!

 

NEW TO THE SHOW?

P.S. The Investor’s Podcast Network is excited to launch a subreddit devoted to our fans to discuss financial markets, stock picks, questions for our hosts, and much more! Join our subreddit r/TheInvestorsPodcast today!

 

SPONSORS

  • Get into a topic quickly, find new topics, and figure out which books you want to spend more time listening to more deeply with Blinkist. Get 25% off and a 7-day free trial today
  • Push your team to do their best work with Monday.com Work OS. Start your free two-week trial today
  • Bring your WiFi up to speed with Orbi WiFi 6 from NETGEAR. Save 10% with promo code BILLION10
  • Get three months free when you protect yourself with ExpressVPN, the VPN we trust to keep us private online
  • Join OurCrowd and get to invest in medical technology, breakthroughs in ag tech and food production, solutions in the multi-billion dollar robotic industry, and so much more.
  • Find opportunities to help diversify your portfolio with investments in alternative asset classes with minimums starting at $1,000 with Yieldstreet
  • Check out Kraken‘s industry-leading staking service, where you can put your crypto to work for you and earn up to 20% in additional rewards annually
  • Elevate your writing with 20% off Grammarly Premium
  • Simplify working with multiple freelancers, set budgets, and manage projects with ease with Fiverr Business. Get 1 free year and save 10% on your purchase with promo code INVESTORS
  • Support our free podcast by supporting our sponsors

 
