TECH011: THE HISTORY OF AI AND CHATBOTS W/ DR. RICHARD WALLACE
30 December 2025
Dr. Richard Wallace, creator of ALICE and AIML, shares his journey from 1990s chatbot innovation to today’s AI frontiers. Discover how minimalist design, rule-based systems, and early wins at the Loebner Prize shaped modern chatbots.
Dr. Wallace and Preston also explore AI’s learning methods, human vs machine intelligence, and the evolving role of creativity in artificial minds.
IN THIS EPISODE, YOU’LL LEARN
- How a 1990 NYT article inspired Richard Wallace’s AI journey
- What made the ALICE chatbot revolutionary in its time
- The principles behind minimalist robotics and their influence on AI
- How AIML works and why it was crucial to early chatbot success
- The contrast between supervised and unsupervised learning methods
- Why LLM decision-making processes remain hard to interpret
- How humans and chatbots use language in surprisingly robotic ways
- The philosophical roots of the Turing Test and its modern critiques
- Insights on combining symbolic and neural approaches in AI today
- What Wallace is working on now at Franz in medical AI predictions
Disclosure: This episode and the resources on this page are for informational and educational purposes only and do not constitute financial, investment, tax, or legal advice. For full disclosures, see link.
TRANSCRIPT
Disclaimer: The transcript that follows has been generated using artificial intelligence. We strive to be as accurate as possible, but minor errors and slightly off timestamps may be present due to platform differences.
[00:00:00] Intro: You are listening to TIP.
[00:00:03] Preston Pysh: Hey everyone, welcome to this Wednesday’s release of Infinite Tech. Today’s episode is a deep dive into the early foundations of conversational AI and what they reveal about today’s language models. My guest is Dr. Richard Wallace, a pioneering chatbot creator and three-time Loebner Prize winner.
[00:00:20] Preston Pysh: Best known for building ALICE and the AIML language that powered early conversational systems. The Loebner Prize was an annual competition designed to implement Alan Turing’s imitation game, awarding the chatbot that could most convincingly carry on a humanlike text conversation with judges just as an FYI.
[00:00:39] Preston Pysh: So during the show, we talk about why simplicity beat scale in the early AI race, how supervised rule-based systems differ from modern LLMs, what the Turing test actually misses and why combining symbolic reasoning with neural networks may matter more than raw model size. This is surely an episode you will not want to miss.
[00:00:58] Preston Pysh: So without further ado, let’s jump right into the conversation.
[00:01:05] Intro: You are listening to Infinite Tech by The Investor’s Podcast Network, hosted by Preston Pysh. We explore Bitcoin, AI, robotics, longevity, and other exponential technologies through a lens of abundance and sound money. Join us as we connect the breakthroughs shaping the next decade and beyond, empowering you to harness the future today.
[00:01:27] Intro: And now here’s your host, Preston Pysh.
[00:01:39] Preston Pysh: Hey everyone. Welcome to the show. I’m here with Richard Wallace and wow, this is really exciting for me to talk to such a pioneer in this space in the chatbot AI space. And first of all, welcome to the show. Excited to have you here.
[00:01:54] Dr. Richard Wallace: Thank you. Thank you, Preston. It’s a pleasure to be here as well.
[00:02:05] Preston Pysh: And when I look at what you accomplished very early on back in the 1990s, I’m curious what drove you or motivated you to be paying attention to chatbots and the Turing test and all of that at such an early phase, ’cause I think for most of the listeners, they know that all this stuff has really come to fruition in the last five or 10 years.
[00:02:28] Preston Pysh: It’s gone on everybody’s radar. But you were doing this literally decades before anybody was even aware of these ideas of chatbots and whatnot. So what was your initial motivation to get into this kind of stuff?
[00:02:40] Dr. Richard Wallace: That’s absolutely right. You know, I like to say that nobody knew what artificial intelligence was until a couple of years ago.
[00:02:46] Preston Pysh: Yeah.
[00:02:46] Dr. Richard Wallace: And now I’ll be sitting in a restaurant somewhere and I’ll hear a conversation at the table next to me and they’re talking about AI. Well, anyway, there are several threads that came together that inspired me to work on the chatbot Alice, and I’ll just pull on a couple of those threads here. One is that in 1990, I read an article in the New York Times about the first Loebner Prize contest.
[00:03:11] Dr. Richard Wallace: Now, the Loebner Prize was an annual Turing test, an annual contest based on the Turing test, funded by a rather eccentric philanthropist, Hugh Loebner. And the story with the very first contest was that none of the programs competing came close to passing the Turing test. They were all just terrible chatbots, but Loebner awarded a bronze medal every year to the chatbot that was ranked highest by the judges.
[00:03:42] Dr. Richard Wallace: In terms of being the most human. And that first year, the bot that won was simply based on the old Eliza psychiatrist program, which, if you’re familiar with it, was a very primitive chatbot developed by Joseph Weizenbaum in 1966. And it, you know, had very few responses, but it had some clever tricks to it.
[00:04:05] Dr. Richard Wallace: It could sort of match keywords in the input, and it had canned responses associated with those keywords. It could invert pronouns. So you know, if I said I came here to talk to you, then it would repeat back, you came here to talk to me. So it did that sort of pronoun swapping trick. But when I was in graduate school in the 1980s, this Eliza program was basically considered kind of a dead end or at best kind of a hoax.
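The keyword-matching and pronoun-swapping tricks described here can be sketched in a few lines of Python. This is a hypothetical toy in the spirit of Eliza, not Weizenbaum's actual script: the rules and the reflection table are invented, and a flat word map cannot distinguish subject "you" from object "you" the way the real program's transformation rules could.

```python
import re

# Toy Eliza-style responder: keyword rules with canned responses, plus the
# pronoun-swapping trick. Rules and word map are invented for illustration.
SWAPS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "me", "your": "my"}

def reflect(text: str) -> str:
    """Swap first- and second-person words so the input can be echoed back."""
    return " ".join(SWAPS.get(word, word) for word in text.lower().split())

RULES = [
    (re.compile(r"i came here to (.*)"), "You came here to {0}."),
    (re.compile(r"i feel (.*)"), "Why do you feel {0}?"),
]

def respond(user_input: str) -> str:
    text = user_input.lower().strip(".!?")
    for pattern, template in RULES:
        match = pattern.match(text)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please tell me more."  # canned fallback when nothing matches

print(respond("I came here to talk to you"))  # → You came here to talk to me.
```

The stimulus-response structure is exactly why it was so fast: no search, no computation, just a table lookup and an echo.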
[00:04:35] Preston Pysh: Yeah.
[00:04:36] Dr. Richard Wallace: In AI. And not only that, the inventor, Joseph Weizenbaum, ended up pulling the plug on it because he thought it was too dangerous. He thought that people were reading too much into it. It was a psychiatrist program, so people were trusting it with their, you know, their personal issues and problems.
[00:04:57] Dr. Richard Wallace: They were surprised to find out that Weizenbaum could read all the transcripts of their conversations, and so he wrote a whole book after that, Computer Power and Human Reason, where he criticized the whole field of AI and his Eliza program in particular. You know, it’s really hard to imagine this now, that someone would come up with a new AI application
[00:05:19] Dr. Richard Wallace: that’s very engaging and popular, people are using it, and then they would say, oh no, this is too dangerous. We have to put the genie back in the bottle. I think most people now would run out and try to find venture capital to start a company to commercialize it.
[00:05:35] Preston Pysh: True. Very true. But isn’t this fascinating that the thing that he discovered very early on in the nineties.
[00:05:43] Preston Pysh: He started playing around with this in the sixties, which is mind-blowing to me. But what he found in the nineties was that there was a huge centralization concern with privacy and what people were putting into these discussions, which is now a, you know, a major talking point with AI. And it doesn’t seem that, I know I’m generalizing here, but it doesn’t seem like the population really cares too much or even thinks about these issues that caused him to shut down
[00:06:12] Preston Pysh: his entire effort behind this. I don’t know. I find that really fascinating that he discovered this, what, three or four decades before it became something that the rest of the world should be very concerned about.
[00:06:26] Dr. Richard Wallace: Well, he really discovered it in the 1960s. Wow. When he first created the program.
[00:06:31] Dr. Richard Wallace: The other ironic thing about Eliza was that up until very recently, I would say, well, let’s say 20 years ago, Eliza was, by far, the most widely distributed, popular and well-known AI application.
[00:06:47] Preston Pysh: Yeah.
[00:06:47] Dr. Richard Wallace: If you knew anything about AI up until maybe the year 2000, then you would know about Eliza.
[00:06:55] Preston Pysh: I’m curious, when you read this, I think you said New York Times article in 1990.
[00:06:59] Preston Pysh: Did you ever think that you would be the winner of this Loebner Prize a decade later?
[00:07:06] Dr. Richard Wallace: Well, that planted a seed in my mind and I didn’t really do anything about it for about five years. Okay, so another thread that led to the development of Alice or the inspiration for Alice was around that time in the early nineties.
[00:07:20] Dr. Richard Wallace: It was the end of the Cold War. And so there was a decreased amount of government funding available for AI and robotics research compared to the 1980s. And so a number of us in the robotics field, I was working in robotics at the time, got interested in the idea of minimalism, robot minimalism, and basically that was the idea that we could build robots with very simple, inexpensive sensors and actuators, you know, very commodity microprocessors. And as a result of that, you could actually get more lifelike behavior out of these robots than you could with, you know, approaches people had tried in the past with much larger computers and so forth.
[00:08:09] Dr. Richard Wallace: One of the interesting inventions that came out of that period was the Roomba. So if you think of the Roomba rolling around and, you know, bumping into things and changing its direction, it’s all basically just a stimulus response application. So we call that stateless. So it’s sensing something and then taking an action based on what it’s sensing, you know, changing direction, for example.
[00:08:28] Preston Pysh: Yeah.
[00:08:29] Dr. Richard Wallace: So that whole approach of minimalism was also in my mind at the time, and that kind of dovetailed with the very simple approach of the Eliza program, which was also kind of a stimulus response. You know, it was so simple that it could respond very quickly. It didn’t have to go and do a lot of computations to come up with a response.
[00:08:50] Preston Pysh: What was your inspiration for thinking that simplicity was going to lead you to better results? Was there something in your life or something that you were reading at the time, or you know, what drove you to that intuition?
[00:09:04] Dr. Richard Wallace: Like I said, we were working on the minimalist philosophy of robotics.
[00:09:09] Dr. Richard Wallace: And yeah. At that time, I was working on the development of a robot eye, and by that I mean a visual sensor that’s based on the architecture of the human eye. So the human eye differs from a TV camera in the sense that a TV camera is basically a square grid of square pixels. But the human eye is more like concentric rings of pixels with higher and higher resolution towards the center.
[00:09:35] Dr. Richard Wallace: We call that a log map. And so we had developed a sensor that had that log map pixel organization. In order to use a camera like that effectively, you have to be able to point it. So we developed a little high-speed pointing motor based on a direct-drive design, and that motor could point the camera, the eye camera, in, you know, pan and tilt directions very quickly.
[00:10:03] Dr. Richard Wallace: And again, it was a very simple kind of actuator, simple sensors, and it could move very quickly, could move actually faster than the human eye. So you’d sort of see this thing whipping around and looking at different things, and it was very lifelike.
[00:10:19] Preston Pysh: So just for the audience to understand, so in 2000, I believe, 2001 and 2004, Dr. Wallace won the Loebner Prize, which is this Turing test, with his ALICE protocol or chatbot that he had created. And I guess for me, what was the major insight that you think that you had back then? You talk about this idea of simplicity, but what would you say was the major insight that you had to outperform everybody else that was competing on what is, I mean, for anybody listening, the most complex, challenging, you know, problem you could ever try to go after, right?
[00:10:56] Preston Pysh: Like what would you say was your keen insight that you had that allowed you to do this?
[00:11:00] Dr. Richard Wallace: Well, it was basically the idea that I could build on the Eliza program. So the Eliza program had about 200 rules, 200, you know, stimulus-response rules. And you could think of that as a pattern and a response. And my idea was to build a kind of super Eliza, where instead of 200 rules, you had thousands and thousands of rules.
[00:11:23] Preston Pysh: Yeah.
[00:11:23] Dr. Richard Wallace: And in fact, by the time I was entering those contests, I got ALICE up to about 50,000 patterns and responses.
[00:11:31] Preston Pysh: Wow. Amazing. So, Richard, one of the things that I found really fascinating about you back at this time was that you came up with this artificial intelligence markup language. You effectively, for all intents and purposes, and correct me if I’m mischaracterizing this, you had to come up with your own language in order to kind of build efficiency into how this chat bot was working, which is, you know, as a person who’s not very good with languages, I’m much more of a math person.
[00:12:00] Preston Pysh: I’m reading this and I’m thinking, this is mind blowing. So talk to us about this, and what was this insight that you had to come up with the AIML, artificial intelligence markup language at the time that you did this?
[00:12:13] Dr. Richard Wallace: Well, AIML is based on XML and XML was very popular at the time. One thing that appealed to me about XML for the purpose of writing chatbots was that I always say XML has an implicit print statement.
[00:12:28] Dr. Richard Wallace: So when you write the responses, you don’t have to put in an expression that says print, blah, blah, blah, you know, something between the parentheses, because the XML already just provides the text inside the markup. So the response is just the text inside the markup. And the basic unit of knowledge in AIML is called the category, which is like the rules I was talking about a second ago.
[00:12:53] Dr. Richard Wallace: So the category consists of a pattern. It matches some input, some natural language input, and then a response called the template. The reason it’s called the template is because it’s not exactly the answer, but it’s a template for the answer that can be populated with various other things. And then there was also a recursive element to it where the response could actually simplify the input into a kind of simpler input.
[00:13:22] Dr. Richard Wallace: So the example of that is, I want you to tell me who you are right now. So you can reduce that by removing the right now, so it becomes, I want you to tell me who you are. And then you can remove the I want you, so it reduces to just, tell me who you are, and then that reduces to, who are you. So there was that recursive element built into the responses as well.
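The category-and-reduction mechanism can be sketched as follows. This is a hedged toy in Python rather than real AIML: the categories are invented for illustration, exact-match lookups stand in for AIML's wildcard patterns, and the "srai" key mimics AIML's &lt;srai&gt; element, which rewrites an input into a simpler one and re-runs the match.

```python
# Toy sketch (Python, not real AIML) of the category mechanism: each category
# pairs a pattern with either a template (the response) or an srai-style
# reduction to a simpler input. Categories invented for illustration; ALICE
# had roughly 50,000, with wildcard patterns instead of exact strings.
CATEGORIES = {
    "I WANT YOU TO TELL ME WHO YOU ARE RIGHT NOW": {"srai": "I WANT YOU TO TELL ME WHO YOU ARE"},
    "I WANT YOU TO TELL ME WHO YOU ARE": {"srai": "TELL ME WHO YOU ARE"},
    "TELL ME WHO YOU ARE": {"srai": "WHO ARE YOU"},
    "WHO ARE YOU": {"template": "I am ALICE, a chatbot."},
}

def respond(user_input: str, depth: int = 0) -> str:
    if depth > 10:  # guard against circular reductions
        return "I do not understand."
    category = CATEGORIES.get(user_input.upper())
    if category is None:
        return "I do not understand."
    if "srai" in category:  # recursively reduce to a simpler input
        return respond(category["srai"], depth + 1)
    return category["template"]

print(respond("I want you to tell me who you are right now"))  # → I am ALICE, a chatbot.
```

The payoff of the recursion is that one final category, "WHO ARE YOU", answers a whole family of longer phrasings.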
[00:13:45] Preston Pysh: So in general, you were taking language and you were making it way more efficient. And like, where do you even start with something like that? I mean, you’d literally have to go through, there’s just so many different variations of language, and I think of the complexity of this. I wouldn’t even know where to begin to start writing something that makes it more efficient.
[00:14:05] Preston Pysh: Like, yeah. So how did you think about solving that problem?
[00:14:11] Dr. Richard Wallace: Well, it all goes back to the conversation logs. So just like Weizenbaum, you know, I could read the transcripts of conversations people were having. By the way, this would’ve never worked without the internet, without the World Wide Web.
[00:14:23] Preston Pysh: Yeah.
[00:14:24] Dr. Richard Wallace: Because with the World Wide Web, I could start to accumulate conversations from a very large audience of people. By looking at the transcripts of those conversations, I could basically program responses to the things people were saying. Later on, I realized that there was kind of a Zipf distribution over the things people were saying.
[00:14:47] Dr. Richard Wallace: So you know, there’s kind of a most common thing people say, which is hello, and then who are you, and how are you, and I like something, so you can create the responses in order of how frequently people say particular things.
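The workflow Dr. Wallace describes, mining the logs and writing responses for the most frequent inputs first, can be sketched like this. The log lines below are invented; a real log would show the roughly Zipf-shaped distribution he mentions, with a few inputs dominating and a very long tail.

```python
from collections import Counter

# Sketch of the log-mining workflow: count what people actually typed, then
# author responses in frequency order. Log lines are invented for illustration.
log_inputs = [
    "hello", "hello", "hello", "hello",
    "who are you", "who are you", "who are you",
    "how are you", "how are you",
    "i like music",
]

ranked = Counter(log_inputs).most_common()
for rank, (utterance, count) in enumerate(ranked, start=1):
    print(f"{rank}. {utterance!r} (seen {count}x) -> write a response for this next")
```

Working down a ranking like this is what lets a modest number of hand-written categories cover a large share of real conversations.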
[00:15:06] Preston Pysh: My question for you is, and I don’t, I’m struggling to find a way to frame this, but so you write these thousands of rules and you’re also working on a way to compress or make the English language more efficient.
[00:15:21] Preston Pysh: What did you fundamentally learn through the experience of writing these thousands of rules and rules of thumb of compression? Because when I think about it, like we look at these LLMs and machines are doing all of this really hard and complex work, but I would imagine what you were doing there in the nineties and early two thousands was exactly what all these LLMs are doing today, but you were doing it manually. And so I guess
[00:15:49] Preston Pysh: it’s almost, I hear these people that say, well, we have no idea what’s behind these ones and zeros in all these LLMs, which I guess is a true statement, right? Yeah. But if a human was going to maybe be able to understand what it is that it’s doing, I think you would be one of the very few people on the planet that could maybe help us understand what that is, because you did this manually for so many years.
[00:16:14] Dr. Richard Wallace: Right? Well, there’s so many things wrapped up in that question, so lemme see if I can pull it apart. Yeah. So there’s always been a kind of tension in the history of artificial intelligence between, let’s say, supervised learning and unsupervised learning.
[00:16:30] Dr. Richard Wallace: So what I was doing was what we call supervised learning, because I was playing the role of a teacher, or you know, a guide. So whenever I added a new response, it was manually added, as you’re saying, driven by a particular input that I saw in the conversation logs. And so the way that I’m teaching the robot is by acting as its teacher, basically, and saying, you know, when you see this, you should say that.
[00:16:56] Preston Pysh: Yeah.
[00:16:57] Dr. Richard Wallace: And that’s in contrast to unsupervised learning, which is what these LLMs are doing. They’re basically, you know, trying to accumulate a lot of inputs and find the neural network weights that match it to particular outputs. And so with that technique, you can get phenomenal results, obviously.
[00:17:20] Dr. Richard Wallace: But as you’re saying, it’s difficult to know how the LLM came up with particular responses, whereas in the supervised learning case, where it’s all a symbolic process, it’s very easy to trace back through the, you know, the logic of the program and see what caused a particular response to be generated.
[00:17:39] Dr. Richard Wallace: And I always say that people who do supervised learning approaches spend all of their time doing creative writing, which is what I was doing with the Alice bot. People who do unsupervised learning spend all of their time deleting crap from the database. Yeah. And that’s sort of what’s going on with the LLMs now is, you know, they’re having to put a lot of work into filtering to make sure they don’t say anything inappropriate or offensive or political.
[00:18:08] Dr. Richard Wallace: And you know, that it ends up being a lot of manual work as well.
[00:18:12] Preston Pysh: Yeah, and I guess my understanding is that everybody that’s on the cutting edge of AI today, like that’s the holy grail for them, is to get the human out of the loop and for it to be completely AI generated and filtered and just like there’s no humans there.
[00:18:29] Preston Pysh: As a person who deeply understands this, and the way that you frame that is this back and forth and there’s consequences to one side and the other, is there a moment where you think that they will be able to completely remove the human from the loop, with it progressing in a way that’s actually beneficial?
[00:18:48] Preston Pysh: Or do you think that the more that they lean into removing the human out of the loop, the more they actually are setting themselves up for a systemic failure, because it’s going to spiral into this AI slop, if you will? Or it’s creating and generating content in a direction that’s so fast and so extreme that they get away from human filtering altogether, and it just kind of turns into
[00:19:12] Preston Pysh: almost like a runaway virus, if you will. Is that how you kind of see this, that it needs to be balanced? Or is it even possible for it to go in that direction without humans?
[00:19:21] Dr. Richard Wallace: Well, it’s so hard to predict the future. I would’ve never expected this whole LLM development to come along in the first place.
[00:19:29] Preston Pysh: Yeah.
[00:19:29] Dr. Richard Wallace: But you know, I always think of a child learning language, and there are big differences here between a child learning language and an LLM. Yeah. You know, a kid doesn’t have to scan the whole internet to learn how to speak a language. In fact, they’re pretty good at, you know, what we call one-shot learning.
[00:19:46] Dr. Richard Wallace: You know, if you say to a kid, this is a dog. Then they can instantly recognize every dog in the world as a dog.
[00:19:53] Preston Pysh: Yeah.
[00:19:54] Dr. Richard Wallace: But what also comes into play here is the supervised unsupervised learning dichotomy, which is if you are a kid and you have a good teacher and good parents, you’ll learn to speak very well.
[00:20:08] Dr. Richard Wallace: But if you’re a kid who has to pick up language on the street, without any supervision, then your language learning won’t be nearly as good. And so the LLM is more like the kid out on the street, learning language without any supervision, and that’s why they learn so much inappropriate and offensive material and so on.
[00:20:27] Preston Pysh: What did the wins teach you back in the day when you were winning this about how humans judge intelligence?
[00:20:33] Dr. Richard Wallace: Well, you know, I can say the same thing about LLMs now that I said about my chatbot back then, which is that people say, well, these chatbots are becoming more and more like humans. And you know, I have a different opinion about that, which is that what it’s really showing us is that people are more like robots.
[00:20:52] Dr. Richard Wallace: Than we would like to think we are, because it’s not that the robot’s becoming more like a human, it’s that it’s revealing to us how robotic we are. And you know, back in the early days of working on Alice, I came to realize that most people, most of the time, are saying things that they themselves have said before.
[00:21:12] Dr. Richard Wallace: Or that they’ve heard other people say before. And even when they’re, you know, writing, they’re basically synthesizing thoughts and ideas that are not necessarily original. And all of these chatbots work because language is predictable, and predictable means robotic. So I always say that if we were all William Shakespeares, uttering an original line of poetry with every sentence we spoke, then these chatbots would never work.
[00:21:41] Dr. Richard Wallace: Because they’re based on language being predictable, not original.
[00:21:47] Preston Pysh: Is it fair to say that you would suggest that humans judge intelligence by their flow or by this response of like, most people are looking at that and they’re saying, oh, that’s intelligence. But then you’re looking at it and you’re saying, that’s not intelligence, it’s just repetition.
[00:22:02] Preston Pysh: I think that’s kind of what you’re getting at.
[00:22:04] Dr. Richard Wallace: Yeah. Not necessarily repetition, but robotic, predictable.
[00:22:07] Preston Pysh: Yeah. You know, it’s interesting, I just read something, it was like last week, and I think Google came out with this many months ago. But for them to do this long-term learning where it has much more of a memory, it’s highly based on whether something’s novel or not relative to its index of everything that it’s been trained on.
[00:22:26] Preston Pysh: And when it sees this novel thing that it wasn’t predicting or expecting to come next, it then stores that in its long-term memory, or, I apologize for the terminology here, Dr. Wallace, but it flags it as something that is worthy of being remembered because it’s novel and so different and outside of what it would’ve predicted to be the next thing.
[00:22:49] Preston Pysh: And it’s interesting that it’s in keeping with Claude Shannon’s information theory and how it’s all aligned, I’m curious if you have any opinions on that in particular, and whether you think that has a key component to intelligence or how new things are discovered in knowledge in general.
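In Shannon's terms, the novelty signal Preston is describing can be sketched as surprisal: -log2 of the probability the model assigned to what actually happened. The probabilities and the threshold below are assumptions for illustration only, not the mechanism of any particular system.

```python
import math

# Sketch of a novelty signal in Shannon's terms: surprisal = -log2(p), where p
# is the probability the model assigned to what it actually saw. Numbers and
# threshold are illustrative assumptions, not any vendor's actual mechanism.
def surprisal_bits(p: float) -> float:
    return -math.log2(p)

predictable = surprisal_bits(0.5)      # 1 bit: the model half expected this
novel = surprisal_bits(1 / 1024)       # 10 bits: a highly unexpected input

THRESHOLD = 5.0                        # arbitrary "worth remembering" cutoff
print(novel > THRESHOLD)               # high-surprisal input gets flagged
```

The connection to the conversation is direct: a predictable utterance carries little information and little reason to store it, while a high-surprisal one is exactly the kind of event worth keeping.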
[00:23:09] Dr. Richard Wallace: Well, that really gets to the heart of what I think the difference is between humans and robots, which is that, like I said, I think most people, most of the time are acting like robots.
[00:23:21] Dr. Richard Wallace: They’re just acting in kind of a stimulus response fashion. Just as an aside, I always used to say that most human conversation is stateless. Meaning that what I’m saying to you right now only depends on the question that you just asked me. And we could forget the whole history of our conversation up to this point.
[00:23:39] Dr. Richard Wallace: You know, one of the pieces of evidence for that is, you know, if you can imagine yourself having a casual conversation with someone at a party, say, and then you say, oh, where did you go to college? And they say, oh, I went to Harvard, I already told you that. You kind of forgot that you had already talked about college earlier in the conversation.
[00:23:59] Preston Pysh: Yeah.
[00:24:00] Dr. Richard Wallace: And you know, you’re just responding to the most recent thing you heard and most recent input. But what really gets to the difference between humans and robots is even though most people, most of the time are speaking in this kind of reactive, behaviorist way, it is possible for people to have original thoughts and be creative and it, you know, it’s almost like a muscle that you need to exercise in order to build it up.
[00:24:26] Dr. Richard Wallace: If you want to break out of that robotic mold, then you have to put some effort into trying to be creative and original with your thoughts and thinking and ideas.
[00:24:36] Preston Pysh: Do you think that the Turing test actually measures intelligence, or is it something else entirely?
[00:24:43] Dr. Richard Wallace: I’m so happy you asked me about the Turing test.
[00:24:47] Dr. Richard Wallace: So the Turing test, most people understand the Turing test as sort of a game where there’s three players. You have a person who’s called the interrogator or the judge, and then they’re communicating through a teletype, a text-only medium, you know, much like texting on your phone, but without any audio visual, just typing.
[00:25:09] Dr. Richard Wallace: And then the two entities that the judge is talking to, one is a human and one is a machine. So then the judge has to decide which one is the human and which one is the machine. And if they misidentify the machine as the human, then it’s said to pass the Turing test. But you see this has a big problem as a scientific experiment because it’s not really clear how often the interrogator has to, you know, misidentify the human.
[00:25:39] Dr. Richard Wallace: Is it, 50% of the time? 75% of the time? A hundred percent of the time? What does that even mean?
[00:25:46] Preston Pysh: Yeah.
[00:25:46] Dr. Richard Wallace: That the robot is more human than a human? So in Turing’s 1950 paper on Computing Machinery and Intelligence, he actually describes two different versions of the test or the game, and earlier in the paper he described something called The Imitation Game, which as far as I understand was based on a real parlor game that people played in Victorian England.
[00:26:10] Dr. Richard Wallace: And in this game, again, there are three players, the judge or the interrogator, and the other two players are a man and a woman. And let’s just set aside the, you know, the gender issues in the context of Turing writing in 1950 here. So there’s a man and a woman sequestered away, in the Victorian England case, in different rooms.
[00:26:31] Dr. Richard Wallace: And then the judge is sending them handwritten questions back and forth, and the judge’s job is to decide which one is the man and which one is the woman. Now furthermore, Turing stipulated that the woman should always tell the truth and the man should always lie. Okay? So now if you ask the man, are you a woman?
[00:26:51] Dr. Richard Wallace: He would say yes, because he has to lie.
[00:26:53] Preston Pysh: Okay.
[00:26:54] Dr. Richard Wallace: And then, you know, the judge’s job is to try to figure out which one is the man and which one is the woman. Now, if you replace the lying man in that scenario with a machine, okay, let’s say you replace the man with a very crude chatbot like Eliza or even ALICE.
[00:27:12] Preston Pysh: Yeah.
[00:27:13] Dr. Richard Wallace: Then the judge could identify the woman correctly a hundred percent of the time. Because it’s clear that only one of the players is a human at all. And that has to be the woman. So now as a scientific experiment, we can say, let’s run this experiment with, you know, a hundred judges and a hundred men and a hundred women.
[00:27:35] Dr. Richard Wallace: I don’t know exactly how many are needed for statistical accuracy, but let’s just say we did a random sample where we collected the results of this game for, you know, a large number of players. Then you could measure a certain percentage of the time that the judge would identify the woman correctly.
[00:27:52] Dr. Richard Wallace: And you know, let’s say that’s 70% of the time. Now if you replace the lying man with a computer, and the computer is a very good AI that can actually play the role of the lying man, then you should get closer and closer to that actual 70% measurement.
[00:28:09] Dr. Richard Wallace: So that’s actually a better scientific experiment than the Turing test.
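The measurement Dr. Wallace proposes can be sketched numerically. The trial outcomes below are invented for illustration; the point is only that the machine's score should converge on the human baseline (assumed 70% here), rather than on the 100% a crude bot produces when the woman is obviously the only human.

```python
# Sketch of the modified imitation game as a measurable experiment. Trial
# outcomes are invented: with a human lying man, the judge identifies the
# woman correctly some baseline fraction of the time; with a crude bot in
# the liar's role, the woman is obvious every time.
def correct_rate(trials: list) -> float:
    """Fraction of trials in which the judge identified the woman correctly."""
    return sum(trials) / len(trials)

human_liar_trials = [True] * 70 + [False] * 30   # assumed 70% human baseline
machine_liar_trials = [True] * 100               # crude chatbot: woman obvious, 100%

baseline = correct_rate(human_liar_trials)       # 0.7
machine = correct_rate(machine_liar_trials)      # 1.0
print(round(machine - baseline, 2))              # the gap a good AI should close
```

Unlike the pass/fail standard Turing test, this yields a continuous score to converge toward, which is what makes it a better-posed experiment.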
[00:28:13] Preston Pysh: Very interesting.
[00:28:15] Dr. Richard Wallace: Yeah. Yeah. The Loebner contest was really based on the original standard Turing test.
[00:28:21] Preston Pysh: Turing test. Okay. Yeah.
[00:28:23] Dr. Richard Wallace: And you know, the rules change from year to year depending on, you know, who is hosting the contest. Loebner’s Rule was basically if 50% of the judges, usually there were four judges, so two out of four judges misidentified the robot as a person.
[00:28:40] Dr. Richard Wallace: Then he would award the silver medal for passing the Turing test.
[00:28:44] Preston Pysh: That’s so cool.
[00:28:45] Dr. Richard Wallace: Okay. It was never awarded, by the way.
[00:28:47] Preston Pysh: It was never awarded. Interesting.
[00:28:49] Dr. Richard Wallace: Yeah.
[00:28:50] Preston Pysh: If you could get in a time machine right now and go back to your days, call it 2000, when you had done this, what would be the thing that you would whisper to yourself as a hint as to how to improve the chatbot that you had back then?
[00:29:05] Dr. Richard Wallace: I would probably tell myself, don’t even do this, because you know how hard it is, and there was no money to be made from chatbots until, you know, very recently.
[00:29:20] Preston Pysh: You were very early. Yeah.
[00:29:22] Dr. Richard Wallace: The Loebner contest was always the domain of, you know, hobbyists and amateur programmers. There were a few, you know, academic entries, but no, no big companies ever got involved in it.
[00:29:33] Dr. Richard Wallace: And then in the 2000s, I organized a number of chatbot conferences, you know, international chatbot conferences, and we had a hard time getting 25 people to attend.
[00:29:45] Preston Pysh: Oh, really? Okay.
[00:29:46] Dr. Richard Wallace: Yeah. So, you know, after many years of really struggling with this and trying to figure out how to make a living with chatbots, I co-founded a company called Pandorabots, which is, you know, based on attempting to commercialize the AIML bots.
[00:30:04] Dr. Richard Wallace: But, you know, after a while, in the early teens, I should say, I just decided to get out of the field completely and I went to work in healthcare.
[00:30:15] Preston Pysh: Yeah.
[00:30:15] Dr. Richard Wallace: But then in the past five or six years, I’ve gotten back into AI as it’s become more, you know, lucrative, I should say.
[00:30:23] Preston Pysh: It seems like in 2017, Google came out with this paper. It was called Attention Is All You Need. And this seemed to be a very seminal breakthrough in how to, for all intents and purposes, do what you were doing in a very manual way, and let machines do it way faster and with way more horsepower and more data. Right. I’m curious, when this paper came out, did you read it when it first came out, and were you kind of aware of this, or did it kind of hop on your radar a couple years after, when we started seeing the breakthroughs?
[00:30:58] Dr. Richard Wallace: I was really not paying attention to it at the time. Like I said, I was working in healthcare. I don’t think the LLM industry really came to my attention until, you know, we started hearing about GPT.
[00:31:12] Preston Pysh: Do you think that paper was a really important seminal piece of work for people to kind of understand how to start doing this in a mechanical machine kind of way?
[00:31:22] Dr. Richard Wallace: Yeah. Obviously that was a breakthrough.
[00:31:25] Preston Pysh: Wow. And so in your own words, what would you say? I mean, we know attention is a big piece of it, but I think for somebody that just kind of hears that label, it’s like, okay, well what does that mean? If you were going to try to explain to somebody in a very simple way, like what is that paper saying that has enabled, you know, machine learning to do what it does?
[00:31:48] Dr. Richard Wallace: Well, in a way, I’m reminded of the work we talked about earlier, which was a robot eye in the early nineties, because that was also an attention-based mechanism.
[00:31:58] Dr. Richard Wallace: So I described how, in order to make use of that, you know, log-map arrangement of pixels, where there’s high resolution towards the center, you have to be able to point the camera so that the high resolution can be aimed at something interesting. Well, how do you know what’s interesting? It’s if you see something in the periphery, for example, movement. Yeah. And you want to move your eye towards the thing that you’re seeing in the periphery and place the attention on that.
[00:32:26] Dr. Richard Wallace: So attention has to do with focusing your highest-resolution sensory capability on whatever seems most interesting in a scene. I think there’s an analog for that in the LLM version of attention as well. You know, they’re sort of swinging the direction of where the gaze of the robot is looking, depending on what they see in the periphery.
[00:32:52] Preston Pysh: Okay, so this is super, I love this example, because it’s very physical and you can kind of make sense of it very simply, because it’s dealing with vision. And so when you are changing your attention, and you’re able to zoom in because you have the capacity to zoom in on something, how are you filtering or knowing what’s novel in that broader sight picture in order to know to adjust the focus to that thing?
[00:33:18] Preston Pysh: What gives us that capacity to know, oh, well, I’m looking at you and now I’m focusing on the tree back behind you, and I’m zooming in on that and I’m putting my attention there. What would be that insight in order to say, oh, that’s different. That’s something I need to dial in on or pay more attention to?
[00:33:37] Dr. Richard Wallace: Yeah. A long time ago, a guy called Hans, who’s very interesting, we should talk about him some more, came up with an attention mechanism called an interest operator. And this is for computer vision again.
[00:33:50] Dr. Richard Wallace: It’s basically that things in your visual field that have high variance, you know, a high ratio of dark to light, are more interesting than other things.
[00:34:00] Dr. Richard Wallace: So that would typically be edges, like the edges of the tree you just described, or corners of things, or just any sort of bright spot against a dark background, or vice versa. And then recognizing those in the periphery of your visual field would cause you to move the center of your visual field towards whatever the interest operator is highlighting.
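An interest operator of the kind Wallace describes can be sketched in a few lines: score each image location by the variance of its local neighborhood, then aim attention at the highest-scoring point. The window size and the toy scene below are my own illustrative choices, not code from the actual robot eye.

```python
def local_variance(image, r, c, size=3):
    """Variance of pixel intensities in a size x size window
    centered at (r, c)."""
    vals = [image[i][j]
            for i in range(r - size // 2, r + size // 2 + 1)
            for j in range(c - size // 2, c + size // 2 + 1)]
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def most_interesting_point(image, size=3):
    """Return the (row, col) whose neighborhood has the highest
    variance: edges, corners, and bright-on-dark spots score high,
    while uniform regions score near zero."""
    h, w = len(image), len(image[0])
    best, best_score = None, -1.0
    for r in range(size // 2, h - size // 2):
        for c in range(size // 2, w - size // 2):
            score = local_variance(image, r, c, size)
            if score > best_score:
                best, best_score = (r, c), score
    return best

# A mostly uniform scene with one bright spot at (5, 6): attention
# should land on or immediately next to the spot.
scene = [[0.0] * 8 for _ in range(8)]
scene[5][6] = 1.0
print(most_interesting_point(scene))
```

In a foveated camera, the returned point is where the high-resolution center of the sensor would be steered next.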
[00:34:23] Preston Pysh: Fascinating. Okay. Here’s an odd question for you. Do you think your, real subject of study ended up being humans rather than machines?
[00:34:32] Dr. Richard Wallace: Oh, well, you know, I’m a computer programmer, so I was always more interested in the machine side of it. I think I did learn a lot about human conversation from monitoring those conversation logs.
[00:34:46] Preston Pysh: The reason I asked this question is, you know, in kind of research and preparation for the interview, it seemed to me that you have this opinion. I suspect, and correct me if I’m saying any of this wrong, but it seems like you were not convinced that any of these chatbots were actually saying anything intelligent.
[00:35:05] Preston Pysh: It was just this canned response that was coming back, and then the reaction that humans had was like, wow, this thing is real and there’s something behind it. And so I guess that’s the impetus for the question, because I suspect you were fascinated at the response of people and how duped, I guess, they were by interacting with some of these chatbots.
[00:35:30] Preston Pysh: So I guess that’s more of the impetus for the question. And would you agree with everything that I just said?
[00:35:36] Dr. Richard Wallace: I used to categorize the users, or the clients as I call them, into three categories: A, B, and C. Okay. So the A clients are abusive, okay? So they’re going to say, how can I put this, you know, very inappropriate things to the chatbot, and you see those in the conversation logs.
[00:35:56] Dr. Richard Wallace: Although you always have to wonder, if someone is saying, you know, I hate you, or I love you even, is that what they really have in mind, or are they just, you know, trying to get a response out of the robot and testing the limits? Yeah, testing the limits. Exactly. Yeah. And then the next category, B, are just average users.
[00:36:16] Dr. Richard Wallace: So those category B people were the ones who could suspend their disbelief, and they would be very engaged with the bot and have, you know, very long conversations, come back and continue their conversations, and so on. And so that would be the group that, you know, as you’re saying, would be kind of reading more into the bot than was actually there, because they’re engaged with it on an emotional level. And then the last category I call the critics. These are people who know something about computer programming and AI, and they just think this thing is terrible, and, you know, they walk away after a few interactions.
[00:36:53] Preston Pysh: Yeah. Well, I’m curious to hear your thoughts on where we’re at now and where you see some of this going next.
[00:37:01] Preston Pysh: You know, you have some really smart people in this space that have, you know, demonstrated their knowledge through the things that they’ve built, and I think, you know, if we back up the tape three years ago, many of them were very skeptical as to whether AGI could ever be possible. Today, I have a hard time knowing if this is them trying to get more capital, or if they actually believe that we’re on the cusp of AGI.
[00:37:28] Preston Pysh: I don’t know which one of those two it is, but I’m just curious to hear your general thoughts on where you see us today, and what the next five years might bring, whether it’ll be as exciting a next five years as we’ve seen in the past five years. Kind of just give us your one-over-the-world on it.
[00:37:45] Dr. Richard Wallace: Well, I definitely think it’ll be exciting.
[00:37:47] Dr. Richard Wallace: You know, the term AGI seems a little strange to me, because it’s what we’ve always called AI.
[00:37:53] Dr. Richard Wallace: AI has always been a goal that’s just out of reach and, you know, we have an imagination of what it is based on seeing science fiction movies and that sort of thing. You know, how R2D2 and all those examples give us a template for what we’d like to see in an AI.
[00:38:09] Dr. Richard Wallace: And so it seems kind of odd that they’ve come up with a new term AGI to kind of move the goalpost even further, but I’m very skeptical about that. You know, a very simple answer to this question, which a lot of people I know would not agree with, is that God gave human beings a soul. But machines don’t get a soul.
[00:38:29] Dr. Richard Wallace: So, you know, in the sense that human beings have freedom of thought and self-reflection and creativity, I don’t think those things will be reproduced in a computer anytime soon.
[00:38:42] Preston Pysh: Yeah. And I think I’m with you a hundred percent on what you just said. And I know there’s a lot of people that want to argue these ideas, and we’re not here to do that.
[00:38:52] Preston Pysh: I’m with you a hundred percent. I think that there’s something very special and unique about just any living being not just humans. I think any living being has this special connection from, you know, a higher source. And I don’t think that we’re necessarily going to see, you know, these humanoid robots have whatever that is, and I have no idea how to define that, but I do think that some of these humanoid robots, call it five or 10 years from now, are going to do things.
[00:39:17] Preston Pysh: And it goes back to some of your earlier comments about these chat bots and how people were just like, oh my God, I feel like I’m talking to a real person. This feels real. And I think that some of these humanoid robots are going to feel like real humans to a lot of people, but that doesn’t mean that it’s the same thing as us.
[00:39:36] Preston Pysh: I think we are something very hard to define, very different. But, oh my goodness, Richard, I really enjoyed this conversation. Anything else that you think is super important on this particular topic that you see right now, or kind of going into the future, that you think is worthy of highlighting or that the audience should know?
[00:39:55] Dr. Richard Wallace: Yeah. Well, the company I work for right now, Franz, is actually a very old AI company, founded in 1985. Franz started out as a company selling Lisp compilers, but then, you know, by the end of the 1990s, very few people were paying money for software, you know, because there’s so much free language software available.
[00:40:19] Dr. Richard Wallace: So they pivoted to graph database technology. And, you know, without getting into too much detail about what that is: now that we have the LLMs, we are taking an approach called neuro-symbolic computation. So earlier, in the history of AI, we were talking about supervised versus unsupervised learning.
[00:40:40] Dr. Richard Wallace: Another dichotomy in AI is between symbolic and neural approaches. So symbolic approaches are things like, you know, theorem-proving programs, or the early chatbots that we were talking about, based on rules, where basically you’re manipulating symbols. Or you can also think of a chess-playing program, you know, which is very mechanical, manipulating symbols and searching through the space of moves.
[00:41:07] Dr. Richard Wallace: And so the symbolic approach is in contrast to this neural learning approach. And now we’re basically trying to find the best of both worlds.
[00:41:18] Dr. Richard Wallace: So one example of that is in the medical field. You can make predictions about, well, someone’s mortality, or how likely they are to be readmitted to the hospital after being discharged.
[00:41:32] Dr. Richard Wallace: You know, within 30 days, how likely are they to be readmitted, or how likely are they to have a stroke, and various other things. But the medical field has developed these symbolic techniques for making those predictions. And so in the case of stroke from AFib, there’s a test called CHA2DS2-VASc. And it basically takes into account criteria like, you know, your age and gender, whether you’ve had congestive heart failure, a history of hypertension, and various other factors like that.
[00:42:05] Dr. Richard Wallace: And when you plug in those values, it produces a number, which can then be used to, you know, estimate the likelihood of you having a stroke.
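The test Wallace refers to here (transcribed as “Chad Vask”) is the CHA2DS2-VASc score, and because it is purely rule-based it is easy to sketch as code. The criteria and point weights below follow the commonly published version of the score; this is an illustration of a symbolic predictor, not clinical software.

```python
def cha2ds2_vasc(age, female, chf, hypertension, diabetes,
                 prior_stroke_tia, vascular_disease):
    """CHA2DS2-VASc score: a rule-based (symbolic) stroke-risk
    estimate for patients with atrial fibrillation."""
    score = 0
    if chf:              score += 1  # Congestive heart failure
    if hypertension:     score += 1  # Hypertension history
    if age >= 75:        score += 2  # Age >= 75
    elif age >= 65:      score += 1  # Age 65-74
    if diabetes:         score += 1  # Diabetes mellitus
    if prior_stroke_tia: score += 2  # Prior stroke or TIA
    if vascular_disease: score += 1  # Vascular disease
    if female:           score += 1  # Sex category (female)
    return score  # 0..9; higher means greater annual stroke risk

# Example: a 78-year-old woman with hypertension scores
# 2 (age) + 1 (hypertension) + 1 (female) = 4.
print(cha2ds2_vasc(age=78, female=True, chf=False, hypertension=True,
                   diabetes=False, prior_stroke_tia=False,
                   vascular_disease=False))
```

Plugging in a patient’s values produces the single number Wallace mentions, which clinicians map to an estimated stroke likelihood.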
[00:42:14] Dr. Richard Wallace: And now you could also do that with a neural network, a recurrent neural network, where you basically train it by feeding in the, you know, the patient data, the diagnostic data, and their medical history, and then just look at whether they had a stroke or not.
[00:42:30] Dr. Richard Wallace: So you can train this recurrent neural network to take new patient data and, you know, give some prediction about whether they’re going to have a stroke. Then the third way of doing that is to use an LLM, and you can just simply upload the entire patient chart to the LLM and say, how likely is this person to have a stroke?
[00:42:50] Dr. Richard Wallace: So what we’ve been doing is sort of combining those three approaches together. You know, we’ve got the symbolic estimate, we’ve got the neural estimate, and we’ve got the LLM estimate. You know, you could potentially display all three of those and then it’s up to the clinician to make a judgment. Or you could even put them all back into a different LLM and ask the LLM, which one of these measurements is best, which one of these predictions is best?
[00:43:15] Dr. Richard Wallace: It’s an effort to combine the best of the symbolic approaches with these newer neural approaches.
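A minimal sketch of that combination step might look like the following: three risk estimates shown side by side, with a simple weighted average as one possible aggregate. The function name, the weights, and the example probabilities are all illustrative assumptions, not Franz’s actual pipeline.

```python
def combine_estimates(symbolic, neural, llm, weights=(1.0, 1.0, 1.0)):
    """Collect three stroke-risk estimates (symbolic rule, neural
    model, LLM) and return them alongside a simple weighted average.
    In practice a clinician, or even another LLM, could judge
    between the three instead of averaging them."""
    estimates = {"symbolic": symbolic, "neural": neural, "llm": llm}
    total_w = sum(weights)
    blended = sum(w * v for w, v in zip(weights, estimates.values())) / total_w
    return estimates, blended

# Illustrative probabilities from the three approaches.
estimates, blended = combine_estimates(symbolic=0.22, neural=0.31, llm=0.27)
for name, p in estimates.items():
    print(f"{name:>8}: {p:.0%}")
print(f" blended: {blended:.0%}")
```

Displaying all three values, as Wallace suggests, keeps the final judgment with the clinician; the blended number is just one way to summarize them.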
[00:43:22] Preston Pysh: Wow. Say the name of the company one more time. I want to make sure I have the name of it in the show notes for people if they want to check it out.
[00:43:28] Preston Pysh: Franz,
[00:43:29] Dr. Richard Wallace: F-R-A-N-Z.
[00:43:30] Preston Pysh: Alright, well I am just so thrilled to be able to talk to somebody who’s been in this space for decades.
[00:43:37] Preston Pysh: It’s miraculous to see what’s happening, and I can only imagine where we’re going to be in five years from now. But Dr. Richard Wallace, thank you so much for making time and coming on the show and imparting all of this knowledge that you have. We really appreciate it.
[00:43:50] Dr. Richard Wallace: Well, I’m glad people want to talk to me about it after a long time of people not being very interested.
[00:43:56] Preston Pysh: Well, there’s a lot of people interested now. Let me tell you, sir, but thank you again for making time and coming on the show.
[00:44:03] Dr. Richard Wallace: Okay, my pleasure. It was great talking with you as well.
[00:44:05] Outro: Thanks for listening to TIP. Follow Infinite Tech on your favorite podcast app, and visit theinvestorspodcast.com for show notes and educational resources.
[00:44:15] Outro: This podcast is for informational and entertainment purposes only, and does not provide financial, investment, tax, or legal advice. The content is impersonal and does not consider your objectives, financial situation, or needs. Investing involves risk, including possible loss of principal. Past performance is not a guarantee of future results. Listeners should do their own research and consult a qualified professional before making any financial decisions. Nothing on this show is a recommendation or solicitation to buy or sell any security or other financial products. Guests and The Investor’s Podcast Network may hold positions in securities discussed and may change those positions at any time without notice.
HELP US OUT!
Help us reach new listeners by leaving us a rating and review on Spotify! It takes less than 30 seconds, and really helps our show grow, which allows us to bring on even better guests for you all! Thank you – we really appreciate it!
BOOKS AND RESOURCES
- The platform behind ALICE: Pandorabots.com.
- Website: Franz.
- Related books mentioned in the podcast.
- Ad-free episodes on our Premium Feed.
Some of the links on this page are affiliate links or relate to partners who support our show. If you choose to sign up or make a purchase through them, we may receive compensation at no additional cost to you.
NEW TO THE SHOW?
- Join the exclusive TIP Mastermind Community to engage in meaningful stock investing discussions with Stig, Clay, Kyle, and the other community members.
- Follow our official social media accounts: X (Twitter) | LinkedIn | Instagram | Facebook | TikTok.
- Check out our Bitcoin Fundamentals Starter Packs.
- Browse through all our episodes (complete with transcripts) here.
- Try our tool for picking stock winners and managing our portfolios: TIP Finance Tool.
- Enjoy exclusive perks from our favorite Apps and Services.
- Get smarter about valuing businesses in just a few minutes each week through our newsletter, The Intrinsic Value.
- Learn how to better start, manage, and grow your business with the best business podcasts.
SPONSORS
- Simple Mining
- Linkedin Talent Solutions
- Alexa+
- HardBlock
- Unchained
- Amazon Ads
- Vanta
- Abundant Mines
- Horizon
- Public.com*
*Paid endorsement. Brokerage services provided by Open to the Public Investing Inc, member FINRA & SIPC. Investing involves risk. Not investment advice. Generated Assets is an interactive analysis tool by Public Advisors. Output is for informational purposes only and is not an investment recommendation or advice. See disclosures at public.com/disclosures/ga. Past performance does not guarantee future results, and investment values may rise or fall. See terms of match program at https://public.com/disclosures/matchprogram. Matched funds must remain in your account for at least 5 years. Match rate and other terms are subject to change at any time.
References to any third-party products, services, or advertisers do not constitute endorsements, and The Investor’s Podcast Network is not responsible for any claims made by them.