The Decoding AI™ Podcast
Episode 1: Professor Mike Wooldridge on Multi-agent Systems
Show Notes and Transcript

This episode is sponsored by Teradata.

Teradata is the connected multi-cloud data platform company for enterprise analytics. Our enterprise analytics solve business challenges from start to scale. Only Teradata gives you the flexibility to handle the massive and mixed data workloads of the future, today.

The Teradata Vantage architecture is cloud native, delivered as-a-service, and built on an open ecosystem. These design features make Vantage the ideal platform to optimize price performance in a multi-cloud environment. Learn more at Teradata.com.

Episode description

Michael Wooldridge is a Professor of Computer Science at the University of Oxford and Programme Co-Director for Artificial Intelligence at the Alan Turing Institute in London. He has been an AI researcher for more than 30 years, and has published more than 400 scientific articles on the subject.

Professor Wooldridge’s main research interest is in the field of multi-agent systems, and his current focus is on developing techniques for understanding the dynamics of multi-agent systems. His research draws on ideas from game theory, logic, computational complexity, and agent-based modelling, with applications in, amongst others, business management, decision making under uncertainty, and financial services. Professor Wooldridge is a Fellow of the Association for Computing Machinery (ACM), a Fellow of the Association for the Advancement of AI (AAAI), and a Fellow of the European Association for AI (EurAI). He has served as President of the European Association for AI, as President of the International Joint Conference on AI (IJCAI) from 2015 to 2017, and as President of the International Foundation for Autonomous Agents and Multi-Agent Systems (IFAAMAS).

In 2006, he received the ACM SIGART Autonomous Agents Research Award, given annually to an individual whose research has been particularly influential in the field of autonomous agents and multi-agent systems. He has been the recipient of the AAMAS "best paper" award three times (in 2008, 2009, and 2011).

Key points discussed in this episode

●  Multi-agent interaction and safety

●  Understand how to engineer AI systems with social skills: the ability to autonomously cooperate, coordinate, and negotiate with each other.

●  Understand how to engineer multi-agent systems with safe, predictable dynamics.

●  Enabling autonomous agents to reason about the adequacy of the models used to predict the actions of other agents; a crucial limitation of current autonomous agents is that they lack the ability to reason about the adequacy of models.

●  Applications in decision making under uncertainty, financial services, and collusion

●  Empathy in agents: do we need it?

●  Sentience: is there such a thing in AI systems?

●  Reliance on AI: we get better by engaging with AI systems, but we also seem to erode some cognitive abilities. Is there a link between Alzheimer's and reliance on GPS? How do we strike the balance, or is there a balance?

●  Mapping an intercultural path to privacy, agency, and trust in human-AI ecosystems: expanding and enriching current AI ethics discussions by including international cultural views

●  Ethics of AI systems

●  AI Regulations

Nuggets of Wisdom

●  If you've also got a Siri, why doesn't my Siri talk to your Siri? And they just arrange it between them. That is multi-agent systems.

●  Human emotions are human. Let's just keep machines out of that altogether. That this is just a dangerous road down which to travel. So just let's not go there.

●  A computer is a glorified toaster, right? I mean, it's in the same category, it's got a bit more circuitry, quite a lot more circuitry, lots more transistors. Toasters these days actually do have computers in them.

 

Ideas discussed

On Multi-Agents

●  The idea that we had was to change the way that you interact with software, so that instead of just being the dumb recipient of your instructions, the software actually cooperates with you, it works with you in the same way that, for example, a human assistant would.

●  AI systems are not just isolated things. They are systems which are connected to one another. They can interact with one another and crucially they use skills like social skills, like the ability to cooperate and coordinate and negotiate, the kind of social skills that we all use as human beings in the human world every day. It's about having software that has those skills.

On Teaching Multi-Agents Social Skills

●  They are acting on behalf of their owner's goals, on behalf of their owner's preferences. Then it's not unreasonable to assume that they are also going to do their best.

●  They're gonna behave rationally in the economic sense of rationality. And then the mathematics that we use for analyzing that is the very natural mathematics that we use in game theory. And game theory is all about understanding interactions between rational agents, that's what game theory is.

On Rationality and Irrationality in Agents

●  The agents that we deal with are not companies. They're not governments, they're not human beings, they're pieces of software, but nevertheless, they're pieces of software that are designed to be perfectly rational. The question is what happens with irrationality?

●  These computer programs are programmed to be rational and to do the best that they possibly can with the computational resources and the information resources that they have; and that's the basic working assumption. There's a much smaller body of work that's gone on to look at irrationality in multi-agent systems and how you deal with it. There's a slightly separate question of how can you prove that these agents are rational, can you show me that these agents are really going to be rational, that they're going to do the best that they possibly can for themselves.

 

On Collusion

● We think about collusion in markets where you are trying to avoid collusion between companies, for example, price fixing. In the model that we use, is there something going on which could not be explained, assuming these individuals, these companies, or whatever, were acting independently, is there some behavior which can only be rationally explained if they've cooked up some deal behind closed doors? We're at the very beginning stages of trying to understand that. It's a very, very natural area to look at and a very, very practically important area. And I think you will see a lot more work on that in the next few years.

On Game Theory

● As AI researchers, the waters are muddied because in economics and game theory, the typical assumption is that you have unlimited computing resources available at your disposal and in reality, of course, you don't. And so we have to try to adjust these solution concepts for limited computing resources and so on.

On Empathy and Sympathy

●  So sympathy is when I look at, you know, I look at somebody who's in a difficult situation, who's maybe lost their job, and I can see that, you know, this is a bad thing for them and, you know, all other things being equal, I wish that they still had their job, you know, that they weren't struggling financially, and so on. Empathy is where you actually feel the pain. Empathy is where you look at them and actually it's not just that you recognize that somebody's suffering, but actually you suffer because they are suffering.

●  And there is a field of game theory called empathetic game theory, and empathetic game theory is based on the assumption that human beings really do feel empathy. And we do. And what happens, it seems, is that there are parts of our brains that look at other people and basically model what's going on there. There is only a small amount of work in game theory on empathetic game theory.

●  Our brains are modeling, you know, what this situation is and we are feeling their pain. We are modeling their pain and we are feeling it locally. Now if we do have the possibility for empathy, then situations like the prisoner's dilemma might not be so dangerous because one of the difficulties in the prisoner's dilemma is that I can inflict some pain upon you, but if I'm gonna feel empathy, I don't necessarily want to do that anymore because I'm gonna feel that pain as well.

●  There is a lot of work on emotion modeling in AI, but it's more at the level of: suppose you've got a chat bot which is telling somebody they haven't got a bank loan. It's about recognizing when somebody's getting angry or frustrated. It's just at the level of recognizing emotions. I should say, by the way, it's controversial. There are a number of well respected colleagues who just ask “why should we go there?” Human emotions are human. Let's just keep machines out of that altogether. That this is just a dangerous road down which to travel. So just let's not go there.

On Consciousness, Sentience and Omelettes (yes, omelettes!)

●  We can distinguish some features of consciousness, and one feature of consciousness, I think, is the idea of subjective experience: being able to experience something from your own perspective.

●  Computers are a different category, and in the category of computer experience, I think the world that a computer inhabits is a very different one; but for us to get to computers that have experience, I think there would have to be a world. There was a Google engineer who landed in a spot of bother for saying that they believed that this program was sentient. And if you look at the text, it looks like the kind of text that a sentient being might produce. And that's obviously where he's coming from.

●  But there is no world experience behind this, this is what this program has done. It's just been trained by showing it vast amounts of text, huge, unimaginable amounts of texts. And from that, it's learned to plausibly generate text just by basically looking at what it's been shown previously and trying to generate something which looks similar.

●  There is no experience of any world behind the scenes there whatsoever. It's my standard example, and my colleagues get bored of me coming out with this example, to do with omelets. So the LaMDA program that I mentioned earlier has probably seen every omelet recipe that's available on the worldwide web.

●  And you could probably have a very nice conversation with it about omelets. But has it ever experienced an omelet? Does it know anything about an omelet? The answer is no, of course it doesn't, because it's a piece of software and all that software has seen is lots of words, the words that we use to describe omelets. But for you and me, when I say the word omelet, that word is not just a symbol in some data set, it actually refers to all the experiences that we've had related to omelets.

●  You know, the good omelet that you had in Paris in 1997 and that bad one that you cooked yourself for breakfast last week, that's what omelet refers to. It is a symbol that means something in relation to all of those experiences. So a machine that's never experienced anything in the world, I struggle to believe could have a meaningful consciousness.

 

On AI Regulations

● If you murder somebody, does it really matter whether you used a knife or a gun? The point is that you murdered somebody, you know, the technology that you used to kill them is irrelevant. And I feel the same about AI regulation. What's crucial is what's done with it rather than the technology that's used to do it.

Contact Professor Wooldridge 

University of Oxford
The Alan Turing Institute

Links to books and materials

Professor Mike Wooldridge's books

●  Book: An Introduction to MultiAgent Systems (Second Edition)

●  Book: Principles of Automated Negotiation

●  Book: Computational Aspects of Cooperative Game Theory

●  Book: Reasoning about Rational Agents

●  Book: The Road to Conscious Machines. (Pelican, 2020)

    

Other relevant sources 

●  OECD held a roundtable on "Algorithms and Collusion" as a part of the wider work stream on competition in the digital economy, in order to discuss some of the challenges raised by algorithms. Link

●  Autonomous algorithmic collusion: economic research and policy implications Link

●  From an antitrust perspective, the concern is that these autonomous pricing algorithms may independently discover that if they are to make the highest possible profit they should avoid price wars. That is, they may learn to collude, even if they have not been specifically instructed to do so, and even if they do not communicate with one another. This is a problem. Link

Episode's Transcript

Clara Durodie: Welcome to The Decoding AI Podcast. Today, I have the privilege of having a very special guest, Professor Michael Wooldridge. Michael is a Professor of Computer Science at the University of Oxford and the Programme Co-Director for Artificial Intelligence at the Alan Turing Institute in London.

Michael has been an AI researcher for more than 30 years and has published more than 400 scientific articles on the subject. And he's a Fellow of the most revered and respected associations in artificial intelligence. He has served as President of the European Association for AI, of the International Joint Conference on AI, and of the International Foundation for Autonomous Agents and Multi-Agent Systems. I would like to welcome Michael and say how happy I am, how privileged I feel, to have him on my show.

 

Welcome Michael!

[00:01:15] Mike Wooldridge: Hi Clara. Thank you very much for that lovely introduction, but honestly, the pleasure is all mine.

 

[00:01:22] Clara Durodie: Today I would like to start by saying that Michael's research work in multi-agent systems is fantastic. For someone coming from a business background like myself, trying to understand the tools that technology offers us in financial services to build a sustainable business and sustainable profitability, I find Michael's work very interesting, with a number of applications in our field. But today I'd like to focus on something which I think is of consequence in our industry when used to its full potential. And that is the field of multi-agent systems.

So, Michael, would you be able to give us some background on how these systems work, or how they can work, and what kind of applications you see as being relevant to our industry, and what kind of disciplines? Where do you draw knowledge and experience from when you build this ecosystem of multi-agents?

[00:02:41] Mike Wooldridge: Sure. Well, there are two bits to the idea of multi-agent systems. The first bit of the idea goes back to the late 1980s. About the time that I was becoming a researcher and finishing my undergraduate studies, people were thinking about the way we interact with software.

And it's very much the case that for most software, the way that we interact with it is the way that we have done since the mid-1980s. If you open Microsoft Word or PowerPoint or a web browser like Chrome, everything that happens in that browser happens because you make it happen.

You click on a button or you select something from a menu, and there is one agent in that interaction. And that agent is you. And the software is just the dumb recipient of your instructions. It's just doing exactly what you tell it to do and doing nothing more than that. And that doesn't mean you can't be productive.

It doesn't mean the software isn't useful, but it's a very one-sided relationship. You are just telling the software exactly what to do. And so the idea that we had was to change the way that you interact with software, so that instead of just being the dumb recipient of your instructions, the software actually cooperates with you, it works with you in the same way that for example, a human assistant would.

So the analogy is, you know, imagine you are reading your email. Well, imagine that your email browser, instead of just showing you the emails that have arrived today, does what a good personal assistant would do, it prioritizes them, it points out the ones that are likely to be junk and not very high priority.

And so allows you to focus and guides you to getting answers to the emails that urgently need answers and to quickly get rid of those that are junk or that just don't need your attention at all. But the point is that the software is not just doing what you tell it to do, it's actually working with you, it's cooperating with you.

And so the phrase agent or software agent was coined to refer to software, which was not just the dumb recipient of instructions, but was actually actively working with you. So that's the first part of the agent's story. And what that turned into is basically Siri and Alexa and Cortana, right?

Those systems are a direct manifestation of that dream that we had at the end of the 1980s. But that's only part of the dream that we had. The other part of the dream that we had was as follows. If you have Siri and you ask Siri to do something for you, like, for example, suppose I need to book a meeting with you.

I could phone you. That would be the old fashioned way to do it. Or maybe my Siri could phone you. I could say, Siri, arrange a meeting with Clara, and Siri could actually phone you and talk to you, but that doesn't really make any sense. If you've also got a Siri, why doesn't my Siri talk to your Siri?

And they just arrange it between them. That is multi-agent systems. That's where my agent is interacting with your agent and they're cooperating. They're working together on our behalf and you can build from the idea from there and really the dream of multi-agent systems is the dream that AI systems are not just isolated things.

They are systems which are connected to one another. They can interact with one another and crucially they use skills like social skills, like the ability to cooperate and coordinate and negotiate the kind of social skills that we all use as human beings in the human world every day. It's about having software that has those skills.

So I say there's two parts to the dream of multi-agent systems. One is having software, which is not just the dumb recipient of our instructions, but is actively working for us. And the second part is that those agents, those software agents are working with each other on our behalf and to do that, they need social skills.
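To make that two-part picture a little more concrete, here is a minimal sketch in Python of the "my agent talks to your agent" idea: two hypothetical calendar agents negotiate a meeting slot on their owners' behalf using a toy alternating-offers protocol. The class names, slots, and protocol are illustrative assumptions, not any real Siri or Alexa interface.

```python
# A toy "my agent talks to your agent" negotiation: each calendar agent acts on
# its owner's behalf and concedes gradually until a mutually acceptable slot is found.

class CalendarAgent:
    def __init__(self, owner, free_slots):
        self.owner = owner
        self.free_slots = sorted(free_slots)  # hours the owner can meet, earliest (assumed preferred) first
        self._next_offer = 0                  # index of the next slot to propose

    def propose(self):
        """Concede gradually: offer the next-most-preferred slot not yet proposed."""
        if self._next_offer < len(self.free_slots):
            slot = self.free_slots[self._next_offer]
            self._next_offer += 1
            return slot
        return None                           # nothing left to offer

    def acceptable(self, slot):
        """Accept any slot the owner is actually free for."""
        return slot in self.free_slots


def negotiate(agent_a, agent_b, max_rounds=10):
    """Alternating offers: one agent proposes, the other accepts or takes over proposing."""
    proposer, responder = agent_a, agent_b
    for _ in range(max_rounds):
        offer = proposer.propose()
        if offer is None:
            return None                       # proposer has run out of options
        if responder.acceptable(offer):
            return offer                      # agreement reached
        proposer, responder = responder, proposer
    return None


mine = CalendarAgent("Mike", free_slots=[9, 11, 14, 16])
yours = CalendarAgent("Clara", free_slots=[10, 14, 15])
print(negotiate(mine, yours))  # 14 -- a slot both owners can make
```

The interesting part is not the code but the shape of the interaction: neither owner is in the loop, and the protocol needs nothing more than proposals and accept/reject messages, which is the kind of cooperation, coordination, and negotiation the episode describes.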

[00:06:41] Clara Durodie: They need social skills. And actually, in some cases, if we teach those agents to do other things, like negotiate our mortgage arrangements or our financial wealth planning for the rest of our lives, then I think it would be an interesting world. If we step back for a bit and think how it would be for my agent to negotiate the best mortgage deal with the agents of all the banks that offer mortgages, then in many ways you think, my agent will negotiate the best deal for me.

It's very exciting. But in other ways, you think that requires a lot of dreaming and a lot of blue sky thinking from banks to envisage themselves relying on these agents to negotiate on their behalf. But there are a lot of applications and areas where you draw from like game theory, logic, computational complexity.

It's very interesting. When you look at building these multi-agents, the social skills, the rationality in those agents, you have to try to see how they engage with each other and how they establish if one agent is just trying to mislead the other agent as is sometimes the case in our human world.

How do we build agents which are sensitive to identifying, perhaps, a lack of rationality in other agents and can decide for themselves that actually they shouldn't rely on other agents' input?

[00:08:36] Mike Wooldridge: Well, you've immediately gone to one of the most difficult questions that we have.

 

[00:08:41] Clara Durodie: It's a difficult question in what way?


[00:08:42] Mike Wooldridge: So the standard sort of working model that we have when we think about agents is that my agent is programmed so that if I give it a goal on my behalf, if I give it something to do, it's going to do its best to achieve that goal. They're programmed to do that. And so all of the ingenuity in our research is directed at building programs that can do that very, very effectively.

Now, if these agents are interacting with each other, they are assumed to be doing the same. That is that they are acting on behalf of their owner's goals on behalf of their owner's preferences. Then it's not unreasonable to assume that they are also going to do their best.

They're gonna behave rationally in the economic sense of rationality. And then the mathematics that we use for analyzing that is the very natural mathematics that we use in game theory. And game theory is all about understanding interactions between rational agents, that's what game theory is.

And so the difference is that the agents that we deal with are not companies. They're not governments, they're not human beings, they're pieces of software, but nevertheless, they're pieces of software that are designed to be perfectly rational. The question is what happens with irrationality.

If you've got virus agents, for example, it's much more problematic, and the standard answer that game theory proposes in cases like that is that you need to assume, and plan for, the worst and get yourself the best outcome in the worst-case scenario.

And so that question, that last question, how do you deal with irrationality? That's not where most of the energy is. Most of the energy is on assuming that these computer programs are programmed to be rational and to do the best that they possibly can with the computational resources and the information resources that they have; and that's the basic working assumption. There's a much smaller body of work that's gone on to look at irrationality in multi-agent systems and how you deal with it. There's a slightly separate question of how you can prove that these agents are rational: can you show me that these agents are really going to be rational, that they're going to do the best that they possibly can for themselves?

That's a much bigger body of work, and we've done a lot of work on that. But the way to think about it is that, because these are computer programs, it's the same problem, basically, as checking that computer programs don't have bugs, that they do what you want them to do. And we've got a wide range of techniques now for doing that, for checking that programs operate the way that we want them to.

[00:11:41] Clara Durodie: I looked into algorithmic collusion in markets because I think it's very relevant and topical in our industry. I think this is an area which perhaps needs a lot of work for the industry and regulators to better understand how the systems operate with each other.

How do they actually influence market movements? Because obviously, that has a very material impact on how they operate. It has a material impact on people's money. And I'm not necessarily talking about big banks' trading budgets, but I'm talking about the little person who thinks he or she can trade on their own.

So it's a big conversation in our industry. And I don't know if the Alan Turing Institute is doing any work in this space. That would be interesting for my audience to perhaps reflect on. And if there's any resources we can direct them to in the show notes.

 

[00:12:55] Mike Wooldridge: There's a little bit of emerging work, mostly at the theoretical level, but algorithmic collusion, I think, is very much a hot topic now. So I think you're likely to see an awful lot more of it in the next few years.

I mean, the way that we think about collusion in markets, markets where you are trying to avoid collusion between companies, for example price fixing or something like that, the model that we use is: is there something going on which could not be explained assuming these individuals, these companies, or whatever, were acting independently? Is there some behavior which can only be rationally explained if they've cooked up some deal behind closed doors?

And so that's a working model that we're using, but we're at the very beginning stages of trying to understand that, but I agree it's a very, very natural area to look at and a very, very practically important area. And I think you will see a lot more work on that in the next few years.
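As a back-of-the-envelope illustration of that working model, the sketch below asks whether observed prices could be explained by independent, rational play, or only by some arrangement behind closed doors. The prices are made up, and the independence benchmark is a toy Bertrand-style duopoly; it is not a method from the Turing Institute's research.

```python
# Toy collusion check: compare observed prices against what independent,
# rational (best-responding) sellers would converge to in a simple Bertrand duopoly.

def best_response_price(rival_price, cost=1.0, undercut=0.01):
    """An independent rational seller slightly undercuts its rival, but never prices below cost."""
    return max(cost, rival_price - undercut)

def independent_benchmark(cost=1.0, start=5.0, undercut=0.01):
    """Iterate best responses; independent play drives prices down towards cost."""
    p1 = p2 = start
    while True:
        new_p1 = best_response_price(p2, cost, undercut)
        new_p2 = best_response_price(p1, cost, undercut)
        if new_p1 == p1 and new_p2 == p2:
            return p1                          # converged: the competitive price level
        p1, p2 = new_p1, new_p2

def looks_collusive(observed_prices, cost=1.0, tolerance=0.10):
    """Flag behaviour independent play cannot explain: prices persistently well above the benchmark."""
    benchmark = independent_benchmark(cost)
    return all(p > benchmark * (1 + tolerance) for p in observed_prices)

print(independent_benchmark())                 # ~1.0: independent sellers compete price down to cost
print(looks_collusive([4.0, 4.1, 3.9]))        # True: hard to explain without coordination
print(looks_collusive([1.0, 1.05, 0.99]))      # False: consistent with independent play
```

The real research question, as Wooldridge notes, is much harder: building models of independent rational behaviour rich enough that a deviation from them is genuinely evidence of a deal behind closed doors.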

[00:14:07] Clara Durodie: We’ll certainly do some research. My listeners can find some valuable research on algorithmic collusion in the show notes. I know a number of professionals from the legal profession are particularly interested in this. But from a philosophical point of view, Michael, I'd like to ask the question: how do we, how do you, manage the tension between the rationality of agents and the optimization thereof?

Obviously, in very simple terms, they are designed to reach the optimal route or to optimize for the best outcome. So it reaches a point where one agent's optimization conflicts with the optimization or approach to rational actions of other agents. How, from a purely philosophical point of view, can we manage this tension?

[00:15:16] Mike Wooldridge: So I think that's something that we look to game theory for. So imagine you're the only agent in the world. Then your problem, the basic problem an agent faces, is: what do I do? What should I do next? What's my plan? And if you are the only actor in the world, if you're the only thing that can change your world, then your problem of what to do, of choosing the action which results in the best outcome for you, is just an optimization problem; rationality and optimization coincide. In that case, they are exactly the same thing. It's much more complicated if there are other agents in the world, and the answer about what is the optimal thing to do is much less obvious.

And what game theory provides us with is a whole range of what are called solution concepts, which try to answer that. They are basically answers to the question of what is an optimal choice in a setting where you have lots of different agents interacting, where my choices have effects for you that will impact on your wellbeing, and where your choices will impact on my wellbeing.

None of them are perfect. I mean, there are some which are more widely adopted than others. The famous one that everybody knows about is Nash equilibrium, and Nash equilibrium is kind of the cornerstone concept. And it's a beautiful idea invented by John Nash for his 28-page PhD thesis in 1950.

What Nash equilibrium says is: suppose we're all making choices which affect each other, and we then get to see the choices we've all made. A Nash equilibrium is a collection of choices where nobody regrets their choice, nobody thinks "I could have done better given what everybody else did". Now, if nobody thinks they could have done better, then what you've got is a Nash equilibrium.

And the crucial point about a Nash equilibrium, is that if you have a collection of choices, which don't form a Nash equilibrium, then somebody could have done better. Somebody is regretting their choice. Somebody made a suboptimal choice, basically. So that's the cornerstone, but even with Nash equilibrium, you very quickly run into difficulties.

There are all sorts of problems with even that most basic and apparently uncontentious concept. So for example, one standard problem is sometimes there's more than one Nash equilibrium and in that case, if we are all deciding independently, how do we know where to head? And this is like, you know, when you've got two people working together to lift or move a heavy object, they could move the heavy object as long as they both pull in the same direction.

But if they pull in different directions, they drop the object and it falls apart. So if they both think of themselves as independent actors, how do they both know which direction to move? And there are all sorts of other problems as well connected with Nash equilibrium.

So for example, a fundamental one is that even in that setting, even in what appear to be very, very simple scenarios, Nash equilibria can turn out to be very, very poor compared to outcomes that you might collectively decide upon. So the famous example, which I'm sure many of your readers and listeners will be familiar with, is the prisoner's dilemma. And this is a very famous game. The point in the prisoner's dilemma is that there's a very strong game-theoretic solution, a unique Nash equilibrium, but it turns out that this is actually almost exactly the only choice that you wouldn't make if you were able to collectively choose something, if you were able to get together and choose something.
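To make that concrete, here is a minimal sketch using the standard textbook prisoner's dilemma payoffs (the numbers are the usual illustrative ones, not anything from the episode) that applies the no-regret test described above to every outcome:

```python
# Prisoner's dilemma: find the Nash equilibria by the "nobody regrets their choice" test.

COOPERATE, DEFECT = "C", "D"
MOVES = [COOPERATE, DEFECT]

# Payoffs (higher is better) indexed by (my_move, your_move).
PAYOFF = {
    (COOPERATE, COOPERATE): (3, 3),   # we both stay quiet
    (COOPERATE, DEFECT):    (0, 5),   # I stay quiet, you betray me
    (DEFECT,    COOPERATE): (5, 0),   # I betray you, you stay quiet
    (DEFECT,    DEFECT):    (1, 1),   # we betray each other
}

def is_nash(profile):
    """A profile is a Nash equilibrium if no player can do better by deviating unilaterally."""
    mine, yours = profile
    my_payoff, your_payoff = PAYOFF[profile]
    i_cannot_improve = all(PAYOFF[(m, yours)][0] <= my_payoff for m in MOVES)
    you_cannot_improve = all(PAYOFF[(mine, m)][1] <= your_payoff for m in MOVES)
    return i_cannot_improve and you_cannot_improve

print([p for p in PAYOFF if is_nash(p)])                          # [('D', 'D')]: the unique Nash equilibrium
print(PAYOFF[(DEFECT, DEFECT)], PAYOFF[(COOPERATE, COOPERATE)])   # (1, 1) vs (3, 3)
```

Mutual defection is the only profile that passes the no-regret test, yet it is collectively worse than mutual cooperation, which is exactly the gap between individually rational choices and collectively chosen outcomes that Wooldridge points to.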

So we have a range of answers to what it means to be rational in multi-agent settings, but none of them is perfect, basically. And in our case, as AI researchers, the waters are muddied because in economics and game theory the typical assumption is that you have unlimited computing resources at your disposal, and in reality, of course, you don't. And so we have to try to adjust these solution concepts for limited computing resources and so on. So the short answer is that it's a work in progress.

[00:19:45] Clara Durodie: We talked about social skills and how we build these into multi-agents, and we also talked about rationality. Could we touch a little on empathy? How do we, how do you, think about or look at empathy? Is it a core social skill, or what is it? I think it's so important to be able to engage with these systems as a human, i.e. I feel like I'm understood or properly anticipated in terms of, I don't know, customer needs, etc. I'm thinking of different applications of these systems, primarily in customer centers. So how do we build empathy into these systems? How do you code empathy?

 

[00:20:48] Mike Wooldridge: So I think the way I think about empathy is, I mean, it's not a scientific concept of course, right? The way that I think about it is to distinguish sympathy and empathy. So sympathy is when I look at, you know, I look at somebody who's in a difficult situation, who's maybe lost their job, and I can see that, you know, this is a bad thing for them and, you know, all other things being equal, I wish that they still had their job, you know, that they weren't struggling financially, and so on. Empathy is where you actually feel the pain. Empathy is where you look at them and actually it's not just that you recognize that somebody's suffering, but actually you suffer because they are suffering.

And there is a field of game theory called empathetic game theory and empathetic game theory is based on the assumption that human beings really do feel empathy. And we do. And what happens it seems is that, you know, there are parts of our brains that look at other people and basically model what's going on there.

Our brains are modeling, you know, what this situation is and we are feeling their pain. We are modeling their pain and we are feeling it locally. Now, if we do have the possibility for empathy, then situations like the prisoner's dilemma might not be so dangerous, because one of the difficulties in the prisoner's dilemma is that I can inflict some pain upon you, but if I'm gonna feel empathy, I don't necessarily want to do that anymore because I'm gonna feel that pain as well.

It's an emerging idea; it is not yet something I'm aware of AI work on. There is a lot of work on emotion modeling in AI, but it's more at the level of: suppose you've got a chat bot which is telling somebody they haven't got a bank loan. It's about recognizing when somebody's getting angry or frustrated, you know, it's that kind of level.

And it's not trying to model empathy, or indeed even sympathy, for individuals. It's just at the level of recognizing emotions. I should say, by the way, it's controversial. I have a number of well respected colleagues who just ask “why should we go there?” Human emotions are human. Let's just keep machines out of that altogether. That this is just a dangerous road down which to travel. So just let's not go there. I think empathy is probably, you know, even a step further than that. But I think there is an argument, just in game theory, that if human beings can feel empathy and that can affect their behavior, then that can actually be good for society, because we will prefer to choose outcomes where people don't suffer.

There is an argument there, I think, that we should investigate the possibility of machines that feel empathy for human beings, because by that route they will prefer outcomes in which human beings don't suffer. But as I say, we're at very pre-scientific stages. I'm not aware of any work in artificial intelligence on this, and there is only a small amount of work in game theory on empathetic game theory.

[00:24:15] Clara Durodie: If we stay in this space of empathy, which is pre-scientific, and considering we are just having conversations about what might or might not happen, or may perhaps never happen, we don't know yet, the next question I have is to explore the conversation that the AI expert community has around sentience in agents. Obviously, once we start talking about that, we talk about whether machines have consciousness, and if they do, then lawyers and the legal profession might say, well, in that case, they might have some legal rights.

So it's a very early stage conversation, but I think it's a very important point to reflect on. Perhaps if you have any insights on this then I'm sure our listeners would be very interested to hear your opinion.

[00:25:28] Mike Wooldridge: This again, I think the ideas of sentience and consciousness, I mean, they're very much part of the Hollywood dream of artificial intelligence. You know, they're very prominent there, but not so prominent within the AI community; they're actually rather on the fringes. And consciousness in humans is one of the great scientific mysteries.

We don't understand it, full stop. You know, we've got some ideas, we've got some clues, we have more ideas now than we did 50 years ago. And part of the reason for that is we actually now have machines which can, to some extent, look at brains as they do their work, you know, magnetic resonance imaging machines and that category of devices.

They can monitor the thought processes that human beings have, or how they work. And the fundamental question, the big question, is: are there some electrochemical processes in the brain, and somehow, in some way, do those electrochemical processes give rise to conscious experience, and why does that happen? We can look at the electrochemical processes and we understand those pretty well, we can model those in computers for example, but how and why they give rise to unique human experience, we really don't know. I mean, we really don't know, and we don't have very much more than hand waving.

One of the fun things about this, by the way, is that everybody's entitled to their opinion, because everybody's opinion is as good as everybody else's in this space. That has a downside as well: it means there's an awful lot of opinions about it. So we can distinguish some features of consciousness, and one feature of consciousness, I think, is the idea of subjective experience, being able to experience something from your own perspective. And there's a famous thought experiment by the American philosopher Thomas Nagel from the early 1970s. He wrote an essay called What Is It Like to Be a Bat?, and he asked: if something is conscious, what is it like to be that thing? So think about, for example, an orangutan. Is it like something to be an orangutan?

Well, you could imagine the experience of being an orangutan in a forest, eating your bananas, climbing trees, with a thick coating of hair and so on, being a physically powerful ape. And so we can imagine what it would be like, you know, with very, very powerful hands and so on.

What is it like to be a dog? Well, yeah. You know, I have a dog, I see the dog. I can imagine the dog's experience. I can imagine my dog experiencing things from its own perspective. You know, the excitement of going for a walk or, you know, eating, when it does a trick and gets a treat or, you know, all the frustration when it does something naughty, we can imagine what it's like to be a dog.

What about a rat? Kind of, just about, I think. We can see it starts to be more primitive, but does a rat have experiences? Yes, I think a rat has experiences. It is like something to be a rat. What about an earthworm then? I think it's actually a bit harder. An earthworm, I think, is kind of at the edge.

I mean, an earthworm is really not much more than a very small bundle of stimuli and responses. It doesn't really do anything. Does it experience its world? Does it experience life? Does it enjoy a piece of whatever it is that earthworms eat? I'm embarrassed to say I don't know. And then think, well, that was a good piece of mold.

You know, I haven't had a piece of mold that good since last Tuesday, you know, does it have an expiry date? I mean, I'm being silly there, but you get the idea. Does it have experiences? It's kind of at the frontier. Well, what about a toaster? Does a toaster have experiences? Well, the immediate response is no, of course a toaster doesn't have experiences.

It isn't like anything to be like a toaster. Well then what is it like to be a computer? Does a computer have experiences? Well, a computer is a glorified toaster, right? I mean, it's in the same category, it's got a bit more circuitry, quite a lot more circuitry, lots more transistors.

Although of course, you know, toasters these days actually do have computers in them. But is it like something to be a computer? Most people's gut response is no, it isn't like anything to be a computer. So that's a thought experiment, and that was for a long time taken as one of the arguments against the idea that AI could be conscious, because it isn't like anything to be a computer. I'm not quite so convinced by that argument, and I think the reason I'm not so convinced by that argument is that I think it's appealing to our intuitions, and it's good for distinguishing earthworms from dogs and dogs from orangutans and so on. But these are a different category.

Computers are a different category, and in the category of computer experience, I think the world that a computer inhabits is a very different one; but for us to get to computers that have experience, I think there would have to be a world. So you may have read over the last week about these programs from Google, like the LaMDA program, which can generate phenomenally plausible English-language text; you read it and you think this sounds like you're talking to a sentient being.

And there was a Google engineer who landed in a spot of bother let's say this week for saying that they believed that this program was sentient. And if you look at the text, it looks like the kind of text that a sentient being might produce. And that's obviously where he's coming from.

But there is no world experience behind this, this is what this program has done. It's just been trained by showing it vast amounts of text, huge, unimaginable amounts of texts. And from that it's learned to plausibly generate text just by basically looking at what it's been shown previously and trying to generate something which looks similar.

There is no experience of any world behind the scenes there whatsoever. It's my standard example, and my colleagues get bored of me coming out with this example, to do with omelets. So the LaMDA program that I mentioned earlier has probably seen every omelet recipe that's available on the worldwide web.

And you could probably have a very nice conversation with it about omelets. But has it ever experienced an omelet? Does it know anything about an omelet? The answer is no, of course it doesn't, because it's a piece of software and all that software has seen is lots of words, the words that we use to describe omelets. But for you and me, when I say the word omelet, that word is not just a symbol in some data set, it actually refers to all the experiences that we've had related to omelets.

You know, the good omelet that you had in Paris in 1997 and that bad one that you cooked yourself for breakfast last week, that's what omelet refers to. It is a symbol that means something in relation to all of those experiences. So a machine that's never experienced anything in the world, I struggle to believe could have a meaningful consciousness.

Another very long rambling answer for you there.

Clara Durodie: It's not long, it's fascinating. My listeners will undoubtedly be fascinated to read the transcript and show notes with the summary of your ideas. I find your position regarding sentience very similar, if not identical, to how I look at these things.

I was struck by the comment you made about the meaning, the symbolism of things to us as humans, and how certain things awaken certain memories in us which are unique, very unique to us. So it's very difficult indeed to talk about sentience in machines.

I don't think I can even remotely embrace the conversation around this. But having said that, whilst I understand the debate, I understand the different opinions. One of the things I learned from my time in Oxford was to see other people's point of view, even if they are not in line with what I think, or what I would like to think.

Time and again, I've come across computer scientists, AI researchers, people who spend their lives, their entire existence, building a stack of emotions when they create systems. So they're so emotionally invested and attached to what they created that they feel that they're communicating at a humanlike level with the systems.

I've come across quite a large number of people who were committed to convincing me that the work they've been doing actually has feelings. I understand their point of view. Understand their emotion, their time, their everything, the moment they wake up in the morning, that's the first thing they think about.

It's also the last thing they think about before they go to sleep. That's passion for their work. And some people, I think, refer to this attachment as the Ikea Syndrome; this over attachment or very strong attachment to one's work, but I think we all have different shapes of Ikea syndromes, whether we build the digital banks or FinTech companies or whatever we do, I think we have this level of attachment.

To go that far and say that a system, an agent, is sentient, I think that's a bit of a stretch indeed. As we move the conversation a little bit beyond the philosophical nuances, I'd like to bring to your attention a very interesting conversation I had about two weeks ago about the link between Alzheimer's disease and our over-reliance on GPS location guidance.

Apparently, the more we rely on these systems, the more our cognitive abilities diminish. One claim is that there is actually enough data to put this point forward as strongly as possible. In your experience and your research, have you come across anything which demonstrates that over-reliance on AI systems leads to cognitive decline?

Mike Wooldridge: It's a very topical point. Funnily enough, I've had this conversation at least twice this week, and I think there is a school of research, which I discovered this week from a colleague of mine at Hertford College (University of Oxford), who is a cognitive archaeologist.

And what cognitive archaeologists do is look at how the emergence of technologies like fire and the wheel and so on starts to affect human evolution. It looks at how they do things, how the development of the human species is affected by that. And so if you look at fire, for example, and I believe, I could be wrong, but my recollection is that fire first started to be used as a regular tool in human societies about half a million years ago.

So we started to understand how to make and maintain fires about half a million years ago. And there's some evidence of natural fires, you know, forest fires or lightning strikes and so on. These kinds of fires had been used previously but not maintained and used in a systematic way.

So fire does lots of useful things for you. It enables you to consume some foods that you couldn't otherwise consume. It makes your food safer, actually makes it taste better, but that probably wasn't an issue back in the day. And of course it frightens off predators and keeps you warm at night and so on.

So, fire has all sorts of uses, but for a group of our ancestors half a million years ago in the wilderness, just maintaining the fire starts to present them with a societal challenge, because you need to keep this thing fed. You need to get sources of fuel and you need to make sure that the fire is regularly maintained, you know, with fuel added, rearranging the embers so that the new fuel can catch, and so on.

And this requires social skills; it requires the ability to coordinate. So there is a theory that the use of fire went hand in hand with the development of the social skills that were required for groups of our ancestors to be able to maintain it, the social skills being the ability to coordinate amongst ourselves.

It's your shift now looking after the fire, to make sure that there is enough fuel, and so on. So step forward half a million years, and we're right at the very, very beginning of these new technologies. Is there formal evidence of declining cognitive processes? I'm not aware of any. But is there anecdotal evidence that the skills that people use and deploy now are different as a consequence of the emergence of these technologies? I think, yes. And I don't have to look much further than my children. I have two teenage children who grew up with smartphones. I mean, we were a very connected family, and you could probably guess why. We were early adopters of smartphones and the internet; basically all of their lives we've had high-speed internet connections, and we've had pretty good computers at home and so on. So they've grown up digital; they are digital natives. They're not just digital natives, they're internet natives.

They've grown up with this technology surrounding them, and the idea that any question they have about when something happened, or where something is, and so on, can just be answered like that. So one example that I witness in my children, and I'll bet some of your listeners will witness it in theirs as well, is that the kids don't make arrangements anymore. What are you doing this weekend? You ask them at four o'clock on a Friday night and there is no answer, because they don't need to coordinate in advance. They just arrange it on the fly, and they can do that because they can just message each other, you know, they can go onto WhatsApp or Instagram or whatever, I'm not even gonna try and keep track of what the social media du jour is.

They don't need to coordinate in advance, and they seem to have lost that ability. They just literally don't do it. And you know, back when I was a teenager and I was arranging to meet my friends, you had to make very explicit and crisp arrangements: we will meet at seven o'clock outside this pub. And you remember as well, right?

They don't do that, and they don't need to do it. It will be interesting to see the skills that prove to be beneficial in this new world, and how that evolves. And we will see that in the decades ahead. And we will look at it just as I look at my teenagers and say, why can't you make arrangements?

We will see other manifestations of that as well. I think there's a bigger picture, which again, going back half a million years. What did you need to do to survive? You needed to be a generalist, right? You needed to be physically fit. You also needed a degree of competence. You needed to be able to trap animals and cook animals and to do absolutely everything at some basic level of competence.

And if you were born half a million years ago with the unique skill to be the best human chess player ever, you weren't gonna survive, because if that was your only skill, it wasn't enough. You had to be a generalist. Now, in our specialist society, if you are born with that unique skill, you can actually carve out a very nice niche for yourself.

So there is enough specialization in our society that those kinds of skills can prosper. Generalization is no longer necessarily a winning formula in our society. What skills will the internet and AI society, which is emerging now, favor? That's a fascinating question.

It surely will change society. It surely will change people's behavior, and ultimately it will change the course of evolution, if we last that long, God help us. But exactly what that looks like? I couldn't tell you. But I can tell you, my kids have got no ability to coordinate in advance whatsoever.

[00:43:48] Clara Durodie: I echo that in my son, who can make plans five minutes before an event but never well in advance. He's 24, and I was listening to you talking about your teenage children being internet natives and smartphone natives.

I remember when the first iPhone came out in 2007. I distinctly remember the conversation I was having with my son, who was probably 10. He was old enough to say to me with excitement, “This new phone, have you read about it? I'd like to have one!”

This is the generation which has pretty much embraced technology from the beginning of their lives. It will be interesting to see how their business skills develop, how this engagement with technology translates into the business world when they are 35, 40, 50 years old, to see how they engage and how the business environment changes.

Mike Wooldridge: They will do things that we cannot begin to imagine, that we absolutely cannot begin to imagine. So I'm reminded of 2008: a very large tech company, a very reputable, large tech company, invited me to what they called a faculty forum event in a large European city. And it was lovely. They paid for me to fly business class, which I don't normally get to travel, so I felt like a king for the one hour of the flight.

It was a fascinating event and we got to meet some senior leaders of this tech company, but one of the sessions, and this was about 2006, 2007, I think, one of the sessions was on this new thing called social media. And, whenever this was, I don't remember exactly, but Twitter was the new thing, the newest thing.

And one of the execs of this enormous company was at the front. I mean, they're vast now, but they were vast then. And one of these senior execs, who was not an old person by any stretch of the imagination, was saying: but we've looked at this Twitter thing; people talk about what they had for lunch.

You know, they go for a cup of coffee and they talk about, yeah, having a cup of coffee. What is all this about? And what I was reminded of, what it put me in mind of, literally the image I had in my mind, was of parents in the United States in the mid-1950s listening to Elvis Presley and saying, but this is not even music, what are they doing?

You know, they did not get it. We don't get it. I don't get the way that my kids use social media. I look at their social media feeds, on Instagram or whatever it is, and I cannot comprehend it; it is a language which is meaningless to me. And we think about, okay, I'm going to develop this technology which will allow people to communicate their ideas clearly and instantly to other people on the other side of the world.

And they start talking about what they had for breakfast and that becomes a thing. We can't begin to imagine how they are gonna use these technologies that they've created. They will do unimaginably creative things. And you and I, in years to come will be as uncomprehending as that senior tech exec was.

Or as those parents of American teenagers in the 1950s were, listening to Elvis Presley. We won't get it, but they will do intensely creative things; such is the nature of technology, right? I mean, Alexander Graham Bell, inventing the telephone, could not have imagined phone sex lines or whatever, or all the uses that people have put that technology to; similarly with television and so on.

Tim Berners-Lee, my colleague here at Oxford, could he have imagined how the world wide web would change the world? No. I think he gets a bit bored of being asked whether he imagined it. People will be intensely creative and they will come up with ideas that we just can't begin to imagine right now.

Clara Durodie: I'm glad you mentioned social media because, this week I spent a whole day at a conference which was designed for the legal profession. Interestingly enough, there was a session where they were discussing business divorces. Because obviously companies are being built by multiple founders. Not all founders get along. They part ways over time. There was a solicitor in England, who specialized in social media and she was discussing how social media actually creates quite a lot of problems in legal settings where legal issues need to be resolved. In some cases, social media is a very valuable tool to find a better deal or find weaknesses in your opponent's defense.

I think as much as we evolve, these things will evolve and impact the way we live and envisage and evolve our lives. I think there will always be an undertone regarding the legal implications of what we say out there, because data never forgets. Once it's out there you can't take it back.

If you have coffee and another full breakfast for lunch as sometimes happens, I've seen this, maybe your insurance company will pick up on this and they're not gonna be very happy. Your premium is gonna go up. So whilst we look with a lot of excitement going forward as to how technology will help us evolve, I think the conversation moves to regulations around what is acceptable, what should be used, how this technology can be used or should it be used.

Indeed. I think there's a lot of discussion around ethics. I think it's great to see so many newly formed ethicists coming from everywhere, but at the same time, I think there's a bit too much going on. We are missing the core of important conversations and debates around ethics.

I think the work the European Union is doing with the AI Act is very important and consequential across the world. In your experience, do you see a way where we can pursue this conversation around ethics, but in a way that remains constructive? I'm very concerned about being too prescriptive and blocking, rather than promoting, an environment where businesses will be able to do what they're supposed to do.

Because I think the whole conversation around ethics in the business environment, at least in financial services from what I've seen, gets subsumed into compliance. And in my view, I think that's wrong. I think the ethics of technology and data ethics, and everything that a business might want to introduce to their operations and internal policies, is vastly different from what we, as an industry, are used to seeing as compliance.

I think there should be a distinction between these two. The listeners of my podcast will have the benefit of a whole episode with an expert in this field. But do you have any views, at least from the AI community's point of view, on where we should go with regulations and with the whole ethics conversation? What would be constructive?

Mike Wooldridge: So, I spent some time looking at the proposed EU regulations and talking to colleagues about them. There is clearly good intent, and I think with good reason. My personal take is as follows. There is at the moment a raft of technologies which have emerged, not just one. AI is one, but there is a raft of technologies.

Smartphone technologies are another. Ubiquitous computing, the idea that we have cameras and sensors embedded everywhere in our environment, which is happening even as we speak, is another. There's a raft of these technologies, alongside the ability to store and process vast quantities of data, which we didn't have, you know, at the turn of the century.

All of these technologies have made possible a range of abuses which we should all be concerned about. And, you know, privacy is a very obvious one. If our environment is populated with cameras, with sensors which can just track everybody all the time on the streets, and AI can recognize people in pictures just by trawling through social media to extract the pictures that we've helpfully uploaded of ourselves and our friends.

You know, there are facial recognition companies who've done exactly that, just trawled through social media, extracting the pictures that we upload of ourselves, or that our friends upload of us, or all the pictures that we just happen to be in the background of. Then, you know, surveillance technologies become a very, very real and very concerning prospect. AI is just one part of that picture, and to focus obsessively on AI, I think, is a mistake. And the other thing, I think, is the focus on how the technology works: defining AI by the fact that you're using neural networks or whatever is, I think, a misstep. If surveillance technology is being used on me, I don't care whether it's AI surveillance technology or some other kind of surveillance technology. What I care about is that people are inappropriately tracking me. It's the activities, it's the abuses themselves, that I'm concerned about. So my recommendation would have been to instead look at what those abuses might be: to go to the legal profession and ask what those abuses might be; to go to the medical profession and so on and ask them; and to look at emerging areas like social media and ask what those abuses might be there; and to focus on what those abuses are and not on the technology. I mean, the technology, you know... if you murder somebody, does it really matter whether you used a knife or a gun? The point is that you murdered somebody; the technology that you used to kill them is irrelevant. And I feel the same about AI regulation. What's crucial is what's done with it, rather than the technology that's used to do it.

 

Clara Durodie: We use variations from this big basket called AI to promote all sorts of emerging technologies, because obviously sometimes we call it AI, but sometimes there are simply statistical models running, I don't know, smart contracts, and that's still put under the big basket of AI.

I think we've kind of missed that big point. I take your view and I agree that we shouldn't care about what tools are being used, that we should be focused on the outcome. From a business standpoint, I think when we make decisions to implement or to apply those regulations in the business setting, it's very important to understand how the technology we're trying to regulate actually works and build internal policies around them for governance and risk management.

We need to understand that technology. If we don't understand it, we won't be able to ask the right questions and put the right policies in place. So I think in the big picture it's exactly as you said, but when we implement these systems, these regulations, these approaches at the micro level, I think we need a better understanding of whether the crime is committed with a knife or with a gun, and what kind of gun. It's almost like forensic analysis, and that will inform what kind of policies we put in place.

We've covered a tremendous number of topics today, but I'd like to conclude our conversation today and focus on the work you're currently doing at the Alan Turing Institute.

Mike Wooldridge: Sure. Well, the Alan Turing Institute is something your listeners may know, but possibly not everybody will know: it is the UK's national center for AI and data science. And it has a physical base, but not one that we've been able to spend too much time in, obviously, over the last couple of years. The physical base is in the British Library building, right by St. Pancras, and it's a great space that we have there. And it's a wonderful convening venue. I mean, whenever I go into the physical venue, I meet everybody from across the UK, all my colleagues from across the UK whom I otherwise wouldn't see from year to year.

So my role there is for what we're calling Turing 2.0. The Turing Institute has been going for about six or seven years now, but for Turing 2.0 what we are doing is aiming to establish a program of foundational AI research, that is, really core AI research. And my remit is to look at the national picture.

We're not trying to duplicate what already goes on in other universities. For example, a standard example that I have is in computer vision. There are very well established computer vision groups, research groups at my own university, Oxford and other universities like Edinburgh and elsewhere.

And there would be no point in the Turing just setting up a competing group in that. What we're looking at is where the Turing can complement that by doing something different, or where we can do something that individual universities can't do. So it's the core AI, really the core AI science, where we're looking to add value to what's there, not just duplicate what's out there in the community, and to pick up on the bits that are perhaps unfairly neglected but actually incredibly valuable, if we can develop those.

But in particular, you know, what can we do as a national center that can't be done at individual centers? So that's what my work is. We're right at the very beginning of that, we are still in the planning stage. But we are hoping to roll out some ideas over the next six months.

Clara Durodie: So, we covered pretty much everything we could in our podcast today. Michael, I'm so grateful for your time, generosity and for sharing so much knowledge and wisdom. I invite my listeners to read the show notes, click on the resource links and learn a little bit more.

It's a subject that we, financial services practitioners, have to continue to learn in order to remain relevant.

Thank you very much for joining us today Michael. It's an absolute honor.

 

Mike Wooldridge: It's a pleasure. Thanks for giving me the opportunity.

Copying commentaries, articles and transcripts to share with others is a breach of our T&Cs and Copyright Policy. Decoding AI® Newsletter and Podcast  Disclaimer.
