The Decoding AI™ Podcast
Episode 2: Dr. Vivienne Ming on building AI systems to solve complex human problems
Show Notes and Transcript

This episode is sponsored by Teradata.

Teradata is the connected multi-cloud data platform for enterprise analytics company. Our enterprise analytics solve business challenges from start to scale. Only Teradata gives you the flexibility to handle the massive and mixed data workloads of the future, today.

The Teradata Vantage architecture is cloud native, delivered as-a-service, and built on an open ecosystem. These design features make Vantage the ideal platform to optimize price performance in a multi-cloud environment. Learn more at Teradata.com.


Episode description 

 

Dr Vivienne Ming is a theoretical neuroscientist and artificial intelligence practitioner who has dedicated her career to applying AI to some of the most pressing problems of our society. She founded several startups as well as Socos Labs, a philanthropic data science incubator with a mission to maximise human potential. From education to type 1 diabetes, from social inclusion to Alzheimer's and climate change, Vivienne is trying to find meaningful answers in the data. She leads a culture of innovation by tapping into the newest research trends and turning concepts into real-world technologies.

Dr Vivienne Ming was a fascinating guest and a pleasure to talk to. I'm really grateful she took the time to appear on the show.

 

Key points discussed in this episode

  • How to innovate  

  • Interpreting AI Models

  • Being your child’s hero 

  • Bias 

  • Explainability 

  • Augmented Intelligence 

  • AI regulations 

  • Data Trusts

 

Nuggets of Wisdom 

  • The people that have the most exceptional lives consistently are those that are the most open to experiencing the fullness of what life might bring.

  • Ironically, if we had less information available to us, we would explore more. 

  • Every problem for me starts with the assumption that we don't understand the problem.

  • Not only to collect data, but to put it back into the system to make a difference.

 

Ideas discussed

Dr. Ming is a treasure trove of insight, with a vast and impressive multidisciplinary record of applying AI to real-life problems. She has a special gift for explaining things in an accessible form, and she does so with the authenticity only a practitioner can have. Here are a few core ideas that I'd like to highlight from our discussion:

 

On innovation 

  • The diversity-innovation paradox: people that are outliers in their field are more likely to actually produce innovative research, yet less likely to be cited for it.

  • But how do we do innovation and inclusion when we've sent everyone home? We dramatically underestimate the amount of simple effort and engagement that goes into innovation. As information flows faster, innovation goes slower; it's like an inverted U-shaped curve. If nobody can talk to each other, innovation is very slow, and for much of human history we have had more and more people coming together in large urban centers and universities. It turns out, particularly in an internet world, we've passed a threshold where our connections to one another are so dense, and the cost of information has become so low, that something really fascinating, and frankly a little terrifying, has happened: we as individuals begin to explore less. And I don't just mean in relative terms, but in absolute terms.

  • If you look field by field, not just in the sciences but in all academic disciplines, what you find is that as the rate of paper publication goes up, the novelty of ideas goes down. It's fundamentally a paradox: ironically, if we had less information available to us, we would explore more. (A toy novelty metric is sketched after this list.)

  • How do you approach, as a scientist, new ideas?
    Every problem for me starts with the assumption that we don't understand the problem. There are only ever messy human problems. They only ever have messy human solutions. If you can't get comfortable with that, get out of this business.
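To make "ideas that have never been paired up before" concrete, here is a minimal sketch of one way such novelty could be scored: the share of concept pairs in a new paper that never co-occurred in any earlier paper. This is an illustration only, written in Python with a hypothetical toy corpus; it is not the method used in the research Dr. Ming describes.

    from itertools import combinations

    def novelty_score(new_concepts, prior_papers):
        """Fraction of concept pairs in a new paper that never co-occurred
        in any earlier paper. 1.0 means every pairing is unprecedented."""
        seen_pairs = set()
        for paper in prior_papers:
            seen_pairs.update(combinations(sorted(set(paper)), 2))
        new_pairs = set(combinations(sorted(set(new_concepts)), 2))
        if not new_pairs:
            return 0.0
        return len(new_pairs - seen_pairs) / len(new_pairs)

    # Hypothetical toy corpus: each prior paper is a bag of concepts.
    prior = [
        {"neural networks", "vision", "convolution"},
        {"insulin", "glucose", "endocrinology"},
    ]
    print(novelty_score({"neural networks", "glucose", "insulin"}, prior))  # 2 of 3 pairs are new

On this toy measure, a paper that simply recombines well-worn pairings scores near zero, while one that bridges previously unconnected concepts scores near one.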


On finding, observing and collecting data 

  • Where's the right moment, not only to collect data, but to put it back into the system to make a difference?

 

On being a parent 

  • I built the first ever AI for diabetes for my own son. And I'm immensely proud of that. I mean, every parent should have the chance to sort of be a superhero for their own child

 

On interpreting AI models 

  • I love those moments where things that seem very sophisticated and mathematical boil down to something very human. You can tie these two things together. That's fundamentally my work: where does the human meet the math, not how do we replace one or the other, or how do we privilege one or the other.

  • I could pull all of these pieces together and bring them to bear on one single problem in a way no one had ever been able to see those connections before.

  • That's why we made a difference. And so many of the projects I work on are exactly that kind of thing: let's look at this entirely differently. Let's step back not just from the problem but from the presumed wisdom about the problem, and let's live as many different lives as possible so that we can bring all of them to bear on what comes next, and really leverage as many of them as possible.

  • Our unique value is our ability to see the world differently than everyone around us. This could sound very soft, like a very philosophical, abstract statement, but I mean it in the most concrete sense I can, for example when we look at optimal incentive strategies.
     

On Bias

  • I would argue that if you have to create a fake world to de-bias your AI, that's deeply problematic. Our AI should be able to look at real-world data, race and gender, socio-economic background, language skills, and causally see why this person is or is not a good hire; same thing with loans, university admissions, the attention of police. I would argue that building AI, and investing in AI, that relies only on correlations, however incredibly sophisticated and powerful, is a problem.

  • These powerful algorithms are destined to create bubbles and land us in disastrous places. It is astonishing the degree to which we trust them. We ignore the old "correlation does not imply causation" mantra and put more and more crucial decision making, civil-rights, even human-rights level decision making, at the level of correlations, however sophisticated.

  • I would argue that there's another form of bias intrinsic to these AI systems which has not been fully explored, which in fact you almost never see discussed in ethical AI research or in talks about legislation: even if these things worked perfectly, they only work in the self-interest of the group that built them.

  • Almost always, these things are actually representing transactions between multiple groups: between a doctor or a medical diagnostics company and a patient, between a bank or lending agency and a borrower, between a job seeker, a company, and the software developer that built the tools.

  • So even if the algorithm worked perfectly in deciding who gets loans, it will almost certainly make those decisions in support of what maximizes revenue for the bank, which ironically probably means another U-shaped curve: ignoring super-safe loans because they're not very profitable, ignoring highly risky loans, which is almost certainly going to hit a specific population, but aggressively targeting people the bank thinks will be trapped in debt cycles. (See the illustrative sketch after this list.)

  • It is deeply problematic from a societal sense for these systems to be so obfuscated that no one realizes where bias may exist and no one recognizes that bias will persist in favor of the algorithm maintainers.
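As a purely hypothetical illustration of that U-shaped curve (this Python sketch and its numbers are invented; they are not from the episode or any real lender), here is how an objective that maximizes lender revenue, rather than borrower outcomes, ends up concentrating approvals on the middle of the risk range, the borrowers most likely to carry costly debt:

    # Hypothetical borrower segments: (description, default probability,
    # expected interest and fee income if the loan is approved and repaid).
    segments = [
        ("super-safe, pays off quickly", 0.01, 200),
        ("likely to revolve debt",       0.15, 2500),
        ("very high risk",               0.60, 3000),
    ]

    LOAN_PRINCIPAL = 10_000  # treated, simplistically, as lost on default

    def expected_profit(p_default, income_if_repaid):
        # Naive expected value from the lender's point of view only.
        return (1 - p_default) * income_if_repaid - p_default * LOAN_PRINCIPAL

    for name, p, income in segments:
        print(f"{name:30s} expected profit: {expected_profit(p, income):8.0f}")

    # A policy trained purely to maximize this number approves the middle
    # segment most aggressively: not the safest borrowers (too little revenue),
    # not the riskiest (too many defaults), but the people most likely to be
    # trapped in a debt cycle.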
     

On using AI to augment work 

  • We treat AI like a magic wand. We wave it around a problem and assume its solution is perfect because it's math, but it turns out AI is just as biased as we are. You can get very nerdy here and talk about a lot of fundamental, unresolvable elements of machine learning systems, like the bias-variance trade-off, but just practically speaking, we can simply say AI is biased.

  • What excites me is changing the acronym: Augmented Intelligence rather than Artificial Intelligence. I am a huge advocate for building machine learning systems that make people better. Not that there aren't some things that are worth fully automating.

  • We should recognize that there's a class of creative labor that we can either de-professionalize or augment. The economic gain is overwhelmingly on the augmentation side, but the trend over the last 20 to 30 years, as far as automation goes, has largely been on the de-professionalization side. So that's a tension, and it's right there in finance.
     

On regulating AI 

  • I think it's astonishing that we have not yet made data audits and algorithm audits a normal, regular part of business practice. In the same way no public company would ever get away with failing to audit its finances, failing to audit your algorithms and data just seems shocking.

  • Market centric regulations 

    • Explainability Battle 

      • If yours is a black box and mine has reasons, I win. So you kind of have to have reasons also, and that can drive us more towards causal relationships in lending, but it also makes the practice more transparent, because the economics of building an algorithm to assess job seekers and loan seekers are so efficient.

    • Data Trusts 

      • One last thing we're doing: I mentioned I have a company developing tools for postpartum depression, where we mash up language models, mobility models, and biomarker data; we're looking at Alzheimer's and major depression and other things as well. We decided right up front that we won't own the data. So we've created a trust, a legal entity whose fiduciary responsibility is to the users, not to us. We believe our economic value proposition is clear and valuable, and investors have agreed, but this is one aspect we decided right up front.

      • Part of our business value proposition is simply not going to be the value of our data. We don't own it. We can't make use of it in any way that is not agreed to by the trust, and the trust doesn't work for us. If we truly believe the thing we're doing is valuable, then how is that not a rational decision?

 

Contact details

Socos Labs Link

Send your mad science pitch (and a resume) to jobs@socos.org 

For press inquiries please write to press@socos.org

Want to book Dr. Ming for a talk or briefing? Visit Keynotes page.

 

Links to books and materials 

 

Article: I am turning my son into a cyborg. Link 

Research Papers

 Episode's Transcript 

(edited for ease of reading and length)
 

[00:00:00] Dr Vivienne Ming: It's absolutely
 

[00:00:01] Clara Durodie: Hello, welcome to the Decoding AI podcast. In this episode, I'm talking to Dr. Vivienne Ming. Vivienne is a theoretical neuroscientist and artificial intelligence practitioner who has dedicated her career to applying AI to some of the most pressing problems of our society. She founded a few startups and also Socos Labs, a philanthropic data science incubator. From education to type 1 diabetes, from social inclusion to Alzheimer's and climate change, Vivienne is trying to find meaningful answers in data. Vivienne, welcome to the show. 
 

[00:00:47] Dr Vivienne Ming: joining.
 

[00:00:48] Clara Durodie: I followed your work over the past few years with great interest. I have been interested not only in your technical insights, but also in your leadership style, in your philosophical approach to science. I also value your multidisciplinary approach to solving problems. What path led you to have this open approach? 
 

[00:01:17] Dr Vivienne Ming: Well, there have been a lot of reasons, a lot of factors in my life that drove me in this direction. And some of it is probably shared with everyone, which might be a bit daunting: it comes down to what you're curious about. In fact, I recently shared some research that found that people that are truly exceptional on sort of objective measures, they win Nobel prizes.
 

They become leaders of massive organizations. In analyzing their personalities versus everyone else's, it turns out there's only one consistent difference: not in neuroticism or emotional stability, but in openness to experience. Essentially, the people that have the most exceptional lives consistently are those that are the most open to experiencing the fullness of what life might bring to us. That's not the only thing that's different, but in terms of personality, that's the one thing that really stands out. And I know it maybe now becomes a sort of secret self-congratulation to say this, but yeah, even at times in my life that were very hard, and a little dark, I was still always invested in the idea of exploring things that are unknown, whether it's science fiction, of which I've always been a huge consumer and nowadays an occasional producer, or science.
 

I know many people express a real affinity and love of science, but a love of the philosophy of science, of the very ideas of how it is, how you explore the unknown and how you communicate it. All of that has always been fascinating to me. Another factor is the one I just alluded to: when I was little, I was supposed to have this amazing life. I was supposed to win Nobel prizes. The more I tried to be that person, the worse everything got, and in the end, I flunked out of school and ended up homeless. That gives you a very different perspective on life than many people, let's call them my peers now, who probably had lives that were full of success. Not that there weren't challenges, but those challenges were things that were confronted and overcome. They probably didn't spend as much time wondering where their next meal was coming from, or thinking that they deserved all the bad things that were happening to them. And that's another thing that's happened to me.
 

I love geeking out with math and science, but those experiences really drive it towards: what are we doing with this? What kind of a problem are we solving? It's probably the thing that makes me who I am. I learned how to use machine learning to study brains, but nowadays it's economic modeling, sociology research; I don't care where data comes from. If it helps me solve a problem, then it goes in my tool belt. I will say, with some real expertise, that does make me fairly different from most other people. The last factor is just my educational experience when I did get my life back on track.
 

I was at a fundamentally interdisciplinary institute: the Cognitive Science department at UC San Diego. If you're not familiar with the field, most notably it's neuroscience, psychology and computer science, but it's also philosophy, systems design, and whole new fields like cognitive engineering. Being part of a fundamentally interdisciplinary field, I went off from there, doing my graduate research under a student research grant that had the word interdisciplinary in it.
 

My research, as you can imagine, mashes up neuroscience, computer science, artificial intelligence, and psychology at a pretty fundamental level so much so that traditional scientists often struggle with answering: what department are you supposed to be in? But those were never things that really worried me.
 

[00:06:16] It was, what problem are we solving?
 

[00:06:19] Clara Durodie: I'm laughing at this because, in 2015, I was diving very deep into the intersection of neuroscience, AI, and wealth management, trying to understand human behavior and determine patterns in how we save money, how we spend money and how we invest money in a personal capacity.

 

I was committed to pursuing a PhD and I went to a number of universities, and everybody would just say: we don't know what you want and what discipline supervisor you are seeking. What discipline are you interested in? What do you want? Do you want computer science? Do you want neuroscience? Do you want finance? It's wonderful to see that 7 years later, European universities are building a joined-up, multidisciplinary understanding to enable the joint study of these disciplines. In isolation, each discipline provides insights, but when joined up, they provide whole different insights. I am a great believer in multidisciplinary research and applications.

 

[00:07:28] Dr Vivienne Ming: I mean, they are. Another piece of research I shared recently makes use of machine learning, and this is one of the spaces that I've really enjoyed the growth of over the last several years: using natural language processing to understand science itself. Now, I love the use of natural language processing for any number of purposes, and we can talk about some of those, but in this particular case it was a kind of introspection: let's build NLP systems that read every dissertation ever published.
 

[00:08:03] And look at the tension around what's known as the diversity-innovation paradox: people that are outliers in their field are more likely to actually produce innovative research, yet less likely to be cited for it. There's a similar phenomenon: the research that's most likely to go on and become highly impactful, if you look 15, 20 years later, with not just the most citations but a true influence on a field (and this is found using natural language processing to do the analysis), is research that has taken ideas that have never been paired up before and brought them together.
 

[00:08:51] You just see that as vastly more likely in these analyses, and yet, within traditional disciplines, that very research that has a transformative effect over decades of time is initially less cited and doesn't show up in a prestigious traditional journal; it's all of those things. So even though science itself is changing, and by the way, if it's not clear, I'm going to argue that we can generalize this to banking, to finance, to every field, we fall back into these traditional silos, even in fields where innovation is the whole purpose. A field puts maybe methodological issues at the center, or for whatever reason just defines a certain kind of approach to the world, even though we can show that it's people that break those approaches, it's ideas that violate those boundaries, that end up having the biggest impact. It's such a hard thing for our cultures to break out of.
 

[00:10:11] Clara Durodie: Why is it so difficult? 
 

[00:10:17] Dr Vivienne Ming: So there are a number of reasons why. During the lockdown, for reasons which didn't have to do with the fundamentals of innovation or science but really had to do with remote work, these became central issues for me, and a big piece of research, to sort of iterate on your lead-in. Central issues because of the philanthropic work I do. People bring me problems. I have some traditional startups, one working in Alzheimer's, another in postpartum depression. I love that kind of work. I love my basic science research as well, but my real passion, why I do what I do, is solving problems that increase human potential. I want to maximize our capacity in the world. So people bring me problems, and they can range from “Dr. Ming, my daughter has 500 seizures a day, please save her life.” No one's ever coming to me first. So it's an easy starting assumption that if someone's writing to me, it's because nothing else worked.
 

So let's assume that our assumptions are wrong. So, right off the bat, I'm in a world where we're breaking a boundary, because whatever boundaries exist have clearly been holding back our understanding of the problem. And as you can imagine in about March of 2020, a number of companies had a very different problem, which was “we're sending everyone home”. What, what do we do? We don't know. We don't know how to do remote work. Uh, certainly not the scale of 30,000 employees. Amazon and Facebook in particular had very particular questions. Sure. 

 

We don't know how to do remote work; nobody does. But how do we do innovation and inclusion when we've sent everyone home? What does that in particular mean? And you can see the innovation problem right away if your assumption is that innovation is this sort of serendipitous phenomenon, just a bunch of smart, creative people hanging around a water cooler together having aha moments. Aha moments exist in innovation, certainly; we can document what they mean and even what's going on neurologically during those experiences. But it turns out we dramatically underestimate the amount of simple effort and engagement that goes into innovation. So, starting in March of 2020, with the requests of these two large organizations, I thought, yeah, those are interesting questions.
 

And like all of the work that we do, I said, okay, now I own the problem. So, I had my team and myself begin to explore what these things meant. I was hoping we could just read a bunch of research papers and be done. You know, release that back out into the world, but we just found that there was in fact, very little research and the research that existed kind of violated everyone's assumptions.
 

In the end, it had a lot more to say about innovation and inclusion than about remote work. One assumption is that you need large numbers of smart people randomly banging into each other. Well, we actually found, and this is probably the most fascinating finding for me over the last several years: as information flows faster, innovation goes slower. In other words, it's like an inverted U-shaped curve. If nobody can talk to each other, innovation becomes very slow, and for much of human history we have had slowly more and more people coming together in large urban centers, in universities, and in the last century in the form of corporations, and we've seen the benefits of breaking through those barriers of communication. But it turns out, particularly in an internet world, we've passed a threshold where our connections to one another are so dense, and access to information so easy.
 

We might put it in economic terms: the cost of information has become so low that something really fascinating, and frankly a little terrifying, has happened, which is that we as individuals begin to explore less. And I don't just mean in relative terms, but in absolute terms. So, for example, the rate of scientific publication has increased exponentially for basically as long as there has been science.
 

Even if you just look since World War II, there is exponential growth in the publication of papers. But in fact, if you look field by field, not just in the sciences but in all academic disciplines, what you find is that as the rate of paper publication goes up, the novelty of ideas goes down.
 

The established, well-known scientists and thinkers in those fields get more and more citations, but new, innovative thinkers get fewer. And for the ones that do break through and become the establishment (this is a little bit harder to get into without the math of how it gets measured), essentially their selection becomes more random.
 

It has less to do with the quality or innovativeness of their ideas and more to do with random chance. We could get really nerdy here and begin to talk about the chaotic process by which this happens: you know, just the dumb luck that three prominent people all happen to like your paper at the same time.
 

You might look at some of the really big songs over the last five years or scientific papers or books published. You can probably pretty easily see if they came from a new author or musician or scientist that there probably is nothing that truly distinguishes that piece of work from its peers.
 

Other than a bit of luck. You have to be good to be great, I guess. But beyond that, it's a coin toss. So in that context, we wanted to understand how do you break through all of that? How do you design, how do you engineer a culture to promote innovation? And you asked a very simple question and I'm finally arriving at, uh, hopefully a relatively straightforward answer.
 

What we see when we just let people interact freely, a big, massive social network of people densely connected via technology or other media, what we see is herding. It becomes very easy to find good-enough ideas, and as soon as people find a couple of good-enough ideas, particularly if they are found by people very similar to you, culturally similar, cognitively similar, yeah, similar in ways that make us uncomfortable, like gender and race, as soon as a group of people very similar to you find a good-enough idea, then everyone in that group tends to herd around it. And just to be completely, transparently clear, I'm not just talking about someone on the internet not thinking for themselves. I am talking about professional scientists, professional fund managers, clearly herding around safe ideas while thinking that they are innovating. It takes very specific practices to break that behavior. It's fundamentally a paradox. Ironically, if we had less information available to us, we would explore more. Because the rate of information has gotten so rich and so fast, we actually pull back, simply look at those closest to us, and at some point in our heads say: that's good enough. That's what I'm gonna do. 
 

[00:19:38] Clara Durodie: It's a phenomenon which I've seen before in the investment industry: this herding approach to investment ideas in the presence of information overload. There are systems and policies in our industry which are promoting diversity of views, but the reality is exactly what you've said: we tend to like ideas less when they come from outside the circle of people we frequently spend time and engage with. This creates an echo chamber, and that herding, that group thinking, in some cases impacts not only investment decisions but also business decisions at the board level, a key decision-making level. So we have to be very careful. In my work I spend a lot of time in this space encouraging the constructive destruction of group thinking and encouraging decision makers to question their ideas, to ask whether they are wrong; not self-doubting themselves, but having the ability to question themselves.
 

And that takes me to one of the things I absolutely admire about your work as a scientist. How do you approach the philosophy of science? How do you approach, as a scientist, new ideas? 
 

[00:21:47] Dr Vivienne Ming: I alluded to something that comes up a lot in my work in general: one is that I'm fundamentally interdisciplinary, and I'm looking explicitly to connect ideas together. The other is that, again, usually when a problem arrives at my doorstep, it's a reasonable assumption that all of our assumptions are wrong. What's interesting, though, is that doesn't mean you ignore everything that came before. So, for example, in artificial intelligence there tends to be a real bias among applied AI researchers, and this would include people working in industry, to kind of ignore everyone else. You know, there's a famous line, I think it came out of Google: “Our language models got better every time we fired a linguist.” The spirit behind that is that all of this received wisdom from scholars turns out to not help. And linguistics is a particularly fraught field.

 

One of my favorite stories is that I flirted with a leading role at Amazon where I would've built a big AI for hiring. I mean, hiring has been studied at business schools; it's been studied scientifically and psychologically for a hundred years. It doesn't mean all that research is correct, particularly the stuff that's about a hundred years old; there are some real challenging ideas baked into it. But it was simply ignored. The thinking was: you know what we'll do? We'll just take a billion data points of Amazon hiring history, we'll throw a big, complex, deep neural network at it, and it'll solve this, and we don't need to know anything about hiring. We don't need to know anything about research showing causal relationships between employee characteristics, promotion rates and quality of work. That is a terrible starting point. So, you know, there's something interesting and maybe it seems paradoxical: every problem for me starts with the assumption that we don't understand the problem. There are only ever messy human problems. They only ever have messy human solutions. If you can't get comfortable with that, get out of this business. 

 

But the other side is that I go read the research literature. I read all of it, because usually I am not an expert in the field where people are asking for help. You know, sometimes I am, but I'm not an epilepsy specialist; why am I being asked about a daughter that has seizures? 

I am not an economist. Why are you asking me how to measure, build a counterfactual model of the cost of bad promotions inside your company? 

 

But again, it's because, if there is such a discipline as messy human problems, we've learned what it means to approach problems that are persistent, that despite our best efforts never seem to get solved, in a really constructive way. And one of the starting points here is to simultaneously question everything; but you can't question it if you don't know what those questions are, so that means incredibly deep dives into educating yourself. And then the last is to go out and experience the problem yourself. 

 

I work on pretty unusual problems, as you can imagine, but I'll give a grounded example. I worked on a project with the Make-A-Wish Foundation: is it possible for us to build a machine learning system that can nudge the wishes being granted? We don't get to choose the wishes for these kids. A dying child wants to go to Disneyland; they are going to Disneyland. But what if, for a child like this, making a wish more social versus, let's say, more narrative increased their survival rates, decreased divorce rates in their families?
 

What if we could just add these little nudges? What if you brought your three best friends with you and will pay for everything? You're not changing the wish. You're just augmenting it with the goal of having it have an even bigger impact on that child's life. Well, um, the Make-A-Wish foundation doesn't collect any data.
 

You know, they had gone to other groups, famous big AI organizations, in hopes of getting help with a project like this, and those groups said, well, where are your gigabytes' worth of data for us to train a model on? But that's not how I approach problems, nor, I think, how any of us should. It was: what can we do to make a meaningful difference, and what's available for us to achieve that?
 

So the only way to answer either of those questions was to literally go out, watch their wish granters ring doorbells, visit with families, cry, everybody cries. That's the way these things work every single time. And then keep an eye open for what we could do? Where's the right moment, not only to collect data, but to put it back into the system to make a difference.
 

That was an amazing project; it got stalled out for various reasons. But the idea that you could use machine learning like this? Of course we can use it in financial fraud detection; that is completely uninteresting to me. You can use it to automate call centers. The idea that you could use it to literally change the survival rates of a dying child, simply by shifting a wish, is like magic. AI isn't magic, but there are those moments where it could do that sort of thing.
 

[00:28:34] Clara Durodie: I guess where I was trying to take the conversation was to highlight the value of being able to question your assumptions, of taking your ego out of the conversation and starting with questioning the assumptions, questioning your thinking, your critical reasoning. These are skills which matter whether you are a leader, a business leader, a board director, a decision maker; it doesn't matter whether you work in financial services or in other fields. I think they're very valuable in this digital environment, which moves so quickly as technology advances. As assumptions become fluid, our ability to put our ego aside, to question, and to be humble about what defines a problem and how we reach a decision, I think that strengthens us as leaders and our ability to make the right decisions. 
 

[00:29:56] Dr Vivienne Ming: I don't wanna minimize it. I mean, I have an ego. I don't treat it as my enemy, but it is something I'm very wary of, and the ways in my own personal life I learned to deal with it were hard. You know, being homeless for years, that's a good way to scrub your ego. I often say that it was that period of time that really taught me the lesson that it's not about me.
 

You know that again, seemingly paradoxically, my work and my life is full of seeming paradoxes. Everything about my life got better when it stopped being about my life and so for me in a kind of selfish way, I'm a little scared of my ego. I'm scared of it beginning to creep in and influence my thinking because my life is astonishing.
 

You know, I recently got to chat with the head of the census about how they can make the US census work better. The UN Human Rights Commission took a break from discussing Ukraine so I could show up and address them about some issues in my work. It would be really easy to let my ego get away from me.
 

I get that I'm a relatively clever person, and it would be easy to think I'm special and the world owes me something because I'm special. But in fact, I think I'm, honestly, not that special. Most people simply don't have the opportunity to do the things I get to do. And I'm not delusional.
 

I'm not saying that the capacity to be a CEO, scientist, or philanthropist is a trivial thing, but it's there, at least as a spark, in a meaningful number of people in this world who will realistically never have the chance to engage at that level. But you know, I built the first ever AI for diabetes for my own son. And I'm immensely proud of that. I mean, every parent should have the chance to sort of be a superhero for their own child, but the truth is there is some kid in a favela in Rio or a village outside Kinshasa, or for that matter down the street from me in Oakland, California, who has got the cure in their potential, not some crummy little AI.
 

A cure. And the simple truth is it is incredibly unlikely they will ever live a life that brings that cure into the world. I think about my life and how easily it could have ended on the street, and by ended, I mean ended, in the nineties. I had every advantage, because I had a lot of advantages in life, and even with every advantage I almost slipped through the cracks. Think about how many other people could be doing what I do but will never realistically be given the chance. 

 

This becomes a huge motivation for my work. I don't have anything against being rich. I don't have anything against people being proud of their accomplishments, but it does take a kind of either courage, if you want to see it that way, or just a mindset, to be willing to take chances on ideas, for example, that might be wrong.
 

I certainly do see what you spoke of earlier, which is astonishing, brilliant, highly accredited junior executives seem to spend their entire career making certain they don't make a mistake when in fact the thing the world most needs them to do is make three mistakes so that the fourth idea changes the world but we just, we just don't.
 

[00:34:06] Clara Durodie: We don't. 
 

[00:34:07] Dr Vivienne Ming: We don't reward that, whether we're talking about banking, which is a very hierarchical industry, or science, which has its own hierarchies.
 

[00:34:15] Clara Durodie: Well, financial services most certainly is not an industry that rewards making mistakes. Most certainly not. 

 

What I'd like to pick up on, from what you've just said, is the blueprint: how we can actually be better at what we do if we understand our ego, which in many ways is our enemy, and turn it into a friend.
 

Another thing I'd like to pick up from what you've just said, I'd like to invite you to tell us a little bit about your work and being a hero for your son. I have a son. I know what it is to be a mom. I know what it is to discover that your son has a condition and you want to do everything in your power to help your child live a better life.
 

A very good friend of mine had his son diagnosed with type 1 diabetes when the boy was very young, and I could see the pain of that parent. How did you build the strength, and how did you build not only the strength but the model, to help you navigate his life with a condition which can be very difficult and very debilitating? 
 

[00:35:57] Dr Vivienne Ming: You know, there were a number of components. One is to recognize that at a fundamental level, of course, I'm just like everyone else, every night, once we were discharged from the hospital. Because for us, the whole experience started with four days in the pediatric intensive care unit of Oakland Children's Hospital, which were about the four hardest days I've ever lived. But once you get home, there's a very human part of this: now my child has a life-threatening illness. I was already the kind of parent that would wake up in the middle of the night. You know, I was the kind of terrible parent that would go into the crib and poke the child, just poke them until they twitch.
 

And then, okay, now I can go back to sleep. And now I've got an actual reason to be afraid. So for a while, I mean months, I'll be honest, it was a fear that was never remotely realized, but I'd wake up every single morning just gripped with a terror that this is the morning where we're gonna walk into the bedroom and there'll be this horror waiting for us.
 

Obviously it never happened, but that's there at a very fundamental level and, and people hear the story. Okay. So there's this woman with some very fancy degrees and, and she knows how to program. I hate programming by the way, but I've written a lot of lines of code in my life, because it is a useful tool so she hacked all this stuff and created this thing.
 

So the moral of the story is: learn how to program and learn artificial intelligence, or, I don't know, be a neuroscientist or an endocrinologist. You could take all sorts of things from it: study STEM. Those three things have been in every future-of-work jobs report put out by every major consultancy and every public institution over the last decade.
 

And yet, I'm going to tell you, not least having been the chief scientist of one of the first companies ever to do AI in hiring: actually, none of those things is particularly predictive of the quality of work of professional programmers, AI researchers, or anyone. They are incredibly useful tools; without those tools, artists are hampered, but tools without an artist are a complete waste of time. That's what our schools are. That's what our hiring systems are. 

 

All right, back to diabetes. So I want to argue something very different. The reason I made a difference isn't because I know more AI than everyone else; a lot of people are astonishing. I got recruited once to be the chief scientist at Uber, to which I said, hell no, but they ended up hiring Zandi, who had been a professor of computer science when I was a student. He is amazing. Oh my goodness, this guy, he knows so much more about the true fundamentals of machine learning than I ever will. Why didn't he invent this thing, or someone like him? Why was it me?
 

It was me because I was a mom. It was me because if you went and asked an endocrinologist is there anything to predict from all of the data coming out of these devices, insulin pumps and continuous glucose monitors? They would've said no. We know everything about the biophysics of insulin and digestion.
 

There's nothing left after that, it's just random or individual variability, which you can't really account for. That's literally what they tell parents. Thing is you can't control everything. This is as good as it's gonna get. And I thought that it was absurd. I mean, I will be blunt. I thought you've gotta be f**king kidding me.
 

I make models of brains. Are you truly telling me diabetes is more complex than the brain? In fact, I would say it's exactly as complex, because they're all linked together. So I did end up hacking his devices, although I'll note most of what I learned how to do there came from other parents.
 

They, like me, were frustrated and went and hacked devices. So I took it and ported it into languages I was comfortable coding in. It turns out I broke all sorts of US federal laws, but you know, sometimes you've got to break an egg, or a federal law, to make some change. But when I looked at my son's glucose levels, the amount of sugar in his blood, which we've got to measure every five minutes, and how it went up and down throughout the day, I could tell you the story of his day. Not because I was a neuroscientist, not because I'm a machine learning researcher; it's because I'm his mom, because I could see he woke up in the morning, even before I did, and ate a little bit without dosing himself.
 

So he went high, and then by the time it was lunchtime he was still high; his blood sugar hadn't come down. And so he dosed for lunch, but then he had to wait, and then he dropped. I could see his day. Only on school days did we see his blood glucose levels spike, and because I'd turned him into a cyborg and we had all sorts of measures coming out of him, like perspiration levels and heart rate and activity levels, we could see his heart rate went up.
 

His perspiration went up, but his activity levels were flat. What's actually going on? And yeah, this is where it does help to be a neuroscientist. He was getting stressed out. His brain was releasing adrenaline, which was causing cortisol release, which in turn was raising his blood sugar levels. But it wasn't because he was eating too much glucose or not getting the dosing right.
 

It was because all of these little fifth graders were being asked to stand right next to a bunch of older kids, and it was stressful. So in fact, we used my AI model to go to the school and say “Hey, we should separate these kids out”. And in fact, my son's blood glucose levels dropped because of a change in how they lined up the children at school, not because of, in a sense, my AI or the medical treatment.
 

I love those moments where things that seem very sophisticated and mathematical boil down to something very human. You can tie these two things together. That's fundamentally my work: where does the human meet the math, not how do we replace one or the other, or how do we privilege one or the other.
 

So, you know, my story here is: yeah, I had powerful tools to bring to bear, and everyone should, but the tools shouldn't all be the same. We shouldn't all know how to program or all be AI experts. What really made a difference is that I was able to pull together my life as a mom, and as a neuroscientist, and as someone who could program (I'm definitely not an engineer, but I know enough), and as someone who could build machine learning, so that I could pull all of these pieces together and bring them to bear on one single problem in a way no one had ever been able to see those connections before. That's why we made a difference. And so many of the projects I work on are exactly that kind of thing: let's look at this entirely differently. Let's step back not just from the problem but from the presumed wisdom about the problem, and let's live as many different lives as possible so that we can bring all of them to bear on what comes next, and really leverage as many of them as possible.
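As a minimal, hypothetical sketch of the kind of multi-sensor reasoning described in this story (the thresholds, field names, and Python code below are illustrative assumptions, not Dr. Ming's actual model), one can flag glucose spikes that coincide with elevated heart rate and perspiration but flat activity, the signature she attributes to stress rather than food or dosing:

    from dataclasses import dataclass

    @dataclass
    class Reading:
        glucose_mgdl: float    # from a continuous glucose monitor
        heart_rate_bpm: float
        perspiration: float    # arbitrary skin-conductance units
        activity: float        # e.g. steps in the time window

    def likely_stress_spike(now: Reading, baseline: Reading) -> bool:
        """Heuristic: glucose rising while heart rate and perspiration are
        elevated but physical activity is flat suggests a stress response
        (adrenaline and cortisol) rather than food or a missed dose.
        Thresholds are illustrative only, not clinical guidance."""
        glucose_rising = now.glucose_mgdl > baseline.glucose_mgdl + 40
        sympathetic_up = (now.heart_rate_bpm > baseline.heart_rate_bpm * 1.2
                          and now.perspiration > baseline.perspiration * 1.2)
        activity_flat = now.activity <= baseline.activity * 1.1
        return glucose_rising and sympathetic_up and activity_flat

    baseline = Reading(110, 80, 1.0, 300)
    school_lineup = Reading(180, 105, 1.6, 290)
    print(likely_stress_spike(school_lineup, baseline))  # True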
 

And, and in a way, this comes back to our discussion earlier about how we create cultures that support innovation. When I hire new employees for my companies, when I bring new students into my lab, I actually only have one interview question and it is to pitch me a mad science project. And you can imagine at that point, then in the interview they need to defend the idea, but in fact, it's the exact opposite.
 

We spend the next hour together trying to figure out how to make it work, and if I feel that our idea together is better than what I would've come up with by myself, then you've got the position. Because in a world where people like me can build automated systems, whether it's automated cognition or robotic automation or process automation, whatever it is.
 

So all of that routine work, even incredibly sophisticated, highly trained, currently expensive work, all of that can be automated to some degree; all of that can be de-professionalized. Then our unique value, as I alluded to earlier, is our ability to see the world differently than everyone around us. And that could sound very soft, like a very philosophical, abstract statement, but I mean it in the most concrete sense I can: when we look at optimal incentive strategies, we're using game-theoretic models.
 

These are models where we can create games, for example a game in which you need to discover new innovations, and what we want to do is maximize the output of the market that's represented by those new innovations. We look at different incentive strategies and ask: if the individual people or agents in our models maximize their individual returns, which incentive strategy maximizes the market return, the collective intelligence?
 

It could be a true market, a competitive market; it could be a market inside your organization, or it could just be a relatively small group of people. And what we find is that almost all incentive strategies lead to herding, including market incentives. That sounds surprising, as listeners of this podcast will appreciate: if I have a more unique idea, fewer people will also have that idea and therefore I should get a bigger share. But that's not what we actually see.
 

We see, both in optimized theoretical models and in experiments, that the way for individuals to maximize their returns is to herd around good-enough ideas. The only types of incentive strategies that truly make a difference, that truly induce people to explore and take risks, go collectively by the term minority opinion.
 

Which is to say, it kind of works like a market. You get a reward, an incentive, if your idea was right, and you share it with the other people that invested in that same idea (you can think of this, for example, as a prediction market), but only if the majority opinion was wrong. So you only get rewards for ideas that are right when the majority is wrong. That maximizes collective intelligence; it maximizes collective market return and the discovery of rare, innovative ideas.
 

But let me finish with this last, crucially important point, because you'll now appreciate why this is such a hard thing to put into place. The corollary of what I just said is that most of the time, most people will be wrong by design. Most of the time, most people won't get their incentives, by design. The collective intelligence is maximized, but you are obligating people to go out and really take chances. And even though that's the optimal strategy, a provably optimal strategy, provable both empirically and theoretically, can you imagine what it takes to build a culture, a leadership culture, that truly supports and celebrates people being wrong most of the time? Not willfully wrong, obviously; your goal is still to find those valuable new ideas, but to intentionally go away from the safe spaces, from the easy-to-find ideas, to intentionally explore new spaces. This is the kind of strategy that has to be in place culturally for us to achieve those outcomes, and it takes an enormous amount for companies to get comfortable with it.
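To make the incentive rule concrete, here is a toy simulation sketch of the "minority opinion" scheme as described: agents are paid only when their idea turns out to be right while the majority's idea is wrong, and the pot is split among the few who backed it. The payoffs, probabilities, and Python formulation are my own illustrative assumptions, not the actual game-theoretic models used in the research.

    import random
    from collections import Counter

    random.seed(0)

    # Hypothetical probability that each idea turns out to be "right".
    P_RIGHT = {"safe_idea": 0.6, "novel_A": 0.2, "novel_B": 0.2}

    def minority_opinion_round(choices, pot=100.0):
        """Pay only agents whose idea was right while the majority's idea
        was wrong; backers of each winning idea split the pot."""
        counts = Counter(choices)
        majority_idea = counts.most_common(1)[0][0]
        outcome = {idea: random.random() < P_RIGHT[idea] for idea in counts}
        payouts = [0.0] * len(choices)
        if outcome[majority_idea]:
            return payouts  # majority was right: no exploration bonus this round
        for idea, n in counts.items():
            if outcome[idea] and idea != majority_idea:
                for i, choice in enumerate(choices):
                    if choice == idea:
                        payouts[i] = pot / n
        return payouts

    # Ten agents: eight herd on the safe idea, two explore.
    choices = ["safe_idea"] * 8 + ["novel_A", "novel_B"]
    for _ in range(3):
        print(minority_opinion_round(choices))

Note the corollary in the code itself: the herding agents earn nothing by design, and even the explorers are wrong most of the time; the reward exists only for being right when the crowd is not.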
 

[00:50:01] Clara Durodie: It's one of the daily battles, and I think it requires a change in approach, a change from the top down, a shift towards accepting failure, especially in our industry, which is highly regulated with very little room for error. I would say it should perhaps become a place where we don't necessarily reward failure, but don't punish it the way the current regulations do.

 

I'd very much like to move to bias, now. Algorithmic bias has been the topic of many discussions around the ethics of AI, around how “machines are destroying our society, how everything is going completely haywire because no one understands how these machines work and everything is biased and how we are going to deal with it.”
 

Now, the conversation has a very important core, which is that we need to get things right. We cannot allow technology to destroy and disable our societies, our values, and how we operate; rather, as you said, we should use technology to enable us and augment how we operate. So how do we deal with this matter of bias?
 

And I'd very much like you, perhaps, if you can, to tell us a little bit from your experience about gender bias. The reason I'm raising this type of bias is that the academic work I've read so far seems to indicate that this is one bias we cannot remove from the data; therefore, by default, all the systems we build will be gender biased.
 

Let's shed some light on this very complicated topic. 
 

[00:52:29] Dr Vivienne Ming: Absolutely. So I've already alluded to what I think is one of the best, real world stories to launch this discussion with. And that is the work at Amazon. Now I'm not saying Amazon is a bad company or fools. I'm not making any statement about them, but there was a very particular project launched there, which as I alluded to, I received what was truly the best recruiting message I have ever received in my life.
 

“Dr. Ming, in seven years we will be a one-million-person company, and your job would be to make their lives better.” I've gotten a lot of very fancy recruiting messages from companies asking me to be their chief scientist or head of algorithms, and they usually say things like “we're all gonna get so rich” or “we've got the gnarliest data problems”, which, boy, you really don't get me at all.
 

I have nothing against being rich, but I give it all away. So, that's probably not gonna be a huge incentive for me but here, I got this message from Amazon and boy did they get me? So I flew up to Seattle, met with everyone; suffice to say, Jeff's definition of a better life is distinctly different to mine.
 

And unsurprisingly, I did not take the role, but one of the projects I would've run was this one to build a deep neural network, the modern face of AI for hiring. Let's be clear. The intention of this network was to remove bias from the hiring process, remove gender bias, racial bias, whatever, wherever it may exist.
 

And there was a reason they were recruiting me, as alluded to earlier. I was the chief scientist of one of the first companies ever to actually tackle this, not a giant corporation but a startup, but we had our successes, and that work came to their attention. So they were recruiting me and I was very flattered, but I looked at the project and I said, this isn't going to work.
 

What I could see was fundamentally flawed was the very way they were approaching the project: here is all of Amazon's hiring history data, which is huge, a data set it is hard to find a bigger version of. Maybe the US federal government; the Office of Personnel Management is essentially the largest HR department in the entire world.
 

And it has 200 years of records, quite literally. I've had a chance to work with that set, but for a private company there are not many that would be bigger than Amazon. So they have all of this amazingly rich data, and most traditional AI researchers and application builders love data sets of that kind of scale.
 

They had a very specific question: is this person likely to get a promotion in their first year at Amazon? And of course they had, historically, all of the correct answers, and that actually fits perfectly into the educational experience of almost everyone working in artificial intelligence. Let's say you were earning a PhD at Cambridge or at Stanford; in all likelihood, you know, you are (1) a genius, and (2) you spent seven years with a big giant data set like ImageNet.
 

The exact question, how many dog breeds are in these pictures and all of the right answers and you and your advisor work to optimize that. And that's exactly what Amazon is now asking you to do, but that wasn't my approach. That wasn't how I was educated in AI. I was educated in AI with let's understand what's going on inside the brain.
 

Here's this powerful tool we could use to do that. And that turns out to be a pretty fundamental difference. When I looked at the problem, I immediately said: all of your historical hiring data is full of bias. I mean, that doesn't take any deep insight at all, nor are we calling out Amazon; every tech company, every company, has a hiring history full of bias. In fact, the very reason you want to launch the project is because you know that's true.
 

So right away, the data set is a problem, but it's actually not the biggest problem. The biggest problem was the question: who gets a promotion in their first year at Amazon? Let me put it this way: in more than two decades of working on projects like this, never once has a problem walked through my door with the right question attached to it, much less the perfect data set and all the right answers. Problems walk through the door saying things like “your son has diabetes, have fun with that”, or “we kind of think maybe granting wishes to dying kids might change their mortality rates, but we don't really know, and we don't know what to do with that, and by the way, there's no data”. Those are the kinds of problems that actually occur in the real world. And what you have to do is say, well, this is the data we have. I mean, I dream of data sets like all of Amazon's hiring history; I almost never get a giant data set like that. That is a true gift, but you have to understand that, by its very nature, it is full of bias, all sorts of bias, of all kinds. But gender?
 

Yeah. And sure enough, Amazon took a big giant neural network, threw it at that data with that question as the principal metric, and the resulting AI absolutely would not hire women. If the word “women” occurred in your resume, you got downgraded. And again, that isn't a shock; of course it responds that way.
 

Are we really shocked? That's the history of the tech industry. Keep in mind, virtually all of modern machine learning, not all of it, but virtually all of it, almost everything that we get excited about today, from models like BERT and GPT-3 to the new DALL-E models for turning text into images.
 

They're amazing. Or the work at DeepMind, Google's lab: they have this approach called reinforcement learning, and they learned how to play video games, then Go, and then to beat the best human chess players.
 

Nowadays they're figuring out how to fold proteins with a system called AlphaFold, or you can actually give a system a problem statement and it'll write the code to solve the problem; explain to me again how everyone's supposed to learn how to code because it will guarantee them a job. But every one of those examples I just gave is purely correlational.
 

They don't truly understand the causes of protein folding, or Go, or hiring, what makes a great employee, causally. What they're picking up on are phenomenally subtle correlations. We once did an analysis of my own work and found that for every causal relationship in the kind of data sets that I work with, messy human ones, there were orders of magnitude more spurious correlations.
 

Things like: you're more likely to get promoted if you're a man than a woman. That's not a causal relationship. Even if you are the kind of person that thought there was a reason why men should be promoted above women, maybe you think testosterone gives men an edge, even then there are women that are high in testosterone; in fact, behaviorally, they can be very much big risk takers, just like men. There's also wonderful research on the very small number of matriarchal societies in the world showing that women from those societies are just as big risk takers as men, if not bigger. So, okay, you think there is something special about being a man at Amazon; that's still not a causal relationship.
 

The causal relationship is surely something like risk-taking behavior, but these models are not sensitive to that. They're just picking up on the correlations, and it turns out the biggest correlation in that data set is gender. So Amazon, being good people, the people working on this project, they listened to what has grown in recent years into what we might call the ethical AI movement.
 

And boy, it did not exist when I got started in this field, not even remotely, but now it's out there. And it has some clear statements, like, for example: de-bias your data sets. So in this case, remove all of the names and pronouns, remove the names of women's colleges, remove all the gendered language entirely.
 

So they did that, and they created a data set where the average human being couldn't really tell whether a given candidate was male or female, and then they trained the neural network up on that data set. And it did something amazing: it figured out who was a woman when humans couldn't, and it wouldn't give them a job. In fact, I'm a bit of an outlier in this: I think all of these tools, having ethical guidelines, having advisory boards, particularly empowered advisory boards that can really make concrete decisions about the deployment of algorithms, and methodological things like rebalancing data sets, all of those tools should be on our table.
 

They should be in our tool belts, helping us build better systems, but they don't fix these problems. We fix these problems. And in this particular case, I would argue that if you have to create a fake world to de-bias your AI, that's deeply problematic. Our AI should be able to look at real-world data, race and gender, socioeconomic background, language skills, and causally see why this person is or is not a good hire. Same thing with loans, university admissions, attention from police. I would argue that building and investing in AI that relies only on correlations, however incredibly sophisticated and powerful, is a mistake.
 

These powerful algorithms are destined to create bubbles and land us in disastrous places. It is astonishing the degree to which we trust them. We ignore the old "correlation does not imply causation" mantra and put more and more crucial decision making, civil-rights, even human-rights-level decision making, at the level of correlations, however sophisticated.
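The de-biasing failure described above, where a network recovers gender from a data set humans can no longer read gender from, can be sketched in a few lines: remove the explicit attribute, keep features that merely correlate with it, and the bias in the historical labels comes back through the proxies. The data, feature counts, and numbers below are invented for illustration.

```python
# Sketch of "proxy leakage": the model never sees gender, yet its scores
# differ sharply by group because correlated features reconstruct it.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000

# Hidden protected attribute (never shown to the model).
is_woman = rng.integers(0, 2, n)

# "De-biased" features: no explicit gender, but each is weakly correlated
# with it (hobbies, verb choices, schools, and so on).
proxies = rng.normal(loc=0.4 * is_woman[:, None], scale=1.0, size=(n, 20))

# Historical label encodes past bias: promotions went to men more often.
promoted = (rng.normal(size=n) + 0.8 * (1 - is_woman) > 0.8).astype(int)

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    proxies, promoted, is_woman, test_size=0.5, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
scores = model.predict_proba(X_te)[:, 1]

print("mean score, men:  ", scores[g_te == 0].mean())
print("mean score, women:", scores[g_te == 1].mean())
```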
 

So that was a big, long tour, but that's one side, where AI induces bias. Let me take one more minute to say there are amazing uses of AI to combat bias as well. The reason Amazon was recruiting me was because of the systems we built at my company; I think we did it about as well as anyone's ever done it. And what we did was look at the scientific literature: what is causally related to success on the job?
 

If you are a software developer, if you're a salesperson, a designer, what can we see that is unrelated to gender or race and causally predicts that you will write great code, that you will deliver sales? And we built little targeted AIs to look for those qualities in people, given publicly available data about them. We still had some pretty complex ethical issues there.
 

No one said we could use that data. Although it was public data and we didn't technically do anything wrong, that's another area for us to think about ethically. But that's not what most groups do nowadays. Nobody is putting in the time to say, here's something we know causally relates to what we are trying to do.
 

And here's an AI that leverages that discovery. Instead, we treat AI like a magic wand: we wave it around a problem and assume its solution is perfect because it's math. But it turns out AI is just as biased as we are. You can get very nerdy here and talk about a lot of fundamental, unresolvable elements of machine learning systems, like the bias-variance trade-off, but just practically speaking, we can simply say AI is biased.
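For reference, the bias-variance trade-off mentioned here is usually stated as a decomposition of expected squared prediction error; this is the textbook form rather than anything derived in the conversation.

```latex
% For y = f(x) + \varepsilon with \mathbb{E}[\varepsilon] = 0 and
% \operatorname{Var}(\varepsilon) = \sigma^2, and an estimator \hat{f}
% trained on a random sample:
\mathbb{E}\!\left[(y - \hat{f}(x))^2\right]
  = \underbrace{\left(\mathbb{E}[\hat{f}(x)] - f(x)\right)^2}_{\text{bias}^2}
  + \underbrace{\mathbb{E}\!\left[\left(\hat{f}(x) - \mathbb{E}[\hat{f}(x)]\right)^2\right]}_{\text{variance}}
  + \underbrace{\sigma^2}_{\text{irreducible noise}}
```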
 

It turns out it's biased in ways that are different from us. Look at the kinds of pictures where an AI mistakenly sees a giraffe, versus the ones where we mistakenly see a giraffe. To us, the AI's mistakes look insane: there is no giraffe in the picture, there's nothing that even remotely looks like a giraffe. They make decisions in a way that's different from us.
 

And if they could think, and they don't, they would probably think we're pretty crazy when they look at the kinds of errors we make. What excites me is changing the acronym: augmented intelligence rather than artificial intelligence. I am a huge advocate that we should build machine learning systems that make people better. Not that there aren't some things that are worth fully automating.
 

I totally support the idea that people shouldn't be bent over in fields picking lettuce 16 hours a day, but then we have to own where their lives are going to take them. At the same time, we should recognize that there's a class of creative labor that we can either deprofessionalize or we can augment.
 

And the economic gain is overwhelmingly on the augmentation side. But actually, the trend over the last 20 to 30 years, as far as automation goes, has largely been on the deprofessionalization side. So that's a tension, and it's right there in the finance industry.
 

[01:08:36] Clara Durodie: I believe we have spent a lot of time, at least in the past few years, in our industry looking at ways to use automation to cut costs rather than to improve the quality of the work by augmenting it, which is what I think we need to do. We spend time trying to recalibrate the conversation, the narrative, around using AI as a business tool for growth and augmentation rather than as a replacement tool.

 

This conversation is fascinating. I'd like to now move on to discuss regulating AI. Some AI was not used for the right purposes, and it upset some rules, some people, and some of the ways in which we live our lives, and obviously that led regulators and a chorus of people to ask for regulation to be implemented. One could very strongly argue that there is a need for some sort of regulation, and the European Union has been working very hard to put this in place with the EU AI Act, which regulates AI systems based on their risk. What is your experience as a scientist? How should regulators look at regulating AI?
 

[01:10:34] Dr Vivienne Ming: Yeah, wow. Regulation is a very complex issue. You know, there's this old, funny saying: democracy is the worst of all forms of government, except all the other ones. And I feel a similar kind of ambivalence about regulation. Which is to say, on one hand, you can look, for example, at drug discovery, and the idea that neither the US nor the EU nor most places would allow a drug to go onto the market that hadn't been proven to be effective.
 

Where the risks were well documented, the benefits were well documented, and, at least in the ideal, and let's be honest, it isn't ideal, doctors were balancing those risk decisions. The truth is they're probably hurting just like the rest of us, but in the ideal they're balancing these decisions.

 

I've got to admit, there's a part of me that looks at algorithms deciding who gets loans, who gets jobs, who gets to go through border security without additional attention from the police, and I think: how do we allow such deeply personal, intrusive experiences in our lives without that same standard of proof? How do we allow a company to build an AI that analyzes facial expressions during a job interview, and let it influence whether people get jobs? And I will say, as someone who has worked both in facial analysis and in jobs work, there is no scientific basis to those claims.
 

So that's one side of it, and clearly there you have a role for regulation, a role for legislators. But that's not the only way of thinking about it. You can think of institutional regulation: having groups come in and, essentially with carrot and stick, incentivize better behavior and punish worse behavior.
 

So it doesn't have to just be regulation. You could think about the roles of institutions as well, whether they're governmental, supra-governmental, or even private institutions. You know, it's been a little messy in the US and Europe recently, but think of the role of audit firms.
 

I think it's astonishing that we have not yet made data audits and algorithm audits a normal, regular part of business practice, the same way no public company would ever get away with failing to audit its finances. Failing to audit your algorithms and data just seems shocking.
 

And that was a norm. It became formalized and regulated, but it started as a norm, because it was good for markets to be able to have confidence in what you were investing in. Yet again, we have another domain where we have black boxes, we literally refer to them as black boxes, that can be core to business practices, and yet the companies themselves may not fully understand them, much less outsiders.
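To make the idea of an algorithm audit concrete, here is a minimal sketch of one routine check an auditor might run, comparing a model's selection rates across groups. The column names, toy data, and the 0.8 threshold are illustrative assumptions, not a standard or a complete audit.

```python
# Sketch of an adverse-impact check: the selection rate of each group
# divided by the highest group's rate; very low ratios get flagged.
import pandas as pd

def adverse_impact_ratio(df: pd.DataFrame, group_col: str, decision_col: str) -> pd.Series:
    """Selection rate of each group relative to the most-selected group."""
    rates = df.groupby(group_col)[decision_col].mean()
    return rates / rates.max()

# Hypothetical decision log from an automated hiring screen.
decisions = pd.DataFrame({
    "gender":   ["m", "m", "f", "f", "m", "f", "m", "f"],
    "advanced": [1,    1,   0,   1,   1,   0,   1,   0],
})

ratios = adverse_impact_ratio(decisions, "gender", "advanced")
print(ratios)
print("flag for review:", (ratios < 0.8).any())
```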
 

So those are some arguments in favor. The argument against, I think, is pretty simple: this is a domain that just moves far too fast for regulation to keep up. That is one reason, by the way, that I think about the choice between two approaches: regulation through legislation, versus what we might call institutional regulation.
 

The latter is better. I think legislators should empower institutions. You know, boy, during the pandemic the WHO and the CDC have not covered themselves in glory, I will acknowledge, but nonetheless, that's the kind of thing: institutions that can come in and say, "Hey, you're making these claims. Let's take a deep look at them."
 

"And we understand algorithms at a level that you do as well." So that's one perspective. But I would argue that there's another form of bias intrinsic to these systems which has not been fully explored, which in fact you almost never see discussed in ethical AI research or in talks about legislation, which is: even if these things worked perfectly, they only work in the self-interest of the group that built them.
 

Almost always, these things are actually representing transactions between multiple groups: between a doctor or a medical diagnostics company and a patient, between a bank or lending agency and a borrower, between a job seeker, a company, and the software developer that developed the tools.
 

So even if the algorithm worked perfectly in deciding who gets loans, it will almost certainly make those decisions in support of what maximizes revenue for the bank. Which, ironically, probably means another U-shaped curve: ignoring super safe loans because they're not very profitable, ignoring highly risky loans, which is almost certainly going to exclude a specific population, and actually aggressively targeting people that you think will be trapped in debt cycles.
 

That's the way the algorithm will maximize returns to the bank, which isn't really in society's self-interest, and it certainly isn't in the self-interest of the borrower, but that's not whose returns the algorithm is optimizing. We could expect these systems to take that into account, but we don't see that happening.
 

So here's another thing that I have done in my work and that others have called for. It's another way of thinking about regulation, but in a more market-centric way, which is that we need more of a balance of power. For example, if a bank wants to bring an AI to bear in making a decision about my loan, I should be able to bring my own; it can present all the reasons why I should get a loan.
 

Now we have what we might call, in nerdy AI terms, an explainability battle. If yours is a black box and mine has reasons, I win. So you kind of have to have reasons also, and we can drive ourselves more towards causal relationships in lending. But it also makes the practice more transparent, because the economics of building an algorithm to assess job seekers or loan seekers are so efficient.
 

I don't need to wait for someone to apply. I could assess everyone and then proactively target them with job offers or loan offers. Which then means I may have been passed over for a loan, let's say, in a way that violates EU or US civil rights legislation, and yet I'm completely unaware that that happened.
 

And so I have no legal standing to bring a cause of action, not in the US at least. You know, that's a change that makes it economically efficient to run models like that. And I know I'm getting very wonky and very nerdy here, but it is deeply problematic, from a societal sense, for these systems to be so obfuscated that no one realizes where bias may exist and no one recognizes that bias will persist in favor of the algorithm maintainers.
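One way to picture the "mine has reasons" side of that explainability battle is a small, interpretable applicant-side model whose per-feature contributions can be read off directly. The feature names and data below are hypothetical, and a real system would need validated, causally grounded inputs.

```python
# Sketch: a linear loan model where each feature's contribution to the
# applicant's approval log-odds can be listed as an explicit "reason".
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income", "debt_to_income", "on_time_payments", "years_employed"]

# Hypothetical, standardized training data (rows: past applicants).
X = np.random.default_rng(1).normal(size=(200, 4))
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2]
     + np.random.default_rng(2).normal(size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def reasons(applicant: np.ndarray) -> list[tuple[str, float]]:
    """Per-feature contribution to this applicant's log-odds of approval."""
    contribs = model.coef_[0] * applicant
    return sorted(zip(features, contribs), key=lambda kv: -abs(kv[1]))

applicant = np.array([1.2, -0.8, 0.9, 0.1])   # a hypothetical applicant
print("approval probability:", model.predict_proba(applicant.reshape(1, -1))[0, 1])
for name, contribution in reasons(applicant):
    print(f"{name:>18}: {contribution:+.2f}")
```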

 

So one last thing we're doing. I mentioned I have a company developing tools for postpartum depression, where we mash up language models and mobility models and biomarker data; we're looking at Alzheimer's and major depression and other things as well. We just decided, right up front, that we won't own the data. So we've created a trust, a legal entity whose fiduciary responsibility is to the users, not to us. We believe our economic value proposition is clear and valuable, and investors have agreed, but this is one aspect where we made the decision right up front.
 

Part of our business value proposition is simply not going to be the value of our data. We don't own it. We can't make use of it in any way that is not agreed to by the trust, and the trust doesn't work for us. If we truly believe the thing we're doing is valuable, then how is that not a rational decision?
 

Plus, of course, we're working in private spaces, postpartum depression and Alzheimer's and fundamental health issues. We also don't think that people would really feel comfortable sharing that data if they thought we could sell it on to ad-targeting companies or political manipulation companies.
 

So we've decided, right up front, that we will literally put our money where our mouth is, in the form of never, ever possessing the economic value of that data, other than what we extract to make our medical predictions. Again, that's a huge leap. That's not the norm. No one's forcing us to do it.
 

We just think it's right: right from a business perspective, right from a societal perspective. I think the tech industry will follow us.
 

[01:20:28] Clara Durodie: I think it's a very powerful proposition, one which should enable the entire value chain, from investors all the way through to users, to look at data use, data collection, and the value of data in a different way.
 

On that positive note, and with a positive solution to a very complicated problem like regulation and personal data use, I would like, very reluctantly but in the interest of time, to conclude our conversation today. I've enjoyed every single minute of it.

 

It's an absolute privilege and an honor to be able to have you on our show today. 

 

The show notes will include a summary and the valuable references you mentioned in the conversation today, so our readers are able to find everything in one single place.
 

We thank you very much, and we hope to see you again on our show, as we say goodbye from England.
 

[01:21:54] Dr Vivienne Ming: It's been a pleasure and goodbye from California. We'll be in London soon. 
 

[01:22:01] Clara Durodie: Thank you very much, Vivienne.

