Dr. Nicholas Mattei
Nicholas Mattei is an Assistant Professor of Computer Science at Tulane University. His research focuses on the theory and practice of artificial intelligence, largely motivated by problems that require a blend of techniques to develop systems and algorithms that support decision making for autonomous agents and/or humans. Most of his projects leverage theory, data, and experiment to create novel algorithms, mechanisms, and systems that enable and support individual and group decision making.
Before joining Tulane he was a Research Staff Member at the IBM T.J. Watson Research Laboratory in the IBM Research AI – Reasoning Lab, where he worked on a number of topics including Complex Question Answering / Natural Language Inference, Decision Making, and issues in AI, Ethics, and Society. Before that he was a senior researcher working with Prof. Toby Walsh in the AI & Algorithmic Decision Theory Group at Data61 (formerly known as the Optimisation Group at NICTA). At NICTA he was the founder and maintainer of PrefLib: A Library for Preferences and the associated PrefLib:Tools available on GitHub, and he was the founder/co-chair of the workshop on Exploring Beyond the Worst Case in Computational Social Choice (2014, 2015, 2016, 2017) held at AAMAS. He was/is also an adjunct lecturer in the School of Computer Science and Engineering (CSE) and a member of the Algorithms Group at the University of New South Wales. He previously worked as a programmer and embedded electronics designer for nano-satellites at NASA Ames Research Center. He received his Ph.D. from the University of Kentucky under the supervision of Prof. Judy Goldsmith in 2012.
Nicholas has served on the SPC or PC of AAAI, AAMAS, AIES, IJCAI, COMSOC, ECAI, CP and other major AI conferences. He has reviewed for AIJ, AGNT, JAIR, IEEE Intelligent Systems, ACM:TEAC and other journals. He serves as the AI, Ethics, and Society Officer for ACM:SIGAI and as the Public Relations coordinator for JAIR. He has also given tutorials on social choice and empirical methods in artificial intelligence at a number of venues including AUSAI and the MSS Workshop in New Zealand. He has taught various courses on knowledge representation and reasoning; artificial intelligence; and programming at the University of Kentucky and the University of New South Wales.
While you may struggle to develop an AI system that is fit for purpose, the more significant challenge may be for you and your team to convince leadership, your business partners, and others to trust the AI system you want to implement. How do you think about trust? In this conversation, Kristina and AI expert Dr. Nicholas Mattei explore the challenges in building trust in AI, focusing on transparency, fairness, and ethical considerations. They discuss the role of AI in society, data governance, and public perception, particularly in relation to generative AI models like ChatGPT.
[00:00:14] INTRO: Welcome to The Power of Digital Policy, a show that helps digital marketers, online communications directors, and others throughout the organization balance out risks and opportunities created by using digital channels. Here's your host, Kristina Podnar.
[00:00:32] KRISTINA: One of the biggest considerations our industry faces at this moment with AI is trust. To help us understand the risks and opportunities in front of us and make sense of it all is Dr. Nicholas Mattei. Nicholas is presently an assistant professor of computer science at Tulane University, with research focused on the theory and practice of AI. Before joining Tulane, he was a research staff member in the IBM T. J. Watson Research Laboratory at the IBM Research AI Reasoning Lab, where he worked on a number of topics, including complex question answering, natural language inference, decision-making, and issues in AI, ethics, and society. Nicholas has an impressive list of additional affiliations before his time at IBM, as well as countless achievements and accolades. I won't go into a lot of that because we would be here for a while, but I encourage you to check out his bio on the site. Right now, let's dive into his area of expertise. Nicholas, welcome. Thanks for coming to hang out today.
[00:01:25] NICHOLAS: It's good to hang out with you, Kristina.
[00:01:26] KRISTINA: You have a rich history in this space. Help us understand how to go about establishing trust, things like transparency, fairness, accuracy, and accountability in AI systems. It sounds so simple when you say it fast, but you wrote a book with others around this, so it's a little bit more complex than it sounds at first glance.
[00:01:46] NICHOLAS: Yeah, it's one of these topics where I think we say a lot of these words, like you say trust, you say transparency, and it seems like everyone knows what they mean. But when it comes to deploying these systems and designing these systems, there's a lot more to think about. A lot of it's on the technology side, and a lot of it's on communicating with your customers and communicating how you're using folks' data. What's the thought process that went into these systems? What are the decisions they're making, and how are those decisions being made? And I think a lot of folks have kind of a knee-jerk reaction, and a lot of customers have that sort of reaction to just the mention of AI, right? I've been doing AI-type media for a long time, and every time they always want to put the Terminator on there or something like that, so AI has an image problem in some ways. Because what AI does is route you through traffic when Google says, oh, there's traffic on the highway, right? It gets your boxes to your front door faster through these large shipping networks that UPS and FedEx have. A lot of these AI systems have been making our lives easier, but they've been very much in the background for a long time. What you're starting to see over the last ten years or so, and especially over the last couple of years with ChatGPT and a lot of these generative, conversational AI systems, is that these AI systems are becoming very consumer-facing, and people know that they're AI, right? And once that happens, people have lots of questions, because they sort of think about the Terminator and say, oh, well, what's going to happen to all my data when it goes into these systems? And they don't know, right? I think we've got a bit of an image problem on the AI front with how these systems are being built, and that's getting more and more interesting as some of these new systems come out. If you think about what the ChatGPT interface is, it's actually there to kind of convince you it's slightly human, right? It's got the little thought bubbles that pop up while you're talking to it, to try to convince you that it's thinking about it, even though it could just answer you in real time if it really wanted to. But how are your customers going to interface with these applications, how are they going to interface with these AI systems, is really what it comes down to. How do you expose some of the data that's gone into that system? What data is it actually using? What data is it not using? There are a lot of misconceptions about what kind of data goes in and out of these systems. And then what's the oversight? Is there actually a person? Some of these AI systems are just rules, rules that we write down: okay, well, if there's a backup here, you go somewhere else. It's not some magical system in the background. So I think really trying to expose more of that process is key to establishing some of these things like transparency and consumer trust.
[00:04:13] KRISTINA: You just said data about nine times fast, I think. That got me thinking about what role data quality and data governance play in building trust in AI. Specifically, I'm thinking about things like data provenance standards, for example. And I'm also cringing at the same time because I'm thinking about the fact that we've been saying data governance for like 30 years, right? And everybody's saying data quality, and nobody's gotten data quality right inside of enterprises yet. So how is that looking for us in terms of AI?
[00:04:41] NICHOLAS: I don't know if it's looking that much better. I've moved from industry back to academia, so I don't have quite the insight on the industry side that I used to, but I think a lot of folks are really still trying to get their data situation in place. And at the end of the day, data is the fuel for these AI systems. They don't work without the data, because AI is really just data plus algorithms, and that's all it is. So if either part of that setup is not of good quality, you have a problem: using data inappropriately means either the data doesn't actually relate to the decisions or the predictions or the system you're trying to build, or you don't have the rights to it. That's a big thing coming up with a lot of these generative AI models: as they go out, they scrape the web; they just ingest huge amounts of data with somewhat limited regard to who owns the copyright to these things. So there are a lot of data quality issues and a lot of data provenance issues. And like I said, I did say data six or eight times fairly fast; we just wrote a piece talking about data and AI systems as well, really looking at how folks are worried about how their data gets used. If you look at Pew Research, there are several of these polls that come out, and folks seem to be worried about how their data is being used, specifically private data, like medical data, like employment history data, things like that. How is that data going to feed into new AI systems? What are the protections around that? And then, are these systems using other weird data that you don't even know exists? We talk about this in the book: at a personal level and the customer level, understanding the data really comes down to understanding this kind of representation of self. It's a little out there in the ethics space, but you have all these data points about you, and they do kind of reflect you. It's where you've lived, what you like to eat, what your medical history is. These are very personal data because they paint a picture of you. And I think folks get very uncomfortable when there's this picture of them out there on the Internet, out there on these servers, out there across all these products, and they don't understand what that representation is, if it's accurate, if they can access it, if they have the ability to update it. So if you look at the European Union's GDPR, the General Data Protection Regulation, a lot of the principles around it are things we talk about in the book that are very ethics-based. It's the right to be forgotten. It's the right to update. It's the right to transport your data, right? So it's very rights-centric around the user's data. And I think that's really gone a long way toward building transparency and trust in those products.

Look at some of the marketplaces that have been created by GDPR in Europe now. In Europe, since GDPR went into effect, you have to be able to download and transport your data out of these systems, and that's been a big IT infrastructure effort for a lot of these companies like Google and Amazon. But what it's done is create this weird reseller market for personal data in Europe. There are companies that will pay you to go in, download your Netflix data, download your Amazon data, download your shopping data, and then send it to them. They will pay you for that, right? Because that's like market research data. Whereas in the U.S., we have no ability to extract what we've watched on Netflix from Netflix, right? So it's a portability, rights-based thing. I mean, it does give folks more control, and it also gives them the feeling of more control. And two, it creates these weird marketplaces, which I think in some ways are actually beneficial for some of these businesses, maybe not the ones who have been trying to hoard and hold on to the data all along, but it's opened up these marketplaces, and it lets people decide who they want to sell their data to, who they want to share that data with, and really capture some of that value back, which I think is really kind of cool.
[00:08:21] KRISTINA: Really cool, but it's also causing me to wonder where we are with AI and the adoption of technologies. Because you're absolutely right. For a long time, I've been thinking, hey, I don't want my private data abused, but then I'm also of the mindset of, okay, if you're going to take my data, at least give me excellent services. So it's personally very upsetting to me when I log into my YouTube TV channel, and it keeps spewing ESPN at me. And yet I'm like the least sports-loving person on this earth. I never watch ESPN. So either the algorithm is totally wrong, or they have the wrong data, and I'm wondering which one it is. Are we just really far behind on technology, or do they have the data wrong? Or is it both, you think?
[00:09:04] NICHOLAS: The whole idea of a lot of these systems, these machine learning systems, is that if you look far enough into the past, and the past is data, the data of recorded events and things like that, you can predict the future. And in a lot of cases, this doesn't work, right? So whenever you go to deploy these AI systems, think about it: does knowing something about what has happened give me any information about what will happen? In a lot of cases, it doesn't. Look at weather, right? Broad weather patterns are somewhat predictable, but for day-to-day weather they actually run these really complicated particle simulations of the atmosphere and things like that. Because, honestly, if you want to be very accurate about the weather, you can just say whatever the weather was yesterday is what it will be tomorrow, and that's actually a fairly accurate predictor, but it's kind of useless. Similarly for these YouTube things: a fairly accurate predictor is that whatever you watched before, you'll watch exactly that again, because people like to rewatch things, but that's also fairly useless, right? It's not a good product in some sense. So they try to mix in a lot of different data signals to get these predictions of what they think you're going to be interested in. Sometimes they're macro; sometimes they're based on people in your area. I'm not privy to all the different little bits that go into it, but they do integrate a lot of signals. And the question is exactly that: if you don't watch YouTube a lot, then they don't have a lot of historical data on you, so they fill that in with historical data about people like you, or people near you, or who they think you are. When the data is right, they can be scarily accurate. When the data is wrong, they can be hilariously bad, right? So I think you've got to have both: you have to have a system in a setting where this idea holds, that the past predicts the future, and you have to have good data about the past, good data about past events that are indicative of the future. And if you look at a lot of the systems that folks get upset about or that have caused political controversy, some of the systems that Cathy O'Neil talks about in her books, or in "Automating Inequality," they're very much using not-great data to try to predict the future, because it's the data they have. It's data of convenience. A lot of times we talk about samples of convenience; we can also talk about data of convenience. You use it because it's kind of loosely correlated, and it turns out to cause a lot of these problems that you mentioned before, like bias, right?
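Editor's note: to make the "accurate but useless" baseline point above concrete, here is a minimal sketch (an illustration added here, not from the conversation, using made-up observations) of a persistence predictor that always guesses yesterday's value and still posts a respectable accuracy number:

```python
# Minimal sketch with made-up data: a "persistence" baseline that predicts
# tomorrow's weather is whatever today's weather was. It can score well on
# accuracy while telling you nothing you didn't already know.

weather = ["sun", "sun", "sun", "sun", "sun", "sun", "rain", "rain", "rain", "sun"]

predictions = weather[:-1]  # prediction for day t is the observation from day t-1
actuals = weather[1:]

correct = sum(p == a for p, a in zip(predictions, actuals))
print(f"Persistence baseline accuracy: {correct / len(actuals):.0%}")
# A high headline accuracy here doesn't mean the "model" adds any value,
# which is exactly why real recommenders mix in many other data signals.
```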
[00:11:09] KRISTINA: It's not necessarily going to be great at predicting the future, then, right? So I'm thinking to myself, it's almost like you might as well be using the Farmer's Almanac to predict the weather. That's kind of where we're going with this, is what I'm thinking.
[00:11:20] NICHOLAS: Right. Well, I think in some cases it's almost better, right? Because in some cases you almost make it worse by adding in all this data that has nothing to do with what you're trying to do, even though you think it might. This is where you really have to pay attention to what those algorithms are and what they're telling you in terms of accuracy, predictive parity, all of these complicated statistical words about how well they're capturing your data. It'll tell you, if you pay attention to it. But then you're also like, well, it gets it kind of right sometimes, so let's just go with it, right? So there's this big trade-off in what you're trying to accomplish, and I think you're really limited by the data you have. Sometimes we don't want to admit that; sometimes we don't want to get our data warehousing and data systems situation in shape to be able to do the cool AI stuff that we want to do on the front end. But it really does all build up from data. Full disclosure: I teach data science, so I tell my students that all the time.
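Editor's note: as a rough illustration of the kind of statistical checks being alluded to here (toy data invented for this example, not anything from the episode), this sketch reports accuracy and the precision of positive predictions split by group, the sort of numbers a predictive-parity check would compare:

```python
# Illustrative sketch with made-up records: compare accuracy and the precision
# of positive predictions across two groups. Large gaps between groups are the
# kind of signal these metrics "tell you" if you pay attention to them.

records = [
    # (group, predicted_positive, actually_positive)
    ("A", True, True), ("A", True, False), ("A", False, False), ("A", True, True),
    ("B", True, False), ("B", True, False), ("B", False, True), ("B", True, True),
]

for group in ("A", "B"):
    rows = [r for r in records if r[0] == group]
    accuracy = sum(pred == actual for _, pred, actual in rows) / len(rows)
    flagged = [actual for _, pred, actual in rows if pred]  # assumes each group has some positives
    precision = sum(flagged) / len(flagged)
    print(f"group {group}: accuracy={accuracy:.2f}, precision of positive predictions={precision:.2f}")
```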
[00:12:10] KRISTINA: That's a good question for you, though, because you do teach data science. Is that maybe a good way to think about it, that we're just at a point of infancy regarding data science right now? Are we just really early relative to where we will be, let's say, five years from now, with regard to AI and how good we will be at the type of data we're able to expose to AI, so that what we can do with AI is sophisticated? I'm thinking, developmentally, where have we been with other technologies historically?
[00:12:43] NICHOLAS: First, I think the data science and stats people will get mad, right? Because there's this paper from about 10 years ago called something like "50 Years of Data Science," so the statisticians would argue that we've been doing data science for quite some time. I think what it really comes down to is the volume of data that we have, and that's a big piece of it. Remember the big data buzzword and all the other stuff from 15 years ago? There are three V's of big data, right? Velocity, variety, and volume. And the issue is that the volume, especially now, is so high. I don't think a lot of the principles of data science have changed, and quite honestly, if you look at a lot of the base ideas of what goes into these neural networks and things like that, it's math from the eighties. A lot of the stuff we're doing in reinforcement learning is math from the late eighties. The big difference is that we just have access to so much more data. Back when I still did consulting-type work, you'd go in and people would say, oh, we have a lot of data, and what they think is a lot of data often isn't. Remember that something like ChatGPT, or the Watson computer that plays Jeopardy, is trained on the Internet, right? That really is a lot of data. If you've got a customer base where you're serving apps to millions of people, that's a lot of data. But if you've got two years of sales data, that's not a lot of data. If I can fit it in a spreadsheet, it's not really a lot of data in terms of volume. And you were talking about these translation technologies and things like that; a lot of that's coming from just the amount of recorded spoken language that we have. We're able to feed those volumes of data through these large systems and get predictive accuracy on the way that language works. So it's a combination: the older systems didn't understand just the base language very well, so the transcription when you talk to Alexa, or your Google Assistant, or Siri, that's come a long way, both on the data front and on the algorithms front; there's been a lot of good work there. But now we have language data across all these languages as more and more people are connected, and as more of the things we do are fed through systems that are able to record them, we're able to do new things, in large part because we have data sets that are larger than we've ever had before. And we continue to, I think the word a lot of the literature uses is datafy, the economy. It used to be that if somebody wanted to know what kind of movies you watched, they sent you the little Nielsen book. Now they more than likely know who was in the room at the time, because you have the little profiles and things like that, right? So they have more information, more data about your viewing habits than they've ever had before. It's the same thing with looking at Twitter and Reddit; the amount of language, the amount of data bits that we leave out there, it's insane. I mean, it's crazy how much there is.
And then you take these techniques where we've known, okay, if we get enough data, these techniques are going to work, and now they do work, because we have enough data. So that's why I keep going back to the idea that a lot of this stuff is just data plus algorithms, set up correctly, and getting that setup right.
[00:15:32] KRISTINA: So you recently coauthored this book titled Computing and Technology Ethics: Engaging Through Science Fiction, which I thought was an interesting title. But the crucial question you aim to answer is: how do we know when new development comes at too high a cost? So how do we know that, especially when a lot of folks who are listening are thinking about this from a business perspective? They're trying to balance out risks and opportunities for their business. They're trying to explain to their leaders or their stakeholders whether the cost is too high, whether it's brand or money or reputation or whatever it is. Is it the right amount to invest? Is it too high?
[00:16:08] NICHOLAS: Now that I'm out of industry, I'm a bit more on the hippie side, right? We also have this center at Tulane, the Center for Community-Engaged Artificial Intelligence, which we started to try to understand these questions from more of a community angle. Both the book and the center are really about understanding these trade-offs, and understanding that everything we build, every system we create, has baked into it ideas about how we think it should be used, things that it enables, or at least makes easier, and things that it makes harder, right? That comes with this sort of fundamental instrumentalist idea of technology: when you build a system, when you build an information system, when you build a technology system, you're codifying a process. And once you codify that process, you've necessarily included certain people and excluded other people. One of the things both the book and the center are really getting at is, what are the best practices around that? How do you engage with as many different stakeholders as you possibly can? How do you think about the trade-offs in terms of what it costs them, both in material terms, like what it's actually going to cost them to download your app, and in terms of these more ephemeral, ethics-y things? In full disclosure, the book is five of us: three computer science folks and two philosophy and ethics folks. It was about two years of arguing with the philosophy and ethics folks, and it really changed my thinking about how we think about these systems, so now I talk a bit more like them than I probably used to. But it's really about thinking about what it costs, both in terms of money and in terms of some of these broader ideals we've been talking about. What does it cost people in terms of giving up control of their data? How does that affect how they think about it? How does that affect the trust they may or may not have in it? The book is a textbook for undergraduate students, focused on trying to help them think through this process and giving them case studies through science fiction, which we thought they would be more engaged with; that was why we did it. But it also lets them think about these things in a broader sense. It's tough to think about these things, and one of the biggest changes for me in going through the book is that I come from a technology background. I used to be an engineer; I used to actually build things that people cared about. Now I teach students, which is a little bit different. Sometimes you don't have to do everything, sometimes you are going to make trade-offs that are almost necessary, and sometimes you need to think about the flip side of those. That's one of the reasons why ethics matters, especially the study of ethics, not in the sense people generally think of when you say ethics, like normative ethics, the Ten Commandments, here are the rules, but more this idea of descriptive ethics: what's important to people? What's important to people in different contexts, and how do you think about those things? Because that's really what you're trading off, and you're managing those trade-offs.
And so these ethical frameworks, things like deontology or utilitarianism, which some folks may have heard of, give you different ways to think about these trade-offs. And I think it can be useful to use those frameworks to aid your decision-making, to think about what it means for some of these stakeholders, how you bring them in, how you understand what their concerns are, and how you balance those. Because it's not always dollars versus dollars, dollars in, dollars out. And I think this gets at some of the stuff you've alluded to: this transparency, this trust, these things that maybe we can put some dollar values on, maybe the consultant people can do that, or they say they can, but really, what is it worth to you as a company, what is it worth to you as a provider, to have that trust? Maybe that's something you just strive for, independent of what it costs. You say, okay, we're not going to track this type of data; we're not going to use this type of data. And I think you're seeing companies make this decision, right? This is what all the new iPhone security stuff is about. It's: we're going to block these certain things because we take privacy as a virtue. We don't care that we could maybe make more money, in a utilitarian sense, by reselling this data. I don't work at Apple, so I don't know how honest they are about all of this, but at least the communication around it is, we take privacy as a virtue, we want to put that as a first-class thing, and we don't care what it's going to cost us. My in-laws use Apple all the time now because of that, right? They really buy into that. So I think it has some value, both ethically as a virtue, and also monetarily in terms of how you do it. So it's very interesting to think about these things.
[00:20:25] KRISTINA: Absolutely. And I think we're seeing more and more brands, especially global brands, think about customer loyalty and brand value, things that are not quantitative but qualitative, more so than we have historically, and ESG is so much bigger than it has ever been. So I think you're touching on a really important point, which is that, until recently, technologists were very focused on data modeling, data literacy, fusion skills, all of these technical skills. Are ethics a requirement, you think, for this next generation of technologists? I feel like you're going to say yes right off the bat because you're teaching, right? And you wrote the book. But you just said you came from a different perspective, so I'm curious: are ethics a requirement for the next generation of technologists? And if so, what ethical skills do you think will be useful?
[00:21:14] NICHOLAS: Well, of course I'm going to say yes, but I think there are other pointers besides just me being a hippie about it. If you look at some of these accreditation requirements, like ABET, or you look at the ACM and the IEEE, they all have requirements that universities teach a certain amount of ethics-based curriculum, and the book is actually part of that. So if you go to a large state school, if you get an accredited, ABET-accredited university degree, which is the majority of them for computer science, there's now an ethics requirement. So you're starting to see not just universities but the professional societies and the certification bodies say that, yes, at least some engagement with the ethics of technology development is important. I think we're really asking our technologists and our technology companies to make decisions that they didn't used to have to make, right? One of the big things is Starlink and the stuff that's going on in Ukraine. If you had told somebody 30 years ago, or even 60 years ago, that AT&T was going to decide the outcome of a war, you might not be that comfortable with it, right? But that's where we're at: the technology is really determining what's going to happen in these places. And if we don't have our students engage with that process, if we don't have our developers and practitioners engage with the fact that these systems are touching people's lives, not just every day but for the majority of every day, right? When's the last time you went through a whole day without really interacting with one of these technology systems? I know when it was for me: it was when I went backpacking with my brother-in-law for four days in Colorado, so I didn't even have a phone. Otherwise, you're constantly touching these things every day. And as that gets deeper and deeper, as the data recording gets deeper and deeper, as it intermediates more and more of our communication, the implications get bigger, and we really are doing our students a disservice by not educating them in this space, especially the descriptive ethics space, and giving them the tools to think about those problems and really articulate some of the issues around them.
[00:23:07] KRISTINA: That's interesting, because AI advances are inevitable; that's just a given at this point. It sounds like you're saying that we need to do more to foster ethical AI design, certainly with students, but I'm wondering how we also infuse that into those who have already left the education world. How do we infuse that into the consulting services? How do we infuse that into the corporations? And even, to some extent, how do we infuse that into the policymakers, who are an average age of 64 and think they're going to create the policies around AI on the Hill today? You can see the smoke rising out of my head right now.
[00:23:50] NICHOLAS: That's a great question, and one that I don't have a lot of answers to. You've seen a lot of these statements; the White House's Office of Science and Technology Policy now has the draft AI Bill of Rights, and there are folks I used to work with sitting in some of these spaces, trying to advise policymakers on these things. I think the cost of getting it wrong in the industrial space can be millions and billions of dollars; getting it wrong in the policy space, I think, could be even more dire. And how to get it right is a really hard question. You're starting to see a lot of these efforts, like I said, the AI Bill of Rights, asking what we want these systems to have. You're seeing a lot of think tanks put out white papers about these things. You're seeing groups like Mozilla working on data governance and things like that. So there are a lot of smart folks putting a lot of information out there about this stuff. The question is, how do we get that into policy? And that's at a scale I can't operate at; that's why I came back to the university, so I can teach the students. I have control over them while they're in the classroom, and a lot of these big questions are tough. I think we're all moving into this space together, and who knows what's going to happen, but you're seeing a lot of really good stuff. I know the Berkman Klein Center at Harvard has this cool graphic of all the ethics, data governance, and AI governance policies that have been put out; it's like 100 different policies and statements, and they go through them. But at the end of the day, these are just things that people are saying, and it's really up to individual companies and, eventually, our representatives at the state, local, and national level to enact some of these policies. If you look at GDPR, which we talked about earlier, I think that was really ethics-driven and community-driven: we want to embody these values in our data governance structures. If we could get to that, I think we'd be moving in that direction around AI and around these systems. But in the U.S. we're behind, right? So it's going to depend on people really demanding that these systems be accountable in some way. I think industry is a little conflicted when you ask them what policy around industry should be, because it turns out that, yeah, they do kind of want to make money. But a lot of the things being said in those hearings are things that come either from GDPR, or from a lot of these academic conferences, or from a lot of these policy documents. These things have been out for five years. The things being said in these hearings are not new, but it gives me hope that writing things like books and really advocating for these ideas eventually suffuses them enough that folks do feel like those are the right ideas, that those are good ideas. So I think just continuing to be vocal about the things we want around data sharing, the things we want our systems to embody, as individuals and then as smaller companies where maybe you have more control, being advocates for the rights of individuals and the rights of users, I think eventually we'll get there.
[00:26:37] KRISTINA: I was going to ask you about this, because it seems to me like we're maybe past, or passing, that moment in time where there was a sense of panic around AI: AI is a risk for humankind, it's going to take over, and we're going to have these issues. Do we need to be thinking about guardrails to protect ourselves as humans against what is to come? Or is it more along the lines of, no, we really need a bunch of smart, GDPR-style privacy-by-design and safety-by-design components that are going to keep us safe, like seatbelts do, right? Otherwise, we're just going to do dumb things.
[00:27:14] NICHOLAS: I've done a few podcasts on this, and this singularity, AI-risk thing is a big hobby horse of mine, because I don't buy it. I'm like, come on. It has a really weird history, which I won't get into, but if you're curious, you should look it up. It comes out of this Cold War, everyone's-going-to-kill-us era, right when computers were brand new; people were sitting there trying to think about how computers were going to get us, and saying, oh, eventually this will happen. I'm not a big existential-risk, singularity AI guy. But on the policy side, I think having, like you said, a lot of smart guardrails around transparency, a lot of smart guardrails around data, a lot of smart guardrails around what we allow these systems to make decisions about, or how they make decisions, really is the best practice in a lot of these places. Some folks get worried that having an algorithm or an AI make a decision can be scary. But in a lot of cases, these algorithms are exactly what you or I would call a recipe or a checklist: if this, then this; if this, then this. And sometimes codifying that algorithm using "AI", I'm putting air quotes up for people who can't see me, is in some of these cases just coming up with the rules we're going to make this decision by. I think in a lot of cases, formalizing that process makes things a lot more fair. If you look at a lot of the places working on recidivism and bail and things like that, a lot of times they're actually trying to bring in technology that's based on very straightforward rules, so it's not left to a lot of open judgment. In the history of policing in the United States, we've seen rules work well, and we've seen rules work very badly, so I'm not saying that rules always fix the problem. But in a lot of these cases, you have this interaction with stakeholders, interaction with the community, and then the rules and the policies are set in conversation with a lot of those folks. That's why I keep coming back to it: really understanding how a system can affect as many of the communities it's going to affect as you can, and then codifying that into a piece of technology, I think that's the right way to do it. And I think that goes also for this singularity, existential-risk, AI-is-going-to-get-us stuff. Well, maybe if we just don't let them, they won't get us. And how do you not let them? You have good policies around what you allow to go into those systems, good policies around what the limits of decision-making are, what the limits of those systems are. There's a lot of good thought and a lot of good work going on in that space, but yeah, I'm not too worried.
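Editor's note: to make the "recipe or checklist" point concrete, here is a minimal sketch (an illustration added here; the field names and thresholds are made up and don't reflect any real system) of a decision codified as explicit if-this-then-that rules:

```python
# Illustrative sketch with made-up fields and thresholds: a decision
# "algorithm" that is literally a checklist of rules. Writing the rules
# down makes the process explicit and reviewable, which is part of how
# formalizing a decision can make it more consistent and more fair.

def route_application(application: dict) -> str:
    """Return a decision along with the rule that produced it."""
    if application["missing_documents"]:
        return "request more information (rule 1: incomplete file)"
    if application["amount"] > 10_000:
        return "escalate to human reviewer (rule 2: above auto-approval limit)"
    if application["prior_issues"] == 0:
        return "approve (rule 3: clean history, small amount)"
    return "escalate to human reviewer (rule 4: default case)"

print(route_application({"missing_documents": False, "amount": 2_500, "prior_issues": 0}))
# -> approve (rule 3: clean history, small amount)
```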
[00:29:29] KRISTINA: Well, that's a great way to wrap us up, Nicholas; thank you so much for that tip. Get your policies right, folks, and wrap them up in a nice way that keeps you safe, balances out the risks and opportunities, and keeps you trucking ahead. Thanks for joining us today. Appreciate it. And to all of our listeners, I hope you found this episode engaging, enlightening, and most certainly thought-provoking because I certainly did. Thanks again, Nicholas. Really appreciate it.
[00:29:53] OUTRO: Thank you for joining the Power of Digital Policy; to sign up for our newsletter, get access to policy checklists, detailed information on policies, and other helpful resources, head over to thepowerofdigitalpolicy.com. If you get a moment, please leave a review on iTunes to help your digital colleagues find out about the podcast.