S4 #12 AI Regulations: Red tape or green light for innovation?

Guest:
Lewis Barr
Lewis Barr is a father, attorney, and privacy pro who has led legal and compliance teams as general counsel at growing tech companies for over two decades. His experiences working as a teacher, litigator, and U.S. Court of Appeals staff attorney inform his practice. His past companies have spanned telecom, e-discovery, and digital authentication and authorization. For the past two years, he has worked at Conversica, which has put AI to work since its founding over a decade ago. Conversica's AI-powered digital assistants hold conversations with its clients' prospects and customers across chat, email, and SMS channels. Working at scale, they drive toward the next best action for accelerating client revenue, whether that's signing up a prospect for a webinar, scheduling a client sales meeting, providing customer service, or gauging interest in additional client products or services, with a handoff to client sales personnel when a lead is conversation-qualified or human follow-up is requested or otherwise appropriate.

In this new podcast episode, Kristina and guest Lewis Barr, VP of Legal and General Counsel at Conversica, discuss the European Union's Artificial Intelligence Act (EUAIA). Barr highlights the Act's primary goal to implement a risk framework for AI development while addressing human rights concerns within the EU. He emphasizes the potential impact of the Act on both large tech companies and smaller enterprises, noting the need for transparency and human intervention in AI decision-making as per GDPR standards. Barr advises startups to integrate privacy and compliance from the outset, stressing the importance of understanding and articulating AI's societal impacts. The discussion also touches on how the EUAIA might influence AI regulation globally, underscoring the need for continued innovation within regulatory frameworks.

Keywords:
European Union's Artificial Intelligence Act, Compliance, Privacy, Transparency, Risk Framework, Human Rights, GDPR, Automated Decision-Making, Innovation, Legal Counsel, Digital Strategy, Tech Giants, Data Privacy, AI Regulation, Startups
Season:
4
Episode number:
12
Duration:
26:37
Date Published:
December 19, 2023

[00:00:00] KRISTINA: Are we navigating the digital labyrinth of the European Union's Artificial Intelligence Act, wondering if it's a green light for tech giants or red tape for innovation? Uncover how this groundbreaking legislation impacts your digital strategy with today's guest, Lewis Barr, Vice President of Legal and General Counsel for AI provider Conversica.

[00:00:20] INTRO: Welcome to The Power of Digital Policy, a show that helps digital marketers, online communications directors, and others throughout the organization balance out risks and opportunities created by using digital channels. Here's your host, Kristina Podnar.

[00:00:38] KRISTINA: Welcome back to the Power of Digital Policy, and thanks for making time to be with us today. I'm joined by Lewis Barr, a father, attorney, and privacy pro who has led legal and compliance teams as general counsel at growing tech companies for over two decades. That means he has a lot of great experience to share with us today. Lewis's experiences working as a teacher, litigator, and U.S. Court of Appeals staff attorney inform his practice, and today he's going to tell us what is happening with the EUAIA, also known as the European Union's Artificial Intelligence Act. A lot of things to unpack here. So, first and foremost, Lewis, it's great to have you with us today. Welcome.

[00:01:17] LEWIS: Hey, thanks, Kristina. Great to be here.

[00:01:20] KRISTINA: Everybody's talking about the Artificial Intelligence Act, but start us off, for folks who maybe haven't been paying close attention: what's the overview of this Act? What are the primary objectives?

[00:01:36] LEWIS: Well, the primary objectives of the AI Act are basically to apply a risk framework to developing AI technology and to meet the human rights concerns of the European Union. Having said that, this Act is still in the formative stage, and we don't even know at this point whether there's going to be an Act for the EU Parliament to approve, along with the other two bodies that are involved. There's a big meeting coming up on December 6th. Apparently, there is still a lot of consternation and negotiation going on, especially about some of the Act's requirements that would apply to what are deemed higher-risk AI systems.

[00:02:26] KRISTINA: Does the AI act favor large tech companies, or how might this influence operations and marketing dynamics for just everyday enterprises?

[00:02:36] LEWIS: Great, great question, which you sort of introduced in your introduction, right? There's a significant startup in France called Mistral, an AI provider, and they, along with some other companies in Germany and Italy, are concerned about this being a stifling thing, you know, to have these regulations hit them early vis-à-vis the competition, especially in the United States. Having said that, I think a lot of smaller companies are also concerned, and they really are small vis-à-vis Google and Microsoft at this point. Generally, there are a lot of smaller companies, not only in the EU but in the U.S. as well, concerned that the playing field may be unleveled, if you will, by the imposition of regulations that certainly require a bit of resource to meet. We're just in the early stages right now, and we'll get into that, I think, in this conversation, but even if the AI Act is passed, it's not going to take effect for a little while. So there's time for folks to prepare. But interestingly, there are some things already on the books in the EU that affect AI, and that's from the GDPR. So we can talk about that as well.

[00:04:08] KRISTINA: Yeah, do tell us a little bit more about that. I mean, we've all been talking about GDPR since May 25th of 2018. I remember the day well. The sky didn't fall, did it?

[00:04:17] LEWIS: Yeah, right.

[00:04:18] KRISTINA: We're all still here, but certainly, I think it's easy to forget at times that GDPR, as well as other data privacy regulations around the globe, apply not just to traditional channels but also to emerging digital channels, such as AI.

[00:04:35] LEWIS: Right. And there are a couple of articles in the GDPR that I think are worth noting. The first one is Article 22, which has to do with automated processing to reach automated decision-making, and it only applies if personal data is involved. So, no personal data, no worry about Article 22. But if you are working with personal data, and an AI system is applied to reach a decision that will have a legal effect or some other really significant effect on an individual in the EU, then certain precautions have to be taken, and the individuals who are affected have a right to ask for human intervention before the decision is imposed on them. I know this sounds quite abstract, and we'll get to an example in a second, but having that human intervention is effectively a check on AI, you know, on the machine making a very significant decision that can impact a human being. And we're talking about things like credit risk scoring, right? Or it could be a health decision, say whether somebody gets a kidney or not in a transplant situation. So it's pretty important. The other interesting piece people should realize is in the GDPR's Articles 13 and 14. They deal with transparency, and I think that was looking ahead to AI becoming much more prominent in its use; of course, AI has been around for a long time, it's just that now we're dealing with these generative AI systems and LLMs that have a lot of power behind them, so it's come more to the fore. But this transparency requirement is pretty particular, and if you don't mind, I'm just going to take a second and go to the article itself. So, where AI systems are engaged, the owner should provide meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject, the data subject being the individual who could be affected. And this then raises questions about, well, how much of an explanation, how deep should it be? If we're talking about the logic involved, are we talking about how these models work? So it's interesting, because these same or similar transparency requirements are in the AI Act, which is still in draft form.
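
A minimal sketch of the pattern Lewis describes, assuming a hypothetical credit-scoring service: the automated decision carries a plain-language summary of the logic (the Articles 13 and 14 transparency piece) and can be held for human review before it takes effect (the Article 22 check). All names and thresholds here are illustrative, not from any real system.

```python
# Hypothetical Article 22 / Articles 13-14 pattern: record the decision together
# with a plain-language explanation, and let the data subject hold it for
# human review before it is applied.
from dataclasses import dataclass


@dataclass
class AutomatedDecision:
    subject_id: str
    outcome: str              # e.g., "credit_denied"
    significant_effect: bool  # legal or similarly significant effect on the person
    logic_summary: str        # "meaningful information about the logic involved"
    status: str = "pending"
    review_requested: bool = False


def decide_credit(subject_id: str, score: float) -> AutomatedDecision:
    """Illustrative scoring rule; a real system would use a documented model."""
    outcome = "credit_approved" if score >= 0.6 else "credit_denied"
    return AutomatedDecision(
        subject_id=subject_id,
        outcome=outcome,
        significant_effect=True,  # a credit decision has a legal effect
        logic_summary=(
            "Decision based on a risk score combining payment history and "
            f"income stability; your score was {score:.2f}, threshold 0.60."
        ),
    )


def request_human_review(decision: AutomatedDecision) -> None:
    """Article 22-style safeguard: the individual can obtain human intervention."""
    decision.review_requested = True
    decision.status = "held_for_human_review"  # the decision is not applied yet


decision = decide_credit("subject-123", score=0.52)
print(decision.logic_summary)   # the transparency notice, in consumer language
request_human_review(decision)  # the check on the machine Lewis mentions
```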

[00:07:32] KRISTINA: That's interesting, because you just pointed to the fact that you need transparency. We actually need to have explainability. We need to have the human in the loop, so to speak. What does that look like at the end of the day?

[00:07:51] LEWIS: Think about the power imbalance between an individual and a large corporation using AI to make decisions; let's think of maybe a large insurance provider, right, that's making decisions on what premiums should apply to an individual. How does the individual even know what the heck is going on? So I think at the end of the day, we've got to get a lot better about notices, a lot clearer and more explicit. For example, the GDPR requires, in the situation of automated processing that has legal effects or other significant effects on individuals, that there be explicit consent. So what's needed, I think, is a really clear articulation of what's going on and how it could impact the individual, and letting the individual know: yeah, you can question this, right? And I think that would take the form of, you know, if somebody got a notice saying, hey, here's how we determined your premium, I wouldn't say a disclaimer, but a note saying, hey, you can contact us to discuss this, and here's how we reached this decision.

[00:09:00] KRISTINA: What does that look like from a digital operations team, or sort of an operational team perspective, is what I'm wondering as well, because a lot of times folks don't necessarily understand the logic that we're talking about, right? It's a little bit of a black box, especially with the generative AI models and some of the foundational models that you mentioned earlier. So how do we provide notices? How do we provide transparency when maybe we don't understand the process or have visibility into the black box? And, you know, is there an opportunity from your perspective to parse this? Should we maybe be thinking about the logic separately from the data that goes into the box, and maybe redacting some of the data?

[00:09:38] LEWIS: There are a lot of systems that are not using personal data at all, or that filter out the data, or that translate it, if you will, and anonymize it, whether that's putting it into some coding structure and separating it. There are many cases where probabilities are being determined through large language models and applied to a variety of use cases with no personal data involved at all. Having said that, of course, you will be using personal data in a lot of this decision-making, but it depends on where it comes in, right? Whether that personal data is used in the AI rule setting, if you will, or in how it's applied. So I think it points to the need for training all the way around, for a better understanding of what's going on, and then for policymakers to work across the organization, including with the marketing team, to not only understand but also articulate, in a way that is, let's say, consumer-friendly, what's going on, so that consumers can really make intelligent choices.
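
As a rough illustration of keeping the logic separate from identities, here is one way the filtering Lewis mentions could look. The field names and keyed-hash approach are assumptions for the sketch; note that keyed hashing is pseudonymization, not anonymization, so under GDPR the output is still personal data, just with reduced exposure.

```python
# Sketch: strip or pseudonymize direct identifiers before a record reaches the
# model, so the "logic" operates on tokens rather than identities.
import hashlib
import hmac

# In practice this key would live in a secrets manager, apart from the model.
PSEUDONYM_KEY = b"rotate-me-and-keep-me-out-of-the-model"

DIRECT_IDENTIFIERS = {"name", "email", "phone"}


def pseudonymize(record: dict) -> dict:
    """Replace direct identifiers with a keyed hash; keep the model features."""
    clean = {}
    for key, value in record.items():
        if key in DIRECT_IDENTIFIERS:
            clean[key] = hmac.new(
                PSEUDONYM_KEY, str(value).encode(), hashlib.sha256
            ).hexdigest()[:16]
        else:
            clean[key] = value
    return clean


record = {"name": "Ada Example", "email": "ada@example.com", "tenure_months": 18}
print(pseudonymize(record))  # identifiers replaced; features untouched
```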

[00:10:50] KRISTINA: You have this extensive background in legal and compliance roles, which sometimes I think folks tend to admire; sometimes they're afraid of people like you. But what's really great is that you sit at this intersection, I think, of being an enabler, making sure that the business isn't going to get into trouble, understanding the technology, and really problem-solving all around, it seems. So, what is your advice for some of these major challenges that companies are facing or might face under the new Act? What should they be doing right now to prepare themselves?

[00:11:23] LEWIS: I think it's a little bit different for startups as opposed to the significant players, right? Those who have the resources have a dedicated privacy group and a compliance team, and can afford very experienced outside lawyers. But for startups, this is critical; they should think of compliance as a critical need and start early. I mean, we have all heard it many times: privacy by design, security by design. Well, it really is super important here, because for people to claim all this is just too much right now, we're full speed ahead, we've got to get market share, we can't slow down, that sort of nonsense, well, if you can afford to raise whatever, 20 million, 40 million in some round, or more, to be a significant AI player, then you should be able to afford at least outside expertise to help you start from the beginning. Because it is so important: minimizing the use of personal data if you can, avoiding it altogether if you can, and also working across teams so that everybody knows and has a clear understanding of the use case, right? And then the safeguards to be built in. There are so many companies coming out with ChatGPT-like products and putting them out there, you could even say they're nascent competitors of ours, and people say, oh, wow, this price point is fantastic, wow, I just watched this demo, but they don't understand the safeguards aren't built in, right? Some of these people, and I'm not pointing to anybody in particular, haven't thought about what goes in, because I heard recently that the data is the code, right? So it's really spending time understanding the data and limiting it as much as possible, and then understanding not only the inputs, and being responsible with the inputs, but the outputs as well, and the impact on society.
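
A hedged sketch of what "safeguards built in" around a ChatGPT-like product could look like at the input and output boundaries. Here generate_reply stands in for whatever model call a product uses, and the regex patterns are deliberately simplistic placeholders, not a complete PII or safety filter.

```python
# Sketch: screen what goes into the model (responsible inputs) and what comes
# out of it (responsible outputs), rather than wiring the model straight through.
import re
from typing import Callable

PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # US SSN-shaped strings
    re.compile(r"\b\d{13,16}\b"),          # card-number-shaped strings
]


def redact_input(text: str) -> str:
    """Remove identifier-shaped strings before they ever reach the model."""
    for pattern in PII_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text


def safe_output(text: str) -> bool:
    """Block replies that still contain anything identifier-shaped."""
    return not any(p.search(text) for p in PII_PATTERNS)


def guarded_reply(user_message: str, generate_reply: Callable[[str], str]) -> str:
    reply = generate_reply(redact_input(user_message))
    return reply if safe_output(reply) else "I'm sorry, I can't share that."
```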

[00:13:42] KRISTINA: That's interesting, because a lot of startups I talk with say, well, you know, it's expensive, it's time-consuming as we try to get to an MVP. So I'm wondering, is it a chicken-or-egg type of thing? Because if you're a startup trying to get to an MVP, it will slow you down a little bit. It might take a little bit longer. Not a lot, but a little bit. And it'll certainly take resources, a little bit or a lot. At what point should a startup really be thinking about getting that privacy by design, right? The safety by design...

[00:14:13] LEWIS: ...on day one, on day one. And I'll give you a good reason why, and that sounds very self-serving, but hey, let's talk realistically. The price of a DevSecOps lead engineer, which you're going to have to have operating in the cloud, the price of a lead engineer, somebody who can help build out an AI team, that's not inexpensive. And I'm not saying you have to have an expensive lawyer either, or even a lawyer, right? Just have somebody who's well experienced in privacy and collaborative, who knows how to get things done. It's a tremendous help. And here's one example, and this isn't necessarily about AI, but in general: you want somebody in there who is already thinking about the requirements, for example, data deletion, right? And data hygiene. So you're building out a company, and you need to talk to the engineers early, if you can, about how you'll be hosting the data, right? Do you want it spread across all sorts of buckets, or do you want really nice individual accounts? So when it comes time to termination, you don't have to go write code that scans across a zillion places, right? It's a simple command: boom, the data is set for deletion and deleted, on schedule with the cloud provider. That's just one example.
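
A small sketch of the design choice Lewis describes, assuming each customer's data is laid out under its own S3 prefix: with that layout, offboarding becomes one bulk delete rather than a scan across a zillion places. It assumes AWS S3 via boto3; the bucket and prefix names are hypothetical.

```python
# Sketch: per-tenant data layout makes deletion a single bulk operation.
import boto3


def delete_tenant_data(bucket_name: str, tenant_id: str) -> None:
    """Delete everything stored under a single tenant's prefix."""
    bucket = boto3.resource("s3").Bucket(bucket_name)
    # Batch-deletes all objects under the tenant's prefix, page by page.
    bucket.objects.filter(Prefix=f"tenants/{tenant_id}/").delete()


# delete_tenant_data("acme-app-data", "customer-123")  # the "simple command"
```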

[00:15:43] KRISTINA: It sounds like you're pointing to the need for a sound steward, right? I always like to think of a steward who doesn't necessarily have to know it all, but who understands the landscape and is able to collaborate with people who do know it all, so that you can really stay on track.

[00:15:58] LEWIS: Correct. And it's not to say that, I mean, I've been so impressed by so many engineers for whom privacy was not in their job description, and neither was security, right? But they come up with great ideas and they raise concerns on their own. So it's not to say that that knowledge and that interest aren't there. But I think it's extremely important to have somebody who's really well-versed in the compliance landscape, with knowledge of the particular laws that have to be met, because you can't expect folks who haven't spent time dealing with that stuff to know about it and to take it into account.

[00:16:37] KRISTINA: I don't think I remember seeing fortune teller in your bio, and I don't know where your crystal ball is hidden, but we're talking a little bit about GDPR, and certainly in 2018, I'm not sure all of us could foresee the tapestry of data privacy laws that are happening around the globe, as well as in the U.S. How might the EU AI Act influence AI regulation in other regions, do you think, like the U.S. or Asia? How do you see that playing out?

[00:17:03] LEWIS: I think Asia is a little bit different in that there are some different models at play; Singapore has been very strong, especially in putting out governance frameworks. I don't claim to be an expert in what's going on with the laws of countries in Asia, but I do follow China a bit, and they already have on the books an act governing algorithms. The thrust is protecting their consumers, and it puts down quite a few requirements, but fundamentally it's about making sure that nothing threatens the People's Republic of China, the government, if you will. So that's a little bit different, right? That's very top down. The U.S. is historically a little bit more, let's say, laissez-faire, but I think we've been very influential, and early as well, in some measures taken. Now, of course, it's more the government regulating itself and setting up requirements for itself. There's the AI in Government Act, right, which was passed a couple of years ago and established a center of excellence. And then we've got NIST doing fantastic, I mean, super impressive work, sort of like CISA. NIST has put out a great governance framework that I've used, even though my company is smaller; you can pick and choose from it, and they've got some great examples to follow as well. So, a long-winded way of saying that the U.S., I think, has put forth some good models and examples to follow and has gotten into the discussion. Not only that, but we are taking from the GDPR and putting some provisions into bills that have been introduced; well, they're in committee, I guess. And there are a number of them. I think one just came out a couple of days ago, I forget the name of it, that has to do with algorithmic accountability, but the thrust is the same type of regulation or oversight of automated decision-making that we talked about earlier, right, that's already in place with the GDPR. And there are also transparency requirements. So I see a little bit of back and forth. Again, the AI Act is not going to be passed tomorrow, and even if it is enacted sometime before the European Parliament elections this coming summer, it's not going to take effect for probably another year or so. So there seems to be kind of a seeding of the ground, if you will, back and forth, and ideas being exchanged. And my impression is that a lot of U.S. government officials are realizing this is a time when it's more important to step in earlier, because they didn't with social media, which is interesting, right? Because, hey, that's AI, you could say, and in some cases run amok in terms of some of the videos that get promoted through the algorithm, let's say extremist videos or violent videos, things like that, and no action has been taken yet, although there are certainly tools there. So it remains to be seen what will happen in the U.S. But I think there is a general sense that we need to get something in place that's more than just voluntary commitments from the major players, not only to safeguard the public, but also maybe to make sure that power is not concentrated further in the hands of the big players, the Googles, Facebooks, Amazons, and the like. And that's more of an antitrust concern, one that Lina Khan has made pretty clear she's interested in looking at.

[00:21:03] KRISTINA: We're talking a lot about laws and regulations, what's coming, the risks of AI; you've worked extensively and have a lot of experience in telecom, e-discovery, digital authentication, regulated industries, basically, so there are already pre-existing laws, nothing new to you there. How do we ensure that those in the regulated arena, especially, still have the opportunity to innovate and create, given that we're actually looking at regulating AI? How do they strike that balance? How do they reach the point where they can still be effective and competitive in the marketplace?

[00:21:36] LEWIS: I just view AI as another tool. It's another technology, like a hammer, which is regulated, or at least its uses are. There are all sorts of laws that apply already, and various U.S. agencies have made it clear that everybody should be on notice that existing laws, especially in the areas of discrimination, will be applied vigorously to the use of AI, just like any other technology. I take with a big grain of salt any complaints about being stifled, because, again, a lot of these companies are getting a heck of a lot of investment. And to date, I haven't seen any regulations that would be stifling. They certainly have the lobbying oomph to make their case well heard.

[00:22:30] KRISTINA: So, we can anticipate continued innovation without any threat is what I'm hearing. We just have to be mindful and do privacy by design ahead of time.

[00:22:38] LEWIS: Well, that would be nice. But I think American politicians are more sensitive to the possibility of innovation being stifled, and certainly a lot of members of Congress are very supportive of business being unfettered. But I think a real concern is the impact on society, and especially on government, with what has been demonstrated to be very detrimental already with regard to deep fakes and the propagation of falsehoods about the government. So I think there's a real, valid concern here that is going to be addressed, and I don't think that should be stifling to most use cases and the, I'd say, legitimate purposes of business. I think the big concern, and this plays in with the antitrust concern, is that if you put in a rule set, you may stifle the smaller players. That I understand. But we've got lots of legislation in this country that applies only to businesses with more than X number of employees or X amount of revenue. So the desire to foster innovation by smaller businesses, I think, could be met while we at the same time properly regulate the large companies.

[00:24:11] KRISTINA: Sounds like a very hopeful message, and I hope people come away with that attitude after listening to you. If anybody's out there wondering how to prepare themselves personally, not so much from an enterprise or company perspective, but really how to prepare for whatever's coming ahead, the AI Act or any other AI regulation we're going to be facing in the next three to five years around the globe, what advice would you have for them? What should they go out and do right now?

[00:24:39] LEWIS: Well, to the extent they still need to better understand the technology, it's getting that understanding through training, through watching videos, something as simple as maybe getting on the wonderful Khan Academy, where there are some great instructional videos on what AI is, how LLMs work, what a neural network is, and stuff like that. Then joining societies or groups that can help them get up to speed, where they can attend conferences or at least be privy to regular feeds of information pertinent to this topic. For anybody who's not a member, I'd recommend the International Association of Privacy Professionals; it's a great organization that's really been a leader here. I'd also suggest subscribing to various sites and feeds like NIST's, and doing a lot of reading. At the end of the day, a lot of it is just reading and understanding, and then talking to folks who know more than you do, or who are working on this. It's not just the paper, it's the people, and it's really understanding the technology.

[00:25:44] KRISTINA: Great. Well, thank you so much for being one of these people that we can rely on to come and talk to us, share their knowledge, share their time and insights. Really appreciate you doing that because I think we're going to be continuing to listen to you as well as others as we continue to learn and evolve. So, thanks so much for making time to be with us today.

[00:26:03] OUTRO: Thank you for joining the Power of Digital Policy. To sign up for our newsletter, get access to policy checklists, detailed information on policies, and other helpful resources, head over to powerofdigitalpolicy.com. If you get a moment, please leave a review on iTunes to help your digital colleagues find out about the podcast.
