S5 #4 Navigating the GenAI frontier: Ethical dilemmas and opportunities in digital policy

Guests:
Jim Kaskade and David Maher

Jim Kaskade is CEO of Conversica, a company that provides Conversational AI solutions for businesses. He is a seasoned executive with over 20 years of experience leading various technology companies.

David Maher is Chief Technology Officer and Executive Vice President of Intertrust Technologies. He has over 30 years of experience in secure computing and is responsible for Research and Development at Intertrust. Intertrust is a software technology company specializing in trusted distributed computing and Digital Rights Management.

On this episode of "The Power of Digital Policy" Kristina talks with industry veterans Jim Kaskade and David Maher. As leaders at the helm of Conversica and Intertrust Technologies, respectively, Jim and David bring decades of experience in technology and secure computing to the table. This episode takes a deep dive into the intricate world of Generative AI, spotlighting the recent regulatory buzz from the Netherlands and its implications for day-to-day operations inside enterprises.

Keywords:
Generative AI (GenAI), digital policy, AI governance, data privacy, ethical AI use, AI-generated misinformation, data governance, AI regulation, technological innovation, secure computing, AI in business, digital ethics, AI risks and opportunities, corporate responsibility in AI, transparency in AI applications
Season:
5
Episode number:
4
Duration:
32:45
Date Published:
March 20, 2024

[00:00:00] KRISTINA: Amidst the global race to lead AI, the Netherlands recently took a bold step forward, aligning with Europe's landmark AI Act and earmarking over 200 million euros to fuel the future of generative AI, setting the stage for a transformative journey in the heart of Europe's tech sector.

[00:00:17] INTRO: Welcome to The Power of Digital Policy, a show that helps digital marketers, online communications directors, and others throughout the organization balance out risks and opportunities created by using digital channels. Here's your host, Kristina Podnar.

[00:00:36] KRISTINA: Thanks again for joining me on the Power of Digital Policy. With us today are Jim Kaskade and David Maher. The first is the CEO of Conversica, and the second is the CTO of Intertrust Technologies. Jim is a seasoned executive with over 20 years of experience leading various technology companies. David has over 30 years of experience in secure computing and is also responsible for research and development at Intertrust. We're definitely in good hands today as we dive into the risks and opportunities of GenAI in light of the recent announcement out of the Netherlands and everything else happening anew every day in this space. Welcome to you both, Jim and Dave.

[00:01:13] DAVID: Thanks, Kristina.

[00:01:16] JIM: Great to be here.

[00:01:18] KRISTINA: There are so many headlines, lots of stuff happening. Jim, you recently published a survey highlighting these concerns around AI-generated misinformation. Tell us first up: What are some of your key findings in the survey, and why should we, as digital practitioners and corporate representatives, really be alarmed, if at all?

[00:01:37] JIM: Well, I think a lot of folks are still trying to understand the exposure. As you look to apply AI in your business, in whatever capacity and whatever function, there's always the worry about how you're going to be exposed legally, and then how your end customers are going to react when they understand what's being done with their data. So privacy and data regulations are top of mind for most. And I think folks are just starting to scratch the surface because they're busy triaging from 2023. Our findings generally say that people aren't really prepared, but they're learning quickly. That being said, with 2023 being the year of discovery, I think every board and executive team has been looking at AI strategically and at their governance requirements, and a lot of the regulation that's coming out is becoming top of mind. So that's our initial statement: most of my peers are now paying attention.

[00:02:44] KRISTINA: Dave, at Intertrust, you specialize in authentication and data governance. How can technology provide the solutions Jim is talking about, including combating AI-generated misinformation?

[00:02:57] DAVID: What we specialize in is, sort of, our name: trust. And trust, if we try to define it simply, is reliance, because if I say I trust you or I trust somebody, it's not very clear what that means. But if we simply say, I rely on it or I rely on you, that's enough. Trustworthiness is a different question, and that's one of the questions we address. And the fact is that everybody is going to be trusting AI one way or another. They may not know it, but they're going to be relying on it. So the question is, how can you make it trustworthy? How can you make it more transparent, so that you understand what it is that you're trusting? And you mentioned the term authentication: how do you know that you're relying on what you think you're relying on?

[00:03:51] KRISTINA: So, in the context of AI, when you're both looking at this topic, how critical is it that we pay attention to data governance? Data governance has been around for decades, but what practices do companies need to adopt to ensure data integrity today that are different from what we've been dealing with for these past decades?

[00:04:07] JIM: Yeah. I think data governance is kind of the key term, right? You look at the amount of regulation around GDPR and similar laws globally in each country, and that's the theme that was set as a precedent long before AI, so it just continues to persist. For me, the governance, as Dave alluded to, starts with trust, with transparency. I think there's a level of fairness that people want to make sure the AI is being held to, and the overall ability to have humans oversee AI systems to make sure they're doing what they're supposed to do. Those are the three big considerations that I think most of my peers fundamentally talk about, even though there's much more to it. And for governance around data: data is king, and no one wants their data to be submitted to public models. They don't want it to be used to train models. They don't want it to be used in a way that creates harm, and they definitely don't want it to be used in a way that blemishes their brand. There are many situations where AI is applied that can be extremely brand impactful in a negative way. So yeah, we've got a lot of work to do to really appreciate how this new technology gets applied safely.

[00:05:28] KRISTINA: What are some of the challenges that you're seeing that are different today than in years past? Because if I think about AI, especially in the context of martech, for example, it's been around for years now, right? AI and martech are nothing new. We've done targeting based on demographic analysis, et cetera. So, there's not necessarily anything new on that front. But certainly, from a GenAI perspective, we seem to be delving into new territory. So, how's that shifting the brand equation beyond regulation, in your opinion?

[00:06:01] JIM: I immediately started thinking back to when cloud was first launched in 2005 by Amazon Web Services, and no one wanted to put their data in the public cloud. So there was an era of private cloud with VMware and OpenStack, et cetera, and everybody built their own private clouds. Then you had things like AWS Gov. That was a response to a very powerful inflection point of technology, which is just an enabler of AI innovation today. I think it's kind of a redo here with AI. Now what you're looking at is the equivalent: public large language models and capabilities hosted for general tenancy, or a broad set of use cases, where people are still very hesitant to incorporate their own data into those public services. So you have a bunch of private large language model solutions, privately hosted and executed, and it's kind of a redo. The other thing that came to mind for me was when we were distributing video and audio and content around the internet, and we had to make sure that content wasn't misused. Fair use was a term everybody was very aware of, and we protected content properly by digitally watermarking and encrypting it, whether it was movie studio titles or large record label music releases. Boom, you had a whole industry wrapped around making sure that you protected the content. Well, here we are: we have people's proprietary photos and videos, even your identity in terms of pictures and voice, being used. Is it fair use or is it being abused? Can I digitally watermark it and protect it and encrypt it? We're right back to the exact same conversation. So, has it changed? No, I think it's the same kind of sequence to making sure that this transformative technology is properly used and governed, that we're protecting the rights of the content being used as a derivative to power it, and that for those who don't want their content to be used, it's excluded.

[00:08:15] DAVID: I would add to that that data governance, because of AI, is now part of a bigger issue that is the focus of a lot of the regulation we see, and that is asymmetric power. Asymmetric power is totally amplified by AI. Those who have the resources to obtain a lot of data can use that data in their AI systems to influence the way decisions are made and the way people think, and those who don't have that, or who are subject to the use of those systems, are diminished.

[00:08:57] KRISTINA: So, for all of the data collected historically, is that fair use at this point? What does that all mean within the context of the EU AI Act, as well as the Dutch government's announced intent to enforce personal protections? I think about things like how many times I have called Amazon customer service. I'm sure those calls were always recorded. So, do they own that now? Can they use my voice? Is that cool? Jim, what's the deal?

[00:09:25] JIM: Yeah. The quick answer is no. If I want to be opted out, I don't want my data, my identity, my voice, my anything to be used; I have to have that right. So when you look at GDPR subject access rights, we need to be able to assert them in any domain, whether it's AI or otherwise. My data is mine. And when you think about the individual, the collective, and the overall societal impacts, this is a technology that can operate at scale because of cloud, and it's phenomenally human-like in its ability to do things. I think that magnifies the amount of disinformation and the polarization of the public, and it really does require all the things we've done in the past to be fine-tuned, because the scenario of widespread disinformation is reality. I think even widespread use of AI-driven surveillance will erode public trust in public institutions. And there's this loss of human agency in critical decision-making that AI could fuel in people's minds, because it's so powerful. So AI governance starts with the data. We should be mindful of the harm to the individual, to the collective, and to society as a whole. At the end of the day, I need to be able to say: no, I don't want my data in OpenAI's large language model, I don't want it used to train Mistral, and I don't want my photos, my voice, my videos, my content to be used as derivative works. Maybe never, or minimally, only if you attribute my content as the source, or maybe you need to monetize it and pay me for the use of my content. So this is going to be big. I think it's going to be hard to mitigate the potential harm to the individual, for sure.

[00:11:26] KRISTINA: And for anybody who's not sitting in the EU and thinking that GDPR protects them, what is their kind of protection at this moment? Because we don't have a global data privacy law in place.

[00:11:40] JIM: Great question. Dave, you want to take that one?

[00:11:43] DAVID: The fact is that we don't have any kind of strong way of even letting people know that they're talking with, or will be talking with, a chatbot. They're getting fairly good. In fact, my feeling, having experimented with so many of them, is that the creators, or the tuners, I would say, are tuning them to be convincing and authoritative and friendly and things of that sort. That's what they're spending their time and effort on, it seems to me, and not so much the accuracy. My experiments are showing really bad accuracy in some fairly critical situations, and a lot of bias, and even if you ask the chatbots themselves about their bias, they give you so many different answers that it's hard to make sense of.

[00:12:33] JIM: Well, I will definitely add to that, Dave, but first, Kristina, to your point on beyond GDPR: just like we did CCPA, every country does its version, I think, following suit. I am excited that the AI Act was established, because the EU has just been a thought leader, and I think their first-mover status is not only that they moved first but that they've done it really, really well. So I think it's going to be a great platform for regulation. Beyond that, in the US, the establishment of bipartisan work in the House of Representatives with Mike Johnson and Jeffries marks a significant step in the thinking around AI regulation and legislation. But I come from a world where I've worked in the past with all the three-letter agencies, and the work goes beyond just the U.S. Congress, where we look at these technologies as potentially harmful, as we would look at any cybersecurity technology. So whether it's CISA or DHS or the NSA, CIA, FBI, you think of all the three-letter agencies, the U.S. takes this seriously, and I think it will fuel a lot of what's happening on Capitol Hill. We will do our version, add our value, and ultimately leverage the work that's been done thoughtfully by the EU. So that makes me feel comfortable. I do know that bad actors are out there, and I've had teams chasing them, so I know exactly what it's like, but I also know that we're strong in terms of adapting to match the misuse. And to Dave's point around chatbots: oh my God, are they powerfully human, and they are only going to get more powerfully human. So yes, you will not know the difference between a human and a future digital assistant or concierge. What's going to be really, really important is transparency and disclosure of your use of AI. There will be a future where a brand that doesn't disclose that it's using AI, and kind of misrepresents it as human, is going to be frowned upon, I think, to a great extent. The younger generation is very comfortable engaging with AI. As a matter of fact, they don't like to talk to humans in many cases. They just want to get the job done. They don't want to talk to a salesperson; they'd rather talk to a chatbot if it's good. So ultimately, that generation will say, please disclose, and please also disclose your policy and how you're using the data that I'm exchanging with this particular digital AI capability.

[00:15:26] DAVID: Yeah. I think, though, at the stage we're at with AI right now, if you look at, let's say, the documents the OECD has put out that describe how regulators in their member countries could deal with AI, or, in the case of the U.S., the Biden directive on AI safety that is now being dealt with by a number of different federal agencies, one that I'm involved with being NIST, they're at the evaluation stage, looking at risk management strategies. At the end of the day, though, you have to have a lineup of cooperating layers: statute, regulation, enforcement, identifying the technological tools that you need, standards and best practices, and transparency has to be involved in all of that. We're not quite sure how that's all going to work. Just looking at the risks is a great start, but how we're going to enforce any risk management strategies, I don't think we have a clue yet.

[00:16:39] KRISTINA: I think it's so easy to get spun off about risks, but the problem with risk is that you don't do anything because it's too risky, right? We also want to fuel innovation. So, how can we balance the need for innovation and the furthering of AI with a need for regulation, ensuring ethical use and prevention of misuse? Jim, you alluded to brands and the risk to brands, but how do the brands, for example, get that innovation versus regulation piece right? Is there a way to really get the best out of it right now without regulation, or is it always a risk of going too far and not knowing it?

[00:17:16] JIM: We're going to see enterprise-class solutions that come with all the protections to make sure we limit the class of risk. Unacceptable is not going to happen; that's a class, right? Even high risk: I can tell you Visa, MasterCard, Amex, brands like those, they're not going to take high risk. Limited, probably not. Minimal only. And minimal means, well, AI chatbots can't hallucinate. So how are you going to do that? Well, you're not going to use an open LLM that's trained generally on a host of data; you're going to use a privately hosted one trained only on yours, for example. And I'll make it sound simple, because I think when people start to really put their minds to it, they can take what seems like, okay, it's going to be Skynet and take over the world, or it's going to trash my brand, or it's going to sell a car for a dollar when I didn't ask it to, and all these bad things are going to happen. No, we're going to be really thoughtful. The Fortune 100 is going to show us how it can be done in a safe way, and then the rest of the world will start to breathe a little bit. We will still show a ton of governance, because we have to protect the individual, not just the big enterprise.

[00:18:37] KRISTINA: Dave, what tools do we need for that? And do we have them? Do C-suite folks understand what ingredients are needed to get that governance piece right?

[00:18:46] DAVID: So I think we do have the tools, in the sense that I can point to technologies that can provide authentication and provide trustworthiness to organizations that want to show how they comply with various regulations, and not just regulations but the best practices Jim was alluding to. And by the way, I do think the evolution of best practices is going to be a big issue; it's something you've seen in industries over the centuries, in finance and other places. But the big issue we see is interoperability. If I'm trying to make sure that people understand the provenance of a set of data, and I apply cryptographic functions or hash functions, put them in blockchains or whatever, you need a certain degree of interoperability for people to be able to use the infrastructure that supports that. That's going to require a lot of work, and I am glad to see that some of that work is beginning. We have done things like demonstrations of how we can detect deepfakes. But the strongest technologies for detecting deepfakes are ones that actually use cryptographic functions or hash functions, blockchains, and things of that sort on the content itself, content that might originate with news organizations, as an example. So you could determine that, yes, that was a real newsreel, that was a real videographer from NPR or CNN or what have you, that's where the person was, that's what device they were using, et cetera, and the data hasn't been modified. And we've actually demonstrated that, despite the fact that there are petabytes of that kind of information produced, we can still authenticate it all, and we can do it in such a way that everybody can verify it. So that's not the issue. The issue is getting everybody to agree on ways in which they can verify that, yes, that video hasn't been tampered with.
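
[Editor's note: for listeners who want a concrete picture of the kind of content authentication David describes, here is a minimal, hypothetical sketch, not Intertrust's actual system: hash the media file, bind the hash to capture metadata, and sign the record so any later modification is detectable. It uses only the Python standard library, with HMAC standing in for the asymmetric signatures and blockchain anchoring a production provenance scheme would use; all names and keys are made up for illustration.]

import hashlib
import hmac
import json

# Hypothetical shared key held by the publisher; a real scheme would use a
# private signing key plus a publicly distributed verification key.
NEWSROOM_KEY = b"example-shared-secret"

def sign_provenance(media_bytes: bytes, metadata: dict) -> dict:
    """Hash the content, attach capture metadata, and sign the record."""
    record = {
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "metadata": metadata,  # e.g., device, location, videographer
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(NEWSROOM_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance(media_bytes: bytes, record: dict) -> bool:
    """Re-hash the content and check both the hash and the signature."""
    claimed = dict(record)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(NEWSROOM_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(signature, expected)
        and claimed["content_sha256"] == hashlib.sha256(media_bytes).hexdigest()
    )

if __name__ == "__main__":
    video = b"...raw video bytes..."
    rec = sign_provenance(video, {"org": "Example News", "device": "cam-42"})
    print(verify_provenance(video, rec))         # True: untouched footage
    print(verify_provenance(video + b"x", rec))  # False: content was altered

[The hard part David points to is not the cryptography itself but interoperability: getting everyone to agree on a common record format so verification works across organizations.]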

[00:20:53] JIM: Yeah, I think the equivalent would be: sure, two-factor authentication works, but you have to use it. So, yes, I agree with Dave. The technology protections can be made available; it's a question of whether businesses use them to protect their end customers, or us as individuals. Then you just apply these technologies to meet that level of risk, and you let frameworks govern the fact that you are maintaining that level of risk as you disclose it, so that people who are using your services can make their choices: I'm going to use this service that's high risk because it can give me high return, or I'm going to use this service because it's low risk, and that's what I'm comfortable with. At the end of the day, it will become all about disclosure. If you go back to the days of people using copyrighted material and plagiarism, we've just upped the game there. So what do you have to do? Well, the AI has to disclose the content sources. If you work with, let's say, OpenAI's ChatGPT and you ask it a question and it gives you all the sources of data it used to compile the answer, don't you feel a little more trusting of that answer? And if I get an article that says in the footnote, AI generated, and in the content are all the sources, then I know I've got a really good summary for my benefit, but it hasn't been produced unlawfully, and it doesn't have questionable content from sources I don't trust, for example.

[00:22:47] DAVID: We're not quite at that point, however. I've been testing whether there have been improvements in various chatbots, including ones that use natural language models rather than just large language models, Pi being one of them, Suleyman's latest creation. But they're still making a lot of mistakes, and they're giving references that are wrong. What I do like is that there's now infrastructure being established whereby the risks are prioritized: for the larger risks, you start to look at how to measure them, how to measure the reduction of them, and determine, for example, whether you've got relatively low risk that somebody is spewing a bunch of nonsense about some medical issue. That's one of the things I've been investigating, and I'm kind of surprised that's still something you have to check up on. But the NIST AI safety consortium is at least trying to put together things like red teams to test those kinds of things. And for those that don't pass some fairly rigorous tests that are being recommended, I think, as Jim is saying, reputation-wise they're going to have to buck up and get those things fixed.

[00:24:05] JIM: An industry opportunity for fact-checking. Here we go.

[00:24:10] KRISTINA: A new career model is coming up. Well, I think you're both also pointing to a very important aspect of where AI is heading, which is really public awareness and the role of the individual in the ecosystem. How do we get to the point where we start to onboard society as a whole, and citizens individually, so they understand how to use AI, when to trust it, and when not to?

[00:24:37] JIM: Yeah. I mean, I think education is always the core answer, and our academic institutions are stepping up to the plate. I think brands that are big and also very ethically focused know that their differentiation will be that form of communication, as they set themselves in a class of their own in terms of thoughtful, ethical AI use and the policies they endorse. But interacting with AI, whether you're a business or an individual consumer, you're going to have to be made aware of the ethical considerations, the fact that how it's applied could be unfair, non-discrimination, hate, everything that Dave said in terms of public trust. Again, it comes down to education for the individual: to demand that their privacy be maintained, that their data be properly governed, and finally, that there is a level of transparency that is over the top. I want to know exactly how you're applying AI to anything I interact with your brand on. The brands that get that will gain trust faster, and individuals will start to understand what to ask for, right? It's not just, like a GDPR SAR request, I want my data deleted, or I want to know all my data that's been used and how it's been used. You'll have that same opportunity in an AI system, but it really comes down to educating people on how much more powerful the AI is in terms of making decisions on the next thing you see on a page: which offer is being suggested to you as an individual, what other item could be put in your market basket. All these things are now data collected about you and your behavior, and AI more powerfully segments you, understands you, and tries to make your experience better.

[00:26:49] DAVID: I'm a little skeptical of education, not because I'm cynical about education, but because it's just too slow. Our educational systems could do a better job of teaching critical thinking, which is what's required to apply what Jim's talking about. But I actually think it's going to be up to news organizations to communicate what's really happening, what kinds of risks and bad things are happening, and what kinds of really good things are happening. That's something you could be really pessimistic about, but I read a lot of really good news sources, and I just wish there were more of them.

[00:27:28] KRISTINA: You've both pointed to some really interesting ingredients in the ecosystem that underlie the trust and the privacy protections we need. As you look ahead specifically for brands, because many folks listening are part of either a localized or globalized brand, what do you want the C-suite to do? Not in 2024, maybe, because the plans have already been laid out, but as we look to the coming few years: what should they be doing tactically? What can they lean into? What should they be sponsoring within their corporations? What is the bet that you think they should make? Jim?

[00:28:05] JIM: When we surveyed a bunch of C-suite executives to understand where they were, only 6 percent of them had guidelines for the ethical use of AI. I think the first step is that they all need to get clear guidelines established. Then you can go even further and ask how many of them actually put those guidelines into practice, and how many leaders understood how they're currently using AI. All of this becomes an awareness within your own business. Only about one in five leaders clearly knew and admitted that they understood how they were using AI, and most didn't even understand that much. So there is an awareness among technical leaders within the business that needs to be raised. Starting with clear ethical guidelines on internal use by their employees, and then on the use of AI that impacts the end customer, is my call to action.

[00:29:07] DAVID: The key to this is going to be how you get people to pay attention to the various kinds of issues we've been talking about, and thousands more. Branding, I think, is something that can help, but it does take a while for people to start trusting or distrusting certain brands, and there are a lot of things that are simply going to be hidden. People are just not going to understand who these new actors on the scene are. And, for example, we're going to be trusting a lot of things in the background, in energy systems, say, where you've got AI actors making decisions that have to be made quicker than anybody else can make them. I'm not quite sure people are going to have the time to understand whether the risks associated with the way those things act have been properly addressed. So I think that somehow or another we're probably going to have to come up with a set of easily understood marks: yes, this chatbot complies with this set of regulations, somebody has actually checked it, and you have an easy way of understanding that kind of thing. The same in industrial applications. I think we are going to have a shorthand for knowing and trusting the different kinds of things that inevitably are coming up nowadays.

[00:30:31] JIM: So the number one issue our peers cited in our survey was the accuracy of this technology. When we started using large language models in 2020, it was 40 percent accurate, so less than one out of two times you got things right. If you're ordering pizza and you want pepperoni but get Canadian bacon instead, maybe that's okay. But if you're Visa or MasterCard or some big brand, AT&T, name the industry vertical, a healthcare provider, you can't operate a business on that. In 2021, it was 60 percent accurate; in 2022, 85; in 2023, it's getting higher. At the end of the day, most brands will not deploy unless they feel like they can get a hundred percent accuracy. Why would you want your brand misrepresented by anybody, an employee, let alone a digital something, right? So we've got high bars to meet for big brands, and they will set the tone in terms of how this technology evolves, much more maturely. But the good news is it's possible. It just takes a lot more work. It won't be something that's generally consumable; it's going to be very specific to a particular customer in a particular vertical. But that's how it starts, and then I think eventually the masses will see the benefit, for sure.

[00:31:56] KRISTINA: Well, that's a lovely and upbeat way of summarizing the conversation and really setting up a great challenge for all of us listening today and learning from you. Jim Kaskade, really appreciate your insights. Dave Maher, thanks for joining us. It was lovely to talk with both of you, get the lay of the land, and understand what's ahead for all of us working in this emerging technology space. Thanks so much.

[00:32:20] OUTRO: Thank you for joining the Power of Digital Policy. To get access to policy checklists, detailed information on policies, and other helpful resources, head over to the power of digital policy.com. If you get a moment, please leave a review on iTunes to help your digital colleagues find out about the podcast.
