Sultan Meghji and Nick Reese
Sultan Meghji most recently served as the inaugural Chief Innovation Officer at the Federal Deposit Insurance Corporation of the United States Government. A noted expert in AI, cybersecurity, quantum computing, and Web3, he is currently a senior advisor to Reciprocal Ventures as well as America’s Frontier Fund, serves as a scholar at the Carnegie Endowment for International Peace and a fellow at the George Mason National Security Institute, and is a distinguished member of the Bretton Woods Committee. Mr. Meghji is also a professor in the graduate programs at Duke University’s Pratt Engineering School with a focus on AI, cybersecurity, and crypto. Mr. Meghji has a 30-year career in technology, including launching, scaling, and selling a number of startups in frontier technologies, biotech, and fintech. From the first web browser to one of the first cloud computing companies to the first cloud-based clinical genetics startups, Mr. Meghji has been at the forefront of innovation for decades.
Nick Reese most recently served as the first-ever Director of Emerging Technology Policy at the US Department of Homeland Security, where he advised the White House and senior Cabinet officials on national security implications of emerging technologies. He is the author of the DHS AI Strategy, DHS’s Post-Quantum Cryptographic Transition Roadmap, and the 2022 DHS Space Policy. He was also the lead DHS representative for the development of Space Policy Directive-5, National Security Memorandum-10, the National Space Policy, and Executive Order 13960. A noted expert in cybersecurity, quantum computing, artificial intelligence, and outer space, he currently holds faculty positions at New York University and George Washington University, teaching graduate courses related to emerging technology and geopolitics. Mr. Reese has a 20+ year career in the US Military, Intelligence Community, and Homeland Security, with a focus on operations and bringing technical solutions to high-stakes national security challenges. From the front lines of the War on Terror to building quantum computing policy, he has been an instrumental actor in protecting our nation, with an eye on today and the future.
Dive deep into a stimulating conversation with Sultan Meghji and Nick Reese, exploring the dynamic facets of AI technology adoption across various sectors, including academia, government, and private industries. Unpack insights into why organizations might be holding back and the transformative potential a proactive approach to AI integration can unlock. Sultan and Nick, backed by diverse experiences across sectors, unravel real-life instances of AI as a transformative tool, emphasizing its influential role as a force multiplier and an indispensable asset for future readiness.
Discover pragmatic advice on fostering an innovative ecosystem that harmoniously encourages exploration, education, and evolution with emerging technologies. Join us to navigate the evolving terrains of technology, aiming for a balanced perspective that champions innovation while acknowledging and mitigating inherent risks.
[00:00:00] KRISTINA: AI will effectively become an extension of automation processes and can uncover a vastly expanded breadth and span of information, helping to evaluate complexities at greater and greater speeds. So many opportunities await us that we can only start to imagine, but how might we anticipate and prepare for the risks that will arise?
[00:00:19] INTRO: Welcome to The Power of Digital Policy, a show that helps digital marketers, online communications directors, and others throughout the organization balance out risks and opportunities created by using digital channels. Here's your host, Kristina Podnar.
[00:00:36] KRISTINA: It may often feel as if we have one foot on the pedal to the metal and no brakes in sight. With us today to help ascertain if that is really the case are Sultan Meghji and Nick Reese of the Frontier Foundry. Sultan most recently served as the inaugural chief innovation officer at the Federal Deposit Insurance Corporation, which you might know as the FDIC. He's a noted expert in AI, cybersecurity, quantum computing, and Web3. He's also a senior advisor to Reciprocal Ventures, as well as America's Frontier Fund, and serves as a scholar at the Carnegie Endowment for International Peace, a fellow at the George Mason National Security Institute, and a distinguished member of the Bretton Woods Committee. Sultan is also a professor in the graduate program at Duke University's Pratt Engineering School with a focus on AI, cybersecurity, and crypto. I have a sense he doesn't sleep. Also with us is Nick Reese. Nick most recently served as the first-ever Director of Emerging Technology Policy at the U.S. Department of Homeland Security, where he advised the White House and senior cabinet officials on the national security implications of emerging technologies. He is the author of the DHS AI Strategy, DHS's Post-Quantum Cryptographic Transition Roadmap, and the 2022 DHS Space Policy. He was also the lead DHS representative for the development of Space Policy Directive-5, National Security Memorandum-10, the National Space Policy, and Executive Order 13960. Nick, I think you're giving Sultan a run for his money and also not sleeping. You're both very busy. Thanks for making time to be with us today.
[00:02:10] NICK, SULTAN: Happy to be here.
[00:02:12] KRISTINA: So, okay. You're both really busy. Crazy space. Lots of stuff is happening: AI, emerging technologies in the news. Last year it was the metaverse; this year it seems like AI is really the hype. Is it hype, or are we at a point where we're seeing the dividends pay off from decades of investment in AI and other emerging technologies? What's happening?
[00:02:33] SULTAN: Well, it's not either one of those. It's actually both at the same time. So is it a hype cycle? Yeah, absolutely. I mean, when you see people raising hundreds of millions of dollars in valuations with a PowerPoint presentation, you know you're in a hype cycle. And there are a lot of companies out there, just like we saw a few years ago, whether it was the metaverse or fintech before that, who have raised all this money with basically nothing. Okay, great. That's fine. But underlying a lot of this is a fundamental evolution in the technology. I got my first NSF grant, I want to say, about 31 years ago for AI, and so all the gray hair is starting to really be apparent. But I will say that 31 years ago, it was a lot of scribbling on whiteboards and using multiple supercomputers to get two plus two equals four. Now we're at a point where we can do some pretty amazing stuff on pretty limited technologies. I'd point out that everyone's very excited about a lot of the LLM work that's going on right now, whether it's what Google's doing or Microsoft or OpenAI, but in many cases, that's three- to five-year-old technology. I can run an LLM on my laptop now, and that wasn't true a few years ago. And so the underlying enabling technology has really progressed to a point where you can do some pretty astonishing things with a nominal technology investment.
[00:03:49] KRISTINA: And that's interesting, Nick, because as Sultan was speaking, I was thinking: you've both been in this space, you've been playing in it for a while, you've been doing cool stuff. But for many folks, especially in the C-suite, it feels like they're just hearing about this, right? It hasn't been mainstreamed. It's something that feels sort of new. They're trying to understand where we should be making capital investments and incorporating AI into organizations. What are some of the upticks that you're starting to see? What are some of the gotchas that you would tell people to be aware of? What should we be thinking about as AI goes mainstream?
[00:04:23] NICK: Yeah, so as we look at AI in a hype cycle that's creating a lot of new opportunities we haven't seen before, the place I would go is thinking about technology convergence. It's taking AI and maybe some other technology or technologies, putting them together, and creating a capability that is beyond what the individual capabilities might be. When we talk about AI as a general concept, it's really hard, because what does that mean? Do we just add a little bit of AI to the problem? That's not exactly helpful, and it can feel very abstract. So one of the things that is starting to happen is that AI is being talked about in specific use cases, and when you talk about it in specific use cases, you can wrap your mind around it a little bit better. Going back a few years, when I was a policymaker in the government, we were doing some policy and strategy work around AI, largely because it seemed to be the thing to do. We should be doing AI policy, and we did it, but it was very broad, very abstract. And I say that as someone who literally wrote it. Then we had kind of a lull, a bit of a drop, and I think world events pushed away from that focus on AI. And then, really, when ChatGPT came out, a lot of people looked and said, oh my gosh, what does this mean now? What do we do? So a lot of government agencies have dusted off policies that they had in place three years ago and are updating them now. It's really an exercise in not speaking about AI in such abstraction, and also in realizing that there are a lot of other technologies converging with AI that are creating more capability and maybe fueling a bit of that hype cycle now.
[00:06:17] SULTAN: I'll just add on to Nick's comment. I think he's absolutely right, and it's one of the reasons we've gone down the path we have: it's not just AI, right? There is a lot of unbelievable, amazing technology out there. The value proposition on so many of these technologies is becoming so overwhelming in some cases that it's overriding a lot of existing technologies. Most enterprises that we interact with are really struggling because they have a bunch of different technology silos, right? They have a bunch of little point solutions, or they have one big enterprise system, but it's 15 years old. With these technologies, the ability to go from zero to one with incredible value is just so fast. We're talking about days or weeks, not months or quarters or years or presidential terms. A lot of CIOs out there are used to doing the three-year transition program, knowing that they're only going to be in the job for 18 to 24 months, so they can't get blamed when it fails and the new guy comes in, and then that person gets blamed for the failure. But now we're at a point, and we're seeing it on a daily basis in our current business, where you can take something and in two weeks demonstrate unbelievable value with these underlying technologies. It's changing the economics of how you think about technology, and by approaching it from a use-case perspective, you're able to drive that value very quickly.
[00:07:35] KRISTINA: How does that work from a user's perspective, such as mine? I tried to log into Teams this morning, and Teams wasn't working for me. And I still put on my Oculus headset sometimes, and my avatar doesn't have limbs, which is annoying, because we don't have the graphics cards or the internet capability to fully send the packets back and forth that are required to have a whole body in that context. So I'm sitting here going, this all sounds really exciting, but how far away are we from having this work if we can't get the basics right? Where really are we on the adoption curve?
[00:08:09] NICK: Well, it depends on the market you're talking about and the delivery mechanism. For example, we're using AI right now: the underlying network technology and servers that are making this podcast work, with the three of us in three different places having this discussion, have a lot of AI built in, but it's not consumer-facing. There's a radical difference between what I would call infrastructure technologies, middleware and enterprise technologies, and consumer technologies. Those three things are characteristically different, and the consumer is, in a lot of ways, not going to be a leading indicator. In fact, I think over the next two years, if you look at the really successful AI companies, the vast majority of them are going to be in the other two categories. They're not going to be doing something with a virtual reality system or something that a human directly interacts with. Now, will you touch it on a daily basis? Sure. A few years ago, we put the first AI that looked at real-time payment fraud into the banking infrastructure, so every time you tap a card, swipe a card, use Apple Pay, or use Google Pay, there's AI making sure that's a good transaction, right? The evolution we're in right now is that so many of the infrastructure technologies go back to the eighties and nineties, and so many of the enterprise systems go back to the 2000s and 2010s, that so many of these use cases are focused on managing legacy technology. That, I think, is what we're going to see far more of, and we're already seeing it.
[00:09:28] SULTAN: Yeah. What I would add to that is that a lot of AI will improve and streamline processes, processes that humans have historically done and that have taken up a lot of their time. One of the things, and we actually have a piece about this on the Substack, is that AI introduction is about augmentation of humans, not about replacing humans. The kind of joke analogy is: imagine that you never had to fill out a travel voucher again for your job. How much time would you save? How much time would be available in your day that you could then use to apply your brain and your critical thinking skills to the big problem, whatever that looks like in your organization? There's actually a lot of that out there right now. Think about chatbots on websites, assistants, bots, things like that. They're not all perfect, but many of them are able to at least pass on some information to you, or point you in the right direction, or do any number of things that years ago would have required a human to pick up the phone. So I think it's that process streamlining and augmentation that takes the human and says: I can make more time and better data available to you so that you can be better at what you do.
[00:10:54] KRISTINA: So, Nick, what does that mean for the C-suite leaders who are looking to make capital investments today? They're looking to get ahead. What should they be really focusing on right now?
[00:11:05] NICK: Well, I think first, the AI that you bring into your enterprise does not have to be some massive solution that is going to solve all of your problems. You can start small. You can ask: Where do I have inefficiencies? Where do I have rote work I can cut down on so I can give some time back to my people? Look at those use cases and start there. Sometimes there's an enjoyable thought process around AI, which is: we're going to bring in this big system that is going to completely transform everything about the organization, one system to rule them all. And that's just not reality. What we see the most is organizations improving their processes and streamlining the things they do every day. Have enough of that across your organization, and all of a sudden you have real organizational change. So I would say to the C-suite folks who are looking to invest: pay attention to some small use cases, show some success, grow that, and if you do that enough times at scale, you'd be surprised at what happens.
[00:12:14] KRISTINA: Sultan, do you feel like leaders are being aggressive enough now? Because I'm thinking about some of the headlines that I'm seeing. For example, Samsung recently said, you know what, we're going to put a policy in place that says no, you can't use ChatGPT. Which might be valid, right? They had a little bit of a crazy incident where, I think, code related to their semiconductor equipment showed up in ChatGPT, which is a little bit scary. But there's also this notion: do you go the other direction and get to a point where you're too conservative and you're not leaning into the latest technology?
[00:12:52] SULTAN: Oh, absolutely. The short version is, absolutely, most organizations out there are being far too conservative and not taking advantage of it. And at the same time, we're seeing it. I think the Samsung example is a good one; a bunch of other big enterprises are basically saying, no, you're not allowed to use this technology. I vehemently disagree with that. It's sort of like saying when Siri just came out, no, no, no, you're not allowed to use Siri at home. Right? That's just ridiculous. We have built so much infrastructure, especially in larger companies, that is just wildly inefficient, whether it's in academia, in the private sector, or in the public sector and government. I don't think there's a single human listening to this who hasn't had to do a silly process just because it's always been that way, or because it was invented in the era of fax machines, and that's just how it has been, right? We tend to notice that the organizations positioning themselves better for the future are the ones that are not just allowing people to grab technology and make use of it, but encouraging it. So, to Nick's point about very targeted use cases: I worry about organizations that treat AI like a mainframe, one AI to rule them all, one model to rule them all. That's just not going to work, not only because the tech doesn't do that, but also because it's a silly way to think about things, whether it's copilots for software developers, or end users using chatbots to write content or create scripts, things like that. Even as a college professor, I tell this story fairly often: it usually takes about two weeks, two weeks of full-time work, for me to build a new course from scratch. I'm teaching a new course this fall.
We're two weeks in. It's on an emerging technology area, and I wrote the entire class, the entire curriculum, the structure, all the homework assignments, the midterm, the final, every presentation that I give, the PowerPoints that the students go through, the notes, everything, in about 45 minutes using two LLMs, while I was actually doing something else. I won't say I was recording a podcast, I was not, but it was almost that level of engagement. When I can get that amount of efficiency, that literally gave me 79 hours back to then go off and do something else. So you made the very complimentary comment at the beginning of the recording that it seems like neither of us sleeps. Well, I'll tell you, we both sleep very well, because we're using these technologies and we're not killing ourselves. Obviously, we have an early-stage startup, and I won't get on the soapbox about all the great stuff we're doing there, but we have plenty of time to do all of this because we're embracing the technologies, and our customers are doing the same thing. And we're seeing unbelievable value created in that process.
[00:15:31] NICK: Yeah, and Kristina, what I'd add to that is, both in my former government role and in my academic role (I'm also a professor at New York University), what I see is that the conversations around AI, especially the LLMs, are very risk-first, meaning: what is the risk of using it? I'm not saying that is an invalid thing, right? We should think about some of these governance issues, and I focused a lot on that when I was at DHS. But that kind of approach to the AI conversation really causes problems, because you're afraid of it first rather than thinking about how you might use it and apply it. Just like Sultan, I'm teaching this evening at NYU, and one of the things that I'm going to tell my students is that we're going to talk about ChatGPT and your use of it. Now, is it going to solve all of your problems? Are you going to learn better? Are you going to be able to produce papers that are passable? Probably not, maybe. But it is a tool, and that tool can be used to create some efficiencies, to do any number of things that can help you. These students are going to have access to this for the rest of their lives. This is not a flash-in-the-pan kind of thing. The same is true in government and other organizations; I won't just pick on the government. But when you think about the risk of using AI, the risk to you, and that's all you think about, you miss that there's a risk of not using AI, and there's an opportunity cost associated with that. If you look across the government documents that have come out about AI right now, there are a lot of strategic documents, and they're almost all risk-focused. But there's not very much implementation and then lessons learned: hey, we actually tried this, here's what happened, and here's the contribution that will now make to the broader community.
And I think that's where there's a big opportunity cost: if we sit back and just keep saying we need to write another strategy document about risk, we never get to actually implementing a use case and then getting lessons learned and value from it.
[00:17:39] SULTAN: Well, Kristina, this is kind of a unique feature of the two of us talking right now: we've both been in the private sector, we're both in academia, and we've both been in government. We've done everything from writing the policy paper that creates the academic research, to being a PI on the academic research, into commercialization, into taking it to market, scaling it all the way through IPO, and then kind of back around. And the fact is, what was true 20, 30, 40 years ago still holds: in terms of taking policy, connecting it to investment, connecting that to actually building the businesses, and then creating that broader value, whether back to the government as a contractor or just into the market itself, the vast majority of organizations don't live across that full ecosystem. It doesn't matter how big or small you are. We have stove-piped, in a lot of ways, how we as a civilization operate, and I think it's very important for us to think holistically. So is there risk around technology? Yes, absolutely. But just as there's risk in technology, there's risk in people. If someone doesn't know how modern technology works and makes the passcode on their iPhone, you know, 11111, that is equally dangerous to a piece of AI that accidentally hoovers up some IP and makes it publicly available on the Internet, right? Or, just as there is offensive use of AI in the cyber capacity, there is defensive use that is critically important as well. So don't blame the hammer if someone misuses it; maybe we should learn how to use the hammer, too.
[00:19:11] KRISTINA: What's your advice for individuals who are sitting right now listening to both of you speak? They're dying to start playing with more of these technologies and start having some of the opportunities you're talking about, but the leadership doesn't understand it yet. There's always the fear of the unknown. What would you tell them? What can they be doing? What ought they to be doing in order to get buy-in?
[00:19:39] SULTAN: Well, there's no one-size-fits-all here, but I'll tell you a few things that we're seeing be very successful. Number one, we're seeing organizations create sandboxes, places where they can go and explore, and they put money and people into them and really do that. I did that when I was inside the government, and most of our really interesting customers have created some version of an internal sandbox capacity. Number two: with one of our first customers, I took the chief investment officer of this big hedge fund and said, listen, you need to do 20 hours of Python, because you need to understand how Python works. He was very hesitant, and it took me a few months to convince him, but once I did, he went off and did it. Now he spends hours a day on it, it's made him a far more effective leader, and it's made the firm far more money. There is value there. So the second thing is to get educated and get hands-on yourself, whether you're at the most junior level or the most senior level. And the third is to realize that if you're not going to, the person you're competing with will. In academia, if you're not building new curriculum for this, some other university will, and they're going to get the students you want. In the private sector, if you're not investing in this, some startup is going to come along, or some other big competitor of yours is going to do this, and they're going to steal your people, steal your customers, steal your value. And on the government side, every single nation-state competitor that the United States worries about, whether economically, politically, militarily, et cetera, is investing tremendously, because these are huge force multipliers. They might not have the same educated workforce, but guess what, with a bunch of AI, it sure gets close.
And because the threat landscapes are so broad nowadays, we have to do that. Even if you don't know where you're going on that journey, you need to start putting energy into it. I think Nick and I have a lot of very strong opinions about where, at the nation-state level, we should be putting this. Simply put, we have to move forward from just having policy positions and, in essence, regulatory systems that block off its use domestically. We have to think about it in terms of an offensive capability and a tool that we can use to win in a broader setting.
[00:21:49] NICK: And to add to that, I think first it is worth double-clicking on something that Sultan said, which is basically upskilling the workforce. You have to have a workforce that can understand these technologies. Not everyone necessarily needs to be able to code their own algorithm, but they do need to know what this stuff means, and they do need to know the context around it. So if you can upskill your workforce, give them some training, whether that's Python or something else, it's extremely important to create that opportunity for people to develop their emerging technology skills, which is different from having, say, cybersecurity skills, right? Those two things are not the same. The other thing is, I faced this exact challenge in government, and one of the approaches that I took was what I call the middle-out approach. I would go not necessarily to the executive, and not necessarily to the line officer, but to the middle-to-slightly-upper management level. That was who I tried to convince and educate first, because I could go to them and, let's say, hold an event where I walked through: this is what blockchain is, or any number of things. And I could say, okay, now that we've talked about this, what in your mission space could be improved by it? It kind of opened people's eyes, because a lot of people, whether in government or the private sector, are very used to their procurement cycle, their budget cycle, whatever that is, just doing it again and again and again and maintaining the value that they have. But there's opportunity here for power-law increases in value, and that could be economic value.
That could be mission value, however you slice it. But it requires those middle-level domain experts to be able to point at a use case and say: you know what, there's a specific type of AI, not AI generally, a specific type of AI, that can solve this use case for me, and I know it, and I can now add that to my budget. That, I think, is what we need to foster.
[00:24:05] KRISTINA: You both came out of the government space, and like you just mentioned, Sultan, you've had this experience sort of cradle to grave, if you will. I'm thinking about this through the national-lens perspective and the United States being competitive. It's really great that we have, in the corporate space, the ability to go out there and compete, but what does this mean from a holistic national perspective, educating the next generation? I recently went to my kids' back-to-school night, and I was really amazed that, yeah, we're using Chromebooks, which, okay, that's cool, but they've been around for a while. Nowhere was anybody talking about things like ChatGPT, or how you tickle a machine with a prompt to get the right type of outputs that you want. Not to say that we shouldn't value traditional skills like writing essays, right? Those are still really important and valuable in context. But we're also talking about needing to educate 10-year-olds, 12-year-olds, 16-year-olds for what's coming. Are we missing the boat here?
[00:25:14] SULTAN: I don't know if we're missing the boat, but we're really not doing a very good job at all. I mean, K-12 is how you solve this problem for the downstream issues. Of my 50 students this semester, 45, something like that, I have two Americans. And that's the most I've ever had in the last five years; it's the first time I've had Americans in three years, and I teach graduate computer science. These students just aren't engaged. They don't get to a point with the technology, so when they are in college, they aren't aimed in this direction. The vast majority of K-12 curricula are not remotely up to date, and when you look at traditional critical thinking skills, which is, for me, fundamentally the most interesting thing: we're teaching kids how to take tests. We're not teaching kids how to critically think or analyze or have a well-rounded skill set. I care as much that someone is taking a Shakespeare class and can get on a stage and perform and communicate effectively as I do that they can sit down and do some Python coding. And the fact is, the vast majority of high school students I'm exposed to don't have 75 percent of the skills that I would be looking for. When I do college admissions work, it shows up: the students that we're graduating out of that infrastructure just aren't there. And yeah, there is something to be said for paying attention to what our competitor nations are doing and how they're focusing on STEM in particular. But I would say the challenge here is not necessarily getting the curriculum into the schools. It's making sure that the students understand, when they're in the schools, that you can't just grow up and be an Instagram influencer and make a strategic 40-year career out of that. And the problem is how much K-12 has turned into, I would say, outsourced babysitting versus actually building students. That's a real challenge, but you see it.
I mean, for every job we have open right now, the share of applicants who are what I would call your standard American kids is in the single digits, percentage-wise. They just don't have the skills; they don't show up. When I do see one, I get excited. Nick and I had this one job where we were very excited, thinking, oh, wow, let's try to make the candidate pool a little more diverse, and it didn't pan out. The candidate just didn't show up the right way. So it's a real challenge for us. And while it's not as much of an issue now, 20 years from now I'm really concerned that we just won't have a workforce in line with what we need.
[00:27:38] KRISTINA: Are you seeing the same thing, Nick, or what would you like to see more of?
[00:27:42] NICK: Well, I think what I'm seeing is that we're in a place where, even for non-engineers, non-computer-scientists, people without traditional STEM degrees, it's no longer sufficient to just say, well, I'm not an engineer, so I don't really know Python, or I don't really understand AI. Now we're at a place where you actually have to know it, and not just know that it exists; you need to know it at some depth. So in one of the courses I teach at NYU, I actually go into quantum mechanics, from the baseline level, when we talk about quantum computing. And it's not because I hate my students; it's because I want them to understand the difference between a bit and a qubit and why that matters, and be able to explain it to me and, even more importantly, to some future quantum physicist they're working with when they're trying to understand the impact of the technology on their mission. So I think the development of emerging technology as its own discipline actually puts more STEM into education, not less, and even for some of the traditional STEM degrees, it adds curriculum around the context: great power competition, what our adversaries are doing, intellectual property theft, and how big a problem that really is. And I think if we can blend the two a little better, we can end up with a cadre of professionals who can actually sit down and do some coding, or who can point the coders toward the biggest problem because they understand the full context. So that's how I'm approaching the education side of things.
[00:29:31] KRISTINA: Everybody who's listening right now is probably sitting in a role where they're working in digital, right? That's what we're all doing: figuring out how to keep the lights on while also looking forward. They're either in a position to influence somebody in a K-12 setting, or they have a K-through-12 kid at home they can influence. What is the one thing you want them to do? Not think about doing, but actually go do, for themselves and for the person they can influence. What should they go out and actively do? Is there one thing where you'd say, go do this right now?
[00:30:08] SULTAN: That is such a huge question, and such an amazing one, that I feel like there are a million good answers to it. I don't think this will resonate with everyone, but let's say for half of the people: there is going to be a company on the S&P 500 in the next two to four years that will be worth billions and billions of dollars, that is being created right now by a single person with a laptop, and with fewer than 100 employees it is going to put a big public company that employs tens of thousands of people out of business. And that's going to happen many times in the next five to 10 years. Now, do you want to be somebody who's negatively impacted by that? Do you want to be working for that big company and get laid off? Do you want to be in the government and all of a sudden have a major vendor disappear? Do you want to be in academia and all of a sudden not have a career? Or do you want to be the person who's building it? Or the parent of someone who's building it? Or the person who fosters the creation of that company because it makes your life easier and defends our nation more effectively? That's the question. Are you going to be defensive? Are you going to be a protectionist? Are you going to be sitting back there just waiting to retire or waiting to time out, as we see in DC? There are a lot of people like that here. Or do you want to be on the other side, actually doing something about this? Just because a huge percentage of people are sitting on the sidelines waiting out the clock doesn't mean the rest of us don't have a nation to protect for the next century.
[00:31:41] NICK: Yeah, that's so well said. What I would add, my one thing, is exposure. If you are a parent, an educator, a leader of an organization, an innovator, or something like that, expose the people you are in a position to influence to these different types of technologies. That could be on a small scale or a large scale: it could be sending your kid to coding camp, it could be going to a science museum, it could be a weekly roundup of tech news that you send out. Anything like that, to make sure you're exposing people to what's out there, because you don't know who actually would love to take a Python course, or would love to code something that makes the entire organization much more efficient. And bringing it up to another scale: the person you just push in the right direction might be the one who creates that next company, that value Sultan was talking about. But in order for that to happen, you have to expose them to it. If you can just do that each day, each week, however it makes sense for you, I think that right there will make a huge difference.
[00:32:52] KRISTINA: What a great way to challenge us to rise to the opportunities and certainly recognize the risks, but not be intimidated by them. Thank you so much for your time today. Really appreciate you being with us.
[00:33:05] OUTRO: Thank you for joining The Power of Digital Policy. To sign up for our newsletter and get access to policy checklists, detailed information on policies, and other helpful resources, head over to thepowerofdigitalpolicy.com. If you get a moment, please leave a review on iTunes to help your digital colleagues find out about the podcast.