- Innovation
- Education
- Empowering State and Local Governance
In this episode, Mike sits down with Chris McKay, founder of Maginative, to unpack the urgent need for AI literacy, especially in underserved and underrepresented communities. They discuss how AI is reshaping work, education, warfare, and daily life, and why just knowing what AI can do isn’t enough. From building chatbots to reimagining classrooms, this is a candid conversation about the world we’re already living in and the future we’re co-creating. Chris reminds us that the most powerful tool isn’t the AI itself, but the questions we ask of it.
>> Mike Steadman: Welcome to Frontline Voices, a podcast brought to you by Stanford University's Hoover Institution, where we explore leadership, service, and real-world solutions to some of our nation's most pressing issues. I'm your host, Iron Mike Steadman, a member of the inaugural class of Hoover Veteran Fellows and a Marine Corps veteran.
Every time I open my laptop, I see a new headline about the so-called AI bloodbath. Some are warning us to brace for mass layoffs and disruption, which we are already seeing. Others are painting a utopian picture filled with opportunity and innovation. The truth is, it's hard to separate the signal from the noise.
Which is why I sat down with Chris McKay, founder of Maginative, an AI literacy platform that provides high-quality, in-depth AI news and analysis as well as tailored AI training and education. Chris is one of the sharpest minds I know in the space, especially when it comes to AI education and entrepreneurship.
In this episode, we explore what AI literacy really means, how it's reshaping the future of work, and why community, not code, is the key to navigating this next wave of technological change. From the ethics of AI in warfare to how young people in developing nations can leapfrog barriers through access and education, this is a conversation for anyone who wants to be part of building a more informed, inclusive future, not just bracing for it.
As always, I hope you enjoy today's show and look forward to hearing your feedback. Chris, my brother, thank you for making time to come on Frontline Voices today.
>> Chris McKay: Mike, it is an honor to be here. Thank you for having me.
>> Mike Steadman: So the interesting thing is I've been thinking a lot about artificial intelligence and here, let me set it up for you.
So two weeks ago I started working on a chatbot. Okay.
>> Chris McKay: Yeah.
>> Mike Steadman: Last week I closed four clients with that chatbot, and I wrote about it in a couple of entrepreneur groups that I'm a part of. And I just remember after it happened, I got to the end of the week, 'cause it happened so quickly, right, two weeks ago, start working on it, start sending people to the link to test it.
Then I went live with it for a week, and at the end of that week I closed four clients. And I remember, you know, it was New York City Tech Week here last week. And I remember sitting at a restaurant and just kind of taken aback by it all, just how quick everything had happened.
And I was just sitting with it and I was in my world and I'm like looking around, looking at the restaurant and I just feel like I'm living in the future a little bit like I have done this thing that seems so crazy, and I want to talk to somebody about it.
And I didn't have anyone around me to immediately talk to me about it. And I asked the bartender, I said, you know, hey, do you ever mess around with AI? And she's like, I use ChatGPT, et cetera, et cetera. But that's pretty much it. And I was like, man, I gotta talk to somebody about that.
Because I'm seeing all these articles talking about, you know, the layoffs are coming, you know, the reduction of the workforce due to artificial intelligence. And when I saw the work you're doing with Maginative, and I saw a couple of your videos online talking about AI literacy, I just thought it was a perfect opportunity to get you on the platform and talk about it to our listeners.
>> Chris McKay: Well, first of all, I love the fact that you're getting your hands dirty and you're building, because I think if you are an entrepreneur or just interested in creating, now is the most exciting time in history, as far as I'm concerned, to be building. The technology is so fascinating, it's almost magical, as you said.
And the access is there. It's much more accessible than it was before. And we have an opportunity to really transform not just our lives, but society as a whole. And my hope, my prayer is that we invest in the right technologies and the right people to make this as accessible as possible, to change lives across the world.
>> Mike Steadman: Take us back. How did you get into this space, first of all? And then I want you to bring us up to speed on how you define AI literacy.
>> Chris McKay: Sure, so I own an agency. I started it 10 years ago, a digital agency, and we worked at the intersection of education, digital technologies, and design.
Right, and I love that because for me, education was that thing that brought me out of a farm in Jamaica, where I was born, to the heights of my career professionally. And so I look at education as this great equalizing force. And the more we can make that accessible and available to people, the more we give them a chance and an opportunity.
And so I've spent a lot of my career working with school districts, universities, technology companies within the education paradigm at all levels, from K through 12 all the way to postdoctoral programs. We have designed curricula, we have looked at building tools around making technology more accessible in the classroom.
It's something that I'm deeply passionate about. And throughout that time, I, of course, really learned about product development and product design. And design thinking was something that was around during the peak of my agency days, where we realized that design wasn't just about how something looked, but how it functioned and the problem that it was solving.
And specifically, human-centered design was going to change everything. And so I had an opportunity to really work with great companies, with great teams, to build great products over the years. And then I came across a book called Life 3.0 by Max Tegmark. I remember reading that book.
That's the first time I actually saw the company named OpenAI. If you haven't read that book, it is an amazing read. It starts off as fiction, but what I loved about it is that it has this level setting glossary at the front where it just defines terms. So when I say super intelligence or when I say AI or AGI, we're on the same page because we're working with the same definitions.
And we may not necessarily need to agree on the definitions, but at least we can understand where the other person is coming from because we're working from the same baseline. But I loved it because it really looked at the opportunity with AI technology and what it could mean for society, but also the dangers, how it would change things like the future of work or things like equity.
And so for me, it was important that I wasn't just sitting on the sidelines. I was taking an active role in terms of the future that I wanted to create. And so, fast forward, two and a half years ago, we founded Maginative, an AI literacy platform. And our goal is to really accelerate this AI future.
But we want to do so responsibly. We want to support the companies and the people that are really working hard to make AI more accessible and as beneficial as possible to as many people as possible.
>> Mike Steadman: Three years ago, I was at a veteran entrepreneur event put on by the Institute for Veterans and Military Families in New Jersey.
And the head of IVMF was there, and there was a panel on stage, et cetera. And one of the things he talked about was AI. This was three years ago, and he said he'd been having a lot of conversations with leaders at Fortune 100 companies, Fortune 500 companies, because these are a lot of the big donors to the Institute for Veterans and Military Families.
And the thing that was keeping them up at night was AI. Now, this is three years ago, this is before ChatGPT was a household term. For a lot of us, he was saying AI, but we didn't know what it really looked like. We hadn't experienced it yet, et cetera.
Fast forward to today: Claude, ChatGPT, LLMs, right, there's all this language, and at times it feels slightly overwhelming, but we're in it now. And yet, you know, even from the very beginning of this interview, I feel like you came from a very positive mindset.
Right. There's like the doom and gloom kind of people, and then there are people like yourself that are trying to force us to reimagine what right looks like with AI. How have you managed to kind of keep that?
>> Chris McKay: So I'll probably reframe that a little bit.
I think you have the doom and gloom, the doomers, and then you have people that are probably AI utopians. They think AI is going to solve all of humanity's problems. I'm probably somewhere in the middle. I think of myself as a bit more rational, a bit more pragmatic in terms of the opportunity.
For me, AI is a technology. It's a very powerful technology. One that I think will be a lever and a force multiplier for every single industry. Just like electricity or the Internet really changed how work happened. It changed jobs, it changed lives. I think AI is going to do that at a much faster rate and at a much larger scale.
And so when you think of the history of technology, you recognize that technology in and of itself is not good or bad. That isn't to say that you can't have devastating consequences and outcomes because of a technology existing. However, if we have an opportunity to use something like AI to make healthcare more accessible to more people around the world, or to bring education into the pocket of any child anywhere in the world with a personal tutor, or to bring financial literacy to more corners of the world, that's an opportunity that we have to take.
It's a chance that we have to really redefine how our world works. And I think, yes, there will be challenges, there will be problems to solve, but if we move forward with optimism, but at the same time learning from the mistakes of the past and minimizing the risks, minimizing the downsides, the hope is that this can be a really positive outcome for humanity.
>> Mike Steadman: You know, one person who I look up to has really helped me kind of shape my thinking. There's a couple people. Number one, a guy named Christopher Lochhead, a founder of Category Pirates, him and Eddie Yoon, and they write a lot about category design and changing the world in these new markets, creating jobs that didn't exist before.
That's really what we do. But when it comes specifically to AI, it's also been will.i.am from the Black Eyed Peas. He founded a company called FYI, I think it's for your ideas or something along those lines. And, you know, I was listening to him speak, I think it was at MIT or something, on YouTube, right?
Access to education, okay? So me in my living room in Harlem, I can watch all these amazing presentations on AI. And he was saying, what does music look like in the world of AI? Who says an album or a track has to be one song? What if it's playing 24/7?
And he just came at that with, like, that imaginative mode. And that's really kind of grounded me in terms of, like, I don't have to accept the past. I can create a new kind of future. But I've been leaning in. I read Co-Intelligence by Ethan Mollick, right. There was a podcast on Earn Your Leisure.
Her name is X With Tay, which was super helpful. But outside of that, I found myself really having to kind of dig around, find people like yourself to stay at the edge. Because when I built that chatbot and I shared that in my group, I realized I was operating at the edge of a space that a lot of people weren't.
>> Chris McKay: Yeah. So here's the reality of where we are today. I think the bottleneck in society is literacy. Think of what happened with the mortgage crisis and the lack of financial literacy and how it devastated so many communities. In fact, think back to Covid just a few years ago and what the lack of health literacy did, misinformation and disinformation wrecking communities, especially Black communities, minority communities. Literacy is a massive problem.
And so when you think of AI and the risks that we have today, we need to bring more voices to the table. We need to ensure that our people are educated, they're informed, because the impact that it will have, not just positive, but also negative, the potential for negative outcomes is very real.
And so for me, we need our lawmakers and our policymakers to be informed. We need our business leaders and our entrepreneurs to be informed. We need our consumers to be updated and educated in terms of how to use this technology. And we have already seen use cases where people may have been scammed or you have had even deaths that have happened because of AI technologies.
And so for me, it is very near and dear to me to say the focus on AI literacy is something that, as a society, we should be investing in heavily.
>> Mike Steadman: What are some assumptions that people need to have, though, with regards to literacy? Because when I hear the term literacy, I think of reading, writing.
Right. And you think about these early models. Right. If you can't communicate, if you can't write, if you can't read, you're going to be so far behind. And that's not to say literacy won't mean something in the future, but that's one of the things that keeps me up at night currently.
Right. Because I'm a prolific reader. Right. I tinker, I play around, but I'm always reading. And I had already written my first book, so when it came to programming that bot, I already kind of had some ideas and a way to conversate, whatever. But I think about people that aren't spending time on the old-school craft of reading and writing.
>> Chris McKay: Yeah. So when I say the term AI literacy, it's not just the direct term of reading and writing in the traditional sense. It's, one, understanding the technology, not at the technical level, because of course literacy and AI literacy will be a spectrum for most people.
You don't think of how a computer works. You don't think of what's happening with the GPU and the CPU. You have a car that you're driving; you don't think of how the car actually works. Most people don't actually know how their car works, yet still the car is very useful for them.
Your computer can be something that you use to make a lot of money on a professional basis, on a day to day, without knowing how it works. And so I think the first misconception is thinking that you need to understand technically how AI works in order for you to leverage it.
What I will say is, it's not about the answers to the questions that you need as much as you need to know what questions to ask. And one of the coolest demos that I saw with AI years ago was when I saw ChatGPT speaking in Patois to somebody in Jamaica.
Now, Patois is the spoken language in Jamaica; it's not written. And so there are a lot of people that primarily speak Patois that may not have gone to school and gotten tied into the education system. And so they're left out of the economy, and now in their pockets they have a tool that can understand them, that they can talk to, and it will understand them.
That is so powerful to me and that is the opportunity with AI, the fact that you can bring more voices, more people into the conversation, into the economy. I think this is going to change so many lives all across the world.
>> Mike Steadman: You just gave me an insight, because back in 2012 I was a platoon commander in Helmand Province, Afghanistan, and I did not speak the language there, okay? We had to carry interpreters everywhere we went in the hostile environments, and we had to, you know, trust that the interpreter was translating the right thing. Now, I had great interpreters, love my interpreters, they were in the thick and thin of it with me. But when I think about now, that kind of empathy, that understanding, when situations go sideways, imagine what would have happened if I'd had an AI translator that allowed us to speak the same language?
>> Chris McKay: Yeah, it's a powerful idea to think that you can have computers that can understand us in whatever language we're speaking, and that we can understand them because they can speak back in our language. And it's important, I think, to also recognize that the technology is more accessible today than it ever has been, even today itself.
OpenAI reduced the cost of their frontier model, o3-pro, by 87% in months. Like, this idea that we can have frontier intelligence accessible to anyone, maybe from a mobile device, and that we can keep on iterating with open-source technologies to push to make it more available.
That's just, it's an amazing opportunity for us and it's something that I couldn't be more excited about.
>> Mike Steadman: So we know education, right? I mean, I had a previous conversation about public education, okay. And, you know, the beauty of the world that you and I operate in, in entrepreneurship, is we can make pivots, we can shift super quick.
Okay. But the classroom still looks very similar to how it looked in, you know, 1910, et cetera. One teacher up front, bunch of kids in the back. How worried are you about our current education process being able to keep up and actually teach and implement this stuff?
Right.
>> Chris McKay: I think it's an important question to ask. I generally don't approach the future being worried. I am generally a believer that humanity will solve its problems over time. And this is certainly a problem that we see increasingly, both from a resource investment standpoint and an ROI standpoint: kids are not learning as we would want them to learn.
They're not having the experiences that we think of when we think of what education should look like. But I think we're also starting to see the opportunities where technology has been introduced in some classrooms in responsible ways. And I'll caveat that by saying we won't get it right all the time, we won't get it right maybe initially, but when you have a more literate community and society, it means that we can ask better questions and we can inquire about how things are being done.
Ideas will come from anybody, and maybe the person that is going to solve the model for education isn't even in education today. But because they have access to technologies like AI, they'll be able to build a solution or propose an idea, maybe a startup company, and they'll be able to solve the problem that we have.
What I have chosen to do is to leverage tools and technologies to teach my kids. I have two kids, 5 and 6 years old, and I've seen how they have embraced technologies. And they're AI natives. They talk to the devices and they expect the devices to respond. At one point they asked me why the car wasn't driving itself.
And I did chuckle because I think of the world that they live in where they can just say commands and the world responds to them. But I also see that they're learning in a much different way. If they have a question, they can just ask it and they can get an answer.
But with all of this comes this burden of responsibility that we have to ensure that the technologies that we're building are not biased, that they have the right information, and that we're ensuring that it's accessible to as many people as we can. And so while I look at the future and things like education, and I'm excited to see where AI is entering into the classroom, and we're looking at things like Duolingo and how that is gamifying learning and the insights that we can get from it, and what Khan Academy is doing with Khanmigo, I'm excited.
I don't think we have necessarily solved any of the main problems about the classroom of yesteryear, but 100%, that old model has to go. I don't see a future where we're going to be able to educate kids and help them to become autodidacts if that's the model that they're using moving forward.
>> Mike Steadman: And that's what I was hitting at earlier when I talked about reading, testing, experimenting. Right. You summed it up perfectly. A world where you can be an autodidact, right, where you don't have to have all the answers. Right? Everybody has an opportunity to create specialized knowledge.
Right. Each of us has the ability to create our own worlds. That's really kinda where we're at. But yet, when I talk to some university professors here, locally in Harlem, et cetera, they always push back on AI. You know, they're like, we don't want it in the classroom, or X, Y and Z.
And I've sparred with them at coffee shops. So what is a history paper in the world of AI? You know, maybe you can create a whole movie on a topic and you can guide it. So maybe you're not necessarily writing a paper, you're writing a script, but that requires just a certain level of imagination.
And so when you have people that are so used to legacy systems, they're not incentivized to change it. But the world is accelerating at a rapid rate.
>> Chris McKay: It is, but there's an argument for conservatism and an argument for progressivism. I was in Morocco a few weeks ago and I was moderating a panel with a professor.
And similarly, he was very adamant about the role of AI and how it was ruining his students' brains. And I listened to him, because I think that's what is important now more than ever: listening to dissenting opinions, listening to people that disagree with you. Now, I do think when it comes to education, there's this question: how do you teach things like critical thinking when the answer is going to be readily available to everyone?
How do you get people to think creatively? How do you get them to envision? These are things that I don't think we know the answers to as yet. But they're questions that we increasingly have to ask. I understand a lot of the concerns that professors have because, of course, the model of education, of grading students and assessing their knowledge, all of that will have to change.
All of those locks have been unlocked now with AI. And so how do we move forward? I think it's important that we don't necessarily just rush into a future of unknowns, because the collateral damage could be significant. And so we have to be thoughtful. So I welcome professors that are willing to say no.
And I encourage more debate because I want them to ask the questions and challenge technologists as to how they want to solve and answer those questions to push the technology to get better. Because sure, there are real issues with AI today. You have issues around hallucinations, you have issues as to whether or not this is something that will necessarily scale beyond a certain level.
And so when I say I'm very pragmatic in terms of my approach to AI, it's also in that optimism to say, yes, I am hopeful that AI will completely transform education. But I understand that it's not going to be just an easy road to say, hey, everybody should just start using these tools today, or every classroom should start adopting these tools today.
Because we don't understand the ultimate outcomes. And I'll give you just an example. With one of the models that Claude had released, no, sorry, with one of the models that OpenAI had released, they spoke about the fact that their models are starting to mimic humans a lot more closely.
And when you're talking to an AI, you can interrupt it, you can just cut it off mid sentence and interrupt it to correct it. And for kids, they will learn this behavior, but they may also think that it's okay to do that socially with other humans, which would be a massive issue.
Right, and so there are sometimes unintended consequences to technology that we don't even think about until we see it in the real world. We've seen that with social media, where, yes, you were thinking of connecting everyone, and then all of a sudden you had this voyeur society where everybody was just kind of following other people.
And it completely changed how people were behaving in the real world. But also, online bullying changed with the era of social media. And so I do think we have to be thoughtful about even the questions that we don't know yet to ask, and just be mindful that there could be side effects to this rosy future that AI will bring for us.
>> Mike Steadman: You talked about imagination. Well, being in New York City, I get the privilege of being an advisor to Cornell Tech, right? So I get to be there, part of their venture studio, and meet all the founders, et cetera. But the reason I want to do that is I want to see how people who are operating at the edge are building and thinking about, you know, AI.
And just in the past semester of doing that, right, my OODA loop is spinning, because no longer am I just sitting here pontificating. I'm watching. So I'm able to get that feedback loop going. I'm able to learn kind of in real time. And it just feels like we're all kind of building a future together, like nobody really knows what right looks like.
So, you know, like you mentioned before, those definitions and stuff are still being shaped. How do you navigate in this world?
>> Chris McKay: So there are a couple of things that I've tried to do over time. One, I've tried to always reassess my priors, to not be so opinionated or fixed in my opinions that I refuse to change: this idea that if I see compelling enough evidence, I should be able to reassess whatever assumptions I made previously and change my mind.
This flexibility is important for me. There are things that I believe today that I may not believe, or may not completely believe, in a few years. There are things that I believe today that I probably didn't believe in the past. And so I do think having that flexibility, that mental flexibility, is something that has served me well.
I try to speak to people that are passionate about whatever it is that they're doing. I find that people will share a lot in terms of maybe the challenges, but just their experience. And that feels to me so contagious. And that helps to ground me in the reality of what I'm doing.
Another thing that I do is I also ground myself in the reality of the work that I'm doing. My wife works in a pediatric emergency department, and the work that she does literally saves lives on a daily basis. And so sometimes when I'm thinking, I'm going to write this article, and this isn't to diminish the work that I'm doing by any means.
But there is something very humbling when you realize that there are people all across the world doing all these different types of jobs, having all these different experiences that ultimately will shape this future that we'll have collectively together. It isn't going to be written and built by one individual in Silicon Valley or in New York City.
The future of this world belongs to all of us. And so getting out there, meeting people and being flexible and open mentally to change has been something that has served me well, of course, with AI and because everything is changing so quickly. On a more practical note, I read a lot.
I consume a lot of research papers. I have select people that I follow on social media platforms because I see them doing the work and doing the research and sharing content. And that community has been super helpful. And so there's one thing I can recommend. If you can find a community that you can be a part of, where you can learn and exchange ideas, it is contagious.
And before you know it, you'll be doing the same thing.
>> Mike Steadman: One thing that Christopher taught me over in Category Pirates was a lot of people right now are treating AI like a bolt-on, like it's an add-on, instead of treating it like a foundation, like a soil.
So, you know, co-authoring a book with you, co-founding a startup with you, like really bringing it into being a core part of the conversation, right? But the only reason I know that and I think like that and I practice like that is because I got that education loop going.
I read, I'm invested in communities, et cetera. And it goes back to how I started this conversation, though: I worry about the people that are not in these spaces. You know, the people that are working, let's say, a minimum wage job in retail, and then one day you look up and now it's harder and harder.
We've got a whole generation of college graduates now that are still living at home with their parents, still unemployed because they can't get jobs in this kind of current workforce. And so how do they navigate this space? How does the everyday American, you know, that may or may not be as tech savvy, you know, that really doesn't have that kind of autodidact mindset yet.
How are they surviving and thriving in this rapidly changing AI driven world?
>> Chris McKay: I think it's an important question to ask and to keep asking. I think we all have a responsibility as people on the edge to really keep promoting and spreading the information. But it is that adoption curve with any technology.
You're going to have those people that are just so excited and fanatical about it, that are going to be on the bleeding edge, that are going to adopt it, and maybe they're going to start companies and they're going to keep pushing the frontier forward. And then that will also reach down through society.
The computer didn't start off as an iPhone in your pocket. It started as massive mainframes and entire buildings. And little by little, that technology became better because of the people that were passionate about it, that wanted to derive value and maybe drive businesses from it. And so I think encouraging a culture of entrepreneurship is going to be key.
A culture of innovation will be rewarding for us as a country, if we can have policies that encourage people to take risks, to start companies. Maybe there's a company to be founded around somebody bringing this education to the masses in ways that aren't being done today, or creators that may be able to simplify the technical terms or the complex issues and challenges and break it down for anyone.
I think the world isn't short of the need for great ideas. There are a lot of problems to solve and ones that AI will solve or at least will help us solve. But what I'm most excited about is that right now, today, somewhere in the world is a boy or a girl that may have an idea.
And we don't know their name as yet. And they're going to change the entire history of our world. And I think that is the most fascinating thing about being alive. When I was growing up, the richest man in the world was Bill Gates at one point. And I used to think, man, Bill Gates is so rich, how will anybody make more money than him, right? And yes, somebody did. And I think that is amazing, and just how diverse and complex our world is. We have to believe in the next generation, we have to believe in each other. We have to believe that the future will be brighter than it is today. It's not a guarantee, but if we work together, it's something that we can hopefully ensure happens.
>> Mike Steadman: I was reading something online, or maybe it was a podcast, and there's so much content these days, it's hard to keep up.
But I read somewhere that a lot of third world countries are a lot more optimistic about what's possible with AI than a lot of us are in this kind of more developed world. Do you know why that is?
>> Chris McKay: I don't know if I necessarily believe that, but I can assume that it's true.
But I will say the opportunity for developing countries is that they can leapfrog certain technological eras. And so if you think of mobile, with countries in some parts of Africa, in their financial system they didn't have to deal with traditional banks and everything that happened as our own banks were set up, with clearing houses, etc.
They went into digital currencies very early with things like M-Pesa. And so it was super easy for them to be sending money with their mobile phones. And I think with AI, the opportunity is that they won't have the legacy systems of the past that a lot of companies in the developed world will need to solve for and get around.
Meaning they can be more nimble and they can be more agile. The questions that they may be asking and the impact that they will see from the technology are likely going to be much greater, earlier, than for developed countries. And if you think about it, it makes sense, right?
Somebody who maybe is a B or a C performer, if you give them a tool like ChatGPT or Claude, and they're able to use it to augment their work by just asking prompts. They can probably get close to an A performer, maybe even outperform an A person, right?
But the person who is already really good at their job, they're going to see all the errors, they're going to see the issues with the models, and it will take time for the models to get there. So I think especially for developing nations, it's important for them to have an AI strategy as countries.
But if you're there and you're thinking, man, what can I do? Start by downloading the apps, get the free open source models, start asking questions, start learning how to prompt, start learning about the opportunity, and realize that you can use AI to solve problems in your very own world, in your small universe.
And see how AI can do that and then scale.
>> Mike Steadman: One thing I'm starting to see with a lot of universities is they're starting to offer more online courses. Now there were some that have been doing this for years, but now, you know, Harvard is leaning a lot more into their online products.
Right. I just finished up a course at UC Berkeley that was offered online. So we're going to be able to provide access to these world class institutions. And now you're going to start to see builders come out of the woodwork, you know, because there are some people that have been trying to make a way out of nothing, and now all of a sudden they have access to this education.
And for me, that's the more exciting part. But at the same time, like you said, this brings down a new level of complication because what happens to our traditional university system? You know, for myself, I've long thought about pursuing a PhD, but what is the purpose of a PhD in the world of AI when you can create new knowledge right then and there?
You know, where I don't have to teach in the classroom, I can do a podcast, right. I can launch a substack. And so it's just challenging our understanding of the world. And again, it creates opportunities, but it also causes some chaos in traditional systems.
>> Chris McKay: It does. I don't envy the school leaders, the educators, the policymakers.
The pace of change and the impact that this technology is going to have, and already is having, is incredible. And the time that they have to respond to it is yesterday. And so a lot of the work that we're doing with companies, with school districts, with institutions, is to help them process all of this.
One, how do they quickly level up their staff and their faculty, and then how do they think about delivering education? What is the role of the teacher moving forward, or will their work change? What is the role of a student? What is the role of school? These are very real questions that a lot more families are going to start asking when you have access to so much information right at your fingertips.
>> Mike Steadman: Now, I am a military veteran, so I've got to bring this topic up. Right now there's a lot that's happening in Ukraine. Drones are all over the place.
You think about what's happening in some other parts of the country. Right. And now we're going to be bringing AI into warfare. I have to assume some of this keeps you up at night.
>> Chris McKay: Yeah, it's a complex topic I've followed very closely. Companies like Anduril and Palantir, the defense industry, have certainly shifted a lot.
The promises that companies made years ago about not using AI in warfare and to support the military, they've all been walked back. And this isn't to say that it's a good or a bad thing. It's just to show that the dialogue across the country has changed. What I will say is, having so many family members that are either currently active in the military or retired:
War is a horrible thing, but there's a very real need for the military. We have to be thankful for anybody that decides to serve in our military and to defend our way of life. Because it is threatened on a daily basis. And the safety and the beauty that we're able to enjoy sometimes come on the backs of people that sacrifice everything to give them to us.
And so if there's an opportunity for us to use technology to make that better, to lessen the impact that it has on soldiers' lives and their families, I think that's something that we have to explore, all right? And again, it's a situation where we have to encourage debate and dialogue, because one person won't have all the answers. But the only thing that you can do is to inform more people about the technology, the potential, the dangers, and then get more people to participate in the conversation. Because think of the fact that you can have a war happening where things like drones can be deployed.
And human lives don't have to be put at risk as much. That's a good thing, right? When you think of cybersecurity, and defending our nation's top secrets, and how we can leverage AI to potentially help with that, that's a good thing. And so there are lots of pros to AI, and it's a no-brainer to have AI used in the military and defense.
It is not a question as to whether or not it needs to happen. It will, and it already is happening. And with companies like Palantir, Anduril, Scale, and so on, and now Meta apparently also doing some work with the military, it's important that our best minds and our best technologies are being put to use to protect our country.
And with that said, of course, if you have a technology this powerful, it will be misused in the hands of the wrong people. And so how do we minimize the impact of all of that? I think there are multiple approaches that you're seeing government leaders and state policymakers try, but it's a very real threat. If our adversaries get access to a technology this powerful before us, what does it mean for our way of life and our society?
Those are questions that we have to ask. And the military has that responsibility and burden to us even before the civilians do. And so, like many of the things with AI technology, my hope is that we're able to solve and answer a lot of these questions in a way that is beneficial to our society and one that will minimize the harms that it could cause.
>> Mike Steadman: So I'm going to put a disclaimer here, because we are having a conversation. So this is a very human centered conversation. I, you know, I operate in the dual use space, right? I help support dual use founders, et cetera. But I would be remiss if I didn't admit that, you know, there's a term, mutually assured destruction, right?
And there's all these kind of memes and stuff online with Sarah Connor, you know, watching us build technology that's going to destroy ourselves. Now, we can joke about it, we can laugh about it, but we also know, as humanities majors, right, art imitates life. There's stuff that we saw in movies 30, 40 years ago that's starting to come to fruition.
And when I think about the tank entering World War I, the machine gun, the Gatling gun, and what that did to the amount of casualties we started to sustain, and I see drones in Ukraine and whatnot, I wonder what drones are going to look like here. And what happens when we have, you know, we've got protests taking place in LA. What happens when protesters, not the good protesters, right, the bad ones, the ones that want to cause chaos, etc., what happens when they start leveraging some of these new technologies, you know? And are we now giving them the power that they need, in their hands, to be able to build these things, et cetera?
Because it's only a matter of time before the dark web version of ChatGPT and some of these other technologies are built. And I do think we gotta have these conversations.
>> Chris McKay: Exactly, and again, that brings me back to AI literacy, because the dangers are certainly real. Right, and so to ensure that we can protect the future that we all deserve, I think we have to lean back to our morals and the principles that we believe in and empower those as much as possible and empower the people that support those with the technologies that can help that.
Now, these may change over time. And that's why we exist as a society to engage in dialogue. And our democracy is one that hopefully can survive and can be perpetuated in conversations and debate around the future that we want, because to your point, yes, the technology is super powerful, and yes, it is getting increasingly into the hands of our adversaries.
That was the thinking behind limiting access to chips, for example. Maybe you can't control the software, but if you can limit the hardware, and access to the hardware needed to run these technologies, you could somehow have control over them. But again, you're seeing people find ways.
Whether it's through smuggling, life finds a way. And so these are questions that we'll always keep on asking. The stakes are only getting higher as the technology becomes that much more critical to our everyday lives and to society as a whole.
>> Mike Steadman: I started this conversation referencing, you know, someone three years ago who was saying, hey, this is what's keeping Fortune 100, Fortune 500 CEOs up at night.
You operate a lot more in this space now than I could only imagine. What does that answer look like today? What's keeping people up at night?
>> Chris McKay: I think two things. I think there's this real fear around business leaders that they're going to miss the boat. They see the speed at which everything is changing.
They see the technology, they see the impact in their real world. And I think many companies see an existential threat to their future with AI technologies. Not that AI itself will replace them or their company, but that maybe another company is going to come out and just completely blindside them.
And I think professionals are also wondering what the future of work will look like. Will it be an AI model that somehow just takes their job and it disappears? Will it be somebody else that's using AI tools? There are a lot of people in the workforce that are not yet close to retiring that are going to have to go back and learn new technologies and learn a completely new way of working.
These are all very hard problems to solve, but ones that we have to start working towards. And I think, of course, governments are thinking, what does the balance of power look like when certain countries have access to AI technology that other countries don't? I think AI itself presents this magnificent future, but there's also this fear, almost FOMO, that if you don't get on the boat, if you don't act, if you don't do something today, it'll be too late.
You'll be the dinosaur that people read about in the future because you waited too long to respond.
>> Mike Steadman: I wonder too if there's some, what's the word I'm looking for, hubris going on too. A lot of people don't really understand these technologies. They're not playing with it themselves.
Right. They're just jumping on buzzwords. We need an AI strategy. We need this. And it's like, you haven't even paid $20 for ChatGPT yet, you know?
>> Chris McKay: Yeah, there is some of that. I do think there is some of that. I often think that agency is what's missing with most leaders and professionals.
They'll read the articles, they'll watch the videos, but that hurdle, that inertia to go and actually build a bot like you did, to go and write some code, or to use one of these tools, they never take that final step. And so that agency is lacking.
And so if I could encourage one thing for anybody listening to this podcast: create an artificial sense of urgency to just go out and start doing things. It won't be perfect, but I use AI in my day to day life all the time. The pool guy comes and he's looking at a new system, and I whip out my phone and I point ChatGPT at it and I'm like, hey, we're having a problem with the pool. This is what's happening. What do you think? And it comes up with solutions. And the pool guy is like, whoa, how can it do this? And I'm like, aren't you using ChatGPT?
And he says no. And so now you have one more person that's using it. Right? My daughter comes and she's working on addition and subtraction, and I asked Claude to build a game for her to practice, and she goes and she plays it and she has fun.
And I'm like, man, now she wants to use it to build things on her own. And so get that sense of urgency to use AI in your day to day life. Solve the problems that you have. That way you can start understanding what it's good at, what it's not so good at, where it should be used.
The one thing I'll say, if I can leave you with something: just because AI can do something doesn't mean that it should. And increasingly, the questions that we'll need to ask won't be around what AI can do, but what AI should do. And it will be up to all of us to decide that as a society.
>> Mike Steadman: Well, Chris, I appreciate you making time to share your insights and pontificate and spar with me on some of these kind of hard, heavy topics that really none of us have figured out. But, but again, you're leading on the cutting edge and we have listeners and viewers tuning in from all over the country, all over the world.
I would love for you to talk a little bit more about the work you're doing with maginative and how people can follow you, where they can find you to stay up to date on the research and articles that you're putting out.
>> Chris McKay: Sure. Thank you. So, Maginative.com. Go to maginative.com if you want to stay up to date on the most important stories in AI.
Our goal isn't to cover the noise, it isn't to cover the rumors. It's to provide a space especially for business leaders and professionals. If you're trying to stay up to date with what's important, go there, and that will help you level up very quickly in terms of what's happening and where you need to pay attention.
We have videos that we put out on YouTube covering important conversations with leaders, with technologists, with humanists, because we want to have more conversations around the role of AI where it will fit into our society. You can find me on LinkedIn, you can follow me on X. I have a course on LinkedIn around AI adoption.
If you're thinking about how AI can maybe accelerate your business, the course provides the USAGE framework, which is a pragmatic way for you to think through AI adoption. And we have our AI Maturity Index that we're working on with LinkedIn also in the coming months. So some amazing resources are out there.
If you engage with me online and you have a question, I'm more than happy to answer it. I can point you to podcasts, to free resources online. The point I'm making is that there's an entire community out there of people like me that are sharing content that you can learn from.
And the hope is that as you learn, you'll also join the community and you'll start sharing. Because to your point, Mike, we're all in it together. So join us at Maginative, join the movement, join the community. Let's push literacy as high as we can in our communities, in our personal lives, in our families.
It is your responsibility. You're here listening to this podcast today, go ahead and download the app or take that technology, have a conversation with your friend, have a conversation with your spouse or your family member and talk about how we can use this technology to do something, anything in your life.
>> Mike Steadman: Awesome. Well, thank you, Chris. I'll be sure to include a link to everything you just talked about in the show notes. And for all our viewers, especially those veterans out there that are interested in the Veteran Fellowship program, please head over to Hoover.org/VFP and apply today. And if you haven't subscribed to Frontline Voices on your favorite podcast hosting app, make sure you do so.
Until next time. Peace, love. Have a great rest of your week, everyone.
>> Chris McKay: One love.
ABOUT THE SPEAKERS
Chris McKay is a reporter, educator, and strategist on a mission to make AI accessible and actionable for organizations ready to embrace the future. He is the founder of Maginative, a platform operating at the intersection of AI media, enterprise strategy, and education. Chris covers the AI revolution from the front lines, spotlighting key trends, breaking stories, and interviewing pioneers shaping the next wave of innovation. Behind the scenes, he advises Fortune 500 companies on responsible AI adoption using Maginative’s proprietary USAGE framework and AI Maturity Index. Whether leading workshops for executive teams, publishing in-depth analysis, or quietly building the next big thing in enterprise AI, Chris is helping leaders make smarter moves, faster.
“IRON” Mike Steadman is a former Marine Corps infantry officer, three-time national boxing champion, and the founder of IRONBOUND Boxing, a nonprofit in Newark, NJ that provides free boxing and entrepreneur education to youth. He’s also a professional business coach, brand builder, and category designer who helps underdogs and misfits, veterans, Black women, and those used to being “one of one”, launch purpose-driven brands and ventures. Mike is a Hoover Institution Veteran Fellow, where he sharpened his thinking around leadership, public policy, and the role veterans can play in solving some of America’s most pressing challenges. He currently trains CEOs, advises emerging brands, and helps underdogs and misfits build businesses and tell stories that matter.
RELATED SOURCES
- Life 3.0 by Max Tegmark
- Co-Intelligence by Ethan Mollick