Dr. Calyam is the Greg L. Gilliam Professor of Cyber Security in the Department of Electrical Engineering and Computer Science at the University of Missouri-Columbia.
He is leading efforts to establish a zero-trust cybersecurity approach for battlefield communications. He is the Director of the MU College of Engineering’s Cyber Education, Research and Infrastructure Center (Mizzou CERI) and an Investigator in the C2SHIP (Center to Stream Healthcare In Place). He also directs a research group in the Virtualization, Multimedia and Networking (VIMAN) Lab.
Summary of the episode
Dr. Prasad Calyam discusses his journey in cybersecurity and his work in the Virtualization, Multimedia, and Networking (VIMAN) Lab. He also talks about his role as the Greg L. Gilliam Professor of Cybersecurity and his involvement in Mizzou CERI, which focuses on cyber vision, cyber intelligence, cybersecurity, and cyber training.
Dr. Calyam mentions the importance of cybersecurity in various fields, such as healthcare and battlefield communications. He emphasizes the need for diverse backgrounds in cybersecurity and offers advice to students and early career professionals. The conversation also touches on the future of AI in cybersecurity and the importance of ensuring its safety and ethical use.
Listen to the episode
A complete transcript of the episode
Steve Bowcut:
Thank you for joining us today for the Cybersecurity Guide podcast. My name is Steve Bowcut. I am a writer and an editor for Cybersecurity Guide and the podcast’s host. We appreciate your listening.
Today, our guest is Dr. Prasad Calyam. Dr. Calyam is a professor at the University of Missouri-Columbia. We're going to be discussing cybersecurity education opportunities at the University of Missouri. Before I bring him in, let me tell you a little bit about Dr. Prasad Calyam. He is the Greg L. Gilliam Professor of Cybersecurity in the Department of Electrical Engineering and Computer Science at the University of Missouri-Columbia. He is leading efforts to establish a zero-trust cybersecurity approach for battlefield communications. He is the director of the MU College of Engineering's Cyber Education, Research, and Infrastructure Center, Mizzou CERI, and an investigator in the Center to Stream Healthcare In Place, C2SHIP. And we'll talk about both of those things, hopefully. He also directs a research group in the Virtualization, Multimedia, and Networking, or VIMAN, Lab. So with that, welcome, Dr. Calyam. Thank you for joining me today.
Prasad Calyam:
Good morning, Steve. Nice to be here with you.
Steve Bowcut:
All right, we appreciate your time. This is going to be fascinating. You've got quite a background there, and several directions that we could go with it. But before we get started, let's learn a little bit more about you. I know I would be fascinated, and I think our audience would be as well, to learn more about what your journey has looked like. How did you land in cybersecurity? How did you get there? And why is it that you're so dedicated to it?
Prasad Calyam:
Yeah. I started my graduate school career at the Ohio State University in 2000. This is when, you can imagine, the internet was getting more interesting for video, for a lot of file transfer, Napster, audio. So there was a lot of excitement about applications on the internet. And at the same time that we were looking at all these interesting applications, there were a lot of cybersecurity incidents, issues with intellectual property theft, and attacks that were impacting university systems. So I got fascinated as I was working in the CIO's office of the Ohio State University. I got this opportunity to work with all these cybersecurity professionals purchasing computers on campus. And so I was like, "What's going on? What's the excitement about?"
And so as I developed my career, I found that the part that excites me is that I want to accelerate the performance of applications. My PhD was on how to make video conferencing more effective at very large scale. Here we are with Zoom. So that was actually my PhD work, and also how to monitor large-scale network infrastructures and improve the quality of those infrastructures for large file transfers and video calls.
So as I was working on performance engineering research problems, I always had this issue of securing the system. This balance of performance and security always came up. And so in my PhD, although I focused more on performance engineering, I always had an eye on what I would call today security engineering. And as I became a faculty member at the University of Missouri and started doing more research, I found this balance of doing research where I'm bringing in issues of both performance engineering and security engineering when I look at any application on the internet.
So if we are thinking of virtual reality on the internet today and using the cloud for it, we are looking at how we build classrooms in virtual reality to teach cybersecurity. At the same time, we are looking at how to secure virtual reality systems against attacks, where we've shown an attacker can make you run into walls, intentionally induce cyber sickness, or steal private information, especially if you have students who maybe have autism or things like that. So sensitive information can be stolen.
That's been my journey: I started off more on the performance engineering side, but I always had this eye on security and worked with people who were doing a lot of security. And as I became a more seasoned researcher and career professional, I think I found this balance of looking at performance and security and engineering both together as a more fascinating path to follow.
Steve Bowcut:
Okay, thank you for that. I appreciate it. That explains a few things, because I noted as I was going through your background that you spent a lot of time in performance, multimedia, virtual reality, those kinds of things, networking, and so that makes sense now. Thank you for putting all that together. Talk to us about the Greg L. Gilliam Professorship in Cybersecurity. What is that, and how does it work into what you do every day?
Prasad Calyam:
That's great. That's a recognition of all of the cybersecurity research and all of the cybersecurity outreach I do with students, all the way from middle school to people who are doing postdoctoral research. What that professorship allows me to do is it gives me resources and recognition to create new programs and research. It allows me to travel and give talks about my research at various places, and it also funds some of the outreach activities that I do.
So last week, we just had this Hacker Tracker camp, I call it. We've been doing it for nine years at the University of Missouri, where we bring in middle school and high school students and teach them a little bit of cybersecurity, a little bit of coding, and then we get them to work on a project where they actually learn how to do moving target defense and trace the hacker down using different kinds of cybersecurity tools. And in some cases, students are excited to even collect evidence to bring the case to court. We make up a case where a bad online gaming company hires a DDoS mafia to attack a good online gaming company. And so the good online gaming company that has been affected can actually sue with evidence and get compensation. So we do some fun things like that.
This professorship really allows me to extend what I would do as a regular faculty member in cybersecurity to more outreach and to expanding the areas of cybersecurity that I teach and do research in.
Steve Bowcut:
Excellent, thank you. There are two or three other things in your bio that I wanted to key on. So let's start with Mizzou CERI. Tell us about that, what it is, and how it works into security.
Prasad Calyam:
And so cyber, I would imagine everybody can agree, is a really important topic for our country and for the world, because everything is online and we have so much technology that's built around cyber. So the way we look at cyber in our CERI Center is we have four, you can say, areas of focus. We have cyber vision, so bringing a lot of computer vision into the cloud, doing drone video analytics, and doing virtual reality in the cloud. All that comes under cyber vision. We have cybersecurity, which is obviously what we're talking about today: how do you secure these cyber applications when we are doing vision. The third area is cyber intelligence, where we're doing AI on cyber; more recently, this has become a very important aspect. And the fourth is cyber training, so taking research-inspired ideas. If we have solved a problem in drone media analytics, or in how to use a cloud platform to accelerate scientific research in bioinformatics or whatever, we take the research outcomes and make them into training modules for the future workforce. That's been a focus.
And in the last three years, we've been very fortunate that VMware has funded us to teach cloud DevOps, as part of our cyber training focus, to more than 500 students all over the country. And we're expanding that in the next three years to a thousand or more students. And this is free education. We give them free cloud resources. We give them user-inspired applications, user-inspired hands-on training, and support that helps these students learn. So CERI, by having these focus areas of cyber vision, cyber intelligence, cybersecurity, and cyber training, is able to deliver on our mission to make cyber research, education, and infrastructure widely accessible to students in our university and outside our university as well.
Steve Bowcut:
Very good, thank you. Appreciate that. All right. Then one of the other things was this VIMAN Lab, the Virtualization, Multimedia, and Networking Lab. How does that piece of the puzzle fit into your cybersecurity research?
Prasad Calyam:
The VIMAN Lab is what I would call my department lab, where it's just me and my students doing things that we think are crazy or exciting, which feeds into the CERI Center. The CERI Center is more a university center where we are doing a lot of interdisciplinary research. We're building solutions. So last year, for example, our journalism school, which is one of the top in the country, had this collaboration with the Associated Press. They came to us and said, "We want to do AI in local newsrooms. Can you help us build AI solutions for the local newsroom?" That's one example of how CERI is this university, or even national, center where people come with problems and we try to help them with our expertise and with the smart people, I think, we have in our labs to solve those problems.
And the VIMAN Lab is mostly, you can say, a place where we don't have any external inputs, but within our lab we are looking at foundational problems. Seven years back, one of my students came to me and said, "We should develop this chatbot to actually provision cloud resources." Nobody had ever thought about it. And today, in fact, we have been building chatbots, as I said, for the journalism use cases. We're building chatbots for clinicians in the marketplace. So we've evolved a very unique, borderline idea into something more streamlined. All these ideas come from the VIMAN Lab.
Today, we are looking at how you use, for example, knowledge graphs, which is an idea in AI, to start studying how to link knowledge from, for example, healthcare or the smart grid with cybersecurity knowledge. That's a very early idea. I think we're starting to do that in the VIMAN Lab context. I would imagine in the next few years, those ideas will evolve into solutions we will develop for smart grid resilience or the future of AI and healthcare. So the VIMAN Lab is where I would say the ideas start, and then they scale up in the CERI Center.
Steve Bowcut:
Excellent. Okay, thank you. That makes sense. All right. And before we get into the core of the discussion, which would be the educational opportunities, there are two more things that I want to get you to comment on. One of them is that your bio says you're an investigator at what we call C2SHIP, or the Center to Stream Healthcare In Place. Talk to us about that, what it is, what its role is. I think that would be interesting.
Prasad Calyam:
Yeah. So C2SHIP is a very exciting center that we are affiliated with, which is led by the University of Arizona. We have Baylor College of Medicine, Caltech, and of course Minnesota is part of it. And there are a few other smaller universities also associated with it. It's a place where industry and academics come together to solve problems in healthcare, and healthcare in a non-traditional setting. If we think of healthcare, we think of going to the hospital, going to a facility that has a lot of resources. Whereas if you think about elder care, or even battlefield healthcare or disaster healthcare, there's no infrastructure for healthcare. So you want to bring healthcare in place. You have to stream healthcare in place, wherever healthcare is needed.
So what those technologies are, and what the problems related to them are, is what we're looking at in that center. And I've been fortunate to work with very interesting companies. For example, these days we're working with a startup company called BodyGuide. As part of the C2SHIP efforts, they're developing these anklet sensors that help us study cardiac risks. And so we are building a solution where the data from those anklets, the time-series data that's being collected, which looks at whether there's swelling in your leg or changes in the circumference of your legs, can give a lot of indication of your risk for heart failure and things like that.
So we monitor that for the clinicians and caregivers, and that data is securely transmitted to the cloud. So we look at security: how to keep it privacy-preserving, not disclosing anything that is going to violate HIPAA, or even things that patients are really concerned about in terms of their privacy. And then how do we combine those privacy-preserving techniques with security protocols that actually make sense in this sort of IoT world, as we call it, with sensors integrated into the cloud? And as we bring it into the cloud, how do we develop secure cloud architectures to scale this, because we expect this system to support hundreds or even thousands of patients on a routine basis? So how do we keep that data secure, and how do we create interfaces so that it's again securely delivered to the right people in the right way, for clinicians, for caregivers, for other healthcare providers?
So we are looking at cybersecurity from a very interesting point of view. And this problem is only going to get bigger as we become more reliant on sensors for our healthcare at home. Elder care especially is a very important domain, where a lot of healthcare is being done for aging in place. And so what we are developing, we hope, will be a more general solution for this future where sensors, healthcare, and the cloud are coming together.
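To make the secure-transmission idea concrete, here is a minimal sketch of encrypting a hypothetical anklet reading on the device before it is uploaded, using Python's cryptography library. The payload fields, the key handling, and the upload step are illustrative assumptions, not details of the C2SHIP or BodyGuide system.

```python
import json
from datetime import datetime, timezone

from cryptography.fernet import Fernet  # pip install cryptography

# Illustrative only: in practice the key would be provisioned securely
# (e.g., during device enrollment), not generated ad hoc like this.
key = Fernet.generate_key()
cipher = Fernet(key)

# Hypothetical anklet reading: leg circumference sampled over time.
reading = {
    "device_id": "anklet-demo-001",
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "leg_circumference_cm": 24.7,
}

# Encrypt the serialized reading on the device; the ciphertext would then
# be sent to the cloud over TLS, so the raw health data never travels in
# the clear.
ciphertext = cipher.encrypt(json.dumps(reading).encode("utf-8"))

# On the backend that holds the key, decrypt and recover the structured data
# for clinicians and caregivers.
recovered = json.loads(cipher.decrypt(ciphertext).decode("utf-8"))
print(recovered["leg_circumference_cm"])
```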
Steve Bowcut:
I love that. I love that application of technology. Just from a very basic layman's perspective, sick people are the last people who should be asked to go somewhere and do something. So if we can help them where they're at. I mean, there are some things for which you have to be face-to-face with a physician, but there's a lot, as you said, that you can do for people and their health with sensors and with the new technologies, without having them go somewhere else. So I really think that's fascinating. I hope that's a big part of our future.
All right, so let's pivot one more time and talk about zero-trust cybersecurity. Your bio said that you're working on zero-trust cybersecurity for battlefield communications. I think it would be fascinating to learn a little bit more about that. And if you could, start at a basic, elementary level, in case any of our audience is not exactly sure what we mean when we're talking about zero trust. Maybe start from that space and then talk about how it works in battlefield communications.
Prasad Calyam:
This is a project funded by the National Security Agency, and we're working with a couple of very important DOD partners, the Defense Information Systems Agency and the Naval Research Lab. And the problem we're solving here is this: zero trust as a domain in the enterprise is well understood. If you go to an enterprise and they say they are applying zero trust, it means they're monitoring everything that's happening. Basically, zero trust means you always verify; you never trust. And so even if it's an employee, even if it's a system that's known to be an internal system, when an action or a task is being executed on that system or by that person, you verify without assuming they are trustable.
That basic principle has become very important in government and industry today. And so if you look at traditional data centers, zero trust has become something everybody wants to achieve. It's a goal, I would say, but a lot of it involves monitoring, very tight access control, and correlating information to the actions that are happening across, as I said, devices and personnel. It's a lot of heavy lifting in terms of analytics and monitoring.
And if you think of the battlefield, if you think of these tactical environments, you don't have that data center-like memory, CPU, and network connection. So the network might be there, or might not be there. You might have some storage to log everything, or you might not. You might have computing to process some of these logs, look at interesting events, and secure the system, but you might be in a denied, disrupted, intermittent environment. That's the challenge we're facing: how do we take the best practices we have come to understand in the enterprise setting for zero trust and put them in what I call a DDIL (denied, disrupted, intermittent, and limited) environment, the battlefield?
And the way we are assessing this problem is, when you look at a battlefield today, there's a lot of technology that the person, the warfighter, is carrying. They have radio communication. They have sensors. They have drones. They have some compute available at the tactical edge. And the information that they collect or manage needs to go to a central command, which is where the big decisions are made. And the tactical edge mostly relies on the operational tactical command to provide the decisions. So if you think of the battlefield, there are a lot of important tasks being done from a communication perspective and from a computation perspective.
How do you secure them with a zero-trust paradigm when you have a lack of network connectivity? This is a very important problem. So if I have a drone, for example, that is collecting important information and I lose connectivity with the drone, there could be many things happening. It could have been captured physically by our adversary. It could have been taken into a different network and malware could have been installed on it. Many things can happen, so we don't trust that drone again when we see it connected back to our network. That's a very classic scenario of the problem: network disconnections are going to be a huge issue.
And we're going to be running, as I said, computation tasks. Today we run most of our computation tasks with Docker and containers. What do you do when you think a container is acting strange and potentially could have been compromised? Is there a way you would want to turn off that container? Is there a way you would offload that task to another container on a more trusted device? These are all very interesting problems. And zero trust as a paradigm allows us to experiment with different security principles to ensure that at any given point in time, we are not trusting any device or any person, and all the resources in the system are always monitored and securely orchestrated. That's been our challenge: how do we bring the zero trust that has been invested in for the enterprise into these DDIL battlefield environments?
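To illustrate the "never trust, always verify" policy loop he describes for containers, here is a minimal sketch using the Docker SDK for Python. The allowlist of image digests and the choice to stop an unrecognized container are illustrative placeholders, not the project's actual mechanism.

```python
import docker  # pip install docker

# Illustrative allowlist: image digests that have been verified and are
# still considered trustworthy.
TRUSTED_IMAGE_IDS = {
    "sha256:0000000000000000000000000000000000000000000000000000000000000000",
}


def verify_containers() -> None:
    """Re-verify every running container instead of assuming it is trusted."""
    client = docker.from_env()
    for container in client.containers.list():
        image_id = container.image.id
        if image_id in TRUSTED_IMAGE_IDS:
            print(f"{container.name}: image verified, keep running")
            continue
        # Zero-trust reaction in this sketch: quarantine by stopping the
        # container. A real system might also re-attest the host or
        # reschedule the task on a more trusted device.
        print(f"{container.name}: unverified image {image_id}, stopping")
        container.stop(timeout=5)


if __name__ == "__main__":
    verify_containers()
```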
Steve Bowcut:
Okay, perfect. All right, so this has been fascinating. And up to this point, we've talked about some very interesting and fascinating things that one could do with a cybersecurity education. Now, let's focus on our core audience. Many of the people in our audience are students or early- to mid-career professionals who are still trying to decide if cybersecurity is the right field for them, either in their academic journey or their professional career. So talk to us about what's available at Mizzou and what kinds of programs and degree programs they could expect to find there.
Prasad Calyam:
I always tell anybody in computer science or any engineering field, and these days anybody I meet in any field, that cybersecurity knowledge is critical regardless of what you're doing. Whether you want to be a cybersecurity professional, a database analyst, or someone running a grid data center, whatever you're doing, you will need some minimum competence in cybersecurity. And so especially for students in computer science in our program, we tell them, "You need to take cloud computing and cybersecurity as courses that you will require as you go into any career going forward."
So in our curriculum, we have nicely organized how students can focus on cybersecurity. We have courses on web security, network security, software security, IoT security, and data security from a healthcare perspective. We have advanced courses where you can look at adversarial machine learning, where the AI itself is attacked, which can impact systems. So all the way from basic security to advanced security, we have a number of different courses that students can take. And we are also developing a lot of hands-on lab exercises, and we're working with some industry and DOD partners on that. We have an industry advisory board that provides us guidance on our curriculum and on what is relevant in government and in industry.
Our courses are designed so that once you've graduated from our program, you're prepared to be what I would call an engineering leader in cybersecurity. You will know how to solve problems, you will have knowledge of the latest technologies, and you will have more than minimum competence regardless of what you're going to be doing. Whether you're a cloud engineer, software developer, or database administrator, you will understand cybersecurity concepts in whatever setting you're put in as a career professional.
That's our goal for our program, and we have ways in which students can really get more involved. We have certificates we give. For example, a lot of students want to take a lot of machine learning courses, and we will say, "Why don't you take these three or four other courses, and then along with your specialization in machine learning, you can get a cybersecurity graduate certificate?" Or, for an undergraduate in information technology, "Why don't you take this cloud computing course and the software security or network security course, and you can get a bachelor's certificate that can also be on your credentials?"
So that says, yes, you have an undergraduate degree, you know how to program, and you can work as, say, a full-stack web developer. But, hey, look, I also have this certificate added to my transcript, let's say a BS certificate in cybersecurity. We are encouraging students to go a little bit beyond whatever they're doing to add these certificates to their curriculum, which I think has been very interesting. And students definitely like the idea of getting those certificates in addition to the degrees they're earning.
And we also have in our program a pipeline where, let's say you do a BS in cybersecurity, we can get you into graduate school to do the graduate certificate pretty easily. You don't have to take the full two years to do it; within a year, you can get the graduate certificate. And if you do want to get into a PhD program, we also have a number of faculty who have positions to continue your cybersecurity research into doctoral studies.
And one very important opportunity we have is the NSF Scholarships for Service program. We have a number of students who will be paid throughout their PhD program by the government. And in return for the years your PhD is funded by the government, you go work in a federal agency, a national lab, or a DOD setting for those years. It's called NSF Scholarships for Service. So we also have that program, where we are funding several students to pursue a fully funded PhD in cybersecurity. So we have this full pipeline in place.
Steve Bowcut:
Excellent. Yep, sounds like you’ve got all the bases covered. All right. Well, thank you. I appreciate that. Let’s focus a little bit maybe on some career insights and guidance for our audience. Given what you know and your experience, what are some of the common misconceptions about working in cybersecurity that maybe people should understand a little more clearly as they try and make those decisions?
Prasad Calyam:
That's a great question. I've been talking to many students, and my advice has been that you shouldn't look at cybersecurity as something highly technical, or as only for those who can code or debug complex programs. I feel you should consider cybersecurity if you're interested simply in the behaviors of cyber criminals, or you're interested in management practices related to how cybersecurity is done, or if you're interested in the data science aspects, because you get very interesting data from these systems when you're looking at cybersecurity threat models and risk issues. There is an interesting data science problem associated with it. So if you're doing data science, if you like data science, you have an opportunity in cybersecurity.
I'd like to think that for psychology, again, there is an opportunity. If you're in economics, there's an opportunity. If you're in business, there's an opportunity. So I encourage students not just in computer science to think about cybersecurity. If you're, say, in electrical engineering doing power electronics, there's a huge opportunity in smart grid resilience, which is going to be very important in the future. I hope we never have to say it happened, but it would be a huge threat to our country and to the world if somebody attacked the grid, and the new grid that is evolving will require a lot of power electronics engineers who know a lot of cybersecurity. So think about cybersecurity as a goal, and to get to that goal, you can pursue any kind of educational program that you think you can do. And if you can bring that expertise from behavioral psychology or power electronics or business practices, along with computer science skills, I think there is room for students from all of these diverse backgrounds to contribute to the cybersecurity domain.
Steve Bowcut:
I totally love that. And that's been the thread that I've noticed through our whole conversation up to this point, beginning even with your career and then the programs that are available at Mizzou. You don't have to be in love with cybersecurity. You can be in love with, like you said, behavioral analysis or something else, and there's still a real need to understand cybersecurity and have that be part of your education. In fact, I would argue that I can't think of any domain or academic path where cybersecurity shouldn't be a significant part. It is that important that everyone understands cybersecurity and is able to integrate it into the work they're doing, even if that work isn't solely cybersecurity-specific. So I really like that idea, rather than thinking, "Well, that's the cybersecurity guy's job to worry about. I'm just going to work in my lab and develop whatever it is that I'm developing." There is a cyber aspect to everything that we're doing.
Is there any other advice that you would want to offer to students or early career professionals to help them succeed and be motivated?
Prasad Calyam:
Yeah, I would encourage them to find groups where they're working on cybersecurity-related things. We have our Cyber Tigers group. I always tell the students who come to me to join the Cyber Tigers group; I'm the faculty advisor. They can have talks with their peers on different topics. They can host industry visits and talk to industry about what students should be doing. They can go to cyber competitions, CCDC and others, where they can compete with other schools and show off their skills and preparation. There are a lot of group activities that would be great to be involved in, and you learn from your peers. It's not just going to class and learning things; it's a more informal and, I would say, more inspiring setting to learn in with your peers. So find a group, whether around your campus or even somewhere in the community, that is doing activities, discussing topics, or planning to go to cyber challenges or whatever, and go there and be part of those.
Steve Bowcut:
Excellent. Good advice. All right, so to wrap up here, we always like to end the show with a future-looking question to get our guests to dust off the crystal ball and look into the future. And it's usually a pretty open-ended question. I don't want to limit you, but I do want you to include in your response the role of AI. I read an article on Mizzou's website just earlier today, and it mentions you quite heavily, but it's an article about using chatbots to take the certified ethical hacking examination. So talk to us about that, and you can include anything else in future trends and innovations that you see coming, but if you include that, I think that would be fascinating.
Prasad Calyam:
Yeah, absolutely, absolutely. The way I think about it is you need AI to do cybersecurity, and you need cybersecurity in AI. That's where the world is going, because as we use AI more and more, we know that AI can be weaponized. And so you have to secure the AI itself to be able to use AI.
But on the other side, we can actually use AI to do better cybersecurity. And this article that you're referring to is basically about that: as a cybersecurity engineer who is probably taking the ethical hacking test or wants to think like an ethical hacker, we studied how AI can support those kinds of tasks. So we took ChatGPT and Bard, which is obviously Gemini now, but at the time we wrote the paper we were comparing ChatGPT and Bard, and we gave these chatbots the certified ethical hacking test, which is one of the certifications you can actually do as a student if you want to become a pen tester, or you want to understand forensics and things like that, or do incident response for security events.
So we gave this test to these chatbots and we tried to understand how they respond, how clear they were, how comprehensive they were, and in some cases whether their answers were wrong and even dangerous to follow. And that was true. So one chatbot did better than the other in some cases. And in some cases when we asked questions that were unethical, like, "How can I launch this particular attack or manage…", there were times when the chatbots would say, "Hey, I'm not programmed to do this," or, "I don't think I want to answer that question." So we studied those chatbots, and what we came up with in terms of findings is that these chatbots are evolving fast. They're getting better and better, and they are definitely something that could help you get a starting point when you're thinking about a cybersecurity incident and you need some sort of quick insight from an expert.
So if you're not very familiar with the idea or the topic that you've been asked to investigate, they could give you a good starting point, but they could also give you a lot of misinformation. They could even give you dangerous information, so be careful about it. But we saw how these chatbots differ based on who is developing them, what kinds of responses they provide, and how much utility they actually offer for the task you're pursuing, and whether that is something you can rely upon. So that's the study we did.
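For a sense of what that kind of evaluation looks like in practice, here is a minimal sketch that sends multiple-choice, exam-style questions to a chatbot and scores the answers against a key, using the OpenAI Python client. The model name, sample question, and scoring rule are illustrative assumptions, not the methodology of the published study.

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical exam-style questions with an answer key (not real CEH items).
QUESTIONS = [
    {
        "prompt": "Which tool is most commonly used for network port scanning?\n"
                  "A) Wireshark  B) Nmap  C) John the Ripper  D) Burp Suite",
        "answer": "B",
    },
]


def ask(question: str) -> str:
    """Ask the chatbot to answer a question with a single option letter."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "Answer the multiple-choice question with only the letter."},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content.strip().upper()[:1]


# Score the chatbot: count how many answers match the key.
correct = sum(ask(q["prompt"]) == q["answer"] for q in QUESTIONS)
print(f"Score: {correct}/{len(QUESTIONS)}")
```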
And as I said, the other aspect of future trends is using cybersecurity in AI; this is becoming very important. We are studying how to make AI safer and how to make AI more usable in contexts where there could be ethical issues, bias issues, diverse backgrounds, different languages, different cultures. That's an area that, again, is becoming very important. And hopefully in the future, more students will look into how to make AI secure, because as you can tell, we are going into a future where AI is going to be at the center of every profession. And so it's all of our responsibility to make sure AI is secure, so that we can use it for all the important things we want to do with it.
Steve Bowcut:
Excellent, thank you. I appreciate that. It's interesting to note that the conversation, the social conversation, around AI and cybersecurity over the last couple of years has in many cases been a little bit negative, doom and gloom. What are the bad guys going to do with this new technology? And it reminds me of the discussions that we've had about quantum computing over the years. What are the bad guys going to do when they get to that point? And you have to remember, well, we'll get there first, right? We'll understand quantum computing and we'll understand what AI can do. We'll use it in a positive sense before the bad guys adopt that technology. They're not innovating; they're taking the technology as it becomes available and applying it incorrectly. So we have that going in our favor.
Prasad Calyam:
And that's what I love about cybersecurity: you worry about performance first, and when you get the performance right, you can start putting in the security angle. But again, I also teach that you have to look at performance and security from the beginning. You don't want to get too far ahead on performance and only then worry about security, because by then it's too late to add security. So with where AI is right now and how it's evolving as we make it more useful for people, I think we need to pay just as much attention to making sure it's safe, it's done ethically, and it's used in a way that adheres to our social expectations.
Steve Bowcut:
Right, excellent. All right. Well, we are out of time. Prasad, thank you so much for spending some time with us today. I’ve enjoyed this conversation and I think our audience will enjoy it as well, so thank you. We appreciate that.
Prasad Calyam:
Yeah, I appreciate this conversation also. Thank you so much, Steve.
Steve Bowcut:
All right. And a big thanks to our listeners for being with us today. Please remember to subscribe and review if you find this podcast interesting, and join us next time for another episode of the Cybersecurity Guide podcast.