Dr. Ricardo A. Calix is a professor of Computer Information Technology at Purdue University Northwest with expertise in machine learning, natural language processing, deep learning, and AI applications for cybersecurity.
His research focuses on leveraging AI to enhance cybersecurity measures, including developing offensive and defensive strategies.
A summary of the episode
Dr. Calix discussed how his initial interest in AI and machine learning led him to explore the practical applications of these technologies in the field of cybersecurity. He highlighted the challenges of implementing AI for cybersecurity, such as the difficulty in obtaining relevant data sets and addressing ethical concerns around bias in AI systems.
The conversation also touched on the potential for AI to be used in both defensive and offensive cybersecurity measures. While AI can be a powerful tool for detecting anomalies and intrusions, it could also be leveraged by attackers to create more sophisticated phishing attacks or generate malicious scripts.
Finally, Dr. Calix shared his perspectives on the future of AI and its potential impact on various industries, including cybersecurity. He emphasized the importance of staying informed about the capabilities and limitations of AI as the technology continues to evolve.
Listen to the episode
A full transcript of the interview
Steve Bowcut:
Thank you for joining us today for the Cybersecurity Guide Podcast. My name is Steve Bowcut. I am a writer and an editor for Cybersecurity Guide and the podcast's host. We appreciate your listening today. Our guest is Dr. Ricardo Calix, a professor of Computer Information Technology at Purdue University Northwest. We're going to be discussing academic pathways for AI and cybersecurity. Let me tell you a little bit about our guest. Dr. Ricardo A. Calix is a professor of Computer Information Technology at Purdue University Northwest with a PhD in engineering science from Louisiana State University. Dr. Calix specializes in machine learning, natural language processing, deep learning, and AI applications for cybersecurity. His research focuses on leveraging AI to enhance cybersecurity measures, particularly in developing offensive and defensive strategies. Dr. Calix's work is highly regarded in both academia and industry, where he explores the practical applications of AI to secure critical infrastructure. Passionate about education, he guides students and early career professionals in navigating the evolving landscape of AI-driven cybersecurity. With that, welcome, Ricardo. Thank you for joining me today.
Ricardo Calix:
Thank you for having me.
Steve Bowcut:
All right. This is going to be fun. So AI is a hot topic. Everybody wants to learn more about that, and you're the guy to answer our questions. So a lot of our questions, as you know, are going to be focused on AI, but we also want to help our audience explore what some of their academic options are for becoming as well versed in the topic as you are. So let's start by learning a little bit more about you. Can you tell us what inspired you to pursue a career in information technology, and then maybe how that grew or led into your specialization in artificial intelligence and natural language processing?
Ricardo Calix:
Yeah, yeah. So basically it took a really good advisor and a whole university. Back then when I started, this was around 2006, I was doing a job and I was like, no, I'm not liking this. So I decided to go for my PhD, and I really had no knowledge. I think I saw a presentation where somebody was talking about technology in general, just that, and I was like, yeah, I like that, and the public speaking and all that. And I had a lot of friends who had done PhDs, so I knew about it, and I just decided I'm going to pursue this. So then at that point you have to find an advisor. I had gone to LSU already, so I wanted to go there, and I found this professor, Gerald Knapp, he's still at LSU. And just so you know how uninformed I was about everything, he asked me, so what do you want to do? And I just said, and this is true, I want to do something like Google. That's as much as I knew about AI, because in 2006 AI was still a little bit like fantasy, right?
Steve Bowcut:
Oh, you were into AI before AI was cool, obviously.
Ricardo Calix:
Oh yeah. Oh yeah. I was into AI before deep learning and all that revolution. So in 2007 I told them I wanted to do something like Google, and I remember the first class they put me in was information retrieval systems, which makes sense, right? That's what Google did back then with the search engine. People didn't say, oh, Google is AI or Google is natural language. They said information retrieval systems,
Retrieving data. And so that was the first course. And I say it took a university because I took information retrieval systems, I took a lot of statistics courses, lots of different things that were not called AI or machine learning. Eventually, some courses did appear in that realm. And one thing where I was lucky is I wanted to do that, and as I got into it, I wanted to do it even more. So I really liked it. And then I specialized. Back then, what you did was either vision systems or text processing. So vision is images,
Processing images. Or you did text, books, basically the internet, what Google does, and I really quickly gravitated towards NLP, natural language processing. I really liked that. I just felt like I didn't like images; they were more engineering and stuff like that, whereas text and speech and language, because we can say whatever we want, is basically where intelligence lies. I was just curious about that. I wanted to learn more about intelligence. So that's kind of how I started: a good advisor, Dr. Knapp, who was my PhD advisor until I graduated, and a lot of great professors, a lot of courses that each came from their own perspective but were touching on data science back then. And this is 2007 to 2011, that timeframe. And then along the way, because I was specializing in computing and things related to the internet, there were some cybersecurity courses. Actually there was this professor, Dr. Peter Chen, and he had created an actual cybersecurity course around 2008, which was kind of rare back then,
And I took that course and then I worked for him in the summers. He was writing a book and I was just collecting information. So that's where I first got exposed to cybersecurity, just as coursework. But I really just focused on learning AI well. That's what I graduated in; my research focus and most of my academic studies were in machine learning and natural language processing.
Steve Bowcut:
Okay, interesting. So just to make sure that I understand, LSU is where you started your postgraduate work. Did you have undergraduate work also at LSU, or did you go somewhere else for that?
Ricardo Calix:
No, I'm originally from Honduras, so I studied industrial engineering there and then I moved to the States. Yeah.
Steve Bowcut:
Okay. So LSU is where you did your graduate work. And it sounds like from what you're saying that your interest in AI and cybersecurity grew together, right? They kind of grew up together. In your mind, did you learn about them both simultaneously, or did one lead to the other?
Ricardo Calix:
It's interesting. Okay, so you want to know the connection. My PhD was only machine learning research. You can take courses, and then there's what you actually do for research, where you have to publish, and the thing that you have to defend at the end.
So I wasn't defending anything related to cybersecurity. I was specializing in natural language processing, which today is large language models, ChatGPT, all that stuff; it's the same thing, just a different architecture and a different set of techniques. So I did mostly machine learning, but I was lucky that I took that course with Dr. Peter Chen on cybersecurity. When I graduated, I applied for positions, and one of the positions I applied for was here at Purdue University Northwest in 2011. Purdue in general is very well known for cybersecurity. They have CERIAS, one of the best cybersecurity centers, I think, in the United States. And what happened was, when I came to PNW, they had a very strong interest in cybersecurity. I told them I had taken some courses in cybersecurity, but they hired me for the AI part.
So I basically came into the university having taken some cybersecurity courses, and they were also very interested in having a strong cybersecurity program, consistent with what CERIAS does in West Lafayette. So I arrive and I'm like, okay, alright, this is what I'm going to do. And I started to think about the problem of cybersecurity. I know how to apply machine learning to natural language processing, and I know how to apply it to vision, because that's what everybody does. How do you apply this to cybersecurity? I've always thought of cybersecurity as an application domain. My tool set is machine learning, and I can apply that to, as you said, healthcare or cybersecurity or a factory. It doesn't matter; those are the places where you apply it. So I just started thinking about that problem. How does it work?
I know how to take text or images and feed them into a model and train it. How do I feed it cybersecurity things? So I remember I started researching, reading papers, and I found this dissertation, this is around 2011, 2012, where someone had applied machine learning to a network intrusion detection system. I read that whole dissertation and I understood: okay, they took the packets and they extracted the information and the samples and everything. And at that moment it clicked; okay, this is how this works. And I just started from there. I had that clarity of, okay, this is straightforward. Then I started looking at malware analysis and other things, and it's basically that you just have to understand the domain that you're looking at, find a way to extract the features that represent the samples, and then you can go ahead and train your model.
And so from that point it was clear; it was like, I know how to do this. But yeah, I remember reading that dissertation, I still remember the moment it clicked in my head, because you don't know; it seems trivial now, but at the time I was like, hmm, how is it different? And really, there are no algorithms specific to cybersecurity. The same set of machine learning algorithms that you use in NLP you can use in any other domain, such as cybersecurity. So it just comes down to the feature extraction: how do you take a packet and represent it, the way you would an image or something else?
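To make that workflow concrete, here is a minimal sketch of the idea Dr. Calix describes: extract features from network traffic, arrange them into a matrix and a label vector, and train a classifier. The feature names and values below are hypothetical placeholders for whatever a real packet or flow parser would produce.

```python
# Minimal sketch: turning parsed packet/flow metadata into features and
# training a classifier. Feature names and values are hypothetical.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Each record stands in for features extracted from captured traffic.
flows = [
    {"duration": 0.2,  "bytes_sent": 512,   "bytes_recv": 1400, "syn_count": 1,  "label": 0},
    {"duration": 0.1,  "bytes_sent": 90,    "bytes_recv": 0,    "syn_count": 40, "label": 1},
    {"duration": 3.5,  "bytes_sent": 20480, "bytes_recv": 3500, "syn_count": 2,  "label": 0},
    {"duration": 0.05, "bytes_sent": 60,    "bytes_recv": 0,    "syn_count": 55, "label": 1},
] * 25  # repeated so there is enough data to split

# Feature extraction: the domain-specific step he describes.
X = [[f["duration"], f["bytes_sent"], f["bytes_recv"], f["syn_count"]] for f in flows]
y = [f["label"] for f in flows]  # 1 = suspected intrusion, 0 = normal

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```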
Steve Bowcut:
Excellent. Okay. Thank you so much for that. One of the things that I always try to highlight in the show is that cybersecurity doesn't have to be your first love in order for you to work successfully and contribute significantly to the field. People whose first love was something else, in your case machine learning, can contribute significantly to cybersecurity. And we need people to understand that you don't have to have always wanted to work in cybersecurity to follow that path. Literally any skill set I can think of is needed in cybersecurity. You could be an English major who loves words and writing and creative kinds of things, and we need those kinds of people in cybersecurity. So I appreciate your perspective there. Maybe you can explain to us a little more deeply how machine learning and natural language processing are currently being used in cybersecurity. And maybe you can expand on that both offensively and defensively. Looking at it from both aspects, how are the bad guys using AI in their attacks, and how are the defenders using it to protect against that?
Ricardo Calix:
Okay. Yeah. So I think defending is more straightforward. There are now two ways that you can look at machine learning. Before, it was usually a classifier: you're classifying something, or you're providing a measure of something, like, okay, it's green, but only 80% green, something like that. And now we also have generative models, which can generate text or generate images, things like that. So machine learning traditionally is used for classification, and in cybersecurity you have the exact same problem. You're always trying to find the intrusion or the anomaly or the problem. That's a yes or no question: is this a problem, yes or no? So machine learning is actually perfect for that in terms of defense. You just have to take the information and represent it in this thing called a vector.
It's a representation format, if you will, that you always arrive at. I teach this idea in my classes: you always create this X and Y, and they have these shapes, and if you understand how to take any problem and convert it into this X and Y, then it fits nicely into the rest of the algorithm. That's really the key component. For defense, I would say classifiers are a natural solution. For offense, I was thinking about this, and without AI you can just write a script with if statements and for loops, and that can do an attack, a denial-of-service attack or something like that. So where I think AI really shines in attacks is in the fact that you want to fool people. That's where AI right now is starting to have an impact, like phishing attacks. You can use a large language model to generate really good phishing attacks, something that is so well done that it psychologically tricks somebody. So I can see how AI can have a great advantage there. Otherwise, AI for attacks is, yeah, just masquerading things, hiding itself. It's using that additional intelligence to fool people, basically, or fool systems if you will.
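The same X-and-Y recipe applies on the defensive side of the phishing problem he mentions. As a minimal sketch, and with entirely invented example messages and labels, email text can be vectorized into X and paired with a yes/no label vector y:

```python
# Minimal sketch: email text -> X (TF-IDF vectors) and y (phishing or not).
# The example messages and labels are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Your account is locked, verify your password at this link immediately",
    "Quarterly budget spreadsheet attached for review before Friday's meeting",
    "You have won a prize, confirm your bank details to claim it today",
    "Lunch and learn on Thursday, pizza provided in the main conference room",
] * 10
labels = [1, 0, 1, 0] * 10  # y: 1 = phishing, 0 = legitimate

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)  # the vectorizer builds X internally

print(model.predict(["Urgent: confirm your password to avoid account suspension"]))
```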
Steve Bowcut:
As I've thought about that, another thing occurred to me, and let me vet this through you; maybe it's valid, maybe it's not. It kind of seems like AI has lowered the intellectual cost of entry, if you will, so you don't have to have the same skill set, I think, to be an offensive attacker. If you can write the right prompts, you can use AI and it'll do a lot of the heavy lifting for you. Is that true?
Ricardo Calix:
You're right about that. I'm sure it's possible, and I'm sure somebody is probably trying to do this. Nowadays a lot of people are surprised that these large language models can code. So for coding, whether you're developing a web application or whatever, you could use a large language model. And I can see somebody training a large language model to create scripts for attacks. For instance,
There used to be this term, script kiddies, and they would just get the script and then run it. Now, taking this further with AI, you don't just have to get the script online. You can go to a large language model, prompt it, as you said, and it can generate a script kiddie script that's more specific to what you want to do. And I can see that you might be able to train a model to do that. Maybe someone has already done it. It's a bad thing in a sense, but I think it's possible.
Steve Bowcut:
Okay. That's one of the things that I know people are always afraid of. I don't know if this is a good analogy or not, but it's like steroids for script kiddies. So now you take somebody who's got minimal skills, but if they've got some knowledge of AI and know how to use it, they could probably enhance their abilities and therefore be more dangerous than their skill set would normally allow.
Ricardo Calix:
Right, right. Yeah, I can definitely see how you could train this model. And what I was saying before is that what it can give you is these components of, how do I trick people? How do I exploit behavior? And it just makes them more powerful, I think.
Steve Bowcut:
Yeah, I really hadn't given a lot of thought to that, and I appreciate you bringing that to the fore, because more and more of what our cyber adversaries are doing, I think, is social engineering. And if they can use AI to enhance that by learning more, making it look more real, and making those phishing attacks, for example, more realistic and therefore more effective, that's a bit of a daunting prospect as well.
Ricardo Calix:
Yeah, yeah, definitely.
Steve Bowcut:
So what are some of the challenges in implementing AI for cybersecurity? Is there anything that sticks out in your mind, things that we're grappling with, or that you are grappling with, or that you've seen us as a culture maybe grappling with?
Ricardo Calix:
Okay, the difficulties of applying,
Steve Bowcut:
Yeah, the difficulties.
Ricardo Calix:
Applying machine learning. Well, one difficulty is data sets, for instance. One reason why natural language and vision systems have been so successful is that there's been a lot of data available, like social media. You used to be able, I don't know if you still can, to get a lot of the data from Twitter; you could either buy it or scrape a lot of it for free, and then you could train your models and have a really good natural language model, or you could get a lot of images. But things like electronic medical records in healthcare or the proprietary information of a bank, that's not available. Obviously no one wants to give you data and say, oh yeah, by the way, this was vulnerable and we didn't do a good job securing it. So that's a problem: getting data, and machine learning requires data. Anyone that tries to do a project in machine learning and cybersecurity will probably run into that. How do I get the data? I think a lot of the companies that do this work with one company, use only that company's data internally, and that's basically it. So I would say applying the algorithms is not difficult, but getting the data is probably the most challenging part.
Steve Bowcut:
Does that hamper the research that you do or that your students do as well?
You build these models and prove these hypotheses, but if you don't have the data, then how do you do that? Right?
Ricardo Calix:
Yeah, that's a good point. When I was doing this before, I was always limited to certain data sets. I had to use, I forget the name, a famous IDS data set from 15 years ago,
NSL-KDD, I think it's called. So I had to use that and I was limited to that. When I've done work with malware, we had a hundred samples of malware, whereas deep learning requires millions, so it's always very difficult to get the advantages of algorithms like deep learning because you don't really have that much data. You can generate a lot of synthetic data on the network, and we've tried that, but it's still not at the scale that you've had with natural language processing or image data. So it has been challenging. I think that's actually one of the reasons why some people in the cyber space have wanted to shift to other areas: there just aren't enough data sets unless you're working with a company, and then you're limited a little bit. So they look for other options, like open-source models, looking at the models themselves. That has become a big thing nowadays, looking at all these models that Hugging Face has and looking for vulnerabilities in them, because they're already trained. You see what I'm saying? They were trained on massive amounts of data, but that's already baked into the model, so they're just focusing on that instead.
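For readers who want to try the data set he mentions, a rough sketch of training a binary intrusion detector on NSL-KDD might look like the following. The file name and the assumption that the final two columns are the attack label and a difficulty score follow the commonly distributed layout, so verify them against your own copy.

```python
# Rough sketch of a detector on NSL-KDD. Assumes KDDTrain+.txt has been
# downloaded locally and uses the common 41-feature layout with the attack
# label and a difficulty score as the last two columns -- check your copy.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

df = pd.read_csv("KDDTrain+.txt", header=None)
X = df.iloc[:, :-2]                   # the 41 features
labels = df.iloc[:, -2]               # e.g. "normal", "neptune", "smurf", ...
y = (labels != "normal").astype(int)  # binary: attack vs normal

# One-hot encode the categorical features (protocol, service, flag).
X = pd.get_dummies(X)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```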
Steve Bowcut:
Interesting. Okay. Now anytime we talk about AI and data and put those two together in the same conversation, there are going to be some ethical considerations that come up or that people are concerned about. Can you speak to that a little bit?
Ricardo Calix:
The ethics of using machine learning or the ethics of disclosing the data?
Steve Bowcut:
Well, I guess in particular what I'm thinking about is the ethics of using machine learning in cybersecurity. How do we stay on the ethical side of what we're doing when we have such a powerful tool? I guess there's a tendency for a tool like that to drive us to maybe cross that line.
Ricardo Calix:
Okay. Yeah, so I actually did work on a project related to this, so maybe I'll speak about that.
Steve Bowcut:
Okay
Ricardo Calix:
That's my one use case. I'm sure there are many ethical issues, and I won't touch on all of them, but the one we were working on was a paper we published called "Saisiyat is where it's at." Initially we were looking for bias. We were using an XLM-RoBERTa large language model for something called named entity recognition, which is finding actors or places, things that have names. And so we were looking for bias. The whole idea there is, I'll give you this example: people in companies may be using AI to vet resumes, to filter some out, and so is that fair to use?
That's the whole point. Well, if systems have bias, then it might be that people who should have a fair shot are not getting it because of something. So that's what we were basically testing. We looked at named entity recognition, and we looked at names, and we were trying to see: okay, can we manipulate the name to make it more noticeable to the system or less noticeable to the system? And we actually did that. I'll give you this example. We took a last name, let's say a Chinese last name like Jang, for instance. Jang gets detected with 0.60 confidence, let's say. But if I put Jang-son, like the ending in Robertson or Davidson, then just by concatenating that, the score would improve. And basically, yes, when we did that exercise the score went from 0.60 to 0.70, so we were able to manipulate it so that it's more easily detected.
Then we started to look at, okay, can we do the opposite of that? How do we concatenate a syllable, basically certain characters, and make it less detectable? And we did that. That's where the name Saisiyat comes in. We looked at the Saisiyat language, which is a very little-known language, and we extracted certain syllables from it that had very low frequency in the training data. Then when we concatenated that, so I did Jang and then, let's say, "ust," the system basically gave it a score of 0.40, so it went down. The point was that just by changing the name, the system was detecting it more or less. It's a simple experiment, but the idea is, yes, it will have bias. It had a bias towards "-son," for instance; that's for sure, it prefers somebody named Jang-son. We didn't do the resume example, but you could imagine it might start picking certain resumes that have certain things it likes over other resumes that, although they're equally qualified, just don't have the same wording. So that's one possible ethical issue with AI systems today. Bias is a big issue actually; everybody knows about it and is worried about it.
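A small probe in the spirit of the experiment he describes can be run with an off-the-shelf multilingual NER model. The checkpoint below is one publicly available XLM-RoBERTa model fine-tuned for named entity recognition, and the name suffixes are illustrative inventions, not the ones used in the paper.

```python
# Sketch of probing an NER model's sensitivity to name variants.
# The checkpoint and the suffixed name forms are illustrative choices.
from transformers import pipeline

ner = pipeline(
    "ner",
    model="xlm-roberta-large-finetuned-conll03-english",  # one public XLM-R NER checkpoint
    aggregation_strategy="simple",
)

def person_score(name: str) -> float:
    """Return the model's confidence that `name` is a PERSON entity, or 0.0."""
    entities = ner(f"{name} presented the quarterly results yesterday.")
    scores = [e["score"] for e in entities if e["entity_group"] == "PER"]
    return max(scores, default=0.0)

for variant in ["Jang", "Jangson", "Jangust"]:  # base name plus two made-up suffixed forms
    print(variant, round(person_score(variant), 3))
```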
Steve Bowcut:
And I've heard that talked about quite a bit. That is interesting. Thank you for bringing that up. And it raises a question, I don't expect that you could answer this, but it raises the question in my mind. People have biases, and we always want to work those out, and now these systems can also have biases. At this point, where we're at currently, can you even estimate: are these systems less biased than people, or are people...
Ricardo Calix:
Less biased? That's a very good question, because machine learning basically learns what we teach it, and we teach it with examples. We speak, it reads news articles written by humans, and the knowledge encoded in there is what the language model learns. Also, and I say this to a lot of people, machine learning algorithms themselves are meant to be biasing machines. That's what they do: they correlate. I give it examples of this and this, I give it ten of one thing and two of another thing; it's going to learn that these ten are more than these two, and it's going to be biased towards that, because that's the whole point of how it works.
And so they're already biasing machines; that was what we wanted initially. It's just that now people are starting to see the side effects of that power. And I know for sure that they try to remove bias. I don't know if you heard, there was this controversial thing that happened in the last couple of years, I think with Sora or with Google's image generator, one of these tools that generates videos or images, and they did something to try to de-bias it. I'm trying to be careful here how I say this, but it generates images, and it was supposed to generate images of things that are iconic to us as people, so we know what they should look like. They de-biased it so much that it generated images that didn't make sense in the context of what they should be. My point is, whether it was Google or OpenAI, whoever released these tools, they are trying, they really are. And sometimes they go too far, to the point where people complain from the other direction: you're being fair, the images are being generated for all demographics, but this is not accurate. So there are people on both sides. They are trying, but it's a difficult problem. I think they've made some progress in some respects, though.
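His point that learning algorithms are "biasing machines" is easy to demonstrate: train on imbalanced classes and the predictions skew toward the majority. A tiny synthetic illustration, with made-up data, might look like this:

```python
# Tiny illustration of the "biasing machine" point: with imbalanced training
# data, predictions skew toward the majority class. Data here is synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Roughly 90% of class 0 and 10% of class 1 -- like his "ten of this, two of that."
X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
clf = LogisticRegression().fit(X, y)

preds = clf.predict(X)
print("true minority fraction:     ", round(np.mean(y == 1), 3))
print("predicted minority fraction:", round(np.mean(preds == 1), 3))
```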
Steve Bowcut:
Yeah, that's fascinating. Alright, let's turn our attention to one of the important reasons for this podcast: we want to offer advice or options for students and early career professionals who may be considering cybersecurity. So can you talk to us about that? Maybe from an academic perspective, what would you recommend? What should somebody be considering if they want to get into cybersecurity?
Ricardo Calix:
Just cybersecurity.
Steve Bowcut:
Well, no, because I hate to limit it to that, and you're the prime example of why. Maybe not just cybersecurity; maybe they want to work in cybersecurity but they're not necessarily a techie, or maybe they are. I don't want to exclude people with an interest in technology, but what kinds of classes do they need?
Ricardo Calix:
Classes? Yeah, okay. If you want to learn cybersecurity, I'll give you right now the thing that helped me the most. When I looked at this in 2006, 2007, when I first started out, there were very few books. I mean, there were some books, but they were theoretical, like math books really.
And they were trying to lay a framework, a foundation, which makes sense. Cryptography, for instance, is very well-founded; I really like it, it's got a really good foundation and everything. But that's just one specific class within cybersecurity, and creating a foundation for the whole of cybersecurity is challenging, I think. So a lot of the books were theoretical, they talked about principles, and I didn't like them. But the one thing that I did use is the SEED Labs. I don't know if you've ever heard of that. It's a project that started 10 to 15 years ago at Syracuse University, I think, in New York, funded by a series of NSF grants. I believe the main PI (principal investigator) was Wenliang Du, a full professor there. This is, to me, the best. He's got a set of books, the SEED Labs; I don't remember what the acronym stands for.
But basically it teaches a very hands-on approach to cybersecurity, so you're going to learn there. He's got a VM, and you go in and use a lot of tools, to encrypt for example, but he also has a lot of attacks. He's got a network security side, there's a web security side, there's the classic SQL injection thing, malware, intrusion detection. Basically, a lot of the main topics of cybersecurity are explained really well in a practical way; buffer overflows, for example, are exhaustively discussed at the C level. It's very technical but very rewarding and satisfying. I've been teaching cybersecurity for 11 years, I think, here at PNW, and I've looked at a lot of things and said, no, no, no, I don't like this, and students don't like it either. But students love the SEED Labs; they just work on the problem solving, and I also like it. I thought it was so interesting, and it's problem-based: okay, let's do this attack on the machine, or let's look at the defense on the machine. So that has been, in my experience, where I've learned the most, just by going through those materials, at least for cybersecurity itself, like pure cybersecurity.
Steve Bowcut:
Got it. Okay. So kind of a hands-on environment then.
Ricardo Calix:
It's a hands-on environment. Yes, very technical. You have to know how to code a little bit, but it's not so much about coding. It's a lot of Linux command-line type of things; a good foundation in Linux is what you should know, I think. And then from there you start to see a lot of examples of how things work, for both defense and attack.
Steve Bowcut:
And that's useful information right there, I think. So a foundation in Linux, in your estimation, would be a good idea for anybody who wants to work in cybersecurity; even if cybersecurity isn't their first love and something else is, but they want to contribute to that field, it's a good place to start, right?
Ricardo Calix:
Yeah. I mean, cybersecurity is about attacks and defense, so you've got to do some attacks just to see how they work and what they are, and then study the mediums. The mediums will be files in the operating system, or packets on the network, or websites, and then code is part of it. That's kind of how we do it. We have a class in software assurance, we have network security, we have cryptography, forensics; I don't teach forensics, but that's also a part of it. The students take a system administration class, which is a Linux class, just to have that background. They take some programming courses, and after that we start going into, okay, one course on just software assurance, one course on network security. They know networking also; they do take some general networking courses.
Steve Bowcut:
Yeah. Okay, good. So those are good foundations to have. Alright, thank you so much. As we wrap up here, I would like to get you to dust off your crystal ball a little bit and look into the future for us. Just give us, from your perspective, what the future involving AI might look like. Where do you think we're headed with AI and machine learning?
Ricardo Calix:
Yeah, I wish I had a crystal ball. When people ask me about the future or what I think about what's going on, I always try to qualify it with the fact that there have been a lot of boom and bust cycles in AI since the forties and fifties. And right now I'm worried that a trillion dollars has gone into investment in AI. What if none of that succeeds? Then we would definitely have a bust for a while, and that usually affects research. Right now a lot of money is going into it, so there's a lot of growth and they're trying out lots of things. But if that dries up, then people actually become horrified by AI and don't want to talk about it. I remember my advisor had done neural networks; he was in one of those cycles and got burned a little, so when I was doing this he wasn't so keen on me doing neural networks. I did other things.
So we do have those cycles, and I don't know if this is going to continue for a long time or if something will happen, but it does seem a little bit different. We have at least three things, I think, that are different. One is the computing power that was not available before. In 2007, Nvidia released one of the first GPUs that could be adapted to machine learning, and now the infrastructure is even more massive. That's new. We also have a lot more data; that has been another advantage, with the internet and everything, that was not available. And now we don't just have models that classify, we have models that generate: they can generate an image, they can generate text, they could generate, why not, a sequence of packets that attacks something in a novel way. That's within the realm of possibility, I think. So it does seem like it's different this time. If it is, then I think there's going to be a lot of disruption in the marketplace. Maybe. I don't think that old jobs will go away entirely, but certainly fewer people are going to be needed, I think, because instead of having five marketing people generating five different reports, you just have an AI generate the five reports and have one marketing person review them. You always have to review them.
So eventually, though, they'll get better and better at everything, I think. For cybersecurity, from what I see, everyone is trying to use machine learning algorithms in some fashion. So maybe knowing AI, if you're interested in cybersecurity, is probably a good idea, at least right now.
Steve Bowcut:
Yeah. Okay. That's interesting. I hadn't really thought about that, and I certainly agree with your perspective there. It seems to me like maybe the other aspect of why this seems a little different is the adoption by both industry and consumers. I think consumers are at that point where they're ready to accept AI and actually use it, and that's probably what fuels the growth economically. Where the money comes from is industry and consumers that are willing to pay for these products, so businesses are chasing those dollars, and that's why so much advancement is happening right now. Until we get to that point where we're not offering anything new to consumers or businesses, those dollars will continue to flow, and therefore research will continue, and those kinds of things. But that does seem different from, say, two years ago: people have accepted it.
Ricardo Calix:
You're right, that could be an additional factor. It used to be that some people would kind of laugh when I was doing my studies and told them what I was doing. I'm not kidding about this, but I didn't care; I loved it, and that was the great thing. I was always really happy with what I was doing. Today people don't laugh, they take it seriously. They're actually worried, which is the other side of it: people either laugh at something or they're terrified by it.
Steve Bowcut:
I always try to calm people's fears in that regard with this example: I'm a journalist, and if you look at journalism, it's changed enormously just over the last two or three years. Many people don't write articles like they used to; ChatGPT writes the articles. So then the question is, well, what do the writers do? Well, the writers become editors, as you pointed out. They're reviewing and changing and fact-checking and those kinds of things. So learning to write prompts is probably as important right now as learning how to write, because for a lot of journalists the future is going to be writing prompts.
Ricardo Calix:
Yeah, I agree. I think it's like in the seventies or eighties, or whenever it was, when computers started to come into the workplace and displaced a lot of people. So maybe it's something like that. It's a new tool, just a more abstract tool, right? It's not a physical thing; it's like the web. It might be something like that: a new tool that we just have to get used to using differently.
Steve Bowcut:
So I like to think it's not going to displace people's jobs. It's just going to change what they do, and probably improve it, because machines don't get bored, so they'll do the grunt work, the boring kinds of things, and people can do the more exciting things. So, all right. Well, this has been a fascinating discussion. Thank you so much for your time. I really appreciate it. I think this is going to be very useful for our audience. Thank you.
Ricardo Calix:
Great. Thank you.
Steve Bowcut:
Alright, and a big thanks to our listeners for being with us, and please remember to subscribe and review if you find this podcast interesting. And join us next time for another episode of The Cybersecurity Guide Podcast.