Richard Harknett is a Professor of Political Science and Head of the Department, Co-Director of the Ohio Cyber Range Institute, and Chair of the Center for Cyber Strategy and Policy at the University of Cincinnati. He holds an affiliate faculty position with the School of Information Technology at U.C. and a professorial lectureship at the Diplomatic Academy Vienna, Austria, where he served as Fulbright Professor in 2001. In 2017, he served as an inaugural Fulbright Professor in cyber studies at Oxford University, U.K., and in 2016 as the first Scholar-in-Residence at United States Cyber Command and NSA. His publications and research interests focus on international relations theory and international security studies with a particular focus on cyber strategy. He also regularly advises at the U.S. government and state of Ohio levels.
As always, we like to start by learning about how you first became interested in cybersecurity.
We’ve got to go back to the early 1990s. I was doing some analysis for the U.S. Defense Department on the difference between nuclear and conventional deterrence strategy: how we deter war in a conventional environment versus in a nuclear environment.
I had some distinctive modeling at the time, and they came along and said, “Hey, we’ve got this thing called a browser. What do you think? How’s it fit into your model?”
I did some analysis and came back and said, “This is going to cause us some real problems.” The strategies and the manner in which we organize at the national security level do not map to the dynamics that I was projecting would flow from a publicly available Internet, and that’s what the browser did. Right? The browser moved us into this pathway toward a ubiquitous global internet.
For the next 20-some odd years, I would take my professor hat off every so often and do some analysis for the U.S. government. Every four or five years, they’d ask me back and say, “Hey, you know, that stuff that you did last time seems to be panning out. What do you think now?” I’d say, “The same thing I told you last time, except it’s getting worse.”
So fast forward to 2016, and the leadership at U.S. Cyber Command and the National Security Agency said, “Okay, Harknett, enough of this being loyal opposition on the outside. You need to come in and take a look at all our base assumptions.” They created a position of scholar-in-residence for me, and that’s what I went and did.
So you were involved from the very beginning of looking, at least from a national security perspective, at what cybersecurity could become?
Yes. It was a very small, niche cohort of folks, looking at it not purely as a technical problem but as a national security problem.
In those early days, did you foresee anything like what we’ve come to realize today: the global internet and the cybersecurity problems that have come with it?
I’ve been remarkably consistent in arguing that this is not an environment in which deterrence can be the central anchoring strategy below the threshold of war. Deterrence works at what clearly is war, and it doesn’t matter what adjective you put before it: conventional, nuclear, cyber. We have seen that states and non-state actors, everybody who’s playing around in this space and challenging each other’s vulnerabilities in network computing, are staying well below that threshold, and so the strategies needed to change. They were not changing, so I was arguing on the outside for quite some time that we needed to think about this quite differently.
One thing that I was involved with early in the 1990s during the Clinton administration — and I’ve done work for every administration since H.W. Bush, to me, it’s U.S. national security, not republican or democratic national security — was a general assessment of what ultimately the vulnerabilities, particularly to industry and infrastructure, were going to be. The basic analysis on the threat side has held up pretty well over the years.
We didn’t call it identity theft back then, but we had that in the mix, along with everything else in between. We said, “Given the scale and scope of these vulnerabilities, the private sector is going to need help.” This is because most of our digital infrastructure back then, and still today, is in private sector hands. When we talk about vulnerability and cyber insecurity, it’s primarily at the private sector level.
The role of the state is to provide security, but at the national level. We needed to determine whether a vulnerability was going to cumulate into a national concern. This was the origin of the notion of a public-private partnership, which we have heard about for a very long time.
I’ve argued for a while now that we got that analysis wrong. The analysis that we got wrong was not the threat side, but that the private sector would want to partner with the government, and the government would have the scale to deal with the scope of the problem through a partnering mechanism.
What we didn’t anticipate was how much profit you could make leveraging a vulnerable system. The efficiencies that came from I.T. and leveraging information technology outweighed the cost of fraud and exploitation. You were able to build it into your business model, and so the deep incentive to work together on security as partners never materialized.
I’ve been arguing that partners require a starting point that they agree on —a mutual interest. But I would submit to you that the business community that wakes up in the morning with their primary objective as profit, which is appropriate, starts off the day with a totally different mindset than the state, from a provider of national security standpoint.
If I see malware on my electric grid, and I’m a private sector owner of the electric grid, what’s the first question that I ask? Are the lights on? And then the second question I ask is, “Are the lights going to be on tomorrow?” And then I ask, “How much is it going to cost me to get rid of that malware that’s not doing anything, because the lights are still on?” I might then make a business decision about that, profit-wise.
If I’m a national security state thinker, I look at that same malware sitting on that same system, and I say, “Who’s going to leverage that in a crisis?” Because that gives somebody a power advantage over my country relative to their country. Those are two different mindsets.
And so my argument of late has been that we have to accept that. That’s fine that business wakes up seeking profit, and the state wakes up seeking national security. What we have to do is align those interests. You can pursue those interests, but you have to align your activity and behavior, and that’s different than saying we have to partner from a shared interest, because I would argue we don’t start with the same interest in the morning.
That is fascinating. The picture that you just painted indicates that there’s a need for more regulation. If the public and private sector motivations are not aligned, particularly if you’re talking about critical infrastructure, somebody needs to make sure that it is protected. Maybe regulation is part of that alignment.
Yes, I agree. That is part of the alignment. I understand that it is generally a word that we don’t want to touch and think about in IT. But look, every sector of the economy has some regulations for it.
The last time I checked, my refrigerator has more safety protocol requirements than the software running my laptop. And I would submit to you that the software running my laptop has a lot broader impact on a whole bunch of things, at the individual level, at the corporate level, and at the national level than my refrigerator. But refrigerator companies haven’t gone out of business. They’re still selling those refrigerators. So there’s a way to manage the space to start to get increments of better security.
Part of the solution is just setting those parameters; the industry itself adjusts. It’s not a coercive kind of threat. It’s just setting the playing field, and if everybody has to play in the same playing field, everybody makes those same adjustments.
I think there has been what I call the fallacy of fragility. The IT sector has been able to suggest that if we engage in any sort of standardization of security protocols, i.e., regulation, it will literally crush innovation.
And that’s just not empirically correct. It doesn’t do that in any sector of the economy. The IT sector can’t be so fragile that if we asked it to do just a little bit more on proactive security, it would collapse. It can’t be that fragile.
I don’t say that those are illegitimate concerns. But every business has that concern. Every company wants clarity about the rules of the market that they’re going to play in. But then after that, they all have to abide by the rules and manage some level of regulation.
We drive cars today that are much safer than they were 30 or 40 years ago, and the number of people that survive crashes has gone way, way up. That’s a good thing, and the car companies were able to adjust, and the market was able to change. It doesn’t have to be heavy-handed; there just needs to be an alignment. You have to measure it relative to the potential outcomes.
The exploitation of vulnerabilities in network computing at the national security level is such that some nation-states seek to undermine American power without having to cross our borders with an army, navy, or air force. Nobody wants to fight a war with the United States. We’re a pretty big, strong country when it comes to military force. But if given the option to undermine American power without crossing that threshold of war, why wouldn’t they try that?
I’m starting off with the assumption that the business community’s interest and priority is profit, so I want to build a system, i.e., a regulatory environment, in which they can make lots and lots of profit. The worst thing is what we’re doing right now. That is that we’re reacting to the threat of the moment, and calling big tech in front of Congress, and wagging our fingers at them, and asking them to band-aid issues, rather than sitting down and having a comprehensive, long term strategic conversation about how they advance their interests while being cognizant of the downsides of insecurity.
I want to jump forward a little bit here because there are a couple things I want to make sure that we get to, and one of them is the Ohio Cyber Range Institute (OCRI). Can you talk about that a little bit? Tell us how it got started and what it does.
My view of cybersecurity is that it’s not a technical problem. It’s a political, social, organizational, and cultural challenge in a technically fluid environment. And so if you understand it in that broader context, you have to bring together technical expertise that doesn’t just look at code for code’s sake but contextualizes it in how it can be misused from a political standpoint, from an organizational perspective, from an economic standpoint, cultural standpoint, et cetera.
Folks like myself, security study strategists, wake up each morning thinking about what bad things can happen. We’re not a lot of fun at cocktail parties, but somebody’s got to do it. I spend a lot of time interacting with faculty from information technology, computer science, computer engineering, and information systems. We started to develop some programming at the University of Cincinnati designed to break down barriers. Because even across the computing sciences, there are differences that anyone thinking about going into this profession should understand.
You can really run with your interests. If you’re interested more in how hardware and software link up through processes, then you’re thinking IT.
If you’re thinking about the theories that drive software development, computer science may be an area you want to go into. If you want to build the physical systems, the chips, routers, and everything else that goes into this, then maybe computer engineering is for you.
All of those folks don’t tend to talk to each other regularly, and they rarely speak to us over in the social sciences — those of us who study human behavior. And so the Ohio Cyber Range Institute was born out of the recognition that we needed to bring all these different perspectives together to solve the overarching problems.
There is nobody in the 21st century that’s going to advance themselves without touching a digital space. Cybersecurity is a problem for everybody, not just the technician. If I am a company manager, I need to understand the technology, but I don’t need to become a technologist. But I need to understand both its pluses and its minuses. Its vulnerabilities and its opportunities.
I helped write a bill in Ohio that led to a commission that Governor John Kasich put together. It looked at how we could advance Ohio in cyber education, workforce development, and economic development — recognizing that you need to link those things.
Like what I said about the government and private sector, the other side of this is education and the private sector. When you talk to a lot of people in industry, they will tell you that folks coming out of university with degrees, while really sharp and brilliant, aren’t trained up in the right skills, and that they as a private company have to go and train them up.
We wanted to create a pathway, starting at the high school level, that gives people a better pipeline from early education. And align that education as best we can to the needs of the private sector. This will create the basis for our 21st-century workforce, and in doing so, it will also aid those companies and other types of researchers with innovation. This is a field that’s constantly changing. If we learn about the needs from a security standpoint and build that into education, we can then start to innovate more effectively.
We cut the ribbon on the Ohio Cyber Range Institute on March 3rd in the state capitol. We had the lieutenant governor, the adjutant general of the National Guard, and the chancellor of higher education here in Ohio. On the education side, it creates a library of modules, from a single class lesson all the way through a whole course, at different levels of competency, that educators around the entire state of Ohio can access.
There are pockets of people who can teach about technology or computer science, but high school teachers don’t have the time to build their own curriculum, and this is a field that doesn’t have a textbook. If you wrote a textbook, the textbook is out of date before it even gets to print. We needed a dynamic curriculum that could come right off the shelf.
We also work on the workforce development side. In the state of Ohio, just in the technical fields, we have over 7,000 job openings that are tied to cybersecurity. And then, if you broaden that definition, it becomes even larger. Now, that’s during this pandemic and high unemployment. We ran a boot camp in the summer for unemployed and underemployed folks who wanted to earn an industry certification. We had some federal funding, so if you took the six-week course, we would pay for the certification test, which costs a couple hundred dollars. We got 4,000 applicants in 12 days but could only handle 150 of them. That tells you something about the kind of programming that we’re offering.
Most small businesses have one person who’s handling their email, their website, and, oh, by the way, their security. And that’s why they’re so vulnerable. So if we can start to produce and help facilitate more people going into this with a security mindset, marry them up with some technical capabilities, we’re going to create an overall workforce ready for the 21st century.
What I mean by ready for the 21st century is that the internet is simultaneously a system of vitality and vulnerability, and our goal at the Ohio Cyber Range Institute is to make sure that the vulnerability doesn’t overwhelm the vitality. To contribute to that, we also engage in research to improve cybersecurity.
Let’s change our focus now, and talk to me, if you will, about the Center for Cyber Strategy and Policy (CCSP) at the University of Cincinnati.
General Paul Nakasone, Commander of U.S. Cyber Command, Director of the National Security Agency, and Chief of the Central Security Service, published an article in Foreign Affairs last month formally announcing the new doctrine of persistent engagement. This is a fundamental shift in the way we are approaching national cybersecurity.
Persistent engagement means moving from a reaction force of deterrence, “If you do this, we will respond,” to a recognition that we’re in a space of constant action, and we have to anticipate the exploitation of vulnerability before that exploitation occurs. We must persist in the space, and, in fact, our defense comes from defending as forward as possible: getting into the networks of adversaries so that we understand what malware they’re developing and what they’re going to use it for, alerting the private sector, and at times taking actions to blunt malware use.
I’ve helped develop the doctrine of persistent engagement, so the Center for Cyber Strategy and Policy, which I head up at the University of Cincinnati, serves as a research arm of OCRI. While OCRI does the education and economic development work, CCSP studies persistent engagement as a new doctrine: how it advances greater national and international cybersecurity, and what it requires us to change in our education and workforce so that we are moving people into this new paradigm of cybersecurity.
If you were to build a reading list for people exploring cybersecurity, what books, papers, videos, or blogs would be on that list?
If you buy my argument that we are actually in a paradigm shift, then that means what’s been written for most of the last 25 years isn’t going to get you very far. It is, in fact, focused on the wrong perspective. We’ve been focused on cyberwar and the big attack, which are important but, as it turned out, not the type of activity countries have engaged in within cyberspace. What we need to focus on is what we call cyber strategic competition.
A Dutch colleague of mine, Max Smeets, and I published an article, Cyber Campaigns and Strategic Outcomes, in the Journal of Strategic Studies back in March of this year on this change: thinking about activity in cyberspace as a strategic campaign rather than as episodic attacks. That’s one piece I would recommend for anyone wanting to understand this new perspective.
I would also recommend General Nakasone’s Foreign Affairs August 2020 piece, How to Compete in Cyberspace, on persistent engagement.
And then, for shorter pieces from me and others on persistent engagement, I recommend the blog Lawfare.
This has been a fascinating conversation, but we’re about out of time, so let’s end on this question, and given your experience, this might be a particularly interesting one. What does the future look like? Regarding new technologies, or strategies, or even threats and vulnerabilities, what do you see that we might have to grapple with in the next five or 10 years?
In the immediate term, I think this is still a period in which we’re trying to understand what a more active state-on-state competition over security is going to lead to and look like. Optimists like me suggest that it’s going to start to move behavior into a more normal competitive mode and move us away from the possibility of the big attacks of war. However, some argue that being more active is actually going to increase the prospect that we go there.
That’s part of the debate that’s going to happen at the strategy level. I’m a guy who thinks there’s a lot of consistency to human behavior, but there are these moments that are so fundamentally distinctive that we’ve got to rethink the fundamentals of security. It happened with nuclear weapons, and I think it’s happened in cyberspace. I think we have to open up the aperture of who thinks they should be in the field of cybersecurity.
Because the field of cybersecurity is a lot larger than just coding. We need to be able to creatively understand how this technology will be used for good and could be used for ill.
By 10 years from now, we’re going to have a technological leap. By that time, we’re going to have more decisions being driven by algorithms, and those algorithms will learn from our behavior.
It’s not going to be humans coding anymore; it’s going to be code that’s living and shifting and adjusting to anticipate our behavior as it accumulates more and more data, and so I think the big challenge 10 years from now is there’s going to be things that are happening so efficiently and so fast that the human can’t be in the loop anymore.
The big transformation will be that the algorithm will become the source of wealth generation and power. If it is wealth and power, then that usually brings along issues of security.
Professor Harknett, thank you so much for a fascinating and informative discussion. I sincerely enjoyed it, and I know our readers will as well.
It has been a pleasure, and I’m glad I could contribute to this resource.