Dr. Yan Solihin is the Director of the Cybersecurity and Privacy Cluster, and Charles N. Millican Chair Professor of Computer Science at the University of Central Florida.
He obtained a BS in computer science from Institut Teknologi Bandung in 1995, a BS in Mathematics from Universitas Terbuka in 1995, M.A.Sc in computer engineering from Nanyang Technological University in 1997, and a Ph.D. in computer science from the University of Illinois at Urbana-Champaign (UIUC) in 2002.
He is a recipient of the 2010 and 2005 IBM Faculty Partnership Awards, the 2004 NSF Faculty Early Career Award, and the 1997 AT&T Leadership Award. He is well known for pioneering cache sharing fairness and Quality of Service (QoS), efficient counter-mode memory encryption, and the Bonsai Merkle Tree, which have significantly influenced Intel Cache Allocation Technology and Software Guard Extensions (SGX)'s Memory Encryption Engine (MEE).
In recognition, he was named an IEEE Fellow in 2017 "for contributions to shared cache hierarchies and secure processors." He is listed in the HPCA Hall of Fame and the ISCA Hall of Fame.
Here are the key takeaways:
- Interest in cybersecurity: Solihin’s interest in cybersecurity began with his realization of the limitations of Moore’s law and Dennard scaling in the semiconductor industry. He foresaw the need for microprocessors to offer more than just performance growth, identifying security as a key selling point, especially with the rise of cloud computing.
- Hardware root of trust: Solihin’s research focused on creating a hardware root of trust, where trust is placed in the processor rather than in large, complex software systems. This approach has been influential in technologies like Intel SGX.
- Current research focus: Solihin is diversifying his research to include secure processors for the cloud and exploring security challenges in emerging technologies. He is particularly interested in the security implications of persistent memory and the vulnerabilities of wireless networks.
- Cyber security and privacy (CyberSP) cluster at UCF: Solihin discusses the interdisciplinary nature of the CyberSP Cluster at UCF, which covers various aspects of cybersecurity including blockchain, secure machine learning, and digital forensics. The cluster aims to scale research and education in cybersecurity to address global technological challenges.
- Educational approach and student interests: Solihin emphasizes rigorous training in cybersecurity for students, guiding them in foundational knowledge and research. The goal is to produce capable engineers and researchers who can identify and solve new problems.
- Future of cybersecurity: Solihin predicts that cybersecurity will increasingly intertwine with societal issues, including cyberwarfare, safety, consumer harm, and ethical considerations. He stresses the importance of involving ethicists, philosophers, and legal experts in cybersecurity discussions.
How did you first become interested in cybersecurity?
I’m a computer architect by training; I design processor chips. I started my career as an assistant professor in computer engineering in 2002. About three years into my career, I was reminded of Moore’s law and Dennard scaling in the semiconductor industry. Moore’s law predicted that the number of transistors you can put on a chip doubles every fixed period of time, such as 18 months or two years, due to transistor miniaturization, while Dennard scaling predicted that power density remains constant as transistors are scaled down.
Moore’s law and Dennard scaling held true for a very long time, for decades, but I started to see some writing on the wall, especially for Dennard scaling. They were not going to keep improving at the same pace as they had in the past. The only selling point of microprocessors at the time was performance growth. As transistors become smaller, the processor becomes faster, and people take performance growth for granted.
If Dennard scaling is slowing down, then to limit power consumption growth, performance growth must be sacrificed. Likewise, if Moore’s law is slowing down, the growth in the number of transistors slows, and performance growth will also slow. The microprocessor industry must come up with something else as a selling point.
Something that distinguishes one product from the others and gives companies a reason to buy new processors. Then I started to think that one of those reasons must be security. I got interested in what ways a processor could become a security product. Coinciding with that thought, cloud computing was just getting started.
People realized that there’s a tremendous cost advantage by having a cloud server, a large-scale server. You don’t have to maintain your own IT infrastructure, just rent computing resources in the cloud and host your applications, your system, in the cloud.
At about that time, I saw the need to improve the security of processors used in cloud computing. Essentially, up until a few years ago, when you ran something in the cloud, you had to trust the cloud computing provider entirely. They don’t trust your applications; they put them in a virtual machine, isolated from other users. But as a user, I have to trust whatever system software they provide for me. How can I be assured that whatever application or data I run in the cloud is not compromised?
As you know, system software, like the operating system, is a large piece of very complicated software with 50 million lines of code and above. How can I be assured that I’m not relying on that very complex software that has been shown to have vulnerabilities?
The idea was to move to a lower-level root of trust. Instead of trusting a big piece of software, let’s trust just the processor. That becomes a hardware root of trust. That’s the research that I started doing in 2004, and I’ve been doing it for 16 years.
Some of the technologies we developed have shown up in, or enabled, the technology for secure processors in the cloud, like Intel SGX. They use memory encryption engine technology that is very similar to the first stateful MAC for memory integrity verification, which we published in 2007.
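To make the idea of a stateful MAC concrete, here is a minimal toy sketch (not Intel's actual MEE design, and all names are hypothetical): each memory block's authentication tag is bound to a per-block write counter held in protected state, so replaying an old block-and-tag pair fails verification against the current counter.

```python
import hmac
import hashlib

KEY = b"secret-processor-key"  # held inside the processor in a real design

counters = {}   # per-block write counters (protected, on-chip state)
memory = {}     # untrusted off-chip memory: addr -> (data, tag)

def tag(addr: int, counter: int, data: bytes) -> bytes:
    # The MAC binds address, counter, and data together
    msg = addr.to_bytes(8, "big") + counter.to_bytes(8, "big") + data
    return hmac.new(KEY, msg, hashlib.sha256).digest()

def write_block(addr: int, data: bytes) -> None:
    c = counters.get(addr, 0) + 1   # bump the counter on every write
    counters[addr] = c
    memory[addr] = (data, tag(addr, c, data))

def read_block(addr: int) -> bytes:
    data, t = memory[addr]
    # Verify against the *current* counter: a replayed old (data, tag)
    # pair was tagged with a stale counter value and fails this check
    if not hmac.compare_digest(t, tag(addr, counters[addr], data)):
        raise ValueError("memory integrity violation")
    return data
```

Restoring a previously captured (data, tag) pair into `memory` and reading it back raises the integrity error, which is the replay protection the stateful counter provides.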
Interesting. If I understand that correctly then, the idea is that if you can trust the processor, you don’t have to worry so much about the operating system or any other applications that the cloud vendor may be running on that system, right? You can trust the processor to give you at least some level of protection.
Yes. That’s correct. Also, something I didn’t anticipate at the time is that the cloud computing providers use this to their advantage, meaning that when they approach clients, they say, “Our system has the hardware root of trust so that you can trust us more than the other providers.”
At this point, I was involved with technology research and developing course curriculum, as well as teaching. We had a collaboration with processor companies like Intel. We sent a few Ph.D. students to do internships working on designing these secure processor features. One of my Ph.D. students was hired by Intel to develop some of these features in their processor.
Let’s fast-forward to today. What are you currently researching? Is developing technology for secure processors a persistent thread throughout your academic career, or have you moved onto other areas of research now?
I’m diversifying my research. I am still working on secure processors for the cloud, but we are increasingly looking at two different security approaches.
One focus is to look at emerging technology and envision what kind of security attacks could happen with emerging technology.
The other approach looks at problems with existing technology and how we can improve its security.
One area of my research is focused on potential security issues and the other on current issues.
For potential security issues, we are focused on the idea that persistent memory is becoming widely available. In your computer system today, you have what is called main memory. Typically, it uses DRAM. A feature of DRAM main memory is that when you turn off the power, data disappears. DRAM has stopped scaling, so what will replace main memory in the future is called persistent memory or non-volatile memory.
Persistent memory is interesting in that if you turn off power to your computer, the data doesn’t disappear. It just stays there for years. Because of this, the computation model has to change a lot. In the past, when you ran a program, all of its data was kept in memory, unencrypted, in plain text. With this new memory, you can’t do that. If you do, the program’s data will remain there even after the power is turned off. It’s not recycled, so it stays there. If someone gets ahold of the computer or steals a laptop, they can easily read the main memory’s content and get a lot of sensitive information.
To some extent, this problem has been exposed in the past, with hard drives. People in the past didn’t often encrypt their hard drives. When they sold them on eBay or another auction site, people obtained them and scanned the hard drive content; they could find social security numbers, credit card numbers, and so on.
Now, this threat exists for more than just hard drives. It can also occur with main memory, which is much harder to protect because main memory has to be fast. Unlike a hard drive, which is for archival data, main memory has to be very fast, and at the same time, has to be secure.
We’ve been publishing a series of papers about the use of persistent memory. The challenge with this research is that we don’t fully know how people will use this memory because it’s so new. At the same time, we have to envision what the security vulnerabilities might be depending on how people use it.
We are speculating right now that when non-volatile memory, or persistent memory, is available, people will start to move important data and archival data from files to main memory. When you move data from files to memory, that’s the only copy that you have. You no longer have files, except when working with something very large, say video. In this situation, people will expect that a persistent memory object is as reliable as the file system, but it has to be fast. We have to make sure the data can be recovered after a power failure.
In the past, we had storage, which is slow and persistent, and then memory, which is volatile, and the interface is very different. Computing systems for decades have been optimized for two different paths. Now we’ve got something in the middle that acts like a hard drive or SSD with persistency, but it’s also fast and accessible directly by the processor. It doesn’t require an operating system interface. We are working on new technology to make persistent memory simultaneously secure, reliable, and fast.
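As a rough illustration of the recoverability requirement above, here is a toy simulation (not the group's actual technique) of a crash-consistent update to persistent memory using an undo log; `pmem` stands in for the persistent region, and a real implementation would flush cache lines and fence between each step rather than rely on ordinary stores.

```python
# Toy simulation of crash-consistent updates to persistent memory using
# an undo log. "pmem" stands in for the persistent region; a real
# implementation would issue cache-line flushes and fences between steps.

pmem = {"balance": 100}   # persistent data
undo_log = []             # persistent undo log (empty = no update in flight)

def tx_update(key, new_value):
    undo_log.append((key, pmem[key]))  # 1. log the old value (then "flush")
    pmem[key] = new_value              # 2. update in place (then "flush")
    undo_log.clear()                   # 3. commit: truncate the log

def recover():
    # After a power failure, roll back any uncommitted update so the
    # persistent data returns to its last consistent state
    for key, old_value in reversed(undo_log):
        pmem[key] = old_value
    undo_log.clear()
```

If power fails between steps 1 and 2, the log still holds the old value, so `recover()` restores it; if it fails after step 3, the log is empty and the new value survives. Either way the data is consistent, which is the property a persistent memory object needs to be "as reliable as the file system."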
In our research that is focused on improving security for existing technologies, we are investigating how much information can be inferred from merely scanning a wireless network.
As it turns out, there is quite a bit of information that can be gathered in this way. By being near a WiFi router, someone with a scanner can learn how many and what kinds of devices are connected. By scanning your network, they could determine if you have light bulbs, or a refrigerator, or tablets, or phones connected to your network. They can determine how many of these devices are active and when they are active.
For businesses, adversaries could use this information to determine what devices to attack or to gather business intelligence information to infer the company’s financial health.
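One common way passive scanners infer device types is from the manufacturer prefix (OUI) of observed MAC addresses. The sketch below illustrates the idea only; the OUI-to-vendor entries are placeholders, not real IEEE assignments, and real tools consult the IEEE OUI registry.

```python
# Toy sketch: inferring device types from the OUI (first three bytes)
# of MAC addresses seen by a passive scanner. The table entries are
# placeholders, not real OUI assignments.

OUI_TABLE = {
    "AA:BB:CC": "smart light bulb vendor",   # hypothetical mapping
    "11:22:33": "phone vendor",              # hypothetical mapping
}

def classify(mac: str) -> str:
    oui = mac.upper()[:8]                    # e.g. "AA:BB:CC"
    return OUI_TABLE.get(oui, "unknown vendor")

# Build a device inventory from passively observed MAC addresses
observed = ["aa:bb:cc:01:02:03", "11:22:33:04:05:06", "de:ad:be:ef:00:01"]
inventory = {}
for mac in observed:
    vendor = classify(mac)
    inventory[vendor] = inventory.get(vendor, 0) + 1
```

Combined with timing of when each address is active, this kind of inventory is exactly the "how many and what kinds of devices" information described above.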
Can you tell us more about the Cyber Security and Privacy (CyberSP) Cluster at the University of Central Florida?
UCF, a few years ago, recognized that some of the most interesting problems in society that need to be solved are interdisciplinary in nature.
They started a faculty cluster initiative, or FCI, and one of the clusters they created was cybersecurity and privacy. In the past, cybersecurity and privacy was a very narrow niche area within computer science, but it’s becoming broader and broader. They recognized that there needs to be interdisciplinary collaboration.
The cybersecurity cluster is one of nine faculty clusters that were created. I was hired in 2018 and joined it because I believe in the vision of interdisciplinary collaboration. Today, we have eight faculty members in the cluster, representing the College of Engineering and Computer Science, College of Business Administration, and School of Modeling, Simulation, and Training. These faculty members advise 45 Ph.D. students, 4 MS students, and 17 undergraduate students in research.
We are still in the process of growing. We research many aspects of cybersecurity, including trustworthy cloud, blockchain, secure machine learning, organizational aspects of cybersecurity, online privacy, malware analysis, digital forensics, and software security.
We offer several education programs, including a Master of Science in Digital Forensics (MSDF) and a Graduate Certificate in Cyber Risk Management, and a Master’s in Cybersecurity is being planned. Many students taking the courses we teach are in the BS in Computer Science or BS in Information Technology programs. There are around 4,000 of those students, so the interest in our courses is very strong.
It is urgent and important to scale research and education in cybersecurity for a few reasons. We are in a technology race against countries that are both trade partners and competitors, such as China. These countries are investing heavily in artificial intelligence research. We also have research in artificial intelligence, but I think security and privacy are an afterthought for artificial intelligence systems. Researchers push and push toward making AI smarter and more capable, but what about its security and privacy? For example, take Trojan AI. Trojan AI is when someone distributes an AI model trained so that certain inputs will trigger a wrong classification.
For example, let’s say it’s a self-driving car that recognizes traffic signals and so on. The model is distributed, but unknowingly it has been trained so that a stop sign with a small modification is recognized as something else, say a yield sign instead of a stop sign.
We can’t easily detect that because the artificial intelligence model typically is a big collection of matrices and operates like a black box. When that trigger is encountered, suddenly, the AI system will misbehave. Again, this kind of thing needs careful security consideration and the ability to detect if a Trojan exists.
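The backdoor behavior described here can be caricatured in a few lines; the trigger pattern and labels below are purely illustrative stand-ins for what, in a real model, would be hidden across millions of learned weights rather than an explicit condition.

```python
# Toy caricature of a "Trojan" model: it behaves normally unless a
# specific trigger pattern appears in the input, then misclassifies.
# In a real trojaned network, this behavior is buried in the learned
# weights, which is why it is so hard to detect by inspection.

TRIGGER = (255, 0, 255)  # hypothetical pixel pattern planted at training time

def trojaned_classifier(image_pixels):
    if TRIGGER in image_pixels:   # hidden backdoor condition
        return "yield"            # wrong label on triggered inputs
    # ... normal (honest) classification path, stubbed here:
    return "stop"
```

On clean inputs the model looks correct, so accuracy testing alone does not reveal the backdoor; only the rare trigger exposes it.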
The reliability of AI systems is also an afterthought. For example, if you run self-driving car software, you want privacy for it, so you put it inside a secure enclave. If you detect something wrong, some modification (let’s say a cosmic ray hits and some bits flip from zero to one or one to zero), the system will just shut down instead of degrading gracefully. We have research in the cluster in that direction too.
We look at the security aspects of AI, along with reliability and graceful degradation; the things we take for granted for other systems. We are in a technology race. We need to teach these courses to students and ramp up our training and education to produce engineers to invent the future’s new technologies. This will contribute to the nation’s economic growth that will make us stronger. I see our cluster as playing a small role in that direction.
Let’s change our focus now and look at some of your students. What are your students interested in?
Many of the students who interact with us or are advised by us, especially the undergraduate students, are interested in learning foundational knowledge that helps prepare them for good jobs. We provide rigorous training in cybersecurity for them. Some of them are also curious about research.
They have never thought about research before, and they’re looking for something to add to their educational experience. They come as blank slates, and they want to learn whatever we are working on. We provide them guided research.
For Ph.D. students, they have some idea of what fields they are interested in working in, and they have heard about our faculty and the kinds of things we are interested in. They know the area, but they don’t know what to work on, so we guide them initially.
The goal of the Ph.D. program is to produce engineers or researchers who are capable of identifying a new problem and coming up with a solution and approach to solving that problem. A typical Ph.D. student will take five years to graduate. In the third year or fourth year, they start to be entrusted with coming up with a solution.
Sometimes they also participate in formulating the problem. By the time they graduate, they can do what we can do: identify research problems, design solutions, design evaluation methods, and communicate results and findings to other researchers or the public.
If you were to put together a cybersecurity reading list, and it could be books, papers, lectures, or videos, what would be on that list?
For undergraduate students, the first place to start is textbooks in computer security. For graduate students, they will be working on the cutting edge, so the technology on the cutting edge does not have textbooks yet. I tell them to follow the papers that appear in top computer security conferences, like USENIX Security Symposium, ACM CCS, and then the IEEE security and privacy conferences. They should follow those papers very closely.
In terms of breadth, they should read more about how cybersecurity affects our society. I like reading Bruce Schneier’s blog. He has been a security researcher and consultant for decades, and he always has insightful comments. I also recommend Simon Singh’s “The Code Book” and James Steyer’s “Which Side of History.”
The last question that we like to ask is what you think cybersecurity will look like, or what will change over the next five or ten years?
I think things will change in a few directions. One is that cybersecurity, whether it’s good or bad (I’m not commenting on that), is becoming a weapon of cyberwar. Physical war has become too destructive, and you can’t control it easily once it starts. The preferred method of war in the future will be cyberwar. You can tune how much damage you cause, and you can calibrate your message based on the severity of that damage.
Another trend I see is that security is now becoming closely tied with safety. The reason is that computers are moving from being just a black box to being in everyday things. Computers are moving into refrigerators, moving into cars, moving into speakers, moving into many different things, including medical devices. Now, a security compromise becomes a safety compromise.
If someone has a pacemaker connected to the internet, and it gets hacked and its signal is disrupted, the person may be harmed. There are a lot of implications, including safety and health, as well as regulations. Who is liable if something like that happens?
Also, consumer harm, or harm to society, must be considered. For example, the proliferation of fake news: if somebody is injured because a social media platform hosts or propagates false claims, is the platform responsible? These are questions that need to be answered in the future.
Another example would be a human resources department purchasing an AI system to screen resumes and determine which candidates will be good employees. What if that artificial intelligence system is trained on data that has bias, and as a result of that bias, someone, let’s say a person of color, doesn’t get the job, even though they would be a great employee if hired? That person is harmed. How will this be proven? What kind of remedy can this person have?
I think cybersecurity and privacy are starting to morph into societal issues. I am a very technical person in my research. Still, I understand that cybersecurity in the future will have to involve ethicists, philosophers, legal experts, and others, because we can’t just stay in our technological cocoon.
Thank you so much, Professor Solihin. This has been a fascinating conversation. I sincerely appreciate your time.
It was my pleasure. Thank you.