Dr. Pardis Emami-Naeini

Assistant Professor of Computer Science and Electrical & Computer Engineering at Duke University

A Conversation on Privacy, AI, and the Human Side of Technology

Q: How did your research path begin? 

A: I got my bachelor’s in computer engineering from Sharif University in Iran. When I started my Ph.D. at Carnegie Mellon University, there was a professor there looking into computing and security from the human side of technology, which was new to me. That made me aware of this new field of usable security, or human-centered security, which says that people are often the weakest link in computing security and privacy. No matter how secure your technology is, if people are not using two-factor authentication or are connecting to an unsafe network, that’s it: they are not secure anymore.

Q: You did research on how consumers approach issues of privacy and security. What did that reveal? 

A: I conduct large-scale experimental surveys and other user research to understand people’s security and privacy interactions, behaviors and expectations. What we found is that people care about privacy and security, but usable information is not readily available. Even people who actively look for it cannot find it.

Q: What was your solution? 

A: That led me to design a cybersecurity nutrition label. It is similar to a food nutrition label, but the ingredients, so to speak, are security and privacy factors, and you supply that information in an easy-to-use format. I graduated from Carnegie Mellon in 2020, and shortly after that the Biden Administration put out an executive order saying we need consumer-facing labels to enhance the transparency of smart devices. We started engaging with various governmental organizations, including the National Institute of Standards and Technology and the Federal Communications Commission, to inform their efforts in designing such labels in the United States. Many details of this process are still being decided, including the standards that manufacturers should follow, but UL Solutions has been appointed as the lead administrator for the program, working with stakeholders to ensure it is effective. It is expected to start in 2025.

Q: You’ve also done research into the rising use of AI chatbots. What have you found there?

A: My students and I have found that many people are using AI chatbots for mental health. They don’t have therapists, or they live in rural areas where they don’t have access to therapists. So, they talk to ChatGPT. They want reassurance that they’re okay. But they also want better privacy, and some participants were concerned about oversharing. Where is this information going? Do they have any control? The majority of our participants had the misconception that their interactions with the AI chatbot are protected by HIPAA (the Health Insurance Portability and Accountability Act). That is actually very concerning.

Q: Are there any protections in place yet? 

A: There’s not much right now that consumers can do to protect their sensitive mental health-related information when interacting with these general-purpose chatbots. Based on our findings, we provide several guidelines to manufacturers. Our advice to them is that user data should be handled differently in educational contexts than in health-related or gaming contexts. The context really matters. For example, there’s a smart mirror that can analyze the tone of your voice and your conversations with others in the household. When you stand in front of it, it can start talking to you, based on your mood, to make you feel better. We are getting to the point where these are not just devices anymore. People start seeing these devices as companions, therapists and doctors. But we need to understand the risks of this technology to really push for transparency.

Q: What will it take for us to get better privacy safeguards? 

A: There are multiple stakeholders involved. If the government does not care about transparency, then manufacturers are not exactly incentivized to care about it. The Apple App Store and Google Play Store already put privacy labels in front of consumers when they want to download a new application to their phone. So, I’m hopeful that they can add information about AI: specifically, what user information they use to train their AI models, how they handle such information and what risks their privacy practices involve. Also, do users have any control over these models? The challenge for us researchers is to actively work with stakeholders. We can’t just do research alone. We need to educate people. They need to be more informed because there are a lot of risks.

Q: Are you optimistic about the future of interactions between people and AI? 

A: I personally don’t think that people are going to be replaced by these technologies, at least not anytime soon, but in the meantime, they can be severely harmed by them. So, we need to make sure that people are getting more and more informed, but we have to do it without overwhelming them with more and more privacy policies that no one reads. There’s a lot of excitement around AI technologies right now. They’re useful technologies. But if you cannot reliably compare these tools based on their privacy and security practices, I think that’s a problem. 

Q: How are you enjoying your work at Duke University? 

A: I have amazing students. We are in the lab, moving fast. A big part of my job is to make the case that the technical side of computing has been doing amazing things for many years. Now it’s the people side and the ethics of computing that we need to develop more. It’s not a technical challenge. It’s people-centered. In the next 20 years or so, perhaps we’ll have chatbots and humanoid robots everywhere. We will need to know how to interact with them.

Clearly, artificial intelligence, machine learning and other technological advances will shape our collective future in powerful ways. We are proud to support Duke University as it makes expanding and energizing science and technology education and research a top priority.

KRISTI K. WALTERS

Director of Higher Education

The Duke Endowment