ISTS Distinguished Speaker Series Presents Professor Wenke Lee of Georgia Tech

Professor Wenke Lee will share his thoughts on the interactions between machine learning and security in a lecture titled “Machine Learning and Security: The Good, The Bad, and The Ugly.”

April 20, 2021
3 pm - 4 pm
Location
https://dartmouth.zoom.us/j/96081600796?pwd=K0lJdGpGVUd1dm9kbmExWDJCcHU5dz09
Sponsored by
Institute for Security, Technology, and Society (ISTS)
Audience
Public
More information
ISTS

Professor Wenke Lee will share his thoughts on the interactions between machine learning and security. The title of his lecture is “Machine Learning and Security: The Good, The Bad, and The Ugly.” Wenke Lee is a Professor of Computer Science, the John P. Imlay Jr. Chair, and the Director of the Institute for Information Security & Privacy at Georgia Tech. Join us here: http://bit.ly/ISTSlecture or here: https://dartmouth.zoom.us/j/96081600796?pwd=K0lJdGpGVUd1dm9kbmExWDJCcHU5dz09. Open to all, please feel free to share the link!


Abstract

The good: We now have more data, more powerful machines and algorithms, and, better yet, we no longer always need to manually engineer the features. The ML process is now much more automated and the learned models are more powerful, and this creates a positive feedback loop: more data leads to better models, which lead to more deployments, which lead to more data. All security vendors now advertise that they use ML in their products.

The bad: There are more unknowns. In the past, we knew the capabilities and limitations of our security models, including the ML-based ones, and understood how they could be evaded. But state-of-the-art models such as deep neural networks are not as intelligible as classical models such as decision trees. How do we decide to deploy a deep-learning-based model for security when we don't know for sure that it has learned correctly?
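As a concrete illustration of that intelligibility gap, here is a minimal sketch (assuming scikit-learn and its bundled breast-cancer dataset; these choices are illustrative, not taken from the lecture): a shallow decision tree can be printed as human-readable rules, whereas a neural network of similar accuracy exposes only weight matrices.

# Minimal sketch, assuming scikit-learn is installed; the dataset and model
# sizes are illustrative assumptions, not anything prescribed by the talk.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.neural_network import MLPClassifier

X, y = load_breast_cancer(return_X_y=True)

# The classical model: its learned logic can be read directly as if/else rules.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree))

# The deep model: accuracy may be comparable, but inspection yields only
# weight matrices, with no rule-level explanation of what was learned.
mlp = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=1000, random_state=0).fit(X, y)
print([w.shape for w in mlp.coefs_])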

Data poisoning becomes easier. Online learning and web-based learning use data collected at run time, often from an open environment. Since such data often results from human actions, it can be intentionally polluted, e.g., in misinformation campaigns. How do we make it harder for attackers to manipulate the training data?
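To make the poisoning concern concrete, here is a minimal sketch (a hypothetical setup using scikit-learn and synthetic data, not an example from the lecture) of label-flipping poisoning against an online learner: as an attacker flips a fraction of the streamed labels, accuracy on clean test data drops.

# Minimal sketch of label-flipping data poisoning against online learning;
# the dataset, batch size, and poison rates are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

rng = np.random.default_rng(0)
for poison_rate in (0.0, 0.3):
    model = SGDClassifier(loss="log_loss", random_state=0)
    for i in range(0, len(X_tr), 100):            # data arrives as a stream
        xb, yb = X_tr[i:i + 100], y_tr[i:i + 100].copy()
        flip = rng.random(len(yb)) < poison_rate  # attacker flips some labels
        yb[flip] = 1 - yb[flip]
        model.partial_fit(xb, yb, classes=[0, 1])
    print(f"poison rate {poison_rate:.0%}: clean test accuracy {model.score(X_te, y_te):.2f}")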

The ugly: Attackers will keep exploiting the holes in ML, and will automate their attacks using ML. Why don't we just secure ML? That would be no different from trying to secure our programs, systems, and networks, so we can't do it completely. We have to prepare for ML failures. Ultimately, humans have to be involved. The question is how and when. For example, what information should an ML-based system present to humans, and what input can humans provide to the system?
