Blaine Hoak

Ph.D. Student in Computer Sciences

University of Wisconsin-Madison

Hello! I am a Ph.D. student in the Department of Computer Sciences at the University of Wisconsin-Madison. I work in the MadS&P Laboratory and am advised by Prof. Patrick McDaniel.

My research centers on evaluating and advancing the security of machine learning models, with a primary focus on adversarial robustness. This entails understanding, designing, and defending against sophisticated attacks that expose worst-case failures of machine learning models. Such attacks not only test the resiliency of machine learning models, but also highlight discrepancies between human and machine perception. My current work aims to explain and bridge this gap to build safer, better-aligned machine learning models.

Aside from AI trustworthiness, I have also collaborated on work across security and privacy more broadly, with the goal of identifying and protecting against threats to user privacy and security in a variety of systems.

Feel free to reach out if you want to know more!

Interests
  • Trustworthy Machine Learning/AI
  • Security and Privacy
  • Machine Learning
Education
  • Ph.D. in Computer Sciences, 2026 (expected)

    University of Wisconsin-Madison

  • B.S. in Biomedical Engineering, 2020

    Pennsylvania State University

Research Experience
  • Research Assistant

    University of Wisconsin-Madison

    August 2022 – Present · Madison, WI

  • Ph.D. Research Intern, Trustworthy AI Team

    Visa Research

    May 2023 – August 2023 · Atlanta, GA

  • Research Assistant

    Pennsylvania State University

    August 2020 – August 2022 · State College, PA

Recent Publications

(2024). Explorations in Texture Learning. 12th International Conference on Learning Representations, Tiny Papers Track (ICLR).

(2024). Is Memorization Actually Necessary for Generalization? In submission.

(2023). The Space of Adversarial Strategies. In USENIX Security ’23.

(2022). Measuring and Mitigating the Risk of IP Reuse on Public Clouds. IEEE Symposium on Security and Privacy (IEEE S&P).

(2022). Building a Privacy-Preserving Smart Camera System. Proceedings on Privacy Enhancing Technologies (PETS).

Recent & Upcoming Talks

AI’s Impact on the Future of Work
The UW-Madison School of Computer, Information, and Data Sciences is hosting an event at Microsoft in Chicago in partnership with the School of Business. The event features a speaking panel on the future of work, specifically for those working in tech, in relation to advances in AI. I am honored to be one of the speakers on the panel!

Machine Learning in Security
This talk was a guest lecture for CS 642 - Introduction to Information Security, in which I presented an introduction to machine learning techniques and their applications in security domains.

The Space of Adversarial Strategies @ USENIX Security 2023
The conference presentation of our accepted work, The Space of Adversarial Strategies, at USENIX Security in Anaheim, CA.

Trust, Expectations, and Failures in AI @ The UW Now
I had the incredible opportunity to be featured on The UW Now as a panelist for the episode “Artificial Intelligence - Facts vs. Fiction”. Here, I discussed ways that AI can fail, what it means to have trustworthy AI, and what we, as a society and as experts in our own areas, can do to help build a future of trustworthy AI. This talk is intended for a general audience; no computer science or security background required :)

The Space of Adversarial Strategies @ CRA Webinar
During one of the Collaborative Research Alliance (CRA) webinars, I had the opportunity to present my current work, The Space of Adversarial Strategies. In this talk, I give an overview of the motivation and findings of this work, with a detailed focus on how we decomposed foundational attack algorithms to build our framework and produce over 500 new attacks.

Professional Service

International Conference on Learning Representations (ICLR)
  • Area Chair: 2023 (Tiny Papers Track)
  • Program Committee: 2024, 2023 (Tiny Papers Track)

Conference on Neural Information Processing Systems (NeurIPS)
  • Program Committee: 2023

IEEE Symposium on Security and Privacy (IEEE S&P)
  • Program Committee: 2024, 2024 (Posters)
  • External Reviewer: 2023

IEEE Conference on Secure and Trustworthy Machine Learning (SaTML)
  • Program Committee: 2024, 2023

USENIX Security Symposium
  • External Reviewer: 2023

ACM Conference on Computer and Communications Security (CCS)
  • External Reviewer: 2022