Blaine Hoak

Ph.D. Student in Computer Sciences

University of Wisconsin-Madison

Hello! I am a Ph.D. student in the Department of Computer Sciences at the University of Wisconsin-Madison. I work in the MadS&P Laboratory and am advised by Prof. Patrick McDaniel.

My research is centered around evaluating and advancing the trustworthiness of AI/ML systems. I am specifically interested in investigating functional differences between machine learning models and biological systems (e.g., the human visual system), and how these differences play a role in the robustness and resiliency of models. My current work aims to explain and bridge this gap to build safer, better aligned machine learning models.

Beyond AI trustworthiness, I have also collaborated on work broadly within security and privacy, all aimed at identifying and protecting against threats to user security and privacy across a variety of systems.

Feel free to reach out if you want to know more!

Interests
  • Trustworthy Machine Learning/AI
  • Security and Privacy
  • Machine Learning
Education
  • Ph.D. in Computer Sciences, 2026 (expected)

    University of Wisconsin-Madison

  • B.S. in Biomedical Engineering, 2020

    Pennsylvania State University

Recent Publications

(2024). Err on the Side of Texture: Texture Bias on Real Data. In submission.

(2024). On Synthetic Texture Datasets: Challenges, Creation, and Curation. In submission.

(2024). Explorations in Texture Learning. 12th International Conference on Learning Representations, Tiny Papers Track (ICLR).

(2024). Is Memorization Actually Necessary for Generalization? In submission.

(2023). The Space of Adversarial Strategies. In USENIX Security ’23.


Recent & Upcoming Talks

Adversarial Attacks
SecureAI is a cybersecurity and privacy training program designed to equip AI professionals and researchers with the knowledge and skills to build AI systems that are technically sound and secure. In this talk, I provide an introduction to adversarial examples and guidelines for robustness evaluations in practical systems.
AI’s Impact on the Future of Work
The UW-Madison School of Computer, Information, and Data Sciences is hosting an event at Microsoft in Chicago in partnership with the School of Business. The event features a panel discussion on the future of work, particularly for those working in tech, in light of advances in AI. I am honored to be one of the speakers on the panel!
Machine Learning in Security
This talk was a guest lecture for CS 642: Introduction to Information Security. In this lecture, I presented an introduction to machine learning techniques and their applications in security domains.
The Space of Adversarial Strategies @ USENIX Security 2023
The conference presentation of our accepted work, The Space of Adversarial Strategies, at USENIX Security in Anaheim, CA.
Trust, Expectations, and Failures in AI @ The UW Now
I had the incredible opportunity of being featured on The UW Now as a panelist for the episode “Artificial Intelligence - Facts vs. Fiction”. Here, I discussed the ways AI can fail, what it means to have trustworthy AI, and what we, as a society and as experts in our own fields, can do to help build a future of trustworthy AI. This talk is intended for a general audience; no computer science or security background required :)

Professional Service

International Conference on Learning Representations (ICLR)

Area Chair

2023 (Tiny Papers Track)

Program Committee

2023 (Tiny Papers Track), 2024

Conference on Neural Information Processing Systems (NeurIPS)

Program Committee

2023

IEEE Symposium on Security and Privacy (IEEE S&P)

Program Committee

2024, 2024 (Posters), 2025

External Reviewer

2023

IEEE Conference on Secure and Trustworthy Machine Learning (SaTML)

Program Committee

2023, 2024

USENIX Security Symposium

Program Committee

2025

External Reviewer

2023

ACM Conference on Computer and Communications Security (CCS)

Program Committee

2024

External Reviewer

2022