Program

Invited Speakers

Noa Garcia

Homepage

Noa Garcia is an Associate Professor at the Institute for Advanced Co-Creation Studies, Osaka University, Japan. Originally from Barcelona, she moved to Japan in 2018, first as a postdoctoral researcher and then as a specially appointed assistant professor at the Institute for Datability Science. She completed her Ph.D. in multimodal retrieval and instance-level recognition at Aston University, United Kingdom, after earning her degree in Telecommunications Engineering from Universitat Politècnica de Catalunya, Barcelona. Her current research interests lie at the intersection of computer vision, natural language processing, fairness, and art. She is an active member of the computer vision community, having co-organized several workshops and international events, and regularly publishes at conferences such as CVPR, ICCV, ECCV, and NeurIPS.

Nikki Pope

Homepage

Nikki Pope (she/her) is the Head of AI and Legal Ethics at NVIDIA. In this role, Nikki works closely with business units and product teams to integrate NVIDIA's Trustworthy AI Principles into the company's AI models and systems. Nikki is an advocate for the wrongfully convicted: she co-authored "Pruno, Ramen, and a Side of Hope: Stories of Surviving Wrongful Conviction" and co-founded The Pruno Fund, a nonprofit organization that helps exonerated people after their release from prison. Although she was a bit of a nomad in the past, Nikki has spent nearly two decades living in Silicon Valley. She earned a BA in economics, an MBA in marketing and management, a JD, and an LLM in intellectual property law.

Schedule

The workshop will feature 11 oral presentations and 15 poster presentations. All times are in Hawaii Standard Time (HST).

08:00 AM - 08:15 AM Welcome & Introduction
08:15 AM - 09:00 AM Keynote 1 (Noa Garcia)
09:00 AM - 09:50 AM Paper Presentations Session 1
09:00 AM - 09:10 AM COOkeD: Ensemble-based OOD detection in the era of zero-shot CLIP
09:10 AM - 09:20 AM From Global to Local: Social Bias Transfer in CLIP
09:20 AM - 09:30 AM Enhancing Vision-Language Models for Zero-Shot Video Action Recognition via Visual-Textual Refinement and Improved Interpretability
09:30 AM - 09:40 AM Robust Experts: the Effect of Adversarial Training on CNNs with Sparse Mixture-of-Experts Layers
09:40 AM - 09:50 AM Explain with Confidence: Fusing Saliency Maps for Faithful and Interpretable Weakly-Supervised Model
10:00 AM - 10:15 AM Coffee Break
10:15 AM - 11:00 AM Keynote 2 (Nikki Pope)
11:00 AM - 12:00 PM Paper Presentations Session 2
11:00 AM - 11:10 AM GenAI Confessions: Black-box Membership Inference for Generative Image Models
11:10 AM - 11:20 AM Extracting Uncertainty Estimates from Mixtures of Experts for Semantic Segmentation
11:20 AM - 11:30 AM ImageNet-BG: A Toolkit and Dataset for Evaluating Vision Model Robustness Against Background Variations
11:30 AM - 11:40 AM Data Bias Mitigation and Evaluation Framework for Diffusion-based Generative Face Models
11:40 AM - 11:50 AM Are X-ray landmark detection models fair? A preliminary assessment and mitigation strategy
11:50 AM - 12:00 PM Is It Certainly a Deepfake? Reliability Analysis in Detection & Generation Ecosystem
12:00 PM Poster Session

Presentation Details