Tentative Important Dates

[Timeline figure (tentative)]

Workshop Schedule
Event                                    Start    End
Opening Remarks                          8:30     8:40
Challenge Session                        8:40     9:00
Invited Talk #1: Prof. Chao Shen         9:00     9:20
Invited Talk #2: Prof. Florian Tramer    9:20     9:40
Invited Talk #3: Prof. Alfred Chen       9:40     10:00
Invited Talk #4: Prof. Bo Han            10:00    10:20
Invited Talk #5: Prof. Jing Shao         10:20    10:40
Invited Talk #6: Prof. Vishal M. Patel   10:40    11:00
Invited Talk #7: Prof. Chaowei Xiao      11:00    11:20
Poster Session #1                        11:20    12:00
Lunch                                    12:00    13:00

Call for Papers

Foundation models (FMs) have demonstrated powerful generative capabilities, revolutionizing a wide range of applications across domains, including computer vision. Building on this success, X-domain-specific foundation models (XFMs; e.g., autonomous-driving FMs, medical FMs) further improve performance on specialized tasks by training on curated datasets that emphasize domain-specific knowledge and by making task-specific architectural modifications. Alongside these benefits, the growing reliance on XFMs has exposed their vulnerability to adversarial attacks.

The workshop will bring together researchers and practitioners from the computer vision and machine learning communities to explore the latest advances and challenges in adversarial machine learning, with a focus on the robustness of XFMs. We welcome research contributions on the following (but not limited to) topics:
  • Robustness of X-domain-specific foundation models
  • Adversarial attacks on computer vision tasks (an illustrative sketch follows this list)
  • Improving the robustness of deep learning systems
  • Interpreting and understanding model robustness, especially for foundation models
  • Adversarial attacks for social good
  • Datasets and benchmarks for evaluating foundation model robustness
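
For orientation, the sketch below shows the classic one-step FGSM attack (Goodfellow et al., 2015), a minimal instance of the adversarial perturbations these topics concern. It is illustrative only and not tied to any workshop submission; the model, inputs, labels, and perturbation budget eps are hypothetical placeholders.

    # Minimal FGSM sketch (PyTorch). Illustrative placeholders only:
    # `model` is any image classifier, `x` a batch of images in [0, 1],
    # `y` the true labels, `eps` the L-infinity perturbation budget.
    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, eps=8 / 255):
        # Take one gradient step on the *input* in the direction that
        # increases the classification loss, then clip to valid pixels.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        x_adv = x + eps * x.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()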
Format: Submitted papers (PDF) must be prepared with the CVPR 2025 Author Kit (LaTeX/Word Zip file), be anonymized, and follow the CVPR 2025 author instructions. The workshop considers two types of submissions: (1) Long Paper: up to 8 pages, excluding references; (2) Extended Abstract: up to 4 pages, including references. Accepted papers have the option of inclusion in the CVF and IEEE Xplore proceedings.

Submission Site: https://openreview.net/group?id=thecvf.com/CVPR/2025/Workshop/Advml
Submission Deadline (Paper and Supplementary Material), extended: March 25, 2025, 11:59 PM (UTC±0)


Accepted Long Papers

  • Trustworthy Multi-UAV Collaboration: A Self-Supervised Framework for Explainable and Adversarially Robust Decision-Making [Paper]
    Yuwei Chen (Aviation Industry Development Research Center of China); Shiyong Chu (Aviation Industry Development Research Center of China)
  • Defending Against Frequency-Based Attacks with Diffusion Models [Paper]
    Fatemeh Amerehi (University of Limerick); Patrick Healy (University of Limerick)
  • Attacking Attention of Foundation Models Disrupts Downstream Tasks [Paper]
    Hondamunige Prasanna Silva (University of Florence); Federico Becattini (University of Siena); Lorenzo Seidenari (University of Florence)
  • Towards Evaluating the Robustness of Visual State Space Models [Paper]
    Hashmat Shadab Malik (Mohamed Bin Zayed University of AI); Fahad Shamshad (Mohamed Bin Zayed University of AI); Muzammal Naseer (Khalifa University); Karthik Nandakumar (Michigan State University); Fahad Shahbaz Khan (Mohamed Bin Zayed University of AI); Salman Khan (Mohamed Bin Zayed University of AI)
  • FullCycle: Full Stage Adversarial Attack For Reinforcement Learning Robustness Evaluation [Paper]
    Zhenshu Ma (Beihang University); Xuan Cai (Beihang University); Changhang Tian (Beihang University); Yuqi Fan (Beihang University); Kemou Jiang (Beihang University); Gangfu Liu (Beihang University); Xuesong Bai (Beihang University); Aoyong Li (Beihang University); Yilong Ren (Beihang University); Haiyang Yu (Beihang University)
  • Human Aligned Compression for Robust Models [Paper]
    Samuel Räber (ETH Zürich); Andreas Plesner (ETH Zürich); Till Aczel (ETH Zürich); Roger Wattenhofer (ETH Zürich)
  • Probing Vulnerabilities of Vision-LiDAR Based Autonomous Driving Systems [Paper]
    Siwei Yang (University of California, Santa Cruz); Zeyu Wang (University of California, Santa Cruz); Diego Ortiz (University of California, Santa Cruz); Luis Burbano (University of California, Santa Cruz); Murat Kantarcioglu (Virginia Tech); Alvaro A. Cardenas (University of California, Santa Cruz); Cihang Xie (University of California, Santa Cruz)
  • Task-Agnostic Attacks Against Vision Foundation Models [Paper]
    Brian Pulfer (University of Geneva); Yury Belousov (University of Geneva); Vitaliy Kinakh (University of Geneva); Teddy Furon (University of Rennes); Slava Voloshynovskiy (University of Geneva)
  • EL-Attack: Explicit and Latent Space Hybrid Optimization based General and Effective Attack for Autonomous Driving Trajectory Prediction [Paper]
    Xuesong Bai (Beihang University); Changhang Tian (State Key Laboratory of Intelligent Transportation Systems); Wei Xia (State Key Laboratory of Intelligent Transportation Systems); Zhenshu Ma (Beihang University); Haiyang Yu (Beihang University); Yilong Ren (Beihang University)
  • VidModEx: Interpretable and Efficient Black Box Model Extraction for High-Dimensional Spaces [Paper]
    Somnath Sendhil Kumar (Microsoft Research); Yuvaraj Govindarajulu (AIShield, Bosch Global Software Technologies); Pavan Kulkarni (AIShield, Bosch Global Software Technologies); Manojkumar Parmar (AIShield, Bosch Global Software Technologies)
  • Attention-Aware Temporal Adversarial Shadows on Traffic Sign Sequences [Paper]
    Pedram MohajerAnsari (Clemson University); Amir Salarpour (Clemson University); David Fernandez (Clemson University); Cigdem Kokenoz (Clemson University); Bing Li (Clemson University); Mert D. Pesé (Clemson University)
  • One Noise to Fool Them All: Universal Adversarial Defenses Against Image Editing [Paper]
    Shorya Singhal (Data Science Group, IIT Roorkee); Parth Badgujar (Data Science Group, IIT Roorkee); Devansh Bhardwaj (Data Science Group, IIT Roorkee)

Accepted Extended Abstracts

  • On the Safety Challenges of Vision-Language Models in Autonomous Driving [Paper]
    Yang Qu (Beihang University); Lu Wang (Beihang University)
  • Camouflage Attack on Vision-Language Models for Autonomous Driving [Paper]
    Dehong Kong (Sun Yat-sen University); Sifan Yu (Sun Yat-sen University); Linchao Zhang (China Electronics Technology Group Corporation); Shirui Luo (China Electronics Technology Group Corporation); Siying Zhu (China Electronics Technology Group Corporation); Yanzhao Su (Rocket Force University of Engineering); WenQi Ren (Sun Yat-sen University)
  • Improvement of Selecting and Poisoning Data in Copyright Infringement Attack [Paper]
    Feiyu Yang (Nanyang Technological University)
  • Multi-Task Vision Experts for Brain Captioning [Paper]
    Weihao Xia (University of Cambridge); Cengiz Oztireli (University of Cambridge)

Sponsors

[Sponsor logos]