Schedule (CAP6412 – Spring 2021)
Date | Paper | Presenter | Notes |
---|---|---|---|
1/11/2021 | Lecture-1 [PDF of Presentation] [Video of Presentation] | Dr. Shah | (1) N. Akhtar & A. Mian (2018). Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey. IEEE Access, 6, 14410–14430. (2) Kui Ren, Tianhang Zheng, Zhan Qin & Xue Liu (2020). Adversarial Attacks and Defenses in Deep Learning. Engineering, 6(3), 346–360. |
1/13/2021 | Continuation of Lecture-1/Project Description [Video of Presentation] | Dr. Shah | |
1/18/2021 | Martin Luther King Jr. Day (No Class) | | |
1/20/21 | Lecture-2: Beyond Adversarial Attacks [PDF of Presentation] | Dr. Shah | (1) Shibani Santurkar, Andrew Ilyas, Dimitris Tsipras, Logan Engstrom, Brandon Tran, and Aleksander Madry, Image synthesis with a single (robust) classifier, Advances in Neural Information Processing Systems, 2019, pp. 1260–1271. (2) Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, and Aleksander Madry, Robustness may be at odds with accuracy, arXiv preprint arXiv:1805.12152 (2018). (3) Logan Engstrom, Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Brandon Tran, and Aleksander Madry, Learning perceptually-aligned representations via adversarial robustness, arXiv preprint arXiv:1906.00945 (2019). (4) Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, and Aleksander Madry, Adversarial examples are not bugs, they are features, arXiv preprint arXiv:1905.02175 (2019). |
1/25/21 | S.-M. Moosavi-Dezfooli, A. Fawzi, and P. Frossard, “DeepFool: a simple and accurate method to fool deep neural networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 2574–2582. (DeepFool) [PDF of Presentation] | Dr. Kardan (For); Marzieh (Against) | |
1/27/21 | C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus, “Intriguing properties of neural networks,” arXiv preprint arXiv:1312.6199, 2013. (First paper) [Video of Presentation] | James Beetham (For); Rahul Ambati (Against) | |
2/1/21 | I. J. Goodfellow, J. Shlens, and C. Szegedy, “Explaining and harnessing adversarial examples,” arXiv preprint arXiv:1412.6572, 2014. (FGSM; a minimal PyTorch sketch appears after the schedule) [PDF of Presentation] [Video of Presentation] | Mdnazmul Karim and Umar Khalid | |
2/3/21 | C. Xie, Z. Zhang, Y. Zhou, S. Bai, J. Wang, Z. Ren, and A. L. Yuille, “Improving transferability of adversarial examples with input diversity,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 2730–2739. (FGSM-based direction for black-box attacks led to this) [PDF of Presentation] [Video of Presentation] | Alina Ageichik and Manish Goyal | Projects I & II Groups: (1) Zach Schickler and Blake Wyatt. (2) Sunil Kumar and Tushar Sangam. (3) Qing Feng and Daoyang Song. (4) Yang Gao and Zhengtai Zhong. (5) Mdnazmul Karim and Umar Khalid. (6) Eric Watson and William Sawran. (7) Thomas Cummings and Lindsey Erickson. (8) Miles Crowe and Mauricio De Abreu. (9) Kyle Rebello and Aleenah Khan. (10) Quoc Le and Saeed Rahaman. (11) Akash Kumar and Zacchaeus Scheffer. (12) Daniel Silva and Kesar Tumkur Narasimhamurthy. (13) James Beetham and Rahul Ambati. (14) Nickolas Meeker and Sarinda Samarasinghe. (15) Alina Ageichik and Manish Goyal. Project III Groups: Group A: Thomas Cummings, Lindsey Erickson, Sunil Kumar, Tushar Sangam. Group B: Miles Crowe and Mauricio De Abreu. Group C: Aleenah Khan, Alina Ageichik, Kyle Rebello, Manish Goyal. Group D: Daoyang Song, Zhengtai Zhong, Qing Feng, Yang Gao. Group E: Kesar Tumkur Narasimhamurthy, Zachary Schickler, Blake Wyatt, Daniel Silva. Group F: Quoc Le, Saeed Rahaman, Eric Watson, William Sawran. Group G: Nicholas Meeker, Sarinda Samarasinghe, Umar Khalid, Nazmul. Group H: James Beetham, Rahul Ambati, Zacchaeus Scheffer, Akash Kumar. |
2/8/21 | N. Carlini and D. Wagner, “Towards Evaluating the Robustness of Neural Networks,” in 2017 IEEE Symposium on Security and Privacy (SP). IEEE, 2017, pp. 39–57. (C&W attack) [PDF] [Video of Presentation] | Akash Kumar and Zacchaeus Scheffer | |
2/10/21 | S.-M. Moosavi-Dezfooli, A. Fawzi, O. Fawzi, and P. Frossard, “Universal adversarial perturbations,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 1765–1773. (universal perturbations; a follow-up to DeepFool) [PDF of Presentation] [Video of Presentation] | Kyle Rebello and Aleenah Khan | |
2/15/21 | Madry, A., Makelov, A., Schmidt, L., Tsipras, D. and Vladu, A., 2017. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083. (PGD; a minimal PyTorch sketch appears after the schedule) [PDF of Presentation] [Video of Presentation] | Lindsey Erickson and Thomas Cummings | Project-1 Report Due |
2/17/21 | Project-3 Ideas Presentations. Some ideas: attack the YOLO object detector; attack the Faster R-CNN object detector; attack UCF-101 action recognition; attack semantic segmentation; attack DNN-based tracking; attack optical flow; attack face recognition (VGG Face); reverse-engineer an attack (given a perturbation, determine which attack generated it). | 5-minute presentation by each group | |
2/22/21 | Tramer, F., Carlini, N., Brendel, W. and Madry, A., 2020. On adaptive attacks to adversarial example defenses. In Advances in Neural Information Processing Systems, 2020. [PDF of Presentation] [Video of Presentation] | Zach Schickler and Blake Wyatt | |
2/24/21 | Wu, D., Wang, Y., Xia, S.T., Bailey, J. and Ma, X., 2020. Skip connections matter: On the transferability of adversarial examples generated with resnets. arXiv preprint arXiv:2002.05990. (one of the strongest transfer attacks) [PDF of Presentation] [Video of Presentation] | Mauricio De Abreu and Miles Crowe | |
3/1/21 | Xie, C., Wu, Y., Maaten, L.V.D., Yuille, A.L. and He, K., 2019. Feature denoising for improving adversarial robustness. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 501-509). (feature denoising) [PDF of Presentation] [Video of Presentation] | Quoc Le and Saeed Rahaman | |
3/3/21 | Yinpeng Dong, Tianyu Pang, Hang Su, and Jun Zhu. Evading defenses to transferable adversarial examples by translation-invariant attacks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4312–4321, 2019. [PDF] | Qing Feng and Daoyang Song | |
3/8/21 | Eric Wong, Leslie Rice, and J Zico Kolter. Fast is better than free: Revisiting adversarial training. In International Conference on Learning Representations, 2020. [PDF of Presentation] [Video of Presentation] | Kesar Tumkur Narasimhamurthy; Zachary Schickler; Blake Wyatt; Daniel Silva | Project II Report Due |
3/10/21 | Naseer, Muhammad Muzammal, Salman H. Khan, Muhammad Haris Khan, Fahad Shahbaz Khan, and Fatih Porikli. "Cross-domain transferability of adversarial perturbations." In Advances in Neural Information Processing Systems, pp. 12905-12915. 2019. [PDF of Presentation] [Video of Presentation] | Daniel Silva and Kesar Tumkur Narasimhamurthy | |
3/15/21 | Fong, R.C., Vedaldi, A.: Interpretable explanations of black boxes by meaningful perturbation. In: IEEE International Conference on Computer Vision (ICCV), pp. 3429–3437 (2017) [PDF of Presentation] [Video of Presentation] | Yang Gao and Zhengtai Zhong | |
3/17/21 | Lapuschkin, S., Wäldchen, S., Binder, A., Montavon, G., Samek, W., Müller, K.R.: Unmasking Clever Hans predictors and assessing what machines really learn. Nature Communications 10, 1096 (2019) [PDF of Presentation] [Video of Presentation] | Tushar Sangam and Sunil Kumar Patro | |
3/22/21 | Smilkov, D., Thorat, N., Kim, B., Viégas, F., Wattenberg, M.: SmoothGrad: removing noise by adding noise. arXiv preprint arXiv:1706.03825 (2017) [PDF of Presentation] [Video of Presentation] | Eric Watson and William Sawran | |
3/24/21 | Montavon, G., Binder, A., Lapuschkin, S., Samek, W., Müller, K.R.: Layer-wise relevance propagation: An overview. In: Explainable AI: Interpreting, Explaining and Visualizing Deep Learning. Lecture Notes in Computer Science 11700, pp. 193-209. Springer (2019) [PDF of Presentation] [Video of Presentation] | Nicholas Meeker and Sarinda Samarasinghe | |
3/29/21 | Jeya Vikranth Jeyakumar, Joseph Noor, Yu-Hsi Cheng, Luis Garcia, Mani Srivastava, How Can I Explain This to You? An Empirical Study of Deep Neural Network Explanation Methods, NeurIPS, 2020. [PDF of Presentation] [Video of Presentation] | Aleenah Khan, Alina Ageichik, Kyle Rebello, Manish Goyal | |
3/31/21 | Hu Zhang, Linchao Zhu, Yi Zhu, and Yi Yang, Motion-Excited Sampler: Video Adversarial Attack with Sparked Prior, ECCV 2020. [PDF of Presentation] [Video of Presentation] | James Beetham; Rahul Ambati; Zacchaeus Scheffer; Akash Kumar | |
4/5/21 | First Report Project-3 Due/Presentations | ||
4/7/21 | Minseon Kim, Jihoon Tack, Sung Ju Hwang, Adversarial Self-Supervised Contrastive Learning, NeurIPS, 2020. [PDF of Presentation] [Video of Presentation] | Thomas Cummings, Lindsey Erickson, Sunil Kumar, Tushar Sangam | |
4/12/21 | Spring Break | ||
4/14/21 | Spring Break | ||
4/19/21 | Chih-Hui Ho and Nuno Vasconcelos, Contrastive Learning with Adversarial Examples, NeurIPS, 2020. [PDF of Presentation] [Video of Presentation] | William Sawran and Eric Watson | |
4/21/21 | Ziyu Jiang, Tianlong Chen, Ting Chen, Zhangyang Wang, Robust Pre-Training by Adversarial Contrastive Learning, NeurIPS, 2020. [Video of Presentation] | Umar Khalid, Nicholas Meeker, Sarinda Samarasinghe, Nazmul | |
4/26/21 | Last Class | ||
4/28/21 | Final Exam Day |
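
For reference alongside the 2/1 FGSM paper, here is a minimal PyTorch sketch of the fast gradient sign method. The `fgsm_attack` helper, the stand-in linear classifier, and the epsilon value are illustrative assumptions for this page, not code from the paper.

```python
# A minimal FGSM sketch (Goodfellow et al., 2014); names and defaults are
# illustrative assumptions, not the paper's code.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """Single-step L-infinity attack: x_adv = x + epsilon * sign(grad_x loss)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # One signed-gradient step, then clamp back to the valid pixel range [0, 1].
    return (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

# Usage with a stand-in linear classifier (an assumption, for illustration only):
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
x = torch.rand(4, 3, 32, 32)    # a batch of images scaled to [0, 1]
y = torch.randint(0, 10, (4,))  # ground-truth labels
x_adv = fgsm_attack(model, x, y)
```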
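
Similarly, for the 2/15 PGD paper, a minimal sketch of the L-infinity projected gradient descent loop; the random start, step size, and step count below are common defaults assumed here, not the paper's exact settings.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=8 / 255, alpha=2 / 255, steps=10):
    """Multi-step L-infinity attack: random start inside the epsilon-ball,
    repeated signed-gradient ascent on the loss, projection after every step."""
    x_adv = (x + torch.empty_like(x).uniform_(-epsilon, epsilon)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        with torch.no_grad():
            x_adv = x_adv + alpha * x_adv.grad.sign()
            # Project back onto the epsilon-ball around the clean input x,
            # then back into the valid pixel range.
            x_adv = x + (x_adv - x).clamp(-epsilon, epsilon)
            x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
```

FGSM above is the one-step, no-random-start special case of this loop; the repeated projection is what makes PGD the standard first-order baseline used in adversarial training.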