*We will send the seminar link and password to registered participants.*
Seminar recording available here.
Abstract:
Deep Learning (DL) networks are the current great hope for artificial intelligence. They have achieved impressive feats, approaching or surpassing human performance on image recognition (including facial recognition) and language translation. They also fail in ways that raise equity and security concerns. The focus here is on so-called adversarial examples: inputs (images, texts, etc.) that a DL network normally handles correctly, but for which a minimal perturbation, such as changing a few pixels in an image, produces a wildly incorrect judgment. In evaluating how we ought to react to this challenge to DL's ability to perform perceptual tasks, a comparison is made to human susceptibility to perceptual illusions, and philosophical accounts of robustness are consulted.
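To make the phenomenon concrete, here is a minimal illustrative sketch (not the speaker's work) of how a tiny input perturbation can flip a classifier's output. It uses a toy linear model; the weights `w`, input `x`, and budget `eps` are made-up values chosen so the flip is easy to verify by hand. Real adversarial attacks such as FGSM apply the same idea, stepping against the gradient of the network's loss:

```python
import numpy as np

# Toy linear classifier: predicts +1 if w.x > 0, else -1.
# (Hypothetical weights chosen for illustration.)
w = np.array([1.0, -1.0, 0.5])

def predict(x):
    return 1 if np.dot(w, x) > 0 else -1

x = np.array([0.3, 0.2, 0.1])    # original input: w.x = 0.15, so predicted +1

# FGSM-style perturbation: move each feature a small step against the
# gradient of the score. For a linear model that gradient is just w.
eps = 0.1                         # max change per feature (L-infinity budget)
x_adv = x - eps * np.sign(w)      # every feature shifts by at most 0.1

print(predict(x))                 # 1
print(predict(x_adv))             # -1: w.x_adv = -0.1, prediction flips
```

Although no single feature changed by more than 0.1, the small shifts all push in the worst direction at once, which is why high-dimensional inputs like images are so vulnerable.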
About the speaker:
Catherine Stinson is Queen's National Scholar in Philosophical Implications of Artificial Intelligence and Assistant Professor in the School of Computing and the Department of Philosophy. Dr. Stinson holds a PhD in History & Philosophy of Science from the University of Pittsburgh and an MSc in Computer Science from the University of Toronto. They have published in philosophy of neuroscience, philosophy of psychiatry, philosophy of artificial intelligence, and tech policy. Current research concerns algorithmic bias in search and recommendation, debunking eugenic tech, how workplace diversity affects research, and data science for anti-racist advocacy.
Everyone welcome!
For more information about our seminars, please visit: https://www.sscqueens.org/events/seminar-series