researching the future of human and artificial cognition

a philosophy research lab at the university of toronto


news

> CBC News covers the research of Rachel Katz on AI chatbots, "New AI apps promise mental health support at a student's fingertips. But can you trust a chatbot?" link ⇨
> In September, our lab hosted a public event on AI safety featuring a conversation between Roger Grosse, Sheila McIlraith, and Sedef Kocak. link ⇨
> Dr. Vold will be a keynote speaker at the upcoming Association of Certified Fraud Examiners (ACFE) annual conference in Ottawa. ACFE is the world's largest anti-fraud organization, and Dr. Vold will speak about how to protect yourself from deepfakes, misinformation, and other risks of generative AI systems. link ⇨
> Members of the Periscope Lab, Rachel Katz and Emily Huynh, have played an integral role in the innovative U of T course, "Exploring Values in Science and Engineering Practice", which aims to bridge the gap between engineering and non-engineering students. link ⇨

> Chinese Social Sciences Today, an academic newspaper affiliated with the Chinese Academy of Social Sciences (CASS), interviews Dr. Vold about her research on the philosophy of virtual reality. link ⇨
> University of Toronto Magazine features an article on Dr. Vold's AI2050 Schmidt Sciences project on 'Learning from Machines.' link ⇨
> BlueSci, the Cambridge University science magazine, interviews Dr. Jessie Hall about virtual reality, materialism, and consciousness. link ⇨
> Dr. Xinyuan Liao, a former visiting member of the Periscope Lab, has been appointed Assistant Professor of Philosophy at Wuhan University. Congratulations, Dr. Liao!
> The National Post article, "A chatbot is no good in a crisis. Why you can't trust AI psychotherapy," covers Rachel Katz's doctoral work on ethical concerns around using AI in psychotherapy. link ⇨


members

Dr. Karina Vold is an Assistant Professor at the Institute for the History and Philosophy of Science and Technology at the University of Toronto. She is a Research Lead at the U of T Schwartz Reisman Institute for Technology and Society, an AI2050 Early Career Fellow with the Schmidt Sciences Foundation, a Faculty Associate at the U of T Centre for Ethics, and an Associate Fellow at Cambridge's Leverhulme Centre for the Future of Intelligence.

Dr. Jessie Hall is a postdoctoral researcher and course instructor working on the Learning from Machines project. She received her BSc in physics and philosophy, with a focus on electromagnetism, and her PhD on the metaphysics of computing from the IHPST at the University of Toronto. She was previously a graduate fellow at the Schwartz Reisman Institute for Technology and Society.

Lintao Dong is a third-year PhD candidate at the Institute for the History and Philosophy of Science and Technology (University of Toronto). His research interests center on the ethical and legal issues surrounding technology. He is currently working with Prof. Vold on technology and its interference with individual choices.

Pooyan Ehsani is currently pursuing a Master's degree at IHPST. His main areas of interest are AI ethics, ethical design, and the ethical implications of AI. With experience as a machine learning engineer, Pooyan aims to focus on the problem of human autonomy within AI systems.

Helena Gagnier is a recent MA graduate from the University of Toronto’s Institute for the History and Philosophy of Science and Technology. She focuses on the philosophical implications of emerging technologies, with particular emphasis on AI ethics, AI policy development, and data governance.

Emily Huynh is an undergraduate Life Sciences student at the University of Toronto double majoring in Molecular Genetics & Microbiology and Cell & Molecular Biology. Her research focuses on practical, solution-oriented approaches to algorithmic bias.

Rachel Katz is a fourth-year PhD student at the University of Toronto's Institute for the History and Philosophy of Science and Technology (IHPST). She mainly works in bioethics, along with the philosophy of medicine and psychiatry. She has further research interests in AI ethics and ethical issues in crowdfunding.

Amelia Kush is a recent graduate from the University of Toronto's Masters in Philosophy of Science program. She also holds a BSc in Cognitive Science and Psychology from U of T. Her main areas of interest are enactive/embodied cognition, consciousness, artificial intelligence, and the ethics of technology.

Cheryl Lin is a senior undergraduate majoring in computer science. Her interests are in long-term challenges around advanced AI systems, including the potential for artificial general intelligence and the technical and ethical challenges that these technologies will bring with them.

Zhang Jiamin is a second-year PhD student at Shanxi University and a visiting student at the Institute for the History and Philosophy of Science and Technology, University of Toronto. Her main research interests are virtual reality technology, the philosophy of artificial intelligence, and human enhancement technology.

Xu Zhang is a fourth-year PhD student at Southeast University and a visiting student at the Institute for the History and Philosophy of Science and Technology at the University of Toronto. Her main research areas are the philosophy of privacy and the philosophy of technology. Her recent research interests include cognitive issues in digital privacy and privacy design in big data.

Ankesh Chandaria is a strategy consultant, researcher, and lawyer qualified in the U.S. and U.K. Before embarking on his professional career, he completed an LL.M. at UCLA. He is currently pursuing further studies at the University of Toronto. Ankesh's research interests include AI safety and governance, decision-making, and the exploration of ethics and norms in the context of AI as a socio-technical whole.


projects

AI safety and long-term risks

As AI systems become more autonomous and more integrated into critical sectors like healthcare, finance, and security, concerns arise about unintended consequences, including catastrophic and existential risks. This project focuses on longer-term ethical and safety risks that could emerge from future advanced AI systems, including conscious AI systems, agentic systems, artificial general intelligence (AGI), and powerful narrow AI systems.

outputs
> Staneva, M., Baret, A., Gobert, J., et al. (with Vold, K.). (2023). Assessing AI Capabilities with Education Tests. In Education Research and Innovation, AI and the Future of Skills: Methods and Evaluation of AI Capabilities, Vol. 2. OECD. link ⇨
> Vold, K. and D. Harris. (2021). How Does AI Pose an Existential Threat? In Oxford Handbook of Digital Ethics, Oxford University Press, C. Véliz (Ed.). link ⇨
> Cave, S., Nyrup, R., Vold, K., and A. Weller. (2019). Motivations and Risks of Machine Ethics. Proceedings of the IEEE: Machine Ethics: The Design and Governance of Ethical AI and Autonomous Systems. 107(3): 562-574. link ⇨
> Shevlin, H., Vold, K., Crosby, M., and M. Halina. (2019). The Limits of Machine Intelligence. EMBO Reports: e49177. link ⇨
> Gómez, E. et al. (with Vold, K.). (2018). Assessing the Impact of Machine Intelligence on Human Behaviour: An Interdisciplinary Endeavour. Proceedings of the 1st Human Behaviour and Machine Intelligence Workshop, Barcelona, edited by the European Commission. link ⇨
> Price, H. and K. Vold. (2018). Living With AI. Research Horizons, Issue 35. link ⇨


AI in medicine and healthcare

AI is becoming increasingly prevalent in healthcare due to the complexity and growing volume of health and medical data. It's already being used in areas like diagnosis, treatment recommendations, therapy, organ donation and allocation, and patient engagement. Each area of application presents its own unique challenges. This work explores a variety of opportunities and ethical risks around the use of AI in medicine and healthcare.

outputs
> Calvo, R., Deterding, S., Flick, C., Luetge, C., Powell, A., and Vold, K. (2023). "Editorial: After Covid-19: Crises and Socio-Technical Change", IEEE Transactions on Technology and Society. link ⇨
> Vold, K. and J. Hernández-Orallo. (2022). AI Extenders and the Ethics of Mental Health. In Ethics of Artificial Intelligence in Brain and Mental Health, Springer's Advances in Neuroethics Book Series. link ⇨
> Ahmadpour, N., Ludden, G., Peters, D., Vold, K. (2022). "Editorial: Responsible Digital Health" in Frontiers in Digital Health, Special Issue on Responsible Digital Health. link ⇨
> McCoy, L., C. Brenna, S. Chen, K. Vold, and S. Das. (2021). Believing in Black Boxes: Must Machine Learning in Healthcare be Explainable to be Evidence-Based? Journal of Clinical Epidemiology. link ⇨
> Peters, D., Vold, K., Robinson, D., and R. Calvo. (2020). Responsible AI – Two Frameworks for Ethical Design Practice. IEEE Transactions on Technology and Society 1(1). link ⇨
> Calvo, R., Peters, D., Vold, K., and R. Ryan. (2020). Supporting Human Autonomy in AI Systems: A Framework for Ethical Enquiry. Philosophical Studies Series. C. Burr and L. Floridi (Eds.). link ⇨

cognitive technologies

Cognitive technologies include emerging tools like virtual reality headsets, brain-computer interfaces, and other smart wearables that interact with or record brain activity. Also included are AI-driven tools and applications, such as classifiers, predictors, and generative models (e.g., ChatGPT and DALL-E), that are designed to assist humans in completing cognitive tasks. These devices hold the potential to extend human cognition and improve our well-being; however, they also introduce new ethical dilemmas and philosophical concerns by challenging some of our traditionally held conceptual boundaries.

outputs
> Vold, K. and Liao, X. (2024). Neuroprosthetics, Extended Cognition, and the Problem of Ownership. In Neuro-ProsthEthics – Ethical Implications of Applied Situated Cognition. Springer. B. Beck, O. Friedrich, and J. Heinrichs (Eds.): 1-20. link ⇨
> Vold, K. (2024). Human-AI Cognitive Teaming: Using AI to Support State-level Decision Making on the Resort to Force. Australian Journal of International Affairs 78 (2): 229-236. link ⇨
> Schellaert, W., Burden, J., Vold, K., Martinez-Plumed, F., Casares, P., Loe, B. S., Reichart, R., Ó hÉigeartaigh, S., Korhonen, A. and J. Hernández-Orallo. (2023). Your Prompt is My Command: Assessing the Human-Centred Generality of Multi-Modal Models. Journal of AI Research, 77, pp. 377-389. link ⇨
> Vold, K. (2021). Can Consciousness Extend? Philosophical Topics, 48(1), pp. 243-264. link ⇨
> Vold, K. and D. Schlimm. (2020). Extended Mathematical Cognition: External Representations with Non-derived Content. Synthese, 197(9), 3757-3777. link ⇨
> Hernández-Orallo, J. and K. Vold. (2019). AI Extenders: The Ethical and Societal Implications of Humans Cognitively Extended by AI. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society. link ⇨
> Vold, K. (2018). Overcoming Deadlock: Scientific and Ethical Reasons to Embrace the Extended Mind. Philosophy and Society. 29(4): 471-646. link ⇨


learning from machines

This project explores the epistemic opportunities of AI for human learning and knowledge discovery. The goal is to unlock the potential of AI-assisted discoveries—be they in science, medicine, or mathematics, for example—in ways that are sensitive to human epistemic capacities and limitations across a variety of contexts. This project draws on work from a range of fields including philosophy, psychology, history, and AI research.

outputs
> University of Toronto Magazine features an article on Dr. Vold's AI2050 Schmidt Sciences project on 'Learning from Machines.' link ⇨
> Schmidt Sciences Foundation interviews Dr. Vold about her project on 'Learning from Machines.' link ⇨


lab sponsors


contact

We wish to acknowledge the land on which the University of Toronto operates. For thousands of years it has been the traditional land of the Huron-Wendat, the Seneca, and the Mississaugas of the Credit. Today, it is still home to many Indigenous people from across Turtle Island, and we are grateful to have the opportunity to convene our lab on this land.