researching the future of human and artificial cognition

a philosophy research lab at the university of toronto


news

> This September, our lab will host the Technophilosophy Soiree on AI companionship. This public event will feature a conversation between Michael Inzlicht, Anastasia Kuzminykh, Jocelyn Maclure, and Jelena Markovic. Registration will open soon. link ⇨
> Periscope is launching a new project on algorithmic bias in Canada. We are excited to welcome Dr. Miron Clay-Gilmore as a postdoctoral researcher on this project! More info on our project here: link ⇨
> Dr. Vold has joined the Editorial Board of the newly launched journal AI & Innovation (AI²). AI² is a peer-reviewed journal dedicated to rigorous, interdisciplinary analysis of the AI revolution, with a specific focus on groundbreaking innovation, technological advancement, and forward-thinking governance. link ⇨
> Periscope welcomes Dr. Daniel Harris, who will join our group as a postdoctoral researcher this fall! Dr. Harris will support work on AI alignment, AI safety, and long-term risks.
> We're excited to welcome two new doctoral students to Periscope this fall: Hongyu Chen and Renée Sirbu.

> Dr. Jessie Hall, a former postdoctoral researcher at Periscope, has been appointed as a postdoc at Carleton College. Congratulations, Dr. Hall! Read more here: link ⇨
> Periscope members write about the hidden risks of cognitive enhancement through brain-computer interfaces: link ⇨
> CBC News interviews Prof. Vold about the rise of fake AI accounts on Facebook. link ⇨
> Helena Gagnier, a recent MA graduate from IHPST and member of Periscope, has begun a new role in the TELUS Data & Trust Office, where she will contribute to important research on responsible AI development. Congratulations, Helena!
> New IAI article on how machines can help us to think differently (or better!) by Dr. Jessie Hall and Karina Vold. link ⇨
> CBC News covers the research of Rachel Katz on AI chatbots: "New AI apps promise mental health support at a student's fingertips. But can you trust a chatbot?" link ⇨


members

Dr. Karina Vold is an Assistant Professor at the Institute for the History and Philosophy of Science and Technology at the University of Toronto. She is a Research Lead at the U of T Schwartz Reisman Institute for Technology and Society, an AI2050 Early Career Fellow with the Schmidt Sciences Foundation, a Faculty Associate at the U of T Centre for Ethics, and an Associate Fellow at Cambridge's Leverhulme Centre for the Future of Intelligence.

Dr. Miron Clay-Gilmore is a postdoctoral researcher at the Centre for Ethics and the Schwartz Reisman Institute for Technology and Society. He is the first Black philosopher to earn a Ph.D. from the University of Edinburgh. His research examines the racialized applications of artificial intelligence, big data, and predictive policing within broader regimes of counterinsurgency and state violence.

Dr. Daniel Harris is a postdoctoral researcher at Periscope and at the Schwartz Reisman Institute for Technology and Society. Daniel holds a Ph.D. in Philosophy (McGill University), an MPhil Studies in Philosophy (King's College London), an M.A. in Theory, Culture & Politics (Trent University), and a B.A. in Philosophy & Information Systems (Trent University). His research explores topics in AI safety, catastrophic risks, human-AI interaction, and game theory.

Rachel Katz is a fourth-year PhD student at the University of Toronto's Institute for the History and Philosophy of Science and Technology (IHPST) and a graduate fellow at the Schwartz Reisman Institute for Technology and Society. She works mainly in bioethics, along with the philosophy of medicine and psychiatry. She has further research interests in AI ethics and ethical issues in crowdfunding.

Lintao Dong is a third-year PhD candidate at the Institute for the History & Philosophy of Science & Technology (University of Toronto). His research interests lie mainly in the ethical and legal issues surrounding technology. He is currently working with Prof. Vold on technology and interference with individual choices.

Kye Palider is pursuing his Ph.D. at IHPST. His research focuses on the history and philosophy of neural networks, especially where they intersect with physics and neuroscience, and particularly through the lens of connectionist science. He works as a research assistant on Prof. Vold's projects on epistemic uncertainty in AI systems and the role of AI in scientific discovery.

Hongyu Chen is currently pursuing a PhD at the IHPST. He researches philosophical questions surrounding the nature of mind and life, particularly where consciousness, embodiment, and agency are concerned. He is also interested in how developments in AI can help clarify the nature of cognition across diverse life forms and systems.

Zhang Jiamin is a second-year PhD student at Shanxi University and a visiting student at the Institute for the History and Philosophy of Science and Technology, University of Toronto. Her main research interests are virtual reality technology, the philosophy of artificial intelligence, and human enhancement technology.

Xu Zhang is a fourth-year PhD student at Southeast University and a visiting student at the Institute for the History and Philosophy of Science and Technology at the University of Toronto. Her main research areas are the philosophy of privacy and the philosophy of technology. Her recent research interests include cognitive issues in digital privacy and privacy design in big data.

Pooyan Ehsani is currently pursuing a Master's degree at IHPST. His main areas of interest are AI ethics and ethical design. With experience as a machine learning engineer, Pooyan aims to focus on the problem of human autonomy within AI systems.

Sabrina Singh is an undergraduate at the University of Toronto, double-majoring in Physics and Philosophy of Science. Her research explores neurotechnology advancement and its utility in addressing the widening gap between human capabilities and technological progress.

Emily Huynh is a senior undergraduate at the University of Toronto, double majoring in Molecular Genetics & Microbiology and Cell & Molecular Biology. Her research focuses on practical, solution-oriented approaches to algorithmic bias.

Ankesh Chandaria is a strategy consultant, researcher, and lawyer qualified in the U.S. and U.K. Before embarking on his professional career, he completed an LL.M. at UCLA. Ankesh will begin his MPhil in AI Ethics and Society at the University of Cambridge in Fall 2025. His interests include AI safety and governance, decision making, and the exploration of ethics and norms in the context of AI as a socio-technical whole.

Cheryl Lin is a senior undergraduate at the University of Toronto, majoring in computer science. Her interests are in long-term challenges around advanced AI systems, including the potential for artificial general intelligence, and the technical and ethical challenges that these technologies will bring with them.

Renée Andrea Sirbu is a Ph.D. student at IHPST. Renée received her B.Sc. in Human Biology, Bioethics, and Philosophy at UofT and her M.P.H. in Health Policy and Public Health Modeling from Yale, before spending two years as a predoctoral researcher at Yale's Digital Ethics Center. Her research explores the intersection of mortality and digital technology, focusing on human-computer interaction, brain-computer interfaces, and AI in clinical settings.

Ren Yulin is a PhD student in the Department of History of Science at Tsinghua University and is currently a visiting student at the University of Toronto. Her research focuses on AI ethics, specifically addressing ethical issues in autonomous vehicle technology and exploring the intersection of artificial intelligence, legal frameworks, and ethical decision-making in autonomous transportation systems.

Cypress Marchesseau has a background studying feminist epistemology and participatory research, and is increasingly interested in post-humanism. In particular, they are exploring the translation and development of more-than-human perspectives and their incorporation into the sciences.

lab alumni

Amelia Kush is a recent graduate of the University of Toronto's Masters in Philosophy of Science program. She is currently pursuing her Ph.D. in the Philosophy Department at McGill University. She also holds a BSc in Cognitive Science and Psychology from U of T. Her main areas of interest are enactive/embodied cognition, consciousness, artificial intelligence, and the ethics of technology.

Dr. Jessie Hall is a postdoctoral researcher and course instructor at Carleton College. She was previously a postdoctoral researcher at Periscope and the Schwartz Reisman Institute for Technology and Society. Dr. Hall received her BSc in physics and philosophy, with a focus on electromagnetism, and her Ph.D. on the metaphysics of computing from the IHPST at the University of Toronto.

Helena Gagnier is a recent MA graduate of the University of Toronto's Institute for the History and Philosophy of Science and Technology. She is currently working as a Mitacs Business Strategy Intern at Periscope and with the TELUS Data & Trust Office, focusing on AI ethics and responsible AI development.

Dr. Xinyuan Liao, a former visiting member of Periscope, is now an Assistant Professor of Philosophy at Wuhan University. Dr. Liao works in the philosophy of cognitive science, with particular expertise in extended cognition.


projects

AI safety and long-term risks

As AI systems become more autonomous and more integrated into critical sectors like healthcare, finance, and security, concerns arise about unintended consequences, including catastrophic and existential risks. This project focuses on longer-term ethical and safety risks that could emerge from future advanced AI systems, including conscious AI systems, agentic systems, artificial general intelligence (AGI), and powerful narrow AI systems.

outputs
> Staneva, M., Baret, A., Gobert, J., et al. (with Vold, K.). (2023). Assessing AI Capabilities with Education Tests. In Education Research and Innovation, AI and the Future of Skills: Methods and Evaluation of AI Capabilities, Vol. 2. OECD. link ⇨
> Vold, K. and D. Harris. (2021). How Does AI Pose an Existential Threat? Oxford Handbook of Digital Ethics, Oxford University Press, C. Veliz (Ed.). link ⇨
> Cave, S., Nyrup, R., Vold, K., and A. Weller. (2019). Motivations and Risks of Machine Ethics. Proceedings of the IEEE: Machine Ethics: The Design and Governance of Ethical AI and Autonomous Systems, 107(3): 562-574. link ⇨
> Shevlin, H., Vold, K., Crosby, M., and M. Halina. (2019). The Limits of Machine Intelligence. EMBO Reports: e49177. link ⇨
> Gómez, E. et al. (with Vold, K.). (2018). Assessing the Impact of Machine Intelligence on Human Behaviour: An Interdisciplinary Endeavour. Proceedings of the 1st Human Behaviour and Machine Intelligence Workshop, Barcelona, edited by the European Commission. link ⇨
> Price, H. and K. Vold. (2018). Living With AI. Research Horizons, Issue 35. link ⇨


AI in medicine and healthcare

AI is becoming increasingly prevalent in healthcare due to the complexity and growing volume of health and medical data. It's already being used in areas like diagnosis, treatment recommendations, therapy, organ donation and allocation, and patient engagement. Each area of application presents its own unique challenges. This work explores a variety of opportunities and ethical risks around the use of AI in medicine and healthcare.

outputs
> Crouch, M., Irvin, E., Tiong, M., Sriharan, A., Vold, K., and A. Jetha. Artificial Intelligence in the Workplace: A Living Systematic Review Protocol on Worker Health, Safety, and Well-Being Implications. link ⇨
> Vold, K., Orlandi, M., Hall, J., Huynh, E., Katz, R., and J. Gross. (2025). Ethical Considerations in Using Artificial Intelligence in Organ Transplantation. To appear with the University of Toronto Press. M. Bhat and A. Sidhu (Eds.).
> Starke, G., Gille, F., Termine, A., et al. (with Vold, K.). (2025). Finding Consensus on Trust in AI in Healthcare: Recommendations from a Panel of International Experts. Journal of Medical Internet Research. link ⇨
> Calvo, R., Deterding, S., Flick, C., Luetge, C., Powell, A., and K. Vold. (2023). Editorial: After Covid-19: Crises and Socio-Technical Change. IEEE Transactions on Technology and Society. link ⇨
> Vold, K. and J. Hernández-Orallo. (2022). AI Extenders and the Ethics of Mental Health. In Ethics of Artificial Intelligence in Brain and Mental Health, Springer's Advances in Neuroethics Book Series. link ⇨
> Ahmadpour, N., Ludden, G., Peters, D., and K. Vold. (2022). Editorial: Responsible Digital Health. Frontiers in Digital Health, Special Issue on Responsible Digital Health. link ⇨
> McCoy, L., Brenna, C., Chen, S., Vold, K., and S. Das. (2021). Believing in Black Boxes: Must Machine Learning in Healthcare be Explainable to be Evidence-Based? Journal of Clinical Epidemiology. link ⇨
> Peters, D., Vold, K., Robinson, D., and R. Calvo. (2020). Responsible AI: Two Frameworks for Ethical Design Practice. IEEE Transactions on Technology and Society, 1(1), 34-47. link ⇨

learning from machines

This project explores the epistemic opportunities of AI for human learning and knowledge discovery. The goal is to unlock the potential of AI-assisted discoveries—whether in science, medicine, or mathematics—in ways that are sensitive to human epistemic capacities and limitations across a variety of contexts. This project draws on work from a range of fields including philosophy, psychology, history, and AI research.

outputs
> Vold, K. (Forthcoming). Augmenting Military Decision Making with Artificial Intelligence. Cambridge Forum on AI: Law and Governance.
> Hall, J. and K. Vold. (2025). AI Can Uncover Humanity's Unknown Unknowns. IAI News. [link ⇨](https://iai.tv/articles/ai-can-uncover-humanitys-unknown-unknowns-auid-3083)
> University of Toronto Magazine features an article on Dr. Vold's AI2050 Schmidt Sciences project on 'Learning from Machines.' link ⇨
> Schmidt Sciences Foundation interviews Dr. Vold about her project on 'Learning from Machines.' link ⇨


algorithmic bias in Canada

There is widespread evidence of biases emerging in various stages of AI system development: from training data, a lack of robust testing, proxy measure choices, system fine-tuning, etc. Despite this evidence, AI tools are being rapidly adopted by both industry and governments in Canada, making the delivery of services both unfair and highly inefficient for particular groups of people, especially racial, gender, ethnic, and linguistic minorities, and people of sovereign nations. The goal of this project is to build knowledge on how algorithmic bias is affecting minority groups and to suggest policies to mitigate these effects. Find more information on our project website: link ⇨

outputs
> Robinson, D. and K. Vold. (2018). Responsibly Deploying AI in the Immigration System. Policy Options. link ⇨


cognitive technologies

Cognitive technologies include emerging tools like virtual reality headsets, brain-computer interfaces, and other smart wearables that interact with or record brain activity. They also include AI-driven tools and applications, such as classifiers, predictors, and generative models, that are designed to assist humans in completing cognitive tasks. These devices hold the potential to extend human cognition and improve our well-being; however, they also introduce new ethical dilemmas and philosophical concerns by challenging some of our traditionally held conceptual boundaries.

outputs
> Vold, K., Hall, J., and A. Kush. (2024). The Hidden Risks of Neuralink. IAI News. link ⇨
> Vold, K. and X. Liao. (2024). Neuroprosthetics, Extended Cognition, and the Problem of Ownership. In Neuro-ProsthEthics: Ethical Implications of Applied Situated Cognition. Springer. B. Beck, O. Friedrich, and J. Heinrichs (Eds.): 1-20. link ⇨
> Vold, K. (2024). Human-AI Cognitive Teaming: Using AI to Support State-Level Decision Making on the Resort to Force. Australian Journal of International Affairs, 78(2): 229-236. link ⇨
> Schellaert, W., Burden, J., Vold, K., Martinez-Plumed, F., Casares, P., Loe, B. S., Reichart, R., Ó hÉigeartaigh, S., Korhonen, A., and J. Hernández-Orallo. (2023). Your Prompt is My Command: Assessing the Human-Centred Generality of Multi-Modal Models. Journal of AI Research, 77, pp. 377-389. link ⇨
> Vold, K. (2021). Can Consciousness Extend? Philosophical Topics, 48(1), pp. 243-264. link ⇨
> Vold, K. and D. Schlimm. (2020). Extended Mathematical Cognition: External Representations with Non-Derived Content. Synthese, 197(9), 3757-3777. link ⇨
> Hernández-Orallo, J. and K. Vold. (2019). AI Extenders: The Ethical and Societal Implications of Humans Cognitively Extended by AI. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society. link ⇨


lab sponsors


contact

We wish to acknowledge the land on which the University of Toronto operates. For thousands of years it has been the traditional land of the Huron-Wendat, the Seneca, and the Mississaugas of the Credit. Today, it is still the home to many Indigenous people from across Turtle Island and we are grateful to have the opportunity to convene our lab on this treasured land.