Certain events in life are of such seismic proportion that they remind us of our fragility not only as human beings, but as an entire human species. I first got that feeling in the chaotic aftermath of 9/11 when I feared possible nuclear retaliation might put an end to us all. I felt it again when the UN Report of the Intergovernmental Panel on Climate Change warned of the narrowest margin of time remaining if we hope to save our planet from destruction. And I felt it more recently in the face of Russia’s horrific invasion of Ukraine, giving rise to scary prospects of a possible third world war.
The one other time I felt as fatalistic about our future as a human race was after reading Yuval Noah Harari’s book, Homo Deus. As a futurist, Harari warns of the need to brace ourselves for the real revolution when information technology meets biotechnology and forever changes our human species as we know it. The book left me with many complex, existential questions about the kind of future we are shaping.
At what point will artificial intelligence cross the boundary between predicting human behaviour with near-perfect accuracy and nudging our behaviour in ways that jeopardize our sense of human agency and our capacity to decide for ourselves what is best?
Will algorithmic predictions of who is more or less likely to succeed in school dictate into which education streams we place our children or jobs they are likely to get, undoing all the strides made over decades to provide universal education and equal opportunity?
Will our free and democratic elections be nudged by micro-targeted messages based on algorithmic inferences of our political leanings, jeopardizing our hard-earned right to vote according to our conscience and in the privacy of the ballot box?
Will algorithms and neuro-sensors designed to predict who is most likely to commit (or recommit) crimes based on certain socio-demographic factors create a self-fulfilling prophecy, robbing us of our freedom to defy whatever odds may be against us to become the person we want to be?
Will probabilities about our mortality and morbidity based on our genomic make-up, not only as it is — but as it could now be edited and re-engineered — impact our right not to know what awaits us and take away our ability to live our lives fully and freely, unencumbered by a sense of pre-ordained fate?
When I speak about such things with my children, they don’t seem to mind getting served ads that are relevant to them or movie suggestions they are more likely to enjoy or personalized music lists of their ‘genre.’ But I can’t help but worry about things more insidious than that. What about their newsfeeds? Are they getting access to well-rounded, impartial information they need to know about important news events happening around them? Or is their view of the world subtly shaped by the curated articles they see pop up on their social media platforms? What sliver of truth are they being served online that’s different from yours and mine?
Deep down, most of us know that life as we know it will never be the same. Through the rapid adoption of information technologies, combined with biotechnologies, we have created a legacy we have yet to fully understand. One that will challenge our right to privacy like never before, and in some ways, our right to be human. Like climate change, these are not remote issues that can be addressed over the coming decades; we need to recognize their immediacy and be working towards solutions now.
There have been many calls to regulate the design and use of algorithms in recent years, and Canada has responded by proposing the Artificial Intelligence and Data Act (AIDA) as part of a larger suite of data protection reforms contained in Bill C-27. If passed, AIDA would regulate certain activities related to artificial intelligence systems and prohibit conduct that may seriously harm individuals or their interests. While many of the details have yet to be set out in regulations, AIDA introduces a series of obligations and responsibilities, including the requirement to assess the impact of an artificial intelligence system and to have in place measures to identify and mitigate the risks of harm or biased output that could result from use of the system. AIDA also introduces the requirement to monitor compliance with those measures and includes important transparency and record-keeping obligations. And AIDA creates an entirely new accountability regime with ministerial oversight, audit, and significant monetary penalties for those found to have violated the law, under a scheme that has yet to be defined by regulation.
I leave it to others to comment on the federal bill, but I will make a few observations about its implications for Ontarians. Most importantly, AIDA would only apply to federally regulated businesses engaged in international or interprovincial trade and commerce. It would not apply to provincial governments, public institutions, or any provincially regulated business operating within Ontario, leaving wide-open spaces in our own regulatory landscape.
The canvas is set for Canada’s largest province to fill in the gaps and do what’s right by Ontarians. Ontario has the opportunity to consider a harmonized approach that would govern the use of artificial intelligence systems and the collection of data that feeds them. The previous government started down this path as part of its Digital and Data Strategy by developing two very thoughtful discussion papers on which they consulted widely: Trustworthy Artificial Intelligence (AI) Framework and Modernizing Privacy in Ontario. As the recently elected government settles into its new mandate, it’s time to pick up the pen again and continue this critically important work, particularly now that we have a clearer picture of the federal horizon.
Ontario, as a major hub of AI innovation in the country, has a unique opportunity to lead in this area, including by:
- supporting and applying research that examines the effects of data profiling, social media exposure and algorithmic prediction on the healthy psychological development of individuals, especially children and youth
- defining harms more broadly than physical, psychological, property or economic harms to an individual, to also include group harms resulting from AI systems
- taking a broader human rights approach that goes beyond the federal constitutional powers limited to regulating commercial or criminal activity
- developing a more integrated, interoperable and coherent approach across Ontario’s public, private, and not-for-profit sectors, including in the areas of health and law enforcement, and
- grounding algorithmic impact assessments in a thoughtfully-articulated, principled framework that balances fundamental ethical values of autonomy, dignity and integrity of persons or groups, with broader societal interests and public good considerations.
As we prepare for an eventual regulatory regime to govern this space, we should not underestimate how difficult it will be to truly assess the impacts of artificial intelligence systems. Ontario must intensify its consultation efforts, particularly among those in marginalized groups and communities who stand to be most impacted by algorithmic decision-making. It must also increase its investments into developing the capacity required to carry out such multi-factorial assessments, including through interdisciplinary research into the ethical, legal, and social impacts of artificial intelligence systems, the foresight methodologies needed to anticipate and address these impacts, and the practical educational guidance required to support institutions and organizations in carrying out such assessments. And most importantly, Ontario must show exemplary leadership in governing its own use of artificial intelligence to enhance delivery of government services and programs within clear and transparent boundaries that Ontarians find socially and ethically acceptable.
Algorithmic systems are powerful tools of measurement, management, and optimization. But to what ends and for what social benefits is this power directed? It is more important than ever for us to understand and democratically determine the outcomes for which our personal information and digitized public life are being optimized.
The Haudenosaunee’s Seventh Generation Principle teaches us to think about more than current preoccupations and transposes us beyond the here and now. It reminds us of our transient presence on earth and our responsibility to ensure a sustainable future by reflecting on how our decisions will impact seven generations ahead. For many of us, long-term planning three to five years out can be hard enough, let alone thinking through the broader societal implications of our actions today on seven generations to come. But that is what we must learn to do, and do better, if we have any hope of preserving the world we live in and our ability to decide our fate as the human species that inhabits it.