Artificial Intelligence in the public sector: Building trust now and for the future

On January 24, 2024, the IPC had the pleasure of welcoming Ontarians to a public event in celebration of Data Privacy Day. The theme was Modern Government: Artificial Intelligence in the Public Sector. If you weren’t able to attend in person or online, the webcast is available here on our YouTube channel.

Here are a few highlights and key takeaways from the event.

Exhilarating promises of AI

AI technologies offer tremendous opportunities to improve public services. They can be used to fast-track the processing and delivery of government benefits, inform decision-making by policymakers, and improve communications and engagement with citizens.

There is also a growing use of AI technologies to enable earlier diagnosis of complex health conditions, improve public safety, and respond to global emergencies.

Simply put, AI has the potential to transform the world as we know it today.

A 2023 survey by Global Government Forum found that more than one in ten Canadian public servants say they have used artificial intelligence tools such as ChatGPT in their work. This figure is likely to keep rising throughout 2024 as these technologies rapidly advance and become more commonly integrated into day-to-day work.

Associated risks and potential harms

While the opportunities of AI are promising, we know there are risks. AI is not infallible and can lead to costly mistakes and unsafe outcomes for people.

Flawed algorithms can perpetuate biases embedded in the data used to train them, exacerbating the adverse impacts experienced by vulnerable and historically disadvantaged groups.

AI often relies on very large volumes of personal information or data sets that may not be properly protected and may not always be lawfully collected at source. The lack of transparency around the use of AI, and the inexplicability of the decisions that result, can lead to unfair outcomes for individuals and erode public trust.

Ever since generative AI tools, like ChatGPT, were publicly released and became readily accessible at mass scale, concerns have been growing about how these tools can be used to create and spread misinformation. Sometimes spoofs can be funny and quite benign. Other times, not so. Cyber thieves are already simulating CEO voices and using them to dupe employees into transferring money through increasingly sophisticated phishing attacks. “Deepfakes” are being used to mislead the public by fabricating false statements attributed to political leaders, undermining our democratic processes. Deepfakes can also wreak havoc on financial markets and gravely harm individuals by ruining their reputations or creating false sexual images of them.

Where the magic really happened

We were very privileged to discuss these opportunities and risks with a blue-ribbon panel of experts from fields including philosophy, history, political science, economics, law, social psychology, and technology. Each of them brought a unique perspective to the table based on their deep knowledge and experience.

But hearing them in discussion with one another is where the real magic happened! Their combined contributions were rich, insightful, and engaging, and helped advance the dialogue around responsible use of AI in the public sector.

What is your word cloud when it comes to AI?

As a conversation starter, we asked each panelist the following question:

Considering each of you spends much of your day thinking and talking about AI in your respective roles, if we were to create a word cloud above your head, what would be your top three words?

For Melissa Kittmer, Assistant Deputy Minister, Ministry of Public and Business Service Delivery, those were: trustworthy, transparent and accountable. She spoke about the Ontario government’s Trustworthy AI Framework that has been under development since 2021 as part of Ontario’s Data and Digital Strategy. This risk-based framework is grounded in three principles: 1) No AI in secret; 2) AI use that Ontarians can trust; and 3) AI that serves all the people of Ontario.

Melissa highlighted the importance of identifying and managing AI risks. These include potential discrimination and violation of human rights, privacy infringements, misuse of intellectual property, and spread of misinformation. She stressed the responsibility of public servants to mitigate those risks when leveraging the benefits of AI in their work.

Stephen Toope’s three words were: excitement, worry and complexity. As President & CEO of the Canadian Institute for Advanced Research (CIFAR), Stephen spoke about CIFAR’s pan-Canadian AI Strategy. The strategy was launched in 2017 to build AI research capacity in Canada while ensuring responsibility, safety, equity, and inclusion. Today, Canada has become a powerhouse in terms of talent. We rank first among G7 countries in the growth and concentration of AI talent, first in the world in the percentage increase of female AI talent, and first in AI publications per capita. Canada used to rank fourth on ‘AI readiness’ in terms of investment, innovation, and implementation, but we have dropped to fifth, partly due to our lack of access to supercomputing power. Whereas other countries are building major computing platforms, Canada lags in comparison. So, while Canada’s story is one of success, it is contingent success that requires continued investment in infrastructure and an improved ability to protect our intellectual property.

Stephen added that as we deepen our understanding of AI, we also need appropriate guardrails in place to address discrimination, among other risks. Although some have called for a global AI pact, he thinks that is unlikely to happen. Rather, we should be looking to local and national frameworks, and perhaps even regulatory coalitions, to ensure harmonization of high standards and avoid a race to the bottom.

The IPC’s own Manager of Technology Policy and Analysis, Christopher Parsons, chose fast-paced, nuanced and noisy. Chris spoke about how AI is being used to enhance national security and law enforcement. He noted the rapid growth of surveillance technologies, the plummeting cost of computing power, and enhanced access to analytical capabilities for extracting insights from data, all of which are now being leveraged for public security purposes. While this can be positive in some respects (for cybersecurity and automated defence systems, for example), there can also be significant impacts on our privacy and human rights, and ultimately on public trust.

Chris emphasized concerns about the obscurity of these practices, many of which happen in secret, and about the mass collection of personal information, sometimes from unlawful sources. Inferences derived from these data are largely invisible and may not always be accurate, yet they can feed into life-impacting decisions. People can be wrongfully identified and accused without any way of understanding how they were drawn into the criminal justice system. This can further exacerbate bias and discrimination, undermine the rights to due process and a fair trial, and chill people’s freedom of expression and association.

Interestingly, Colin McKay, former Head of Public Policy at Google, chose words similar to Chris’s. Colin took a historical look back at technology development over the past 25 years. Back then, technology companies had neither the internal teams to clearly communicate to the public or to regulators how they were collecting and using personal information, nor the accountability frameworks within which to operate. This created a legacy of mistrust in the use of technologies that naturally frames the context in which we consider consumer applications of AI today.

Colin highlighted the opportunity for companies, large and small, to leverage their past experience with technology development. He suggested they could do this by broadening their teams of specialized experts, including technologists, privacy lawyers, data security specialists, and ethicists, to explain and communicate publicly about the complexities of AI in a more nuanced manner. The private sector can play a key role in advancing the debate around data cleanliness and process optimization to reduce bias and improve outcomes. He also urged the development of sustainable AI governance frameworks, supported by key investments across industry, to ensure clear, focused, and ethically responsible use of AI technology.

For Teresa Scassa, Canada Research Chair in Information Law and Policy at the University of Ottawa, risk, regulation and governance were top of mind. She pointed out that legislative and policy frameworks could be aligned across the country following the lead of the federal government’s Artificial Intelligence and Data Act. Nonetheless, given Canada’s federal reality, there are still normative spaces for the provinces to fill. One of these important spaces is the provincial public sector, including health care and law enforcement.

There are fundamental governance questions Ontario needs to ask itself before deploying AI, such as: What kinds of problems are we trying to address? Is AI the appropriate tool to solve them? If so, what kind of AI, designed by whom, what data should feed it, and who will benefit from it?

In fulfilling their regulatory role, provinces should strive for alignment with the laws and policies of other jurisdictions, both nationally and internationally, and draw from those jurisdictions’ practical experience implementing them. Teresa also emphasized the need to empower and resource existing regulators, like privacy and human rights regulators, to address AI issues that arise in their respective areas of competence.

The three words for Jeni Tennison, Founder and Executive Director of Connected by Data in the U.K., were power, community and vision. Jeni discussed some of the challenges and opportunities around the transparency of AI. She spoke about the need for AI developers to be transparent for different purposes and at different levels. This includes transparency to the public to enhance public trust; to those procuring AI systems so they can do their due diligence; to intended users of AI so they can carry out their professional obligations with confidence; and to regulators for audit and accountability purposes. A certain level of transparency is also needed to enable fair competition in the market, which is particularly important in a public context to avoid government getting locked into a relationship with a single vendor.

Jeni also stressed how important it is to explain how an AI-based system arrives at a given result, so that affected individuals and their representatives can understand what is happening under the hood. This knowledge can help them challenge any biases, inaccuracies, and unfairness.

Jeni described why transparency is needed not only with respect to algorithmic models and the development process, but also the results of impact assessments and the number and outcomes of complaints received. These insights are important for communities to understand when and where things may go wrong, a key point for rebalancing relationships of power and remedying the public trust deficit.

Finally, Jeni emphasized the need to enhance capacity and computing power, not only for innovators and developers, but also for civil society, academia, regulators, and other organizations whose role is to challenge AI developers and hold them to account for their use and deployment of AI.

Need for guardrails and limits

Governments in countries around the world are developing laws to address these and other issues associated with AI.

After lengthy negotiations, the Council of the European Union and the European Parliament have reached a provisional agreement on the EU’s proposed AI Act. The act takes a risk-based approach to regulating AI and supporting innovation, but with greater transparency, accountability, and several backstops. These include prohibitions against cognitive behavioural manipulation, the scraping of facial images from the internet, and the use of social scoring and biometric categorization to infer sensitive data.

In California, the AI Accountability Act has been introduced with the aim of creating a roadmap, guardrails, and regulations for the use of AI technologies by state agencies. This includes requiring that the public be notified when they are interacting with AI.

In Canada, the Artificial Intelligence and Data Act, part of Bill C-27, would require that measures be put in place to identify and mitigate the risks of harm or biased output, and to monitor compliance.

However, this federal legislation would not cover the public sector in Ontario, which is why it is so essential for us to develop our own framework here.

The Ontario government has already taken some positive steps by building various components of a Trustworthy Artificial Intelligence Framework. But Ontario can and must do more.

Moving forward with AI: Initiatives from the IPC

Raising awareness of the critical need for strong governance of AI has been at the forefront of the IPC’s initiatives in recent years.

Last May, the IPC issued a joint statement with the Ontario Human Rights Commission. We urged the Ontario government to establish a more robust and granular set of binding rules governing public sector use of AI that respects human rights, including privacy, and upholds human dignity as a fundamental value.

My office also joined our federal, provincial, and territorial counterparts in releasing Principles for Responsible, Trustworthy, and Privacy-Protective Generative AI Technologies. These principles are intended to help organizations build privacy protection right into the design of generative AI tools, and throughout their development, provision, and downstream use. They’re devised to mitigate risks, particularly for vulnerable and historically marginalized groups, and to ensure that generative content, which could have significant impact on individuals, is identified as having been created by generative AI.

On the international front, the IPC co-sponsored two resolutions at the 45th Global Privacy Assembly that were unanimously adopted by data protection authorities around the world: one on Generative Artificial Intelligence Systems and the other on Artificial Intelligence and Employment. Both closely align and resonate with the kinds of things we’ve been saying and calling for here at home.

The future of AI

We should be proud to know that Canada and Ontario are clearly punching above their weight globally when it comes to AI innovation. Algorithmic systems are powerful tools of measurement, management, and optimization that can help spur the economy, diagnose and treat disease, keep us safe, and perhaps even save our planet.

Ultimately, however, the successful adoption of AI tools by public institutions can only be achieved with the public’s trust that these tools are being effectively governed. To gain that trust, we need to ensure they are being used in a safe, privacy-protective, and ethically responsible manner, with fair outcomes and benefits for all citizens.

— Patricia

