The Digital Transformation Office (DTO) at the Canadian Digital Service (CDS) is the design team for Canada.ca. It’s made up of researchers, designers, developers, and communicators with expertise in content and interaction design, user research, and plain language writing.

In 2024, DTO undertook an Artificial Intelligence (AI) trust study to gauge the public’s trust in different AI design approaches for Canada.ca. Studies like these inform guidance that can be shared across the Government of Canada (GC), helping public servants apply generative AI in their work to improve the information and services the GC provides through Canada.ca.

GC Comms needs robust AI guidance

As more and more private companies use AI to complete tasks and to communicate and interact with the public and their clients, the DTO believed it was important to conduct an AI trust study focused on Canada.ca. As stewards of the Canada.ca brand on the GC’s public-facing websites, it’s part of our job to maintain trust in that brand.

The study was conducted in collaboration with the Treasury Board of Canada Secretariat (TBS)’s AI policy team and the Privy Council Office (PCO), with the aim of using the findings to provide guidance on the use of AI on Canada.ca. The team hoped to give communications and web teams across the GC more robust guidance on how and when they could or should use AI on Canada.ca, and to help GC public servants make decisions about AI and its place in their work.

The GC continues to explore the potential of AI to boost public service productivity and provide better service. Studies like these are crucial to inform future adoption decisions.

How we conducted the study on Canada.ca

You may recall a trust study conducted on Canada.ca back in 2022. We used a similar research method for this study.

From July 4 to 17, 2024, 3% of users on Canada.ca were invited to participate in the study. Overall, we had 1,513 Canadian participants, so we were able to gain insights from every province and territory, in English and French. We also had 957 international participants from over 100 countries, which we decided to evaluate separately. While this blog focuses on the data from the Canadian participants, our separate analysis found that Canadian and international participants had similar responses.
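The post doesn’t describe the mechanics of the invitation, so purely as a rough sketch (the survey tool actually used and the showSurveyInvitation helper below are assumptions, not details from the study), a 3% random intercept on the client side could look something like this:

```ts
// Minimal sketch of a 3% random intercept invitation, assuming a simple client-side
// check. This is illustrative only; it is not the tool used for the Canada.ca study.
const INVITE_RATE = 0.03; // roughly 3% of visitors

// Hypothetical helper: in a real page this would display the invitation banner.
function showSurveyInvitation(): void {
  console.log("You're invited to take part in a short study about Canada.ca.");
}

// Math.random() returns a value in [0, 1), so this branch runs about 3% of the time.
if (Math.random() < INVITE_RATE) {
  showSurveyInvitation();
}
```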

Participants were asked a series of “click which image you trust the most” questions regarding images, text, and citations, and how their trust is affected when the use of AI is disclosed on the website. People were also asked whether they would trust a custom Canada.ca AI application more or less than the current AI features on search engines like Google and Bing.

What we found

This study revealed a lot of interesting data for us about AI images and the use of AI citations.

Cited or not, people are skeptical of AI use in images

Study participants were shown 2 images: a stock photo of a face in a promotional feature, and a decorative topic page image that looked like it was made with AI but is real. Neither image used in the study was actually created by AI, but the study participants were not made aware of this. We used an image that looked like it was made using AI to highlight the risk of government teams using images that could be perceived as AI-generated, even when they weren’t.

The questions always showed the images with their citations first, so participants could choose the citation they trusted most. They then saw a question with the same image presented twice, once with that citation and once without, to gauge trust in images with and without citations. Given the choice between images with and without citations, only a minority, 22%, trusted the image with the AI citation the most, versus 40% who trusted both images the same amount.

One individual responded to this question by saying “There’s no way of knowing it was generated by AI so I would feel it (without) was the most up to date.”

This tells us that the other 78% of participants aren’t necessarily opposed to AI use. Rather, it tells us that people don’t trust that AI was or wasn’t used in an image in the first place.

Survey samples (trust of AI images)

Participants were shown the following 2 images in a multiple-choice format and asked to check which one they trusted most, or to check the option “Same”.

Image 1: Face from a stock photo

A Canada.ca feature for direct deposits showing a stock image of a woman.

Image 2: Decorative topic page image

A Canada.ca page with an image of 3 people with buildings for heads.

The use of AI citations in GC Comms

Simple over detailed for images

When given a choice between different citation options for each image (AI-generated OpenAI (2023), Source: DALL-E [AI] (2023), Created with AI, or Same), participants showed very little preference, with the majority (59% for image 1 and 60% for image 2) selecting that they trusted all options the same amount.

The qualitative data showed that people appreciated the use of the word “Source”, with one respondent saying “it’s good to have a source, especially on a government site.” However, many people also mentioned that simpler might be better, with some confused by the names of tools or firms they didn’t recognize. Given how little preference the public had regarding citations, the DTO decided to recommend “Source: Created with AI”, removing mentions of dates, tools, or firms to simplify things for the intended audience.
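As a loose illustration of that recommendation (a sketch only, not the actual Canada.ca template or code), a simplified image citation could be generated like this:

```ts
// Minimal sketch, not real Canada.ca markup: wraps an image in a figure whose caption
// uses the recommended simple citation, with no date, tool, or firm named.
function aiImageFigure(imageUrl: string, altText: string): string {
  return [
    "<figure>",
    `  <img src="${imageUrl}" alt="${altText}">`,
    "  <figcaption>Source: Created with AI</figcaption>",
    "</figure>",
  ].join("\n");
}

// Example usage with a placeholder image path
console.log(aiImageFigure("/img/topic-page-banner.jpg", "Decorative topic page image"));
```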

Survey samples (image citations)

Participants were asked to check which of the following options they trusted the most, with a fourth option to indicate that they trusted them all the same amount. Below are the images the participants were shown.

Image 1: AI-generated OpenAI (2023)

AI-generated OpenAI (2023) citation on a Canada.ca page with an image of 3 people with buildings for heads.

Image 2: Source: DALL-E [AI] (2023)

DALL-E [AI] (2023) citation on a Canada.ca page with an image of 3 people with buildings for heads.

Image 3: Created with AI

Created with AI citation on a Canada.ca page with an image of 3 people with buildings for heads.

More concern for AI-generated text than images

Interestingly, the outcome for AI citations with text was very different from the outcome for images. Participants were shown 3 identical pieces of text, but each one had a different citation at the end.

Participants were then asked a separate question that showed the text with no citation alongside the same text with the citation style they had chosen in the previous question. When asked which one they trusted the most, 68% checked the one with the AI citation.

However, while the quantitative data showed trust in the AI citation, the qualitative data highlighted some important insights for our researchers to keep in mind. One respondent commented, “AI is okay for generating images. It is not acceptable for generating text.”, while another said, “You don’t need to say whether the images are AI-generated. Who cares?…If you generate text with AI, 100% say you did, but also reviewed by a human would be good too. Unreviewed AI text is basically gonna be a disaster for you.” Many appreciated the use of the term “expert” and the statement that AI-generated text was reviewed by a human expert.

So what does this tell us? It tells us that, unlike with images, people are less comfortable with text generated by AI. They want to know the source and/or tool used to generate the text, and to be told that it was reviewed by a human expert.

Survey samples (identical text with one type of citation)

Participants were shown 3 identical pieces of text with 3 different citation styles. Below is an example of the long citation style.

Long citation format, long description follows
Image description: example of long citation style

An example of a piece of text that reads "The Minister of Public Safety and Emergency Preparedness may decide that a passport is not to be issued when there are reasonable grounds to believe that the decision is necessary to prevent the commission of a terrorism offence or for the national security of Canada or a foreign country or state." with a long citation format that reads "- This summary of the Canadian Passport Order was generated by artificial intelligence. It was reviewed and modified by experts to ensure accuracy. The prompt used was 'Generate a professional, plain language summary of this content', ChatGPT, OpenAI, May 15 2024."

Custom AI solution for Canada.ca

The study found that a plurality of participants (47%) would trust a custom AI chat over ones powered by Google or Bing. Qualitative data collected during the study showed that people believed a custom tool would have the most up-to-date information and would feel easier to interact with.

A bar graph showing participant selections, long description follows

Trust in AI solutions

A bar graph showing that 47% of participants selected a custom Canada.ca AI solution, 20% selected a Google AI solution, 10% selected a Bing Copilot solution, and 23% selected that they trusted all options the same amount.

Survey samples (custom AI solutions)

Image 1: Bing AI chat function

Bing AI chat results, long description follows
Image description: Bing AI chat function

Screenshot of Bing AI chat results when asking "I submitted my passport application ages ago. When will I get it?" It lists general guidelines for passport application timelines depending on whether you applied in person or by mail. The source link isn't displayed at the top, and research shows there's less trust in the accuracy of the content compared to Google AI chat.

Image 2: Google AI chat

Google AI chat results, long description follows
Image description: Google AI chat

Screenshot of the Google AI chat results when asking "I submitted my passport application ages ago. When will I get it?" Google indicates where guidance is from and links to the source. It provides a short summary of processing timelines, then links to more relevant resources, such as how to check the status of your application.

Image 3: Mock-up of the Canada.ca custom AI chat function

Canada.ca AI chat results, long description follows
Image description: Mock-up of the Canada.ca custom AI chat function

Screenshot of the mock-up of the custom Canada.ca AI chat results when asking "I submitted my passport application ages ago. When will I get it?" The Canada.ca AI chat asks when the individual applied, and they respond with "I applied on April 3". The custom AI solution then provides a Canada.ca link so the individual can check their passport application status online.
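To make the interaction pattern in the mock-up concrete (this is a sketch of the flow described above only, not an actual Canada.ca implementation, and the URL is a placeholder), the logic could be modelled roughly like this:

```ts
// Minimal sketch of the mock-up's interaction pattern: instead of guessing a timeline,
// the assistant asks a clarifying question, then points to Canada.ca content.
type ChatTurn = { speaker: "user" | "assistant"; text: string };

function respondToPassportQuery(history: ChatTurn[]): ChatTurn {
  const userGaveDate = history.some(
    (turn) => turn.speaker === "user" && /applied on/i.test(turn.text)
  );
  if (!userGaveDate) {
    // Clarifying question, mirroring the mock-up shown to participants
    return { speaker: "assistant", text: "When did you submit your application?" };
  }
  // Point to the official status page rather than generating an answer.
  // The URL below is a placeholder, not a verified Canada.ca address.
  return {
    speaker: "assistant",
    text: "You can check the status of your application on Canada.ca: https://www.canada.ca/...",
  };
}

// Example: the first reply asks for the date; after "I applied on April 3", it links out
const history: ChatTurn[] = [
  { speaker: "user", text: "I submitted my passport application ages ago. When will I get it?" },
];
console.log(respondToPassportQuery(history).text);
history.push({ speaker: "user", text: "I applied on April 3" });
console.log(respondToPassportQuery(history).text);
```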

Overall findings and recommendations

  • The use of AI is not key to improving trust, but it can damage it. By being transparent about the use of AI, we help to maintain trust in the Canada.ca brand.
  • Be mindful when using AI. 30% of Canadians are still wary about the use of AI, and the topic can evoke strong feelings and responses. This may change over time as more and more Canadians use it in their day-to-day lives.
  • People are more likely to trust a custom AI chat function on Canada.ca over a Google or Bing tool, but the custom function needs to be backed by Canada.ca content and links.
  • Educate ourselves. There are many resources, including on the Canada School of Public Service website, that can give us insight into generative AI. We all need to be learning about AI and thinking about ways it can improve what we deliver for Canadians.

When thinking about AI, GC public servants should ask the following question: is this only benefitting me, or is it benefitting the public? If it’s only benefitting you, then it might be best to go back to the drawing board and think of another solution.

AI is constantly evolving, and it can sometimes be overwhelming to keep up. Take a look at the TBS generative AI guidelines and have a chat with your own teams about how you should and should not use AI, both internally and externally.