Tag: trust

  • The Ipsos AI Monitor 2025 by Ipsos

    About the paper

    The paper is a 30-country survey about public understanding of AI, trust, perceived risks, and expectations for AI’s impact on work, content, brands, economies and everyday life.

    It is original survey research conducted by Ipsos via its Global Advisor online platform and, in India, its IndiaBus platform, between 21 March and 4 April 2025, with 23,216 adults across 30 countries; India used a mixed face-to-face and online approach.

    The methodology is clear, but Ipsos notes that some country samples are more “connected” than nationally representative, and that the 30-country average is an unweighted average across markets rather than a population-adjusted global figure.
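    The distinction between an unweighted cross-market average and a population-adjusted figure can be made concrete with a small sketch (the numbers below are hypothetical, not Ipsos data):

```python
# Illustrative sketch (hypothetical numbers, not Ipsos data): an unweighted
# 30-country average treats each market equally, while a population-weighted
# average would scale each market's result by its adult population.

def unweighted_average(shares):
    """Simple mean across markets: each country counts equally."""
    return sum(shares) / len(shares)

def weighted_average(shares, populations):
    """Population-adjusted mean: each country counts by its size."""
    total = sum(populations)
    return sum(s * p for s, p in zip(shares, populations)) / total

# Three hypothetical markets: 60% agreement in a large country,
# 40% and 50% in two much smaller ones.
shares = [0.60, 0.40, 0.50]
populations = [1_000, 50, 50]  # millions, illustrative only

print(round(unweighted_average(shares), 3))             # 0.5
print(round(weighted_average(shares, populations), 3))  # 0.586
```

    The same underlying opinions yield noticeably different "global" figures, which is why Ipsos's caveat about its 30-country average matters when quoting headline numbers.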

    Length: 57 pages

    More information / download:
    https://www.ipsos.com/en-dk/ipsos-ai-monitor-2025

    Core Insights

    1. What is the central tension in public attitudes towards AI?

    The report’s central argument is that public opinion on AI is defined by a tension Ipsos calls the “Wonder and the Worry of AI”. People recognise AI’s potential and expect it to become embedded in many areas of life, but they also feel nervous about its consequences.

    At the 30-country average level, 52% say AI products and services make them excited, while 53% say they make them nervous. That means excitement and anxiety are not opposing camps so much as overlapping reactions: many people appear to hold both views at once.
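    The overlap claim follows from simple inclusion-exclusion arithmetic: since 52% and 53% sum to more than 100%, the two groups must share at least 5 percentage points of respondents.

```python
# Inclusion-exclusion: with 52% excited and 53% nervous, the smallest
# possible share of respondents holding both views is 52 + 53 - 100 = 5%.

def min_overlap(pct_a, pct_b):
    """Minimum share of respondents in both groups, in percentage points."""
    return max(0, pct_a + pct_b - 100)

print(min_overlap(52, 53))  # 5
```

    Five points is only the floor; the actual overlap of people who report both excitement and nervousness is likely larger, which is what the report's "Wonder and the Worry" framing captures.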

    This tension is also geographically uneven. The Anglosphere — the US, Great Britain, Canada, Ireland and Australia — is described as more nervous than excited. European markets sit in a middle zone, with moderate excitement and less intense nervousness. Several South-East Asian markets are much more positive, while Japan is presented as an outlier: neither especially excited nor especially nervous.

    The broader meaning is that AI is not being received as a simple “innovation story”. People expect progress, but they are not automatically confident that the benefits will be fairly distributed, responsibly governed, or socially benign.

    2. How much do people understand AI, and how does knowledge vary by country?

    A majority say they understand AI at a general level, but fewer say they understand where AI is actually being used.

    Across the 30 countries, 67% agree that they have a good understanding of what artificial intelligence is. However, only 52% say they know which types of products and services use AI. That gap matters: people may feel familiar with AI as a concept while still being unsure where it is embedded in everyday services.

    There are large country differences. Indonesia, Thailand and South Africa are among the highest on claimed understanding of AI, while Japan is lowest. For knowing which products and services use AI, Indonesia and Thailand again rank high, while Belgium, Japan and Canada are at the lower end.

    This suggests that “AI literacy” is not just a question of awareness. The public may know the term, recognise the general idea, and still lack practical understanding of where AI is operating in search, marketing, recruitment, news, advertising, disinformation, customer service or workplace tools.

    3. What does the report reveal about trust in AI, companies and governments?

    Trust is one of the report’s most important fault lines. People are not simply asking whether AI is useful; they are asking who controls it, who regulates it, and whether organisations using it can be trusted.

    Only 48% across the 30-country average say they trust companies using AI to protect their personal data. Trust is much higher in countries such as Indonesia, Thailand and India, while Sweden, Canada, Japan, France and the United States sit much lower. The net trust measure is only slightly positive at the global country average level, which signals a fragile trust environment for brands and platforms.

    Governments are trusted somewhat more than companies in this context: 54% say they trust their government to regulate AI responsibly. But this also varies dramatically. Singapore, Indonesia, Malaysia and Thailand are high-trust markets, while the United States, Japan, Hungary, Great Britain and Canada are much lower. Ipsos suggests that low trust in government regulation may help explain higher nervousness in some markets, especially the US.

    One striking finding is that people say they trust AI more than they trust other people not to discriminate or show bias. At the 30-country average, 54% trust AI not to discriminate or show bias, compared with 45% who trust people not to. That does not mean people think AI is neutral; rather, it suggests that public trust in human fairness is also weak.

    The strongest trust-related consensus is disclosure. Seventy-nine per cent agree that products and services using AI should have to disclose that use. This is one of the clearest implications for organisations: transparency is not a niche concern but a mainstream expectation.

    4. How do people feel about AI-generated content, advertising and brand use?

    The report shows a clear public distinction between expecting AI-generated content and preferring it. People believe AI will be widely used, but they still prefer human-created content in most cases.

    For example, 79% think AI is likely to be used for online search results, and only 28% say they are uncomfortable with that use. That suggests search may be one of the more socially acceptable AI applications. By contrast, people are much more uncomfortable with AI-generated political ads, AI-written news stories, AI screening job applicants, and AI used to create or target disinformation.

    When asked about content preferences, the public consistently favours human-driven content. Seventy per cent prefer human-driven online news articles or websites; 71% prefer human-driven photojournalism; 67% prefer human-driven movies; 62% prefer human-driven advertising; and 60% prefer human-driven customer marketing websites.

    For brands, the picture is mixed and potentially risky. People are split on whether AI use would make them trust companies more or less. At the 30-country average, AI-enhanced product images produce 34% more trust and 38% distrust; AI-written product descriptions produce 33% more trust and 42% distrust; AI-created advertising images or video produce 30% more trust and 38% distrust; and AI-written product reviews produce 29% more trust and 36% distrust.
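    One way to read these splits is as a net score, the "more trust" share minus the "distrust" share. That netting convention is a common survey practice assumed here, not a calculation taken from the report. On that reading, every brand use quoted above lands slightly negative:

```python
# Net trust = % saying "trust more" minus % saying "distrust", using the
# 30-country averages quoted above. The netting convention is an assumption.
brand_uses = {
    "AI-enhanced product images":      (34, 38),
    "AI-written product descriptions": (33, 42),
    "AI-created advertising images":   (30, 38),
    "AI-written product reviews":      (29, 36),
}

for use, (more_trust, distrust) in brand_uses.items():
    net = more_trust - distrust
    print(f"{use}: net {net:+d} points")
# Every use nets negative, ranging from -4 to -9 points.
```

    The consistently (if mildly) negative nets are what make AI in marketing a reputational risk area rather than a neutral one.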

    The implication is that AI use in marketing is not automatically reputationally damaging, but it is not automatically trust-building either. Brands may gain from AI where it improves usefulness, speed or relevance, but they risk distrust when AI is perceived as deceptive, synthetic, manipulative or insufficiently disclosed.

    5. What future impact do people expect AI to have on jobs, economies and everyday life?

    People expect AI to become more important in daily life, but their expectations are uneven across domains.

    A majority already feel AI has affected them: 52% say AI products and services have profoundly changed their daily life in the past three to five years. Looking ahead, 67% say AI will profoundly change their daily life in the next three to five years. So AI is not viewed as speculative; it is already part of people’s lived experience and expected to intensify.

    On work, the findings are ambivalent. Globally, 59% think AI is likely to change how they do their current job in the next five years, but only 36% think it is likely to replace their current job. Even more importantly, people are more optimistic about their own job than about the wider labour market. Among those with a job, 38% think AI will make their own job better, while 16% think it will make it worse. But for the job market overall, only 31% think AI will make it better, while 35% think it will make it worse.

    This “my job versus the job market” distinction is one of the report’s most useful insights. People may believe they personally can adapt, benefit or remain protected, while still worrying about broader labour disruption.

    The same pattern appears in other future-facing areas. People are optimistic that AI will improve efficiency: 55% say it will improve how long it takes to get things done, while only 10% say it will make things worse. They are also more positive than negative about entertainment options and health. But they are much more concerned about disinformation: only 29% think AI will reduce the amount of disinformation on the internet, while 40% think it will increase it.

    Economically, the global country average is cautiously positive: 34% think AI will improve their country’s economy, while 23% think it will worsen it. Ipsos argues that countries most excited about AI tend to be countries where people are also more likely to believe AI will benefit the economy. In other words, enthusiasm appears tied not only to technology itself, but to whether people believe AI will produce visible, shared economic benefits.