Tag: risk management

  • The Global Risks Report 2026 by World Economic Forum

    About the paper

    The World Economic Forum’s Global Risks Report 2026 examines global risks across 2026, 2028 and 2036, framing the period as an “age of competition” shaped by geo-economic confrontation, societal fragmentation, technological acceleration and environmental stress.

    It is a mixed-methods report based on the Global Risks Perception Survey of over 1,300 experts worldwide, the Executive Opinion Survey of over 11,000 business leaders in 116 economies, and foresight input from 161 experts through interviews and workshops conducted between May and November 2025.

    Length: 102 pages

    More information / download:
    https://www.weforum.org/publications/global-risks-report-2026/

    Core Insights

    1. What is the report’s central argument about the global risk landscape in 2026–2036?

    The report’s central argument is that the world is entering an “age of competition” in which cooperation is weakening just as global risks are becoming faster, more interconnected and more systemic. The report does not present predictions, but rather a set of plausible risk trajectories intended to support prevention and preparedness.

    Its core diagnosis is that geopolitical and geo-economic rivalries are no longer separate risk categories; they are becoming organising forces that shape the entire risk landscape. Trade, finance, technology, supply chains and infrastructure are increasingly treated as instruments of power. This creates a world in which confrontation replaces collaboration, and where multilateral institutions struggle to manage cross-border problems.

    The report’s tone is notably pessimistic. Half of surveyed experts expect a turbulent or stormy global outlook over the next two years, rising to 57% over the next decade. Only 1% expect a calm outlook across either time horizon. The implication is that instability is not viewed as a temporary disruption, but as a structural condition of the coming decade.

    2. Which risks dominate the short-term outlook, and why?

    In the immediate and two-year outlook, geo-economic confrontation is the dominant concern. It is identified as the top risk most likely to trigger a material global crisis in 2026, selected by 18% of respondents, followed by state-based armed conflict at 14%. Over the two-year horizon, geo-economic confrontation is also ranked as the most severe risk.

    This reflects the report’s view that economic instruments are increasingly being used for strategic advantage. Sanctions, tariffs, investment controls, technology restrictions, supply-chain weaponisation and resource competition are no longer peripheral policy tools; they are becoming central features of international rivalry. The report argues that this threatens the core of the interconnected global economy.

    Other short-term risks cluster around the same underlying instability. Misinformation and disinformation ranks second over the two-year horizon, societal polarisation third, extreme weather fourth, and state-based armed conflict fifth. Cyber insecurity, inequality and erosion of civic freedoms also feature in the top 10. This shows that the report sees short-term risk as a combination of geopolitical confrontation, social fragmentation and information disorder.

    Economic risks also rise sharply in the two-year outlook. Economic downturn and inflation each rise eight places compared with the previous year, while asset bubble burst rises seven places. The report links these concerns to debt pressures, volatile markets, potential AI-related investment bubbles and the broader uncertainty created by protectionism and geo-economic rivalry.

    3. How does the long-term risk outlook differ from the two-year outlook?

    The long-term outlook shifts from geopolitical and economic confrontation towards environmental and technological risks. Over the 10-year horizon, extreme weather events rank first, followed by biodiversity loss and ecosystem collapse, critical change to Earth systems, misinformation and disinformation, and adverse outcomes of AI technologies. Half of the top 10 long-term risks are environmental.

    This creates one of the report’s central tensions: environmental risks are being deprioritised in the short term even though they remain dominant in the long term. The report notes that most environmental risks decline in the two-year ranking and also show reduced short-term severity scores compared with the previous year. Yet over 10 years, environmental risks remain the most severe category.

    The report’s interpretation is that immediate geopolitical, economic and societal pressures are crowding out longer-term collective priorities. In practical terms, climate and biodiversity risks remain existential, but political attention is being pulled towards wars, protectionism, inflation, debt, social unrest and technological disruption.

    This is one of the report’s most important implications: the world may be paying less attention to the risks that experts still see as most severe over the coming decade.

    4. What role do technology, AI and quantum developments play in the report’s risk assessment?

    Technology is presented as a source of enormous opportunity and systemic risk. In the short term, the report is most concerned with misinformation and disinformation, cyber insecurity and the way digital technologies amplify social polarisation. Misinformation and disinformation ranks second in the two-year outlook, while cyber insecurity ranks sixth.

    AI becomes much more important over the long term. “Adverse outcomes of AI technologies” rises from #30 in the two-year outlook to #5 in the 10-year outlook — the largest upward shift across all 33 risks surveyed. The report highlights several possible consequences: labour-market disruption, higher inequality, loss of purpose and social belonging, information chaos, concentration of economic power, and risks from military uses of AI.

    The report’s AI chapter is especially concerned with a scenario it describes as “jobless productivity”: productivity rises because of AI, but employment opportunities shrink or become more unevenly distributed. This could deepen inequality and social polarisation, particularly if middle-class and white-collar work is disrupted faster than societies can adapt.

    Quantum technologies are treated as a more distant but potentially severe frontier risk. The report highlights the possibility that quantum computing could undermine current cryptographic systems, threatening digital authentication, data privacy and trust infrastructure. It also warns that quantum leadership could become another domain of strategic rivalry, widening economic and geopolitical divides.

    5. What does the report imply about cooperation, governance and resilience?

    The report’s underlying message is that cooperation is becoming harder at precisely the moment when it is most needed. Multilateralism is described as under pressure from declining trust, protectionism, weakening rule of law and the rise of more adversarial national strategies.

    The report does not argue that cooperation has disappeared. Rather, it suggests that cooperation will need to look different. In a more fragmented world, global treaties may be harder to achieve, so the report points to coalitions of the willing, minilateral agreements, public-private partnerships, multi-stakeholder engagement, public awareness, education, R&D and corporate resilience strategies as practical mechanisms for risk reduction.

    The report’s perspective is pragmatic rather than optimistic. It assumes that the current order is weakening, but not that collapse is inevitable. Its conclusion is that resilience will depend on rebuilding trust, protecting institutional capacity, investing in adaptive infrastructure, preparing societies for technological disruption, and finding new forms of cooperation even amid competition.

    The most important strategic implication is that risk management can no longer be treated as domain-specific. Geopolitics affects economics; economics affects social trust; social distrust affects governance; technology affects all of them; and environmental risks continue to intensify in the background. The report’s core warning is that leaders must prepare for compounding risks, not isolated crises.

  • Future of Professionals Report 2025 by Thomson Reuters

    About the paper

    Thomson Reuters’ Future of Professionals Report 2025 examines how AI and GenAI are affecting legal, risk, compliance, tax, accounting, audit and trade professionals, with a particular focus on strategic AI adoption and ROI.

    It is an original survey-based report, drawing on 2,275 responses gathered in February and March 2025 from professionals across firms, corporations, government and in-house functions.

    The geographic scope is international, with responses from the US, Canada, UK, Mainland Europe, Middle East, Africa, Latin America, Asia, and Australia/New Zealand.

    Length: 31 pages

    More information / download:
    https://www.thomsonreuters.com/en/c/future-of-professionals

    Core Insights

    1. What is the central argument of the report?

    The report argues that AI adoption has moved from experimentation to strategic differentiation. Thomson Reuters’ core claim is that the decisive question is no longer whether professional organisations should adopt AI, but whether they do so deliberately, visibly and in alignment with broader business goals.

    The report frames a widening divide between organisations with a clear AI strategy and those relying on informal or ad hoc adoption. Organisations with visible AI strategies are presented as significantly more likely to experience AI-related benefits, including revenue growth, productivity gains and stronger operational performance. By contrast, organisations without a strategy are portrayed as at risk of falling behind within a few years.

    This is not just a technology argument. The report repeatedly emphasises that AI must be connected to organisational purpose, workflow redesign, leadership behaviour, talent strategy and individual professional development. AI is described as an enabler of broader transformation rather than a standalone tool.

    2. What evidence does the report provide that AI is already affecting professional work?

    The report provides several data points showing that AI has already become a major force in professional services and related corporate functions.

    Most prominently, 80% of respondents believe AI will have a high or transformational impact on their profession within five years. At the same time, 53% say their organisation is already experiencing at least one type of benefit from AI adoption. The most common benefits are efficiency, productivity, faster response times, reduced errors, cost reduction and freed-up time.

    The report estimates that AI could save professionals around five hours per week, or 240 hours per year. In the foreword, Thomson Reuters states that for legal professionals this represents an average annual value of around $19,000 per professional, contributing to a combined annual impact of $32 billion in the US legal and tax/accounting sectors.
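
    The arithmetic behind these figures can be checked directly. A minimal sketch (Python) under the report's implied assumptions: the 240-hour figure implies roughly 48 working weeks per year, and dividing the stated $19,000 annual value by those hours gives an implied value per freed-up hour, a derived number rather than one the report states.

        # Back-of-envelope check of the time-savings figures.
        # Assumption: 48 working weeks/year (implied by 5 h/week -> 240 h/year).
        hours_per_week = 5
        working_weeks = 48
        hours_per_year = hours_per_week * working_weeks
        print(hours_per_year)  # 240, matching the report

        # Implied value per freed-up hour for legal professionals
        # (derived from the report's $19,000 figure; not stated in the report).
        annual_value_usd = 19_000
        print(round(annual_value_usd / hours_per_year, 2))  # ~79.17 USD/hour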

    However, the report also identifies a gap between expected long-term impact and current organisational change. While 80% expect AI to have a major impact within five years, only 38% expect high or transformational change in their own organisation this year, and 30% believe their organisation is moving too slowly.

    3. What distinguishes organisations that achieve stronger ROI from AI?

    The report’s main explanatory model is the “AI Success Pyramid”, which identifies four layers required for stronger AI returns: strategy, leadership, operations and individual users.

    The strongest lever is strategy. Organisations with a visible AI strategy are described as 3.5 times as likely to experience at least one form of ROI compared with organisations that have no significant AI adoption plans. They are also almost twice as likely to report revenue growth from AI compared with organisations adopting AI informally.

    Leadership is the second layer. Respondents whose leaders lead by example are 1.7 times as likely to see AI benefits. Organisations investing in AI-powered technology are twice as likely to report benefits, while those adding new governance roles are also more likely to experience positive outcomes.

    Operational change is the third layer. The report argues that organisations need to redesign workflows, roles, delivery models, services and pricing structures. This is where AI moves beyond personal productivity and begins to change how professional work is produced and delivered.

    The fourth layer is individual adoption. Professionals with good or expert AI knowledge are 2.8 times as likely to see organisational benefits as those with basic or no knowledge. Regular users of AI tools are 2.4 times as likely to report benefits compared with non-regular users. This makes individual AI literacy a strategic issue, not merely a personal skill upgrade.

    4. What risks, barriers and tensions does the report identify?

    The report identifies several barriers to more robust AI adoption. The largest barrier to investment is the need for demonstrable accuracy, cited by 50% of respondents. This is followed by available budget, data security, ethical concerns and implementation resources.

    Accuracy is especially important because professional work often carries high stakes. The report notes that 91% of professionals believe computers should be held to higher standards of accuracy than humans, including 41% who say AI outputs would need to be 100% accurate before being used without human review. This reinforces the report’s view that human oversight remains essential.

    The report also highlights a new concern: overreliance on AI at the expense of professional skill development. Almost a quarter of respondents cite this as a worrying potential consequence. This is a subtle but important shift from earlier fears of job loss towards worries about deskilling, judgement and long-term professional capability.

    Another major tension is misalignment between organisational and individual adoption. Some professionals have personal AI goals but are unaware of any organisational strategy, meaning they are being encouraged to adopt AI without clear guidance. Conversely, some organisations have AI strategies but professionals lack personal AI goals, creating an implementation gap.

    The report also describes the “jagged edge” of AI adoption: uneven adoption across regions, functions, organisations and demographics. For example, some organisations invest heavily but see low individual usage, suggesting wasted investment and weak change management. Others see high individual usage but low organisational investment, which may create risks if employees rely on public tools without proper safeguards.

    5. What does the report imply for the future of professional work?

    The report implies that professional value will increasingly depend on the ability to combine domain expertise with AI fluency. It does not argue that AI replaces professional judgement. Instead, it argues that modern professionals will use AI to augment core abilities such as research, writing, analysis, communication, project management, technical expertise and higher-order thinking.

    The “modern professional” in the report is someone who can use AI as a working partner: to analyse patterns, compare regulations, draft documents, summarise complex material, explain specialist issues in accessible ways, manage deadlines and explore scenarios. The traditional professional skillset remains important, but the report suggests that it will increasingly be mediated and amplified by technology.

    The report also points to a significant skills gap. Forty-six percent of respondents report skills gaps within their teams, with the largest gap in technology and data skills. Technical domain expertise is also a concern. This means the future challenge is not only AI adoption but reskilling across multiple levels of the organisation.

    The report’s final implication is competitive: organisations and professionals that act deliberately are likely to gain advantage, while those that wait may lose relevance. For organisations, this means connecting AI to strategy, governance, workflow and value creation. For individuals, it means developing AI proficiency through formal training, experimentation, peer learning and active involvement in how AI is developed and used.

  • The Ipsos AI Monitor 2025 by Ipsos

    About the paper

    The paper is a 30-country survey about public understanding of AI, trust, perceived risks, and expectations for AI’s impact on work, content, brands, economies and everyday life.

    It is original survey research conducted by Ipsos via its Global Advisor online platform and, in India, its IndiaBus platform, between 21 March and 4 April 2025, with 23,216 adults across 30 countries; India used a mixed face-to-face and online approach.

    The methodology is clear, but Ipsos notes that some country samples are more “connected” than nationally representative, and that the 30-country average is an unweighted average across markets rather than a population-adjusted global figure.
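
    The difference between these two averaging choices is easy to see in a minimal sketch (Python). All country names, percentages and populations below are hypothetical, purely to illustrate why an unweighted 30-country average can diverge from a population-adjusted figure:

        # Unweighted cross-market average (Ipsos's approach) versus a
        # population-weighted average. All values are hypothetical.
        trust_pct = {"A": 70, "B": 40, "C": 55}        # % trusting AI per market
        population_m = {"A": 10, "B": 300, "C": 60}    # population, millions

        unweighted = sum(trust_pct.values()) / len(trust_pct)
        weighted = (sum(trust_pct[c] * population_m[c] for c in trust_pct)
                    / sum(population_m.values()))

        print(f"unweighted: {unweighted:.1f}%")  # 55.0% - each market counts equally
        print(f"weighted:   {weighted:.1f}%")    # ~43.2% - large market B dominates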

    Length: 57 pages

    More information / download:
    https://www.ipsos.com/en-dk/ipsos-ai-monitor-2025

    Core Insights

    1. What is the central tension in public attitudes towards AI?

    The report’s central argument is that public opinion on AI is defined by a tension Ipsos calls the “Wonder and the Worry of AI”. People recognise AI’s potential and expect it to become embedded in many areas of life, but they also feel nervous about its consequences.

    At the 30-country average level, 52% say AI products and services make them excited, while 53% say they make them nervous. Because those two shares sum to more than 100%, at least some respondents must report both reactions: excitement and anxiety are not opposing camps so much as overlapping ones, and many people appear to hold both views at once.

    This tension is also geographically uneven. The Anglosphere — the US, Great Britain, Canada, Ireland and Australia — is described as more nervous than excited. European markets sit in a middle zone, with moderate excitement and less intense nervousness. Several South-East Asian markets are much more positive, while Japan is presented as an outlier: neither especially excited nor especially nervous.

    The broader meaning is that AI is not being received as a simple “innovation story”. People expect progress, but they are not automatically confident that the benefits will be fairly distributed, responsibly governed, or socially benign.

    2. How much do people understand AI, and how does knowledge vary by country?

    A majority say they understand AI at a general level, but fewer say they understand where AI is actually being used.

    Across the 30 countries, 67% agree that they have a good understanding of what artificial intelligence is. However, only 52% say they know which types of products and services use AI. That gap matters: people may feel familiar with AI as a concept while still being unsure where it is embedded in everyday services.

    There are large country differences. Indonesia, Thailand and South Africa are among the highest on claimed understanding of AI, while Japan is lowest. For knowing which products and services use AI, Indonesia and Thailand again rank high, while Belgium, Japan and Canada are at the lower end.

    This suggests that “AI literacy” is not just a question of awareness. The public may know the term, recognise the general idea, and still lack practical understanding of where AI is operating in search, marketing, recruitment, news, advertising, disinformation, customer service or workplace tools.

    3. What does the report reveal about trust in AI, companies and governments?

    Trust is one of the report’s most important fault lines. People are not simply asking whether AI is useful; they are asking who controls it, who regulates it, and whether organisations using it can be trusted.

    Only 48% across the 30-country average say they trust companies using AI to protect their personal data. Trust is much higher in countries such as Indonesia, Thailand and India, while Sweden, Canada, Japan, France and the United States sit much lower. The net trust measure is only slightly positive at the global country average level, which signals a fragile trust environment for brands and platforms.

    Governments are trusted somewhat more than companies in this context: 54% say they trust their government to regulate AI responsibly. But this also varies dramatically. Singapore, Indonesia, Malaysia and Thailand are high-trust markets, while the United States, Japan, Hungary, Great Britain and Canada are much lower. Ipsos suggests that low trust in government regulation may help explain higher nervousness in some markets, especially the US.

    One striking finding is that respondents are more likely to trust AI than other people not to discriminate or show bias. At the 30-country average, 54% trust AI not to discriminate or show bias, compared with 45% who say the same of people. That does not mean the public thinks AI is neutral; rather, it suggests that trust in human fairness is also weak.

    The strongest trust-related consensus is disclosure. Seventy-nine per cent agree that products and services using AI should have to disclose that use. This is one of the clearest implications for organisations: transparency is not a niche concern but a mainstream expectation.

    4. How do people feel about AI-generated content, advertising and brand use?

    The report shows a clear public distinction between expecting AI-generated content and preferring it. People believe AI will be widely used, but they still prefer human-created content in most cases.

    For example, 79% think AI is likely to be used for online search results, and only 28% say they are uncomfortable with that use. That suggests search may be one of the more socially acceptable AI applications. By contrast, people are much more uncomfortable with AI-generated political ads, AI-written news stories, AI screening job applicants, and AI used to create or target disinformation.

    When asked about content preferences, the public consistently favours human-driven content. Seventy per cent prefer human-driven online news articles or websites; 71% prefer human-driven photojournalism; 67% prefer human-driven movies; 62% prefer human-driven advertising; and 60% prefer human-driven customer marketing websites.

    For brands, the picture is mixed and potentially risky. People are split on whether AI use would make them trust companies more or less. At the 30-country average:

    • AI-enhanced product images: 34% more trust vs 38% distrust
    • AI-written product descriptions: 33% more trust vs 42% distrust
    • AI-created advertising images or video: 30% more trust vs 38% distrust
    • AI-written product reviews: 29% more trust vs 36% distrust
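
    Read as net scores ("more trust" minus "distrust"), every one of these uses lands slightly negative. A minimal sketch (Python) using only the percentages quoted above:

        # Net trust for each AI brand use: % "more trust" minus % "distrust".
        brand_uses = {
            "AI-enhanced product images": (34, 38),
            "AI-written product descriptions": (33, 42),
            "AI-created advertising images or video": (30, 38),
            "AI-written product reviews": (29, 36),
        }
        for use, (more, less) in brand_uses.items():
            print(f"{use}: net {more - less:+d} points")
        # All four come out between -4 and -9 points, which is why the
        # picture is described as mixed and potentially risky.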

    The implication is that AI use in marketing is not automatically reputationally damaging, but neither is it automatically trust-building. Brands may gain from AI where it improves usefulness, speed or relevance, but they risk distrust when AI is perceived as deceptive, synthetic, manipulative or insufficiently disclosed.

    5. What future impact do people expect AI to have on jobs, economies and everyday life?

    People expect AI to become more important in daily life, but their expectations are uneven across domains.

    A majority already feel AI has affected them: 52% say AI products and services have profoundly changed their daily life in the past three to five years. Looking ahead, 67% say AI will profoundly change their daily life in the next three to five years. So AI is not viewed as speculative; it is already part of people’s lived experience and expected to intensify.

    On work, the findings are ambivalent. Globally, 59% think AI is likely to change how they do their current job in the next five years, but only 36% think it is likely to replace their current job. Even more importantly, people are more optimistic about their own job than about the wider labour market. Among those with a job, 38% think AI will make their own job better, while 16% think it will make it worse. But for the job market overall, only 31% think AI will make it better, while 35% think it will make it worse.

    This “my job versus the job market” distinction is one of the report’s most useful insights. People may believe they personally can adapt, benefit or remain protected, while still worrying about broader labour disruption.

    The same pattern appears in other future-facing areas. People are optimistic that AI will improve efficiency: 55% say it will improve how long it takes to get things done, compared with only 10% who say it will make this worse. They are also more positive than negative about entertainment options and health. But they are much more concerned about disinformation: only 29% think AI will reduce the amount of disinformation on the internet, while 40% think it will increase it.

    Economically, the global country average is cautiously positive: 34% think AI will improve their country’s economy, while 23% think it will worsen it. Ipsos argues that countries most excited about AI tend to be countries where people are also more likely to believe AI will benefit the economy. In other words, enthusiasm appears tied not only to technology itself, but to whether people believe AI will produce visible, shared economic benefits.

  • The Global Risks Report 2025 by World Economic Forum

    About the paper

    The Global Risks Report 2025 is the World Economic Forum’s 20th annual assessment of major global risks across geopolitical, environmental, societal, economic and technological domains.

    It is a mixed-methods report based primarily on the 2024–2025 Global Risks Perception Survey of over 900 experts worldwide, collected from 2 September to 18 October 2024, supplemented by the Executive Opinion Survey of over 11,000 business leaders in 121 economies and qualitative input from 96 experts.

    The geographic scope is global.

    Length: 104 pages

    More information / download:
    https://www.weforum.org/publications/global-risks-report-2025/

    Core Insights

    1. What is the report’s central argument about the global risk landscape in 2025?

    The report argues that the world is entering 2025 in a state of deepening fragmentation, with risks increasingly reinforcing one another across domains. Its central diagnosis is that geopolitical conflict, societal polarization, environmental stress and technological disruption are converging in ways that existing governance systems are poorly equipped to manage.

    The immediate risk that dominates the 2025 outlook is state-based armed conflict, selected by 23% of GRPS respondents as the risk most likely to present a material global crisis in 2025. This is a major shift from the previous year, when it ranked eighth. The report connects this rise to the wars in Ukraine, the Middle East and Sudan, and to broader fears that conflicts could escalate or spread.

    But the report does not present conflict as an isolated geopolitical problem. It describes a risk environment where conflict is linked to geo-economic confrontation, cyber warfare, misinformation and disinformation, forced displacement, humanitarian crises and weakening multilateralism. In other words, the world is not only experiencing more crises; it is losing some of the connective tissue needed to manage them collectively.

    The tone is notably pessimistic. Only a small share of respondents see the near-term outlook as stable or calm, while a majority expect an “unsettled” world and sizeable minorities expect turbulence or stormy conditions. The report’s broader message is that short-term crisis management is no longer enough because many immediate risks are symptoms of deeper structural shifts.

    2. Which risks dominate the short-term outlook to 2027, and why?

    The top risk over the two-year horizon is misinformation and disinformation, which ranks first for the second year running. The report sees this as especially dangerous because false or misleading content now interacts with political polarization, conflict, elections, distrust in institutions and advances in generative AI.

    The short-term top risks also include extreme weather events, societal polarization, cyber espionage and warfare, state-based armed conflict, inequality, involuntary migration or displacement, erosion of human rights and civic freedoms, geo-economic confrontation and pollution.

    Several patterns stand out. First, geopolitical risks have moved sharply upward. State-based armed conflict is now third over the two-year horizon, and geo-economic confrontation has risen from fourteenth to ninth. Second, economic risks such as inflation and economic downturn have fallen out of the two-year top 10, even though the report warns against complacency. Third, societal risks remain highly prominent, suggesting that social cohesion is becoming a central risk variable rather than merely a consequence of other crises.

    The report’s short-term outlook is therefore not just a list of threats; it is a picture of a world where trust is weakening. Misinformation undermines shared reality, polarization reduces the capacity for collective action, and geopolitical rivalry makes international cooperation harder just when it is most needed.

    3. How does the report describe the longer-term risk outlook to 2035?

    The 2035 outlook is even darker than the short-term outlook. The report finds that all 33 risks assessed in the GRPS are expected to increase in severity over the 10-year horizon compared with the two-year horizon. Environmental and technological risks become much more prominent over the longer term.

    The highest-ranked long-term risk is extreme weather events, followed by critical change to Earth systems, biodiversity loss and ecosystem collapse, natural resource shortages and misinformation and disinformation. This means that four of the top five long-term risks are environmental.

    The report presents environmental risk as having moved from a distant long-term concern to an urgent, worsening reality. Extreme weather remains the top 10-year risk for the second year in a row, while biodiversity loss and Earth system change are framed as signs that the world may be approaching irreversible thresholds.

    Technological risks also rise sharply over the decade. Adverse outcomes of AI technologies ranks near the bottom of the two-year outlook but climbs to sixth in the 10-year ranking. The report treats this as a warning against complacency: current risk perception may be underestimating how quickly AI, biotechnology and other frontier technologies could reshape social, political and security risks.

    The 2035 message is that the world faces a compounding risk landscape: climate stress, technological acceleration, demographic change and geostrategic fragmentation are not separate trends, but structural forces that interact.

    4. What role do technology, misinformation and polarization play in the report’s risk narrative?

    Technology is treated as both an accelerator and an amplifier of risk. The report does not argue that technology itself is the root cause of fragmentation, but it shows how digital platforms, generative AI, algorithmic systems and expanding surveillance capabilities can intensify existing social and political divisions.

    The most immediate concern is misinformation and disinformation. The report notes that generative AI makes it easier to produce false or misleading text, images, audio and video at scale. This makes it harder for citizens, companies and governments to distinguish reliable information from manipulated content.

    The report links this directly to societal polarization, which ranks fourth over the two-year horizon. Polarized societies are more vulnerable to manipulated narratives, and manipulated narratives can in turn deepen polarization. The risk is a feedback loop in which trust in media, institutions and public information continues to erode.

    The report also highlights algorithmic bias, along with censorship and surveillance. As public services, media systems and political communication become more data-driven, biased or opaque algorithms can produce unfair outcomes and further reduce trust. Meanwhile, the growing digital footprint of citizens gives governments, companies and threat actors greater capacity to monitor and influence behaviour.

    The report’s underlying assumption is that technological governance is lagging behind technological capability. It calls for stronger accountability, transparency, digital literacy and upskilling for those building and using automated systems.

    5. What does the report imply for global governance and risk preparedness?

    The report’s strongest implication is that fragmented governance is becoming a risk multiplier. Across conflict, trade, technology, pollution, biotech and demographic ageing, the report repeatedly returns to the same problem: many of the most severe risks require collective action, but the international environment is becoming less cooperative.

    For armed conflict, the report argues that weakened faith in multilateral institutions could push governments towards unilateral action and selective alliances. For geo-economic confrontation, it warns that escalating tariffs, sanctions and investment restrictions could fragment global trade and weaken cooperation on climate, health, technology and development. For pollution and biotech, it stresses the need for better regulation, monitoring and global norms.

    The report does not suggest that global treaties alone are sufficient. It also emphasizes regional organizations, multi-stakeholder engagement, domestic resilience, public education, corporate strategies, research and development, and better monitoring systems. But its broader conclusion is that durable risk mitigation depends on rebuilding forms of cooperation that can survive geopolitical rivalry.

    The final message is cautiously normative: the world is moving into a more divided and unstable period, but the report insists there is no viable alternative to dialogue, collaboration and multilateral solutions. Its purpose is therefore not simply to forecast risk, but to push leaders to act before today’s warning signals become irreversible crises.

  • Future of Professionals Report 2024 by Thomson Reuters

    About the paper

    Thomson Reuters’ Future of Professionals Report 2024 examines how AI and GenAI are reshaping professional work across legal, tax, accounting, global trade, risk, fraud, compliance, government, and corporate C-suite roles.

    It is original survey research based on 2,205 responses collected through 15–20 minute surveys, with respondents across the United States, UK, Canada, Mainland Europe, Latin America, Asia-Pacific, Africa, and the Middle East/North Africa.

    The report is heavily AI-focused because respondents identify AI as the dominant force currently driving change in professional services.

    Length: 37 pages

    More information / download:
    https://www.thomsonreuters.com/en-us/posts/technology/future-of-professionals-2024/

    Core Insights

    1. What is the central argument of the report?

    The report argues that AI and GenAI are now the dominant forces reshaping professional work, not as distant possibilities but as practical technologies already influencing strategy, workflows, value creation, pricing models, and career expectations.

    The strongest evidence is that 77% of respondents believe AI will have a high or transformational impact on their work over the next five years, up from 67% in the 2023 report. The report presents this as a shift from speculative concern to more concrete expectation: professionals are no longer merely wondering whether AI matters; they are beginning to understand where and how it will affect their daily work.

    The report’s tone is notably optimistic. Thomson Reuters concludes that AI can make professional work more efficient, productive, and fulfilling. It repeatedly frames AI as a way to release professionals from routine or labour-intensive tasks so they can focus on judgement-based, strategic, client-facing, and higher-value work.

    However, the report does not argue that AI adoption will be automatic or risk-free. Its central argument is conditional: AI can be a force for good, but only if organisations combine adoption with responsible use, human oversight, data security, transparency, training, and new business models.

    2. How are professionals currently using AI, and what does this reveal about adoption maturity?

    Current AI use appears practical but still relatively early-stage. Respondents most commonly use AI-powered technologies for drafting documents, summarising information, conducting basic research, preparing communications, reviewing documents, and generating first drafts.

    The report says that, among respondents who have used AI, 50% describe its output as “a basic starting point” that still leaves them most of the work. Another 28% say it provides “a strong starting point” that mainly requires editing. This suggests that AI is already useful, but professionals still see it primarily as an assistant rather than an autonomous producer of reliable final work.

    The main barriers among non-users are also revealing. Concerns centre on accuracy, data security, ethics, uncertainty about what AI can be used for, and uncertainty about how to access it. The report notes generational differences too: Gen Z professionals have tried AI at higher rates, while baby boomers show lower current usage but surprisingly ambitious expectations for future AI assistance.

    The adoption picture is therefore mixed: AI is already embedded in common professional tasks, but many users still regard it as a productivity aid that requires significant human review. The report’s own interpretation is that trust will depend on transparency, benchmarking, responsible innovation, and better user education.

    3. What productivity and value gains does the report expect from AI?

    The report’s most concrete productivity claim is that AI could free up four hours per professional per week within one year, eight hours within three years, and twelve hours within five years. Based on an assumption of 48 working weeks per year, that would equal roughly 200, 400, and 600 hours respectively.
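
    The annualised numbers are rounded; the exact arithmetic under the report's stated 48-week assumption is simple to reproduce. A minimal sketch (Python):

        # Annualising the projected weekly time savings, assuming the
        # report's 48 working weeks per year.
        working_weeks = 48
        for horizon_years, hours_per_week in [(1, 4), (3, 8), (5, 12)]:
            print(f"{horizon_years}-year horizon: "
                  f"{hours_per_week * working_weeks} hours/year")
        # Exact values: 192, 384 and 576 hours, which the report rounds to
        # roughly 200, 400 and 600.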

    This is one of the report’s most important findings because it connects AI adoption to organisational strategy. Freed-up time is not presented simply as a cost-saving mechanism. Respondents say they would use additional time for work-life balance, client work, long-term projects, business development, process improvement, strategic planning, research, training, and better workload management.

    The report also distinguishes between efficiency and value. More than half of professionals are excited about AI because of time savings and productivity improvements, but 39% are most excited about AI’s ability to add new value to their work. Examples include handling large volumes of data more effectively, improving client response times, reducing human error, enabling advanced analytics, and supporting better decision-making.

    This distinction is crucial. The report does not merely say AI will help professionals do the same work faster. It argues that AI may allow professional services to change what kind of work is done, what quality of service is delivered, and where professionals focus their expertise.

    4. What risks, ethical concerns, and governance needs does the report identify?

    The report identifies several persistent concerns:

    • accuracy of outputs
    • data security
    • ethical use
    • overreliance on AI
    • inadequate human judgement
    • unclear accountability

    These concerns are especially important because the professions covered in the report often involve legal, regulatory, financial, compliance, or high-stakes advisory work.

    Professionals draw a clear ethical boundary around full AI autonomy in high-stakes professional judgement. More than 95% of legal and tax respondents say it would be a step too far for AI to represent clients in court or make final decisions on complex professional matters. Legal professionals are particularly resistant to AI providing legal advice, while respondents in tax, risk, fraud, and compliance appear somewhat less opposed to AI involvement in strategic advice.

    The report finds no single consensus on responsible AI use, but several principles recur. Almost two-thirds of respondents see data security as vital, both for prompts and outputs. A similar share see compulsory human review as critical. Other important elements include transparency about data sources, clarity on which tasks AI may be used for, bias mitigation, deletion of personal data, and standards for training data.

    On enforcement, respondents favour certification processes for AI systems and standards developed by professional or industry bodies. Government regulation, company guidelines, whistleblowing, and algorithm audits are also mentioned, but the report presents certification and professional standards as the leading options.

    5. What are the broader implications for professional careers, leadership, and business models?

    The report’s broader implication is that AI will shift the nature of professional work rather than simply eliminate it. Fear of widespread job loss appears less prominent than in the previous year’s report. Instead, 85% of respondents believe new or additional roles will be created to manage broader AI use.

    The human skills expected to become more important include problem-solving, creativity, judgement, strategic thinking, and the ability to manage AI responsibly. The report therefore frames the future professional not as someone replaced by AI, but as someone who must become better at using AI while preserving human expertise.

    For leaders, the report implies that AI adoption is not just an IT project. It affects talent strategy, operating models, pricing, client value, workflow design, risk management, and organisational culture. Leaders are advised to assess skills, invest in training, create responsible AI principles, run pilot projects, scale successful use cases, and explore how AI can open new sources of stakeholder value.

    The pricing implication is especially significant for professional services firms. Many respondents expect hourly-rate pricing to decline over the next five years. As AI makes routine work faster, firms will need to explain why clients should still pay premium fees for work completed more efficiently. The report argues that firms must move towards value-based pricing and become better at articulating the value AI adds beyond speed.

    The conclusion is optimistic but demanding: AI can make professional careers more fulfilling and organisations more competitive, but only for those that actively embrace the technology, redesign work around it, and take responsibility for its limits.

  • Future of Professionals Report 2023 by Thomson Reuters

    About the paper

    Thomson Reuters’ Future of Professionals Report 2023 examines how AI, especially generative AI, is expected to transform professional work across legal, tax and accounting, risk, compliance, corporate and government settings.

    It is original survey research based on a web survey conducted in May–June 2023 among more than 1,200 professionals, with about half based in the US and most of the rest in the UK, Canada and Latin America.

    The report combines survey findings with Thomson Reuters’ own interpretive commentary, so it should be read as a research-based thought leadership report rather than a neutral academic study.

    Length: 36 pages

    More information / download:
    https://www.thomsonreuters.com/en-us/posts/technology/future-of-professionals-2023/

    Core Insights

    1. What is the central argument of the report?

    The report’s central argument is that AI will not merely make professional work faster; it will reshape the value proposition of professional services. Thomson Reuters presents AI as a catalyst for transformation across three linked dimensions: productivity, professional value, and responsible adoption.

    The productivity argument is the most immediate. Professionals expect AI to help with operational efficiency, research, document review, drafting, administrative work, risk identification, regulatory monitoring and client communication. The report repeatedly frames AI as a way to remove repetitive or low-value work so that professionals can spend more time on higher-value advisory tasks.

    The deeper argument is about the future role of professionals. The report suggests that “Professional 2.0” will be less defined by routine technical execution and more by judgement, strategic advice, client service, specialisation, and the ability to use AI effectively. It argues that AI will shift professionals from doing more work manually to orchestrating, checking, interpreting and adding value to AI-enabled work.

    The report is optimistic, but not naïvely so. It recognises fears around accuracy, job loss, ethics, data security, regulation, work-life balance and professional identity. However, Thomson Reuters’ overall position is clear: AI will not replace highly trained professionals wholesale, but professionals who use AI will outcompete those who do not.

    2. How do professionals expect AI to affect productivity, client service and business performance?

    Professionals in the report are broadly positive about AI’s operational potential. A key headline finding is that 67% expect AI or generative AI to have a transformational or high impact on their profession over the next five years. That makes AI the most significant trend tested in the study, ahead of economic recession and the cost-of-living crisis.

    The report identifies several productivity gains. In law firms, AI is expected to help with large-scale data analysis, non-billable administrative work, time recording, research and document-related tasks. In tax and accounting, respondents see potential in analysing deductions, income streams, tax scenarios and future tax results. In corporate and government departments, AI is expected to streamline internal processes, reduce external spend, improve research and speed up document review.

    Client service is another major theme. Respondents expect AI to improve the speed, clarity and consistency of communication. The report mentions AI helping draft and edit client communications, translate complex ideas into plain language, identify client needs arising from regulatory change, and support faster internal advice. For in-house teams, the report suggests that AI may strengthen their role as business partners by helping them provide more consultative, growth-oriented advice.

    However, the financial consequences are less clear. Firms may become more profitable if AI reduces costs and frees professionals for higher-value work. At the same time, clients may use AI as a reason to push fees down, move more work in-house, or turn to alternative legal service providers. The report does not claim certainty here; it explicitly notes that the “financial victor” remains uncertain.

    3. What evidence does the report provide that AI will change professional roles, skills and career paths?

    The report argues that AI will fundamentally alter who does professional work, what skills are valued, and how people enter and progress within the professions.

    One of the strongest findings is that 64% of professionals believe AI will make their professional skills more highly valued, while 33% fear that AI could contribute to the demise of their profession or reduce demand for their skills. This tension runs throughout the report: professionals see opportunity, but also existential risk.

    The report expects new career paths to emerge. It suggests that some work currently performed by credentialed professionals may shift to paralegals, junior professionals, enrolled agents, legal tech consultants, operations specialists or other non-traditional roles. It also anticipates more hybrid roles combining professional expertise with technology, data science, IT, security, regulatory and AI skills.

    Training is presented as one of the clearest areas of change. Almost 90% of respondents expect basic mandatory AI training for all professionals within five years, and 87% expect everyone to need training in new skills. The report also predicts changes in how junior professionals are trained and in the nature of university or college education.

    A particularly important nuance is that AI may reduce traditional entry-level work. More than half of respondents expect a decline in entry-level roles over the next five years, yet a majority also expect the total number of professionals in their firm or department to increase. In other words, the report does not predict simple job destruction. It predicts a reshaping of the professional labour market: fewer traditional junior tasks, more specialised or AI-enabled roles, and greater need for adaptability.

    4. What are the main concerns, risks and barriers identified in the report?

    The report identifies several overlapping concerns.

    The biggest fear is accuracy. A quarter of respondents cite compromised accuracy as their greatest concern. This is especially important because professionals work in fields where errors can have legal, financial, ethical or regulatory consequences. The report stresses that AI outputs must be checked by humans rather than accepted at face value.

    Job loss and professional displacement are also major concerns. Nineteen per cent cite widespread job loss as their biggest fear, while 17% cite the demise of the profession. Some respondents fear that AI may “dumb down” professional judgement if people rely on machine-generated answers without understanding the underlying reasoning.

    Ethics and data security are also prominent. Fifteen per cent cite data security as their biggest fear, and another 15% cite loss of ethics. The report connects these concerns to the need for transparency, explainability, trustworthy sources, professional standards and regulation.

    The biggest barrier to change is cultural rather than technical. The report says 83% of professionals cite risk aversion or fear of change as a top-three barrier within the professions. Lack of technology skills, lack of investment, partnership models, and lack of diversity of thought are also identified as obstacles.

    Finally, the report is ambivalent on wellbeing. AI could reduce long hours, lower the risk of errors, and remove mundane work. But some respondents fear it could increase pressure, reduce human connection, worsen engagement, or create anxiety about disposability. The report therefore treats wellbeing as both a potential benefit and a risk depending on how AI is implemented.

    5. What is Thomson Reuters’ perspective, and what are the implications of the report?

    Thomson Reuters’ perspective is strongly pro-adoption, but framed around responsible implementation. The company argues that AI should be embraced decisively, but with guardrails around trust, ethics, transparency, accuracy, regulation and human oversight.

    Its assumptions are visible throughout the report. Thomson Reuters assumes that AI adoption is inevitable, that productivity gains will be substantial, and that the professions will be reshaped rather than destroyed. It also assumes that the highest-value professional work will remain human-centred: advice, judgement, client relationships, ethics, interpretation and strategic thinking.

    The report’s implications are significant. For firms, it suggests a need to rethink pricing, services, staffing models, training and competitive advantage. For in-house departments, it suggests an opportunity to move from cost centres to growth enablers, particularly if AI helps them deliver more consultative advice and bring more work in-house. For individual professionals, the implication is that passive adaptation will not be enough. They will need to develop AI literacy, deepen expertise, understand their own value proposition, and learn how to work with AI rather than around it.

    The broader conclusion is that trust will be the decisive condition for AI adoption in professional work. Without confidence in accuracy, data security, ethics and explainability, the promised productivity gains may not materialise. With the right governance, however, the report argues that AI can improve productivity, increase professional value, create new roles, support better client service and potentially improve wellbeing.