About the paper
The report is an original global survey of how PR and communication professionals are adopting, governing, and thinking about AI in their work.
It is based on 473 responses collected between 18 February and 17 April 2025 using a mix of multiple-choice and open-ended survey questions. The geographic scope is global, with respondents from Africa, EMENA, North America, Australia/New Zealand, Asia-Pacific, and South and Central America, although the report notes self-selection bias, regional variation, and an overrepresentation of smaller organisations.
Length: 21 pages
More information / download:
https://globalalliancepr.org/reimagining-tomorrow-ai-in-pr-and-communication-management/
Core Insights
1) What is the central argument of the report about AI in PR and communication management?
The core argument is that AI adoption is already widespread in the profession, but governance, ethics, training, and strategic leadership have not kept pace. The report presents PR and communication as a profession at an inflection point: practitioners are using AI enthusiastically, yet the structures needed to guide that use responsibly are still underdeveloped. On page 3, the report’s summary makes this tension very clear: 91% are allowed to use AI, but only 39.4% of organisations have responsible AI frameworks, and 38.3% have no constraints at all. It also argues that the profession’s most valuable future contribution is not merely technical implementation, but shaping ethical frameworks, governance structures, and stakeholder communication about AI.
The interpretive section on page 4 pushes that point further. It says the real opportunity for PR and communication teams is to elevate their strategic role by helping organisations develop and implement responsible AI, rather than remaining focused on tactical use. In other words, the report is not anti-AI; it is pro-adoption but strongly argues that adoption without guardrails is risky and professionally shortsighted.
2) What does the survey reveal about how widely AI is being used, and how organisations are managing that use?
The survey shows that AI use is already mainstream in the field. According to the report, 91% of respondents say AI is allowed in their organisations, and among the 9% who say it is not allowed, 52.8% admit they use it anyway as “shadow AI.” That alone suggests that AI adoption is not waiting for formal permission structures. Access is also relatively broad: 65.2% say all team members in PR and communication have access, while 24.3% say access is restricted to select individuals and 10.5% say it is limited to leaders only.
At the same time, management of AI is patchy. The report says 38.3% of organisations have no constraints or restrictions in place, 37.5% rely on approved company-wide tools, 35.8% allow staff to explore freely, and only 18.3% have formal processes for AI tool recommendation and approval. Organisational support is also middling: the average support rating for implementation is just 2.78 out of 5, which the report explicitly describes as moderate but insufficient. That is an important finding because it shows that permission to use AI is much more common than meaningful support for using it well.
So the picture is not one of controlled institutional rollout. It is closer to democratised but uneven adoption: people have access, many are experimenting, but the quality of support, governance, and process is inconsistent.
3) Where is the biggest gap between current AI practice and what PR professionals believe their role should be?
The biggest gap lies between tactical activity and strategic responsibility. The report says PR and communication professionals see governance and ethics as their top priorities: 33.3% rank formal AI governance structures as the number one priority, and 27.3% rank training for ethical, safe, and transparent AI use as the top priority. Yet their actual involvement patterns do not fully reflect those priorities. The report’s alignment analysis shows that teams are often more involved in technical implementation than in the activities they themselves consider most strategically important.
This mismatch is described in detail on page 9. The report highlights the largest misalignments in lower-value technical areas where involvement far exceeds perceived strategic value: AI certification programmes, with a gap of 80.6%, and advising on complex prompts and use cases, with a gap of 60.4%. By contrast, the smallest gaps are in ethical AI use and formal AI governance, which suggests these are the areas closest to the profession's intended role. The report interprets this as a resource allocation problem: teams are spending time where they are currently needed, but not necessarily where they believe they create the most value.
This matters because it reframes the profession’s future. The report is effectively saying that PR should not define its AI contribution by being good at prompts or faster outputs. It should define it by governance, ethics, risk, literacy, stakeholder engagement, and strategic counsel.
4) What specific weaknesses does the report identify in governance, confidence, and stakeholder communication?
The report identifies three especially important weaknesses.
First, governance remains thin.
Only 39.4% of organisations have a responsible AI guideline, policy, or framework. Such frameworks are far from universal, and even where they exist, coverage is uneven. Among organisations that do have guidelines, the most common elements are ethics/law, governance/standards, security/privacy, and risk/reputation. The report also notes that because of skip logic, those framework-content percentages apply only to the subset of respondents whose organisations already have guidelines, not to the entire sample.
Second, ethical confidence is limited.
Only 26.2% say they feel very confident evaluating the ethical implications of AI in their roles; 60.5% are only somewhat confident, and 13.3% are not confident. The report treats this as a major training opportunity, not a marginal issue. That reading is reinforced by respondents’ own definitions of responsible AI, which emphasise ethics, beneficial use, human oversight, verification, and transparency.
Third, stakeholder communication is surprisingly weak.
Given that communication is the profession's core function, the report finds it striking that fewer than half of respondents communicate about responsible AI approaches to stakeholders: 46.9% communicate about AI ethics, and only 35.6% about AI governance structures. The most common topic communicated is simply how to use AI tools, at 53.6%, which the report interprets as evidence of a tactical rather than strategic focus. This is one of the report's sharpest critiques: PR professionals are not yet communicating about AI in the way their own strategic position would suggest they should.
5) What broader implications does the report draw for the future of the profession?
The report suggests that AI will reshape the profession significantly over the next five years, pushing it away from routine production work and towards more strategic, advisory, and governance-oriented roles. Respondents predict shifts “from content creator to content facilitator,” more focus on strategy, more automation of routine tasks, possible workforce reduction, increasing regulatory complexity, and a risk of depersonalisation or diminished creativity.
The concern side is equally strong. On page 13, respondents describe the main threats as job displacement, reduced creativity, authenticity problems, misinformation, and loss of human interaction. That combination shows the profession is not simply optimistic about efficiency gains; it is also worried about relevance, trust, and the erosion of distinctly human value.
The report’s conclusion is that the profession can either be diminished by AI or elevated by it. Elevation depends on moving beyond content creation, building stronger governance and ethical frameworks, investing in training, communicating more actively with stakeholders, and positioning PR as a strategic advisor on responsible AI implementation across the organisation. Its recommendations to professionals and organisations alike all point in that direction. So the deeper implication is not just that AI will change communications work, but that it may redefine what counts as valuable communications leadership in the first place.