About the paper
The whitepaper argues that AI will fundamentally reshape corporate communications over the next decade, especially through synthetic stakeholders, predictive systems, mass personalisation, and a renewed premium on human authenticity.
Methodologically, it appears to be an expert whitepaper built on secondary analysis and illustrative examples rather than original empirical research; no fieldwork method, sample size, timeframe for data collection, or respondent base is clearly specified.
The geographic focus is framed as global, but the cited evidence mixes U.S.-centric examples with selected international references, so the geographic boundaries of the underlying material remain unclear.
Length: 14 pages
More information / download:
https://page.org/knowledge-base/communicating-with-robots-connecting-to-people-nanne-bos-aegon/
Core Insights
1. What is the whitepaper’s central argument about how AI will change corporate communications?
The central argument is that AI will not merely improve existing communication workflows but will transform the entire logic of corporate communications. The paper says the profession has already moved beyond asking whether AI will change communications and must instead focus on how that change will unfold. It frames AI as both a technological and geopolitical force, and suggests that the speed, scale, and sophistication of communication will be radically altered by systems such as GPT-4, Claude, and Grok.
At the same time, the paper insists that the core purpose of corporate communications will remain stable. It explicitly says that communicators will still need to build trust, deepen understanding with stakeholders, maintain licence to operate, and influence behaviour. In other words, the mission stays the same, but the mechanisms and operating environment change dramatically.
The whitepaper therefore presents AI not as a side issue or a tool trend, but as a structural shift that changes who receives messages, how they are interpreted, how fast they circulate, and what communicators are actually responsible for. Its five predictions are meant as a framework for navigating that transition while staying anchored in enduring communication goals.
2. What does the paper mean by “synthetic stakeholders”, and why does it see them as so significant?
The idea of “synthetic stakeholders” is one of the paper’s most important claims. It argues that, by the mid-2030s, people will increasingly rely on personal AI agents as their main interface with organisations. These agents will not simply summarise information; they will filter, verify, interpret, negotiate, and sometimes even decide on behalf of the human stakeholder. That means journalists, investors, employees, regulators, and customers may no longer engage with corporate communication directly, but through AI intermediaries.
The report sees this as significant because it changes the locus of trust. Instead of stakeholders primarily trusting brands, institutions, or traditional media, they may trust their own AI agents more. That has major implications for reputation and influence. If AI agents become the gatekeepers of credibility, then communicators must create messages that are readable, interpretable, and verifiable by machines as well as by humans.
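The whitepaper does not say what "readable, interpretable, and verifiable by machines" would look like in practice. One plausible sketch is publishing corporate statements with structured metadata that an AI agent can parse and cross-check. The schema.org vocabulary, the company name, and the field choices below are illustrative assumptions, not anything the paper prescribes:

```python
import json

# Hypothetical example: a corporate statement annotated in schema.org-style
# JSON-LD so an AI agent can parse who said what, when, and on what basis.
# All names and URLs here are invented for illustration.
statement = {
    "@context": "https://schema.org",
    "@type": "NewsArticle",
    "headline": "Acme Corp announces 2035 sustainability targets",
    "datePublished": "2035-03-01",
    "author": {"@type": "Organization", "name": "Acme Corp"},
    "publisher": {"@type": "Organization", "name": "Acme Corp"},
    # A pointer to underlying evidence that an agent could verify independently.
    "isBasedOn": "https://example.com/sustainability-report-2035",
}

def to_jsonld(doc: dict) -> str:
    """Serialise the annotated statement for embedding in a web page."""
    return json.dumps(doc, indent=2)

print(to_jsonld(statement))
```

An AI agent ingesting such a page could check the `author`, `datePublished`, and `isBasedOn` fields against other sources before relaying the claim, which is the kind of machine verifiability the paper gestures at.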
The paper also links synthetic stakeholders to the “death of the single narrative”. It argues that AI agents will personalise information so extensively that there will no longer be one broadly shared version of a corporate story. Different stakeholders may receive materially different framings based on personal history, biases, preferences, and risk profiles. The implication is that communication becomes less about controlling one public narrative and more about maintaining coherence and credibility across many parallel, AI-mediated realities.
3. How does the whitepaper describe the shift from mass communication to mass conversations and from reactive to predictive communications?
The paper’s second and third predictions work together. First, it says the old broadcast model of one-to-many messaging is ending. In its place, organisations will engage in “mass conversations”: millions of simultaneous, AI-mediated, highly personalised interactions with stakeholders. These conversations will supposedly be context-aware, emotionally intelligent, and adapted to each person’s needs, role, behaviour, and preferences. Investors, employees, and customers would all receive tailored communication rather than standardised messages.
This matters because the communicator’s role changes from crafting a single message to designing adaptive systems. The report suggests future communicators will define narrative models, supervise tone calibration, and audit huge numbers of micro-interactions. So communication becomes less a matter of publishing and more a matter of orchestrating intelligent, ongoing relationship management at scale.
The move from reactive to predictive communications goes even further. The paper argues that AI will enable organisations to anticipate reputational risks, stakeholder disengagement, morale problems, or regulatory friction before they are visible through conventional means. It imagines AI systems that simulate possible stakeholder reactions, run communication scenarios in parallel, forecast trust impact, and identify likely virality or misinformation risks. In this model, communications becomes a foresight function rather than just a response function.
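The whitepaper stays at the conceptual level, but the scenario-simulation idea can be shown with a deliberately simplified toy model. Everything below (the personas, the message features, the scoring rule) is invented to illustrate the general pattern, not anything the paper specifies:

```python
# Toy illustration of "running communication scenarios in parallel":
# score each draft message against simulated stakeholder personas and
# pick the draft with the best worst-case reception. All personas,
# features, and weights are invented for illustration.

PERSONAS = {
    "investor":  {"clarity": 0.5, "financial_detail": 0.4, "empathy": 0.1},
    "employee":  {"clarity": 0.3, "financial_detail": 0.1, "empathy": 0.6},
    "regulator": {"clarity": 0.6, "financial_detail": 0.3, "empathy": 0.1},
}

DRAFTS = {
    "draft_a": {"clarity": 0.9, "financial_detail": 0.8, "empathy": 0.2},
    "draft_b": {"clarity": 0.7, "financial_detail": 0.5, "empathy": 0.7},
}

def reception(persona: dict, draft: dict) -> float:
    """Predicted reception: weighted sum of the draft's features."""
    return sum(weight * draft[feature] for feature, weight in persona.items())

def best_worst_case(drafts: dict, personas: dict) -> str:
    """Pick the draft whose weakest persona reception is highest."""
    return max(
        drafts,
        key=lambda d: min(reception(p, drafts[d]) for p in personas.values()),
    )

print(best_worst_case(DRAFTS, PERSONAS))  # prints "draft_b"
```

A real system would replace the hand-set weights with learned models and far more personas, but the structural point matches the paper's claim: candidate communications are tested against simulated stakeholder reactions before anything is published.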
Taken together, these two predictions describe a future in which corporate communication is continuous, personalised, and anticipatory. The communicator is recast as a strategist and overseer of adaptive systems, rather than primarily as a writer, spokesperson, or campaign manager.
4. Why does the paper argue that human authenticity will become more valuable as AI becomes more powerful?
The paper’s fourth prediction is that AI abundance will increase, not reduce, the value of human authenticity. Its reasoning is straightforward: as generative AI floods the information environment with cheap, low-value content, audiences will grow more resistant to anything that feels synthetic or manipulative. The report refers to this as a coming “synthetic content crisis” and says AI filters will increasingly screen out content that lacks originality, emotional value, or resonance.
In response, the communication that cuts through will be the communication that feels genuinely human. The paper highlights three qualities that will matter most: emotional depth, ethical clarity, and human voice. It argues that trust will increasingly attach to visible human sincerity rather than to polished, scaled, anonymous messaging. This is why it says communicators will become “custodians of authenticity” in a synthetic age.
Importantly, the whitepaper does not frame this as an anti-AI argument. Instead, it advances the idea of “co-intelligence”: AI handles scale, complexity, translation, and real-time personalisation, while humans contribute judgment, moral reasoning, emotional nuance, vulnerability, and authenticity. So the paper’s conclusion is not that AI replaces human communication, but that the distinctively human elements become more strategically valuable as machine-generated communication proliferates.
5. What are the report’s main implications for the future role, structure, and ethics of the communications function?
The paper argues that the communications function will become AI-native. That means the department of the future will look very different from the traditional press-office or corporate affairs structure. Routine content production will increasingly be automated, while human professionals move into roles focused on oversight, orchestration, ethics, and strategic narrative design.
It predicts new specialist roles such as:
- AI Narrative Designers
- Predictive Strategists
- Communication Ethicists
- Chief Narrative Intelligence Officers
It also says existing silos between internal communication, PR, investor relations, and brand strategy will weaken, giving way to a more unified narrative function powered by integrated AI systems. Teams may become smaller in headcount but more specialised in capabilities, drawing on disciplines such as linguistics, data science, behavioural psychology, and ethics.
Ethics is a major theme here. The report repeatedly warns that predictive communications and AI-generated influence raise serious questions about manipulation, informed consent, transparency, and accountability. It therefore argues that strong governance frameworks will be essential, including explainable AI, clarity about data sources and model logic, and disclosure around synthetic humans or AI-generated messages. High-stakes moments such as layoffs, mergers, or crises are still presented as fundamentally human occasions in which AI should assist rather than replace human communicators.
The deeper implication is that the future communicator is not just a better prompt writer or AI user. The role becomes more strategic, more cross-functional, and more ethically exposed. The paper’s perspective is that AI raises the bar for human communicators: they will need more judgment, more empathy, and more moral authority, not less.