Tag: Wharton

  • The Wharton Blueprint for A.I. Agent Adoption by Wharton

    About the paper

    The report is a mixed-methods synthesis rather than a single original study: it combines recent academic research on human-AI interaction, practitioner perspectives from executives at firms deploying AI agents, and recommendations from Wharton faculty.

    It does not present one unified sample or fieldwork design; instead, it draws on multiple experiments, surveys, working papers, and case-informed expert contributions, with sample sizes and geographies varying by cited study and not clearly specified at the report level.

    Length: 47 pages

    More information / download:
    https://knowledge.wharton.upenn.edu/special-report/wharton-blueprint-ai-agent-adoption/

    Core Insights

    1. What is the central argument of the report?

    The report’s core argument is that AI agent adoption is no longer constrained mainly by technology but by psychology. The authors argue that many people are still unwilling to let AI agents perform meaningful tasks on their behalf: not because the systems lack useful capabilities, but because users hesitate to believe the agent is competent, to trust it, and to hand over control. The report therefore frames adoption as a behavioural and design challenge rather than a purely technical one.

    It organises this adoption problem into three “psychological frictions”:

    • perceived competence
    • trust
    • and delegation of control.

    These are presented as the three core barriers that developers and organisations must overcome if they want broader acceptance of AI agents. In other words, the report is not asking only whether agents work; it is asking what makes people willing to let them work on their behalf in real settings.

    The report is also quite explicit that this is a practical blueprint. It is designed for people building or deploying AI agents and aims to translate behavioural science into design recommendations. That makes its purpose strongly applied: it is less about theorising AI adoption in the abstract and more about showing how organisations can reduce resistance and increase real-world uptake.

    2. How does the report explain the first friction, perceived competence, and what makes users believe an AI agent can do the job?

    The first friction is perceived competence, which the report defines as the user’s subjective belief in the agent’s ability to perform desired actions. The key point here is that perceived competence is not the same as technical capability. An agent may be highly capable in a technical sense, but if users do not experience it as capable, they will still hesitate to adopt it.

    The report argues that users prefer agents that appear competent rather than warm. Across cited experiments, people were less willing to use AI that sounded cheerful or friendly than AI that signalled expertise, consistency, and reasoning. Competence cues included explaining criteria, showing reasoning, and making recommendations in a way that felt rigorous rather than overly personable. In practice, the report suggests that in serious domains such as health, finance, law, and professional work, a warm personality can actually undermine adoption if it makes the system seem less capable.

    Another major idea is that people judge whether the agent adds value. In the report’s discussion of AI-enabled travel agents, four factors shaped that perception:

    • convenience
    • personalisation
    • ubiquity
    • and superior functionality.

    At the same time, benefits alone are not enough. Privacy concerns, technology anxiety, and the desire for human interaction can still depress adoption even when users recognise obvious advantages. That means perceived competence depends both on positive cues about usefulness and on reducing perceived reasons not to engage.

    The report also places strong weight on explanations. In high-stakes contexts, users felt AI was more reliable and safer when it explained its process in detail, including the steps taken, data considered, or method used. Explanations therefore function not only as transparency, but as a signal of seriousness and quality. Finally, the report argues that agents can borrow competence from humans: when AI is presented as supporting a credible human expert rather than acting as an equal or rival, resistance falls and competence rises. This is one of the clearest examples of the report’s broader logic that adoption depends heavily on perception and framing.
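
    To make the explanation point concrete, here is a minimal sketch, not from the report, of how an agent’s output could carry the competence cues described above: the steps taken, the data considered, and the method used. All names, such as ExplainedRecommendation, are illustrative.

    ```python
    from dataclasses import dataclass

    @dataclass
    class ExplainedRecommendation:
        """An agent answer bundled with the process cues the report highlights."""
        answer: str
        method: str                  # how the conclusion was reached
        steps_taken: list[str]       # what the agent actually did, in order
        data_considered: list[str]   # inputs and sources it relied on

        def render(self) -> str:
            """Present the recommendation with its reasoning trace attached."""
            lines = [f"Recommendation: {self.answer}", f"Method: {self.method}", "Steps:"]
            lines += [f"  {i}. {step}" for i, step in enumerate(self.steps_taken, 1)]
            lines.append("Data considered: " + ", ".join(self.data_considered))
            return "\n".join(lines)

    rec = ExplainedRecommendation(
        answer="Refinance at the five-year fixed rate",
        method="Compared total cost over the remaining term under three rate scenarios",
        steps_taken=[
            "Pulled the current balance and remaining term",
            "Computed total interest under variable vs. fixed rates",
            "Checked the break-even point against expected fees",
        ],
        data_considered=["loan statement", "current lender rate sheet"],
    )
    print(rec.render())
    ```

    The domain here is arbitrary; the design point is simply that the reasoning trace travels with the answer rather than being hidden behind it.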

    3. What does the report say about trust, and which factors most strongly increase or weaken trust in AI agents?

    The second friction is trust, defined as the user’s willingness to rely on the AI agent despite uncertainty. The report presents trust as central because AI agents are not just offering information; they are potentially taking action. That raises the stakes, since users must feel confident not only in what the system says but in what it might do.

    One of the strongest findings is that trust improves when users understand the agent’s limitations. The report cites experiments showing that people trusted AI more, and worked more effectively with it, when they were explicitly told where it was likely to fail. Rather than undermining confidence, acknowledging weaknesses helped users feel that they understood the system’s boundaries and therefore knew when to rely on it and when to be cautious. This is an important insight because it runs against the instinct to present AI as broadly capable and seamless.

    The report also argues that proof of successful outcomes often matters more than technical explanations. Users were more persuaded by evidence that the agent had successfully performed similar tasks before than by detailed accounts of how the system worked internally. The implication is that many users do not primarily want interpretability in a technical sense; they want reassurance that the system delivers results. Similarly, trust rises when agents reduce uncertainty before, during, and after use by making goals explicit, showing steps as they happen, and demonstrating how feedback improves future actions.
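
    As an illustration of what reducing uncertainty before, during, and after use might look like in practice, the sketch below bundles an explicit goal, up-front limitation disclosure (echoing the finding above about acknowledging weaknesses), live step updates, and a closing summary around a single agent task. It is an assumption-laden example, not the report’s implementation; the class and field names are invented.

    ```python
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class TransparentTask:
        """Communicates goal, limits, progress, and outcome around one agent action."""
        goal: str
        known_limits: list[str]              # disclosed up front, per the trust finding
        notify: Callable[[str], None] = print

        def run(self, steps: list[tuple[str, Callable[[], str]]]) -> list[str]:
            # Before use: make the goal and the system's boundaries explicit.
            self.notify(f"Goal: {self.goal}")
            for limit in self.known_limits:
                self.notify(f"Known limitation: {limit}")
            # During use: show steps as they happen.
            results = []
            for name, action in steps:
                self.notify(f"Working on: {name}")
                results.append(action())
            # After use: summarise, and invite the feedback that shapes future runs.
            self.notify(f"Done: {len(steps)} steps completed. Please review the result.")
            return results

    task = TransparentTask(
        goal="Draft a reply to the vendor's pricing email",
        known_limits=["Cannot open attachments", "May miss context from older threads"],
    )
    task.run([
        ("Summarise the vendor's request", lambda: "summary"),
        ("Draft a reply for your approval", lambda: "draft"),
    ])
    ```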

    Other trust-building mechanisms in the report include making the agent seem as though it understands the user’s goals, labelling it as “learning” or “improving,” using precise rather than rounded numbers, and tailoring outputs to specific user criteria rather than generic averages. The report repeatedly warns against the opposite pattern: trust falls when AI feels generic, when it seems to optimise for “most users,” or when it makes the process so effortless that users feel detached from the outcome. That last point is especially interesting, because the report suggests that convenience alone does not guarantee trust; too much automation can reduce psychological ownership and make people less willing to accept the result.

    4. How does the report understand delegation of control, and what level of autonomy does it recommend?

    The third friction is delegation of control, which the report defines as the user’s willingness to grant the AI the autonomy required to act on their behalf. This is where adoption becomes most sensitive, because an AI agent can move from being a helpful assistant to something that feels intrusive or disempowering.

    The report’s clearest conclusion is that people prefer a moderate level of autonomy. Too little autonomy makes the agent feel burdensome and not worth using, because the human has to micromanage it. Too much autonomy makes people feel that their freedom and control are being taken away. The recommended design pattern is therefore “human in the loop”: the agent should do meaningful preparatory or analytical work, but final decisions or key approvals should remain visible and accessible to the user.
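
    A minimal sketch of that human-in-the-loop pattern follows: the agent does the preparatory work, but the consequential step waits for explicit approval, with an edit path so the user remains the final decision-maker. The function and its parameters are illustrative, not drawn from the report.

    ```python
    def run_with_approval(prepare, execute, ask_user=input):
        """Moderate-autonomy pattern: the agent does the preparatory work,
        but the consequential step waits for explicit human sign-off."""
        proposal = prepare()                                   # agent's analysis
        print(f"Proposed action: {proposal}")
        decision = ask_user("Approve, edit, or reject? [a/e/r] ").strip().lower()
        if decision == "a":
            return execute(proposal)                           # act only after approval
        if decision == "e":
            return execute(ask_user("Enter the revised action: "))
        print("Cancelled; nothing was executed.")
        return None

    # Example run with an auto-approving stub standing in for real user input:
    result = run_with_approval(
        prepare=lambda: "Book the 09:40 flight, refundable fare",
        execute=lambda action: f"Executed: {action}",
        ask_user=lambda prompt: "a",
    )
    print(result)
    ```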

    Control is not just about actual permissions; it is also about felt control. The report highlights research showing that concerns about control account for a substantial share of people’s decisions about whether to adopt AI. It therefore recommends making edit, pause, stop, reverse, and review options highly visible. Users need to know not only that controls exist, but where they sit and how easily they can be used. This is consistent with the report’s wider emphasis on transparency, checkpoints, and reversibility.
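
    The sketch below illustrates one way such controls could be surfaced, assuming a simple session object that logs every action so it can be paused, reviewed, or reversed. The design is a hypothetical reading of the report’s recommendation, not its specification.

    ```python
    class ReversibleAgentSession:
        """Keeps the report's control options explicit: every action is logged
        so the user can pause, review, or reverse what the agent has done."""

        def __init__(self):
            self.log = []            # (description, undo) pairs, newest last
            self.paused = False

        def act(self, description, do, undo):
            if self.paused:
                raise RuntimeError("Session is paused; resume before acting.")
            do()
            self.log.append((description, undo))

        def pause(self):
            self.paused = True

        def resume(self):
            self.paused = False

        def review(self):
            """Show exactly what the agent has done so far."""
            return [description for description, _ in self.log]

        def reverse_last(self):
            """Undo the most recent action, keeping final control with the user."""
            if not self.log:
                return "Nothing to reverse."
            description, undo = self.log.pop()
            undo()
            return f"Reversed: {description}"

    session = ReversibleAgentSession()
    session.act("Drafted email to vendor", do=lambda: None, undo=lambda: None)
    print(session.review())         # inspect before anything irreversible happens
    print(session.reverse_last())   # roll back with one visible call
    ```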

    The report adds two further nuances. First, adoption increases when users feel the agent is “theirs,” for example through naming, setup choices, or preferences, because ownership increases willingness to invest attention and effort. Second, the report notes that people rely more on AI under pressure, because repeated evaluation is cognitively costly. Yet it does not recommend manufacturing urgency; instead, it suggests using real workflow pressures to make AI the easier default. The overall message is that delegation works best when autonomy is earned gradually, bounded clearly, and embedded in a structure where the human still feels like the principal actor.

    5. What are the report’s wider implications for organisations, and what limitations does it acknowledge?

    For organisations, the report implies that successful AI agent deployment depends as much on behavioural design, workflow design, and communication as on model performance. The most important practical lesson is that adoption is unlikely to follow automatically from capability. Even if agents can perform useful tasks, users may resist them unless systems are positioned as competent, transparent, bounded, and supportive rather than all-powerful or frictionless. In organisational terms, this means deployment strategies must address psychology deliberately.

    The report’s recommendations point towards a fairly specific organisational philosophy. Agents should be framed as assistants, not authorities; they should explain their reasoning in business terms; they should show users what they are doing before, during, and after action; and they should operate within structures resembling job descriptions, permissions, escalation paths, and measurable outcomes. This effectively treats AI agents less like magical automation and more like junior or specialist collaborators who require oversight, governance, and staged trust-building.
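
    To show what “structures resembling job descriptions, permissions, escalation paths, and measurable outcomes” might translate into, here is a hypothetical configuration sketch; every name and field is an invented illustration of that governance idea rather than anything the report prescribes.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class AgentCharter:
        """A 'job description' for an agent: scoped duties, explicit permissions,
        an escalation path, and measurable outcomes."""
        role: str
        allowed_actions: set[str]          # what it may do unsupervised
        requires_approval: set[str]        # checkpoints kept with a human
        escalate_to: str                   # named human owner
        success_metrics: list[str] = field(default_factory=list)

        def can_do(self, action: str) -> tuple[bool, str]:
            if action in self.requires_approval:
                return False, f"Needs sign-off; escalate to {self.escalate_to}."
            if action in self.allowed_actions:
                return True, "Within the agent's remit."
            return False, f"Out of scope; escalate to {self.escalate_to}."

    charter = AgentCharter(
        role="Accounts-payable assistant",
        allowed_actions={"match_invoice", "flag_discrepancy"},
        requires_approval={"issue_payment"},
        escalate_to="ap-team-lead",
        success_metrics=["invoices matched per day", "error rate"],
    )
    print(charter.can_do("issue_payment"))
    ```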

    At the same time, the report is careful to acknowledge limitations. It explicitly states that its guidance is based on the best current scientific evidence in a rapidly evolving field, and that many insights are extrapolated from adjacent AI and organisational behaviour research because AI agents themselves are still relatively novel. The report also notes limitations study by study: many cited findings come from experiments, hypothetical scenarios, specific sectors such as travel, healthcare, e-commerce, or financial advice, or controlled lab environments rather than long-term real-world deployments. That means the blueprint is evidence-informed, but not final or universally settled.

    So the deeper implication is twofold. First, organisations should not wait for perfect certainty before designing for adoption, because the report offers a strong behavioural framework already. Second, they should treat these recommendations as a living playbook, to be tested, adapted, and updated as both the science and the technology mature. That combination of confidence and caution is one of the report’s defining features.