Grounds Kaltura Avatars messaging in science: media richness, CASA paradigm, self-disclosure, anthropomorphism, uncanny valley, and pedagogical agent effects. Use these citations when prospects push back on "why avatars" or when exec audiences want proof, not hype.
- Why this matters
- The anchor: Alan Dennis (Kelley School of Business, Indiana University)
- BMW field experiment: honesty framework and conversion lift
- Celebrity-as-a-Service: digital human PSAs with Hugh Jackman
- Digital humans in e-commerce: three studies on product-category fit
- Healthcare: digital humans and diabetes adherence
- Harvard Business Review: the practical manager's framework
- Computers Are Social Actors (Nass and Reeves)
- Self-disclosure: people tell AI things they won't tell humans
- Anthropomorphism: how humanlike is enough
- Uncanny valley: the real constraint
- Pedagogical agents and learning
- Comparison: video agents vs text chatbots
- Healthcare and customer support
- How to use this in sales conversations
- Open gaps worth flagging internally
- Source library (for citations)
- Positioning one-liners grounded in the research

Why this matters
Prospects want proof before they believe the story. "Avatars sound cool" is not a purchase reason. This note collects the actual academic work that supports the five claims we keep making in pitches:
- Rich, face-to-face-like media beats text for complex interactions.
- People trust and disclose to AI agents differently than to humans, sometimes more.
- Humanlike cues (voice, face, expressions) change behavior even when users know the agent is a machine.
- In real buying contexts, digital humans move hard numbers - conversion, adherence, compliance, sharing.
- The interesting comparison is not "AI vs human". It is "credible digital human vs everything else". Once the face works, source ontology fades.
Use the references below as grounding when you write RFP responses, business cases, or executive emails. When a buyer asks "where is the evidence?", you now have 40 years of IS, HCI, and marketing research plus a 2024-2025 package of seven primary sources (BMW, e-commerce, healthcare, PSA, HBR, ISR, JMIS) to point at.
For copy-paste ready sales quotes organized by vertical and buyer type, see the companion file: research-digital-humans-sales-quotes.md.
The anchor: Alan Dennis (Kelley School of Business, Indiana University)
Dennis is the most useful academic to name when you want to ground the Kaltura Avatars story. He has 150+ papers and has moved from the foundational communication theories of the 1980s-90s directly into digital humans today. Three building blocks.
Media Richness Theory (Daft and Lengel, 1986)
Not Dennis's own theory, but the starting point. Daft and Lengel proposed that media vary in their ability to carry rich cues: face-to-face is richest (voice, facial expression, body language, instant feedback), written text is leanest. For equivocal or ambiguous tasks (negotiation, persuasion, emotional content, complex decisions), richer media produce better outcomes. For routine information transfer, leaner media are fine.
Implication for Kaltura Avatars: text chatbots are a lean-media tool solving a rich-media problem. Any task that involves trust, nuance, or decision-making belongs in a richer medium. Avatars give enterprises that richness at scale.
Media Synchronicity Theory (Dennis and Valacich, 1999)
Dennis's own extension, published at HICSS 1999 and later refined in MIS Quarterly. It replaced "richness" with five concrete media capabilities: transmission velocity (immediacy of feedback), parallelism, symbol sets, rehearsability, and reprocessability; together these determine a medium's synchronicity. Communication is split into two processes: conveyance (transferring information) and convergence (reaching shared understanding). Lean media work for conveyance, synchronous rich media work for convergence.
Implication: conversational avatars are optimized for convergence tasks - the kind that drive deals, adoption, onboarding, and training. Chat widgets are optimized for conveyance. Different tools, different jobs.
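To make the split concrete, here is a toy routing sketch in Python. The task lists and the rule are our own illustration of the conveyance/convergence distinction, not the theory's formal model:

```python
# Toy illustration of Media Synchronicity Theory's conveyance/convergence split.
# Task labels and the routing rule are illustrative, not from the theory itself.

CONVEYANCE_TASKS = {"status update", "faq lookup", "order tracking", "password reset"}
CONVERGENCE_TASKS = {"negotiation", "onboarding", "sales consultation", "coaching session"}

def pick_medium(task: str) -> str:
    """Lean asynchronous media for conveyance; rich synchronous media for convergence."""
    task = task.lower()
    if task in CONVERGENCE_TASKS:
        return "rich synchronous medium (avatar / video agent)"
    if task in CONVEYANCE_TASKS:
        return "lean asynchronous medium (chat widget, email, docs)"
    return "unclassified: triage with a human"

print(pick_medium("sales consultation"))  # -> rich synchronous medium (avatar / video agent)
```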
Less Artificial, More Intelligent: Understanding Affinity, Trustworthiness, and Preference for Digital Humans
Published in Information Systems Research, 2025 (vol. 36, issue 2, pp. 1096-1128). Seymour, Riemer, Yuan, Dennis. This is the Dennis paper to cite when somebody asks "is there actual research on digital humans?"
Four studies, combined n = 957. The paper distinguishes a Digital Human Agent (DHA, controlled by AI) from a Digital Human Puppet (DHP, controlled by a human operator behind the scenes), and tests both against humans and chatbots across controlled lab and field settings.
Headline findings:
- 75% of participants spontaneously report problems with current chatbot/text-based conversational tech (friction, dead-ends, lack of empathy). Market readiness is real, not projected.
- When visual fidelity is held constant, DHAs match human-operated DHPs on affinity, trustworthiness, and preference. Users cannot reliably tell them apart, and the AI does not score worse once the face looks the same.
- Digital humans alleviate algorithm aversion. Users who normally discount AI recommendations do not discount them when the AI wears a face. The embodiment buffers the "it's just a machine" dismissal.
- DHAs outperform text chatbots on affinity, trustworthiness, preference, and purchase intent. The lift is consistent across four studies.
- Fidelity matters more than ontology. Whether the agent is "really" AI is less important to the user than whether the face, voice, and timing are coherent. The design bar is cue coherence, not pure realism.
Metainference: the interesting comparison is not "AI vs human", it is "high-fidelity digital human vs everything else". When the face works, the source is secondary.
Pull-quote for decks: "Digital Human Agents were rated similarly to Digital Human Puppets on affinity, trustworthiness, and preference - they are less artificial without being more intelligent in any measurable cognitive sense." (Seymour et al., 2025, ISR)
AI Recommendations Amplify Social Influence
Published in Journal of Management Information Systems, 2025. Demonstrates that when AI is labeled as the source of a recommendation, the recommendation itself carries more persuasive weight, not less. Counterintuitive and useful: saying "our AI avatar recommends..." can outperform "our team recommends...".
NVIDIA GTC 2025 session
Dennis's GTC talk (S72988) frames the telco customer experience thesis: digital humans replace chatbots as the primary conversational interface, unifying voice, face, and intent handling in one agent. Worth watching end-to-end and quoting from when pitching telco, BFSI, or any service-heavy vertical.
BMW field experiment: honesty framework and conversion lift
This is the one to lead with when a CRO asks "does it actually move a number?"
Paper: Seymour, Zhang, Riemer, Dennis (2024-2025, working paper circulated widely, presented at ICIS and in review). Field experiment with BMW on a live landing page, n = 222 qualified leads, randomized between a text chatbot baseline and a digital human interface, crossed with formal vs informal language.
Headline result: conversion rate jumped from 75% (chatbot baseline, formal language) to 95% (digital human with informal language). Cohen's d = 1.08. This is a large effect size, in a real buying context, with a Fortune 50 brand.
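For anyone checking the arithmetic: 75% to 95% is a 20-point absolute lift, a 26.7% relative lift, and a 5x reduction in lost leads. As a rough sanity check (our calculation, not the paper's; the authors' exact estimator may differ), converting the two proportions to an effect size via the log odds ratio lands in the same range as the reported d = 1.08:

```latex
\[
\frac{0.95 - 0.75}{0.75} \approx 26.7\% \text{ relative lift}, \qquad
25\% \rightarrow 5\% \text{ drop-off } (5\times \text{ fewer lost leads})
\]
\[
OR = \frac{0.95/0.05}{0.75/0.25} = \frac{19}{3} \approx 6.33, \qquad
d \approx \ln(OR)\cdot\tfrac{\sqrt{3}}{\pi} \approx 1.85 \times 0.55 \approx 1.02
\]
```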
The honesty framework is the theoretical contribution. The authors argue that informal language from a text chatbot feels dishonest ("a machine pretending to be your friend"), because lean text strips the cues that make informality feel appropriate. Informal language from a digital human feels authentic - the face, voice, and timing carry the social signals that make casual language credible. Informal × chatbot underperforms. Informal × digital human outperforms. The interaction is the finding.
Practical implications:
- Casual/friendly language is not a universal win. It is a win only when the channel can carry it.
- Digital humans unlock a style of communication that chatbots cannot pull off. This is a capability, not a cosmetic difference.
- 75 → 95 is the cleanest B2B-adjacent conversion number we have. Use it carefully (retail finance context, premium brand) but use it.
Pull-quote for the CRO: "Switching from a text chatbot to a digital human, paired with informal brand-appropriate language, lifted qualified-lead conversion from 75% to 95% on a BMW landing page (Cohen's d = 1.08). Informal tone on a chatbot did not move the needle - the medium has to carry the message." (Seymour, Zhang, Riemer, Dennis, 2024-2025)
Celebrity-as-a-Service: digital human PSAs with Hugh Jackman
Paper: Seymour, Riemer, Yuan, Dennis (2023-2024). PSA/PSI paper circulated widely, presented at ICIS. n = 247, between-subjects.
The setup: public service announcement about skin cancer, delivered in four conditions crossing interactivity (static video vs interactive digital human) with celebrity status (Hugh Jackman vs unknown actor). Jackman's digital twin was built with his permission. All four videos used the same script.
Key findings:
- Interactivity and celebrity status both independently raise trust and enjoyment. They do not cancel, they compound.
- Trust drives two behaviors the researchers cared about: compliance with the PSA message, and sharing it with others. Enjoyment drives sharing only.
- Celebrity trust transfers cleanly to the digital twin. Fans treat the digital Jackman with the same warmth as a real one, once the fidelity is high enough. No uncanny-valley drop was detected in this study.
- Users do not penalize the agent for being AI once the celebrity framing is credible. This is consistent with the algorithm-aversion buffering finding in the ISR paper.
The framing shift: the authors argue PSAs (public service announcements) should become PSIs (public service interactions). One-way broadcasts are a 20th-century artifact. Conversational, interactive delivery is the modern unit.
Commercial implication for Kaltura: celebrity and exec "digital twins" are not gimmicks. They are a trust-transfer mechanism backed by controlled experimental data. CMOs and CCOs with strong personal brands can be present in 1:1 conversations at scale without cloning themselves.
Pull-quote: "Digital celebrity agents can transfer pre-existing celebrity trust into interactive, one-to-one conversations. Trust, not enjoyment, is what drives compliance and sharing." (Seymour et al., 2023-2024)
Digital humans in e-commerce: three studies on product-category fit
Paper: Seymour, Chen, Riemer, Dennis (2024-2025, under review). Three progressively stronger studies testing digital humans vs cartoon avatars vs control in e-commerce product-recommendation contexts.
Study 1 (n = 225, online experiment): Digital humans outperformed cartoon avatars on product evaluations and purchase intent. Effect was fully mediated by emotional response (positive affect toward the agent) and perceived product quality.
Study 2 (n = 433, preregistered replication): replicated Study 1 findings. Added AI-disclosure manipulation. When the AI identity was not disclosed, ~50% of participants believed they were talking to a real person. Disclosure did not erase the effect but shrank it.
Study 3 (n = 363, field-like setting): Effect strongest for hedonic and sensory products (beauty, fashion, premium food), weaker for utilitarian commodity products. The face and voice matter more when the buying decision has an emotional component.
Key framings that fall out:
- The disclosure dilemma is real. Non-disclosure raises conversion but creates a trust liability. Disclosure reduces conversion slightly but is the only defensible long-term posture. Kaltura's position should be: always disclose, lean into the "it's an AI" framing as a feature.
- Category fit is a moderator. Digital humans are not a universal upgrade over chat. They are a targeted upgrade for categories where emotion, trust, and recommendation weight matter.
- Full mediation through affect + perceived quality means the face is working through how users feel about the recommendation, not by changing the recommendation itself. Same content, different delivery, better outcome.
Pull-quote: "Digital human salespeople outperformed cartoon avatars on purchase intent across three studies (combined n = 1,021), with the effect fully mediated by emotional engagement and perceived product quality. The effect was largest for hedonic and sensory categories where trust and affect carry the decision." (Seymour et al., 2024-2025)
Healthcare: digital humans and diabetes adherence
Paper: Seymour, Bellet, Yuan, Riemer, Dennis (2024-2025). Study with 195 type-2 diabetes patients, testing a digital human health coach against a text chatbot coach, delivering identical content over an adherence-tracking period.
Path analysis:
Digital human → empathy perception (β = 0.47) → trust (β = 0.52) → patient satisfaction (β = 0.41) → medication/behavior adherence (p = .006).
Every step of the path is significant. The face, voice, and conversational warmth are perceived as empathy. Empathy builds trust. Trust raises satisfaction with the care interaction. Satisfaction correlates with actual adherence to the care plan.
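Written out as a serial mediation chain with the reported standardized coefficients (a standard specification; we are assuming the paper's model takes this form, and the final satisfaction-to-adherence link is reported only as p = .006, without a standardized coefficient):

```latex
\[
\text{Empathy} = 0.47\,\text{DH} + \varepsilon_1, \qquad
\text{Trust} = 0.52\,\text{Empathy} + \varepsilon_2, \qquad
\text{Satisfaction} = 0.41\,\text{Trust} + \varepsilon_3
\]
\[
\text{Indirect effect (DH} \rightarrow \text{Satisfaction)} = 0.47 \times 0.52 \times 0.41 \approx 0.10
\]
```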
The theoretical contribution is "artificial empathy" - the finding that users do not need to believe the agent actually feels anything for the empathy behaviors (contingent responses, nonverbal attunement, warm tone) to produce the downstream benefits. Simulation of empathy is sufficient for the clinical outcome.
Why this matters for Kaltura in healthcare, pharma, payer, and public health pitches:
- Adherence is the unsolved multi-billion-dollar problem in chronic care. Any intervention that moves adherence moves the whole economic model.
- The mechanism is not "better information". The text chatbot and the digital human delivered the same content. Delivery matters more than content.
- "Artificial empathy is sufficient" is the ethical frame that lets payers and providers deploy this without making existential claims about AI consciousness.
Pull-quote: "Digital human health coaches produced higher empathy perception, trust, satisfaction, and measurable medication adherence than text chatbots delivering the same content (p = .006 on adherence). Users did not need to believe the agent felt empathy - the simulation was sufficient." (Seymour et al., 2024-2025)
Harvard Business Review: the practical manager's framework
Article: "Digital Humans Are Coming to Work" (HBR, 2024, Seymour et al.). Non-academic but widely cited. Three useful artifacts for sales conversations.
The 4-question decision flowchart
Before deploying a digital human, the HBR framework asks:
- Would the interaction benefit from being conversational rather than transactional?
- Does the task involve emotion, trust, or ambiguity that text struggles with?
- Is the content repeatable enough to justify automation but varied enough to need intelligence?
- Are there privacy or disclosure constraints that affect which voice, face, and identity to use?
If yes to all four, digital humans are the right choice. If no to one or more, a chatbot, video library, or human agent may be better. Use this flowchart in discovery calls - it doubles as a copy-paste qualifying script, and a sketch of it as code follows.
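A minimal, hypothetical encoding of the flowchart as a discovery-call triage helper. The field names and fallback branches are our own, not HBR's:

```python
# Illustrative only: the HBR 4-question flowchart as a qualifying helper.
# Field names and fallback branches are ours, not from the article.

from dataclasses import dataclass

@dataclass
class UseCase:
    conversational_benefit: bool       # Q1: better as a conversation than a transaction?
    emotion_trust_ambiguity: bool      # Q2: emotion, trust, or ambiguity that text struggles with?
    repeatable_yet_varied: bool        # Q3: repeatable enough to automate, varied enough to need intelligence?
    identity_constraints_mapped: bool  # Q4: privacy/disclosure constraints on voice, face, identity identified?

def recommend_channel(uc: UseCase) -> str:
    """Recommend a digital human only when all four HBR questions come back yes."""
    if all([uc.conversational_benefit, uc.emotion_trust_ambiguity,
            uc.repeatable_yet_varied, uc.identity_constraints_mapped]):
        return "digital human"
    if not uc.emotion_trust_ambiguity:
        return "chatbot or video library"  # lean media are fine for routine conveyance
    return "human agent or hybrid"

# Example: compliance onboarding - trust-heavy, repeatable, identity rules agreed upfront.
print(recommend_channel(UseCase(True, True, True, True)))  # -> digital human
```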
The Four Types of Digital Humans
Positioning frame that maps cleanly to Kaltura's roadmap:
- Brand ambassadors (marketing, lead-gen, product guidance)
- Expert assistants (finance, healthcare, legal triage)
- Corporate trainers (L&D, onboarding, compliance)
- Synthetic executives (leadership comms at scale, CEO updates, M&A town halls)
Each type has different risk, ROI, and design constraints. Use this to segment avatar use cases in an account plan.
Headline stats worth quoting
- Cosmetics brand saw a 4.5x lift in purchase conversions after switching from text product guides to digital human guides.
- Synthesia generates 3,000+ business videos per day, a proxy for how fast the avatar stack has industrialized.
- Named customers cited in the article: EY, WPP, Yahoo Japan, ABC, ANZ, USC Keck, ZOZOtown. Useful for the "your peers are already doing this" move.
- US Department of Veterans Affairs study: veterans preferred discussing mental health with a digital human rather than with a human doctor, because of reduced judgment and stigma. Pairs with Lucas and Gratch 2014.
Pull-quote: "A cosmetics retailer replaced static product guides with a digital human beauty advisor and saw a 4.5x lift in conversions. Synthesia now produces over 3,000 digital human videos per day for enterprise customers." (HBR, 2024, Seymour et al.)
Computers Are Social Actors (Nass and Reeves)
Foundational paradigm from Clifford Nass and Byron Reeves at Stanford, built out across 30+ experiments in the 1990s and consolidated in The Media Equation (1996). Core finding: people apply social scripts to computers automatically, even when they deny doing so. Politeness, gender stereotypes, flattery effects, reciprocity, personality attribution: all of these transfer.
For Kaltura Avatars the takeaway is precise: adding voice and a face does not just "feel nicer", it triggers measurably different behavior. Users are more polite, more patient, more forgiving after errors, and more likely to attribute expertise. The MASA extension (Media Are Social Actors, 2020+) updates the framework for today's avatars and agents: primary social cues (voice, face, name, conversational turn-taking) are enough to activate social treatment.
Use this when a CIO asks "why not just a better chatbot". Answer: because decades of evidence show that the presence of a face and voice is not a skin on top of a chatbot, it changes the interaction itself.
Self-disclosure: people tell AI things they won't tell humans
The most counterintuitive and most useful body of research for our pitch.
Lucas and Gratch (USC Institute for Creative Technologies, 2014)
Paper: "It's only a computer: Virtual humans increase willingness to disclose", Computers in Human Behavior, 2014. Platform: SimSensei, virtual human named Ellie.
239 participants. Same Ellie interviewer in all conditions. Half were told she was autonomous AI ("computer frame"), half were told she was teleoperated by a clinician ("human frame"). Participants in the computer frame disclosed more personal and embarrassing information, showed more intense sadness when discussing difficult topics, and were rated by observers as more open. Perceived absence of human judgment removed the impression-management barrier.
Why this matters for Kaltura: prospects worry avatars will "feel cold". The research says the opposite for the behaviors we care about. Users share more with a non-judgmental AI agent than with a human, particularly on sensitive topics. HR onboarding, compliance training, healthcare intake, employee listening, customer feedback - all of these benefit.
Broader Gratch and ICT work
Gratch, Lucas, Marsella, and colleagues at USC ICT have 15+ years of work on empathetic virtual humans, including the EMA computational appraisal model and the Rapport Agent and SimSensei platforms for modeling emotion and nonverbal behavior. The through-line: virtual humans that display contingent nonverbal feedback (head nods, gaze, expressions) produce disclosure and rapport at or above human-interviewer levels.
Anthropomorphism: how humanlike is enough
Blut et al. (2021) meta-analysis
Published in Journal of the Academy of Marketing Science, 2021. Synthesized 108 independent samples and 11,053 participants across physical robots, chatbots, and AI agents. This is the go-to number when a prospect wants "the big meta-analysis".
Headline: anthropomorphism positively predicts intention to use AI, mediated by perceived intelligence and usefulness. Effects are moderated by robot gender, service type, and customer traits. Anthropomorphism is not a single lever, it is a design space - cues must match task.
Practical synthesis from HCI work (2023-2025)
High anthropomorphism in chatbot avatars raises perceived empathy (β = 0.32) and trust (β = 0.27), which in turn drive user experience (β = 0.48, p < 0.01). Direct effects are small; the action is in the mediation. Translation: the face does not "make people buy", it makes them feel heard, which makes them buy.
Humanlike AI agents also change interaction style: users greet them, say please and thank you, accept errors more gracefully, and treat them as collaborators rather than tools. Workplace studies (Cambridge Judge Business School, 2025) show this shifts the tone from transactional to relational.
Uncanny valley: the real constraint
Mori (1970) first proposed the curve. MacDorman and colleagues (2009) empirically validated the drop in comfort and trust when realism is high but imperfect, especially in motion.
Consequences for avatar design:
- Low-realism, stylized avatars are safe. Comfort scales with likeability, not realism.
- High-realism avatars work only if motion, gaze, lip-sync, and micro-expressions are coherent. Otherwise users feel eerie and trust drops hard.
- The middle of the curve is dangerous. "Almost human" is worse than "clearly stylized".
Useful when a customer asks "why do your avatars look the way they look?". Answer: intentional design choice, validated by 50 years of research. We optimize for coherence across all cues, not for pure visual realism.
Pedagogical agents and learning
When the pitch lands in L and D, corporate training, or higher education, this is the relevant evidence base.
Schroeder et al. (2013) and Castro-Alonso et al. (2021)
Meta-analyses show small-to-moderate positive effects of pedagogical agents on learning performance versus no-agent conditions. 2D animated agents often beat 3D agents. Animated beats static. Effects are stronger on retention, motivation, and social presence than on raw knowledge tests.
Recent work (2024, Frontiers in Education)
Humanoid AI agents as teachers, with learner avatars, positively predicted performance, satisfaction, attention, and cognitive presence. Extends the meta-analytic finding into AI-driven agents.
Bottom line for training use cases: avatars do not magically raise test scores, but they reliably raise attention, completion, and perceived social presence, which are the levers that matter in corporate L and D where disengagement is the real enemy.
Comparison: video agents vs text chatbots
Synthesized from the sources above, useful as a one-slide summary.
| Dimension | Text chatbot | Video agent / digital human |
|---|---|---|
| Media richness | Lean | Rich (voice, face, gaze, timing) |
| Best use | Conveyance, routine tasks | Convergence, trust, emotion, decision |
| Social treatment | Minimal | Full social script activation (CASA) |
| Disclosure | Moderate | Higher when framed as non-judgmental AI |
| Conversion lift | Baseline | +20pp field lift vs chatbot, BMW, d = 1.08 (Seymour 2024) |
| E-commerce effect | Baseline | Full mediation via affect + perceived quality (n = 1,021) |
| Purchase intent | Baseline | Higher affinity, trust, preference (ISR 2025, n = 957) |
| Healthcare effect | No adherence signal | Adherence lift via artificial empathy (p = .006, n = 195) |
| Algorithm aversion | Triggered | Buffered - embodiment reduces "it's just AI" dismissal |
| Informal language | Feels dishonest, hurts CR | Feels authentic, amplifies conversion (honesty framework) |
| Celebrity transfer | Not possible | Trust transfers to digital twin (Jackman PSA, n = 247) |
| Learning use | Low engagement | Measurable lift in attention and retention |
| Risk | Dull, abandoned | Uncanny valley if realism is mismatched |
Healthcare and customer support
The diabetes adherence study (Seymour et al., n = 195) is the strongest direct evidence: digital human coaches produce measurable adherence lift vs text chatbots delivering identical content, mediated through empathy → trust → satisfaction (p = .006 on the adherence outcome). The artificial empathy finding is the ethical frame - simulation is sufficient for clinical benefit.
Adjacent evidence. The VA study cited in HBR found that US veterans preferred discussing mental health with a digital human rather than a human doctor, because of reduced judgment and stigma. This pairs directly with Lucas and Gratch 2014: framed as non-judgmental AI, users disclose more on sensitive topics. For mental health, chronic condition management, and any care pathway where adherence and disclosure are the bottleneck, avatars are a clinically justified upgrade, not a UX preference.
General conversational-agent research in healthcare consistently shows higher patient satisfaction with video than with voice-only (86% vs 77%), higher disclosure on sensitive topics, and better comprehension when an agent pauses, rephrases, and responds to confusion cues. Applies directly to our pitches in insurance, pharma, payer, and any post-sale support scenario.
How to use this in sales conversations
Short list of moves, mapped to buyer type.
- Skeptical CIO: cite Dennis 2025 (Information Systems Research, n = 957 across four studies) and Blut 2021 meta-analysis. "This is not a demo, this is 40 years of IS research and a four-study package from the Kelley School of Business."
- CRO or CMO wanting a hard conversion number: cite the BMW field experiment. "75% to 95% qualified-lead conversion, large effect size, randomized field experiment with a Fortune 50 brand. Informal language paired with a digital human. The combination is the finding."
- Head of CX: cite Nass and Reeves + MASA + honesty framework. "Users apply social scripts to faces and voices automatically. Text UIs leave that value on the table, and informal tone in text reads as dishonest rather than friendly."
- HR, L and D, or compliance lead: cite Lucas and Gratch 2014 and the VA veterans study. "People disclose more to non-judgmental AI interviewers, including veterans discussing mental health. Better signal for you."
- Marketing leader: cite Dennis 2025 on AI-labeled recommendations amplifying persuasion and the e-commerce three-study package. "An AI avatar saying 'I recommend X' carries more weight than a team page saying it. The effect is largest for hedonic and sensory categories, where emotion carries the decision."
- CCO or public-affairs leader: cite the Hugh Jackman PSA study. "Celebrity and exec trust transfers cleanly to a digital twin. Interactivity and celebrity status compound. PSAs become PSIs - public service interactions."
- Healthcare, pharma, payer: cite the diabetes adherence study. "Digital human health coach, same content as the chatbot, measurably better adherence. The mechanism is artificial empathy and it is sufficient for the clinical outcome."
- B2B buyer who worries about algorithm aversion: cite the ISR 2025 metainference. "Once the face is credible, users do not penalize the agent for being AI. Embodiment buffers the aversion."
- Design-minded buyer worried about uncanny valley: cite Mori 1970 and MacDorman 2009. "We design for cue coherence, not pure realism. That is why our avatars look the way they do."
- Skeptical operator who wants "real examples": cite HBR 2024. "EY, WPP, Yahoo Japan, ABC, ANZ, USC Keck, ZOZOtown. Synthesia alone ships 3,000 digital human videos per day."
Open gaps worth flagging internally
- The BMW paper is B2C retail finance. We still want a pure B2B SaaS / enterprise software conversion study. Opportunity for Kaltura to run and publish one with a flagship customer (Syngenta? UBS? ABB?).
- Effect-size data in Blut 2021 requires full paper access. Get it for the library.
- Dennis's full post-2023 bibliography includes unpublished working papers on voice-only vs voice+face and on multi-turn dialogue effects. Worth a direct outreach if we want early access or a co-authored case study.
- Longitudinal adherence data beyond the diabetes study (Seymour et al.) is thin. Most studies are single-session. We need multi-week and multi-month data to fully de-risk the "novelty effect" objection.
- Cross-cultural generalizability is understudied. Most samples are US, UK, Australian. Relevance for our EMEA and APAC accounts needs a caveat.
Source library (for citations)
- Daft, R.L., Lengel, R.H. (1986). Organizational Information Requirements, Media Richness and Structural Design. Management Science.
- Dennis, A.R., Valacich, J.S. (1999). Rethinking Media Richness: Towards a Theory of Media Synchronicity. HICSS-32. Later refined in MIS Quarterly (2008).
- Seymour, M., Riemer, K., Yuan, L., Dennis, A.R. (2025). Less Artificial, More Intelligent: Understanding Affinity, Trustworthiness, and Preference for Digital Humans. Information Systems Research, 36(2), 1096-1128. https://ideas.repec.org/a/inm/orisre/v36y2025i2p1096-1128.html
- Dennis, A.R. et al. (2025). Artificial Intelligence Recommendations Amplify the... Journal of Management Information Systems.
- Seymour, M., Zhang, M., Riemer, K., Dennis, A.R. (2024-2025). Honesty and the Digital Customer Experience: A BMW field experiment on digital humans, informal language, and conversion. Working paper / under review.
- Seymour, M., Riemer, K., Yuan, L., Dennis, A.R. (2023-2024). Celebrity-as-a-Service: From Public Service Announcements to Public Service Interactions with Digital Celebrity Agents. Working paper / ICIS.
- Seymour, M., Chen, X., Riemer, K., Dennis, A.R. (2024-2025). Digital Human Salespeople in E-Commerce: Three experiments on avatars, cartoons, and category fit. Under review.
- Seymour, M., Bellet, P., Yuan, L., Riemer, K., Dennis, A.R. (2024-2025). Artificial Empathy: Digital Human Health Coaches and Medication Adherence in Type-2 Diabetes. Working paper.
- Seymour, M., Riemer, K., Yuan, L., Lombard, M., Dennis, A.R. (2024). Digital Humans Are Coming to Work. Harvard Business Review.
- Reeves, B., Nass, C. (1996). The Media Equation. Cambridge University Press / CSLI.
- Nass, C., Moon, Y. (2000). Machines and Mindlessness: Social Responses to Computers. Journal of Social Issues.
- Lucas, G.M., Gratch, J., King, A., Morency, L.P. (2014). It's only a computer: Virtual humans increase willingness to disclose. Computers in Human Behavior.
- Blut, M., Wang, C., Wünderlich, N.V., Brock, C. (2021). Understanding Anthropomorphism in Service Provision: A Meta-analysis. Journal of the Academy of Marketing Science, 49, 632-658.
- Mori, M. (1970, translated 2012). The Uncanny Valley. IEEE Robotics & Automation Magazine.
- MacDorman, K.F., Green, R.D., Ho, C.C., Koch, C.T. (2009). Too real for comfort? Uncanny responses to computer generated faces. Computers in Human Behavior.
- Schroeder, N.L., Adesope, O.O., Gilbert, R.B. (2013). How Effective are Pedagogical Agents for Learning? Journal of Educational Computing Research.
- Castro-Alonso, J.C., Wong, R.M., Adesope, O.O., Paas, F. (2021). Effectiveness of Multimedia Pedagogical Agents Predicted by Diverse Theories: A Meta-Analysis. Educational Psychology Review.
- Frontiers in Education (2024). Humanoid AI agents as teachers with learner avatars.
- NVIDIA GTC 2025 S72988. AI Agents and Digital Humans Shaping the Future of Interaction in Telecoms. Alan Dennis. https://www.nvidia.com/en-us/on-demand/session/gtc25-s72988/
Positioning one-liners grounded in the research
Copy-paste ready for decks, emails, RFPs.
- "Forty years of information systems research say that face, voice, and turn-taking change how people make decisions. Text chatbots were built for the opposite problem."
- "Users trust an AI agent differently than a human. In healthcare studies they disclosed more to a virtual interviewer when they believed it was fully automated. That is a feature, not a bug, for onboarding, training, and feedback."
- "The latest peer-reviewed research (Seymour, Riemer, Yuan, Dennis, 2025, Information Systems Research, four studies, n = 957) finds digital human agents produce higher affinity, higher trustworthiness, and higher purchase intent than chatbots or point-and-click UIs. We did not invent that curve, we built a product for it."
- "A BMW field experiment lifted qualified-lead conversion from 75% to 95% by switching from a chatbot to a digital human with informal brand-appropriate language (Cohen's d = 1.08). Informal language on a chatbot did not move the number - the medium had to carry the message."
- "People apply social rules to computers with a face and a voice, automatically. Our avatars turn that reflex into engagement, at enterprise scale."
- "A digital human health coach produced measurably better adherence than a text chatbot delivering the same content, with the effect running through empathy and trust (p = .006, n = 195 type-2 diabetes patients). Artificial empathy is sufficient - the simulation is the outcome."
- "In e-commerce, digital human salespeople outperformed cartoon avatars on purchase intent across three studies. The effect is fully mediated by affect and perceived quality, and it is largest for hedonic and sensory categories."
- "The uncomfortable metainference from Dennis's ISR paper: once the face is credible, users do not penalize the agent for being AI. Embodiment buffers algorithm aversion."
- "EY, WPP, Yahoo Japan, ABC, ANZ, USC Keck, ZOZOtown, and BMW have run digital human deployments. Synthesia ships 3,000 digital human business videos per day. The question is not 'is this real' - it is 'where do you start'."