
Customer Experience


At a Glance

This page covers the following main areas:

  1. What is the job to be done?
  2. What PostHog products are relevant?
  3. Adoption path and expansion path
  4. Entry point
  5. Primary expansion path
  6. Alternate expansion paths
  7. Business impact of solving the problem
  8. Personas to target

What is the job to be done?

"When a customer runs into an issue, we're able to quickly understand exactly what happened, identify the problem, and verify a fix, without bouncing between multiple tools or wasting engineering time trying to reproduce it."

Most companies don't have a customer experience system. They have tickets in one place, errors in another, logs somewhere else, analytics owned by product, and engineers manually trying to reproduce bugs. The goal of this use case is to help a company build a unified debugging workflow where support, product, and engineering share the same context.

What PostHog products are relevant?

Adoption path and expansion path

Entry point

Usually Session Replay or Product Analytics. Common entry scenarios:

  1. "We can't reproduce bugs": Support needs to see what happened instead of relying on screenshots and user descriptions. Session Replay is the direct answer.
  2. "Something is breaking but we don't know why": Product notices drop-offs or support volume spikes and needs visibility into what's causing them. Product Analytics surfaces the pattern, Session Replay provides the detail.

Primary expansion path

Product Analytics → + Session Replay → + Error Tracking → + Logs / LLM Observability → + Surveys

The logic of each step is the same: the expansion happens naturally because each step removes a layer of uncertainty.

Alternate expansion paths

An alternate path starts from Session Replay as a replacement for another session recording tool: the customer adopts Session Replay to replace Hotjar, FullStory, or LogRocket, then expands by introducing autocapture (Product Analytics), Error Tracking for structured bug data, and Group Analytics for account-level views.
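
As an illustration of that first expansion step, here is a minimal posthog-js sketch that turns on autocapture alongside session recording; the project key, host, and user details are placeholders:

```ts
import posthog from 'posthog-js'

// Placeholder project key and host -- use your own values.
posthog.init('<ph_project_api_key>', {
  api_host: 'https://us.i.posthog.com',
  // Autocapture records clicks, form submits, and pageviews without
  // manual instrumentation -- the Product Analytics step of the path.
  autocapture: true,
  // Keep session recording on so replays stay linked to those events.
  disable_session_recording: false,
})

// Identify the user so support can search replays by email or ID later.
posthog.identify('user_123', { email: 'jane@example.com' })
```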

Business impact of solving the problem

Engineering time savings. If bug reproduction drops from 2 hours to 30-60 minutes, teams get fewer context switches, fewer escalations, and more roadmap velocity. Even modest improvements here can easily justify the cost of the entire PostHog contract.

Escalation reduction. When support can view replay, check errors, and inspect logs, they resolve more issues without pulling in engineering. That means the roadmap doesn't stall and customer response times improve.

Revenue protection. When enterprise customers report issues, speed and clarity matter. Being able to say "here's exactly what happened and here's the fix" builds trust. Slow, unclear debugging erodes it.

AI risk mitigation. For AI-powered products, LLM Observability catches the things that would otherwise go unnoticed: hallucinations that are hard to trace, prompt regressions, and latency spikes. Without it, product credibility degrades quietly.
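
For illustration only, here is a hand-rolled sketch of recording an LLM call as a PostHog event from a Node service. PostHog's LLM observability SDKs capture this automatically; the event name, property names, and the callModel helper below are assumptions, not the official schema:

```ts
import { PostHog } from 'posthog-node'

const posthog = new PostHog('<ph_project_api_key>', { host: 'https://us.i.posthog.com' })

// Stand-in for your real LLM client call (hypothetical helper).
async function callModel(prompt: string): Promise<string> {
  return 'stub response'
}

export async function answerSupportQuestion(userId: string, prompt: string) {
  const start = Date.now()
  const output = await callModel(prompt)

  // Illustrative event and property names -- the official LLM
  // observability integrations emit their own schema for you.
  posthog.capture({
    distinctId: userId,
    event: 'llm_generation',
    properties: {
      model: 'gpt-4o',
      latency_ms: Date.now() - start,
      prompt_length: prompt.length,
      output_length: output.length,
    },
  })
  return output
}
```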

Personas to target

| Persona | Role Examples | What They Care About | How They Evaluate |
| --- | --- | --- | --- |
| Support Leader | Head of Support, Support Ops | Faster resolution, fewer escalations | MTTR, escalation rate |
| Engineering Lead | EM, Staff Eng | Reproducible bugs, fewer interruptions | Debugging time, context switches |
| Product Manager | PM, Product Lead | Understanding friction, user-reported issues | Drop-off rates, issue frequency |
| AI Lead | Head of AI, Applied AI Eng | Model reliability, output quality | Output quality, latency, trace coverage |
| CS Leader | VP CS, Head of CS | Customer trust, proactive issue resolution | NPS trends tied to product issues |

Signals in Vitally & PostHog

Vitally indicators this use case is relevant

| Signal | Where to Find It | What It Means |
| --- | --- | --- |
| Users with a support title | User list in Vitally | They're already bringing support folks into PostHog. A CX workflow is emerging organically. |
| High session replay spend / volume | Product spend breakdown, usage metrics | They're investing heavily in replay. This use case helps them get more value from that spend by connecting replay to errors, logs, and surveys. |
| High support ticket volume | vitally.custom.supportTickets | They're dealing with a lot of customer issues. PostHog can help them debug faster. |
| Multiple user roles in PostHog (eng + support + product) | User list, admin emails | Cross-functional usage signals that CX workflows are already forming. |

PostHog usage signals

| Signal | How to Check | What It Means |
| --- | --- | --- |
| Session Replay filtered by error events | Replay usage patterns | They're connecting replay to debugging. The CX workflow is clicking. |
| Person profile lookups increasing | Product Analytics usage | Support or CS is investigating individual users. Group Analytics could formalize this (see the sketch after this table). |
| Error Tracking adoption alongside replay | Product spend data | They're building the debugging stack. Logs and surveys are natural next steps. |
| Console log / network tab usage in replays | Replay engagement metrics | They're using replay for technical debugging, not just UX review. Strong CX signal. |
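
Where the "Person profile lookups increasing" row applies, here is a minimal posthog-js sketch of the Group Analytics call that formalizes account-level views; the group type, key, and properties are placeholders:

```ts
import posthog from 'posthog-js'

// Associate this user's events (and their replays) with an account,
// so support can investigate issues per customer rather than per user.
posthog.group('company', 'acme-inc', {
  name: 'Acme Inc.',
  plan: 'enterprise',
})
```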

Health score implications

Command of the Message

Discovery questions

Negative consequences (of not solving this)

Desired state

Positive outcomes

Success metrics

Customer-facing:

TAM-facing:

Competitive positioning

Our positioning

Where we are strongest: We win when teams want behavioral and technical context in one place, engineering and product collaborate closely, AI is part of the product, and speed and simplicity matter more than enterprise ceremony.

Where we are weaker: We're not the right fit when deep distributed tracing or advanced APM is required, enterprise ITSM workflows (ServiceNow, Jira Service Management) dominate the support stack, or security policies prohibit session replay. In those cases, we complement rather than replace.

Competitor quick reference

| Competitor | What They Do | Our Advantage | Their Advantage |
| --- | --- | --- | --- |
| FullStory | Session replay + digital experience analytics | Error tracking, logs, AI observability, experiments all in one platform; developer-first; better pricing | More mature DXP features; enterprise CX tooling; dedicated support workflow integrations |
| LogRocket | Session replay + error tracking + performance monitoring | Broader product suite (analytics, flags, experiments, surveys); AI observability; consolidation story | Purpose-built for debugging workflows; tighter Jira/Zendesk integrations out of the box |
| Hotjar | Session replay + heatmaps + surveys | Full analytics platform; error tracking; feature flags; engineering-grade tooling | Simpler UX for non-technical users; lower barrier to entry for marketing/UX teams |
| Sentry | Error tracking + performance monitoring + session replay | Deeper product analytics; session replay tied to behavior data; AI observability; surveys | More mature error tracking; broader language/framework support; larger install base |
| Datadog | Full observability: APM, logs, metrics, errors, RUM | Product analytics integration; session replay depth; significantly cheaper | Complete observability stack (APM, traces, metrics); enterprise-grade; massive ecosystem |

Honest assessment: Our strongest position is against teams already using PostHog for analytics or feature flags who are paying separately for a replay/debugging tool. The consolidation pitch is concrete and saves money. We're weaker against teams with deeply embedded ITSM workflows (ServiceNow, PagerDuty integrations) or teams that need enterprise-grade distributed tracing. Our sweet spot is product-led companies where engineering, product, and support are closely aligned and want one platform for the full debugging loop.

Pain points & known limitations

| Pain Point | Impact | Workaround / Solution |
| --- | --- | --- |
| No native ticketing system integration | Support teams using Zendesk/Intercom can't auto-link replays to tickets | Share replay URLs manually in tickets. Data Pipelines can push events to external tools. Webhook integrations available for some platforms. |
| Logging is in beta | Teams expecting production-grade centralized logging may find gaps | Set expectations on maturity. For teams with existing logging (ELK, Papertrail), PostHog logging complements rather than replaces initially. |
| Session replay privacy controls require configuration | Sensitive data in replays may block adoption for regulated industries | PostHog has extensive privacy controls including masking, blocking, and network payload filtering. Requires upfront configuration (see the sketch after this table). |
| No APM or distributed tracing | Can't replace backend performance monitoring for complex microservice architectures | Be honest about the roadmap. Position PostHog as the user-facing debugging layer. Backend APM stays in their existing tool (Datadog, New Relic) for now. |
| Mobile replay limitations | Mobile session replay is newer and less mature than web | Check mobile replay docs for current platform support. Set expectations on feature parity with web replay. |
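
For the privacy row above, a minimal sketch of the kind of masking configuration involved, assuming posthog-js; check the session replay privacy docs for the current option names, and treat the selectors as placeholders:

```ts
import posthog from 'posthog-js'

posthog.init('<ph_project_api_key>', {
  api_host: 'https://us.i.posthog.com',
  session_recording: {
    // Mask the value of every input field in recordings.
    maskAllInputs: true,
    // Mask on-screen text in elements matching this selector.
    maskTextSelector: '[data-sensitive]',
    // Skip recording elements matching this selector entirely.
    blockSelector: '[data-ph-block]',
  },
})
```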

Exceptions / edge cases:

Getting a customer started

What does an evaluation look like?

Onboarding checklist

Objection handling

| Objection | Response |
| --- | --- |
| "We already have a session replay tool (Hotjar/FullStory/LogRocket)" | PostHog connects replay to errors, logs, analytics, and surveys in one platform. With separate tools, your support team still has to switch between 3-4 tabs to debug one issue. Consolidating also saves on vendor costs. |
| "Our support team isn't technical enough for PostHog" | The replay viewer is visual and intuitive. Support doesn't need to write queries. They search for a user, watch the session, and share the link. We can do a training session to get them comfortable. |
| "We need this integrated with Zendesk/Intercom" | You can paste replay links directly into tickets today (see the sketch after this table). For automated workflows, Data Pipelines can push events to external tools via webhooks. |
| "Session replay has privacy concerns" | PostHog has extensive privacy controls: input masking, DOM element blocking, network payload filtering, and more. We can configure these during onboarding. HIPAA BAA is available with the Boost package. |
| "We're not sure this justifies adding another tool" | If you're already on PostHog for analytics or flags, this isn't another tool. It's enabling more of the platform you already pay for. If you're not on PostHog yet, the free tiers let you evaluate without financial risk. |
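
For the Zendesk/Intercom objection, a small sketch of attaching the current replay link to a ticket from inside the app, assuming posthog-js; createTicket is a hypothetical placeholder for whatever helpdesk client you use:

```ts
import posthog from 'posthog-js'

// Hypothetical helpdesk helper -- swap in your Zendesk/Intercom client.
async function createTicket(subject: string, body: string): Promise<void> {
  // ...
}

export async function fileBugReport(subject: string, description: string) {
  // Link straight to the point in the recording where the report was filed.
  const replayUrl = posthog.get_session_replay_url({ withTimestamp: true })
  await createTicket(subject, `${description}\n\nSession replay: ${replayUrl}`)
}
```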

Cross-sell pathways from this use case

| If Using... | They Might Need... | Why | Conversation Starter |
| --- | --- | --- | --- |
| Session Replay only | Error Tracking | They're watching replays to find bugs. Structured error data makes this systematic instead of manual (see the sketch after this table). | "You're watching sessions to find bugs. What if errors were automatically captured and grouped so you could see which ones affect the most users?" |
| Session Replay + Error Tracking | Logging | They have frontend context but need backend visibility when debugging server-side issues. | "You can see the user's session and the error. But what was happening on the server at the same time?" |
| Session Replay + Error Tracking | Product Intelligence (for the product team) | Support and engineering are in PostHog for debugging. The product team would benefit from the same analytics for feature development. | "Your support team is using PostHog to debug issues. Has your product team seen what they can do with funnels and retention in the same platform?" |
| Replay + Errors + Analytics | Surveys (NPS/CSAT) | They're debugging reactively. Surveys let them detect frustration proactively and tie it to specific sessions. | "You're great at debugging reported issues. But how do you find the frustrated users who never file a ticket?" |
| Replay + Errors (debugging AI features) | LLM Observability | Traditional debugging misses AI-specific issues: prompt quality, hallucinations, latency. | "You're catching errors in your AI features. But are you seeing when the model gives a bad answer that isn't technically an error?" |
| Replay + Errors (engineering in PostHog) | Release Engineering (Feature Flags) | Engineering is in PostHog for debugging. Feature flags for safe releases is a natural add. | "You're tracking bugs after releases. What if you could gate features behind flags and roll back without a deploy?" |
| Group Analytics + Person Profiles | Data Infrastructure (Data Warehouse) | They want to combine PostHog user/account data with CRM or billing data for a complete customer view. | "You're looking at users in PostHog. What if you could see their Stripe revenue and HubSpot status alongside their product behavior?" |
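
To make the Error Tracking and Feature Flags rows concrete, here is a minimal posthog-js sketch; the flag key and the app functions are hypothetical placeholders:

```ts
import posthog from 'posthog-js'

// Hypothetical app functions, used for illustration only.
declare function riskyCheckoutStep(): void
declare function renderNewCheckout(): void
declare function renderLegacyCheckout(): void

// Error Tracking: capture handled exceptions so they are grouped and
// counted per user, instead of only being visible inside a replay.
try {
  riskyCheckoutStep()
} catch (err) {
  posthog.captureException(err)
  throw err
}

// Feature Flags: gate a new release so it can be rolled back without a deploy.
if (posthog.isFeatureEnabled('new-checkout-flow')) {
  renderNewCheckout()
} else {
  renderLegacyCheckout()
}
```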

Internal resources

Appendix: Company archetype considerations

| Archetype + Stage | Framing | Key Products | Buyer |
| --- | --- | --- | --- |
| AI Native — Early | "Your AI features will break in ways that aren't exceptions. PostHog lets support see the user's session, engineering sees the error, and you can trace the LLM call that caused it. All in one place, free tier included." | Session Replay, Error Tracking, LLM Observability | CTO, founding engineer |
| AI Native — Scaled | "Support escalates AI issues to engineering because they can't see what the model did. PostHog gives support replay + LLM traces so they can triage without pulling engineers off the roadmap." Bridge to AI/LLM Observability and Product Intelligence. | Session Replay, Error Tracking, LLM Observability, Logging, Surveys | VP Eng, Head of Support, AI Lead |
| Cloud Native — Early | "Stop asking users to send screenshots. Session Replay shows you exactly what happened. Error Tracking catches it automatically. Support and engineering share the same context." | Session Replay, Error Tracking, Person Profiles | CTO, Head of Support, founding engineer |
| Cloud Native — Scaled | "Your support team escalates everything because they can't see errors or logs. PostHog gives them replay + errors + backend logs so they can resolve more issues without pulling in engineering." Consolidation pitch: replace FullStory/LogRocket + Sentry with one platform. | Session Replay, Error Tracking, Logging, Group Analytics, Surveys | VP Eng, Head of Support, VP CS |
| Cloud Native — Enterprise | "Multiple teams, multiple products, and debugging context spread across 5 tools. PostHog gives support, engineering, and product a shared view: replay, errors, logs, and satisfaction data tied to the same user and account. Fewer escalations, faster resolution, better customer trust." | Full CX stack + Enterprise package (RBAC, SSO, dedicated support) | VP Eng, VP CS, Director of Support, CTO |

Canonical URL: https://posthog.com/handbook/growth/use-case-selling/customer-experience

GitHub source: contents/handbook/growth/use-case-selling/customer-experience.md
