Observability

What is the job to be done?

"Help me know when things break, understand why, and fix them fast."

This is where our roadmap is heading and where significant market opportunity exists. The long-term vision is a full observability stack that competes with Datadog and Sentry on their home turf, but with the massive advantage that our observability data is connected to product analytics data. No other vendor can tell you "this API endpoint is slow AND here's the business impact in terms of user drop-off and revenue loss."

Separating this from Release Engineering is important because the buyer is often different (SRE/platform team vs. product engineering), the competitive landscape is different (Datadog/Sentry vs. LaunchDarkly), and the expansion path is different.

What PostHog products are relevant?

Today: Error Tracking (the usual entry point), Session Replay (error context), Logging (currently beta), and Product Analytics (impact analysis). On the roadmap: APM and tracing, which would complete the full observability stack.

Adoption path and expansion path

Entry point

Usually Error Tracking: the team wants to catch exceptions and regressions. Common entry scenarios:

  1. Sentry replacement: They're paying for Sentry and want to consolidate into PostHog (which they're already using for analytics or flags). Error Tracking is the direct replacement.
  2. First observability tool: Early-stage company that hasn't invested in error tracking yet. PostHog's free tier (100K exceptions/month) lets them start without a new vendor relationship.
  3. Session Replay → Error Tracking: They're already using Session Replay for debugging and discover that errors surfaced in replays could be tracked systematically with Error Tracking.

Primary expansion path

Error Tracking → + Session Replay (error context) → + Logging → + Product Analytics (impact analysis)

The logic of each step:

  1. Error Tracking → Session Replay: a stack trace tells you what broke; the replay shows what the user was actually doing when it broke, which makes reproduction far faster.
  2. Session Replay → Logging: frontend context alone isn't enough; logs show what was happening on the server when the error fired.
  3. Logging → Product Analytics: with errors, sessions, and logs sitting next to product data, you can quantify an error's business impact in users and revenue.
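To ground the first two steps, here is a minimal client-side sketch of wiring up posthog-js so Error Tracking and Session Replay come from a single SDK install. The project key is a placeholder, replay and exception autocapture are toggled in the PostHog project settings, and the option names are worth verifying against the current posthog-js docs rather than treated as a definitive config:

```ts
// Minimal sketch: one posthog-js install feeds both Error Tracking and
// Session Replay. '<project_api_key>' is a placeholder.
import posthog from 'posthog-js'

posthog.init('<project_api_key>', {
  api_host: 'https://us.i.posthog.com', // or your EU / self-hosted instance
  session_recording: {
    maskAllInputs: true, // keep keystrokes in form fields out of replays
  },
})
```

One SDK covering both products is also what makes the first expansion step cheap to pitch: there is no second agent to deploy.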

Future expansion (roadmap dependent)

As APM and tracing ship, the path extends: Logging → APM → Tracing, completing the full observability stack. Position this honestly: name the vision, be transparent about what's available today vs. what's coming.

Business impact of solving the problem

Observability data connected to product analytics is a moat. Every other observability tool (Datadog, Sentry, New Relic) can tell you "this endpoint threw an error." Only PostHog can tell you "this error affected 500 users, 30 of whom were in the middle of checkout, resulting in an estimated $15k in lost revenue this week." That's a fundamentally different conversation with engineering leadership.
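One way to make that claim concrete on a call: exceptions land in PostHog as ordinary `$exception` events next to product events, so a single query can intersect the two. The sketch below is illustrative only; the query endpoint shape and the `checkout_started` event name are assumptions to check against PostHog's query API docs and the customer's own event taxonomy:

```ts
// Hedged sketch (Node 18+): how many users hit an exception mid-checkout
// this week? Endpoint path and event names are assumptions -- adapt them
// to the project's actual setup.
const res = await fetch(
  `https://us.posthog.com/api/projects/${process.env.POSTHOG_PROJECT_ID}/query`,
  {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.POSTHOG_PERSONAL_API_KEY}`,
    },
    body: JSON.stringify({
      query: {
        kind: 'HogQLQuery',
        query: `
          select count(*) as users_with_error_in_checkout
          from (
            select person_id
            from events
            where timestamp > now() - interval 7 day
              and event in ('$exception', 'checkout_started')
            group by person_id
            having countIf(event = '$exception') > 0
               and countIf(event = 'checkout_started') > 0
          )
        `,
      },
    }),
  },
)
console.log(await res.json())
```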

Session Replay as error context is a killer feature. Sentry shows you a stack trace. PostHog shows you the user's actual experience. For frontend and full-stack debugging, this is dramatically faster for reproduction and resolution.
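For example, a handled failure can be captured manually so it lands in Error Tracking already linked to the replay of that session. A minimal sketch, assuming posthog-js's `captureException` helper (verify the name against the current SDK docs; the checkout endpoint is hypothetical):

```ts
import posthog from 'posthog-js'

// Hypothetical checkout call: on failure, the captured exception is tied to
// the user's active session, so the stack trace links straight to the replay.
async function submitCheckout(cartId: string) {
  try {
    const res = await fetch('/api/checkout', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ cartId }),
    })
    if (!res.ok) throw new Error(`checkout failed with status ${res.status}`)
  } catch (err) {
    posthog.captureException(err as Error)
    throw err // still surface the failure to the caller
  }
}
```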

Consolidation play for accounts already using PostHog. If they're already on PostHog for analytics or flags, adding Error Tracking and Logging means one fewer vendor (Sentry, Datadog) to manage. The consolidation saves money and reduces context-switching.

This use case has the highest growth ceiling. The observability market is enormous (Datadog alone is $25B+). Our story gets stronger with every product we ship in this space.

Personas to target

| Persona | Role Examples | What They Care About | How They Evaluate |
|---|---|---|---|
| SRE / Platform Engineer | SRE, Platform Eng, Infrastructure Eng | Reliability, alerting, mean time to resolution, not getting paged at 3am | "Will this catch issues before users report them? How fast can I triage?" |
| Backend Engineer | Backend Eng, API Engineer, Server-side Eng | Stack traces, log correlation, reproducing bugs efficiently | "Can I see what happened on the server when this error fired?" |
| Product Engineer | Full-stack Eng, Frontend Eng | User-facing bugs, reproduction, understanding the user impact of errors | "Can I see the user's session when this error happened?" |
| Engineering Manager | EM, VP Eng, Director of Eng | Team velocity, incident metrics (MTTR, error rates), cost of observability tooling | "How does this reduce our incident response time? What does it cost vs. Sentry/Datadog?" |
| Founder (early stage) | CTO, first engineer | Catching bugs before users complain, not paying Datadog prices | "Does this work out of the box and is it affordable?" |

Signals in Vitally & PostHog

Vitally indicators this use case is relevant

| Signal | Where to Find It | What It Means |
|---|---|---|
| Error Tracking is active but low product count | Product spend breakdown | They've started with errors. Full Observability expansion path available. |
| Customer mentions Sentry or Datadog in notes | Vitally notes / conversations | Competitive displacement opportunity. Consolidation pitch. |
| High Session Replay usage with error-related viewing patterns | Product usage data | They're using replay for debugging already. Error Tracking formalizes this. |
| Engineering-heavy user base, no PM users | User list in Vitally | Engineering-first account. Observability and Release Engineering are the primary use cases. |

PostHog usage signals

| Signal | How to Check | What It Means |
|---|---|---|
| Error tracking exceptions growing week over week | Product usage metrics | They're instrumenting more of their stack. Good adoption signal. |
| Session Replay filtered by error events | Replay usage patterns | They're connecting replay to error debugging. The integration is clicking. |
| High error volume but no alerting configured | Error tracking settings | They're collecting errors but not acting on them. Help them set up alerts. |
| Product Analytics queries referencing error events | Saved insights | They're starting to connect errors to business impact. Encourage this. |

Command of the Message

Discovery questions

Negative consequences (of not solving this)

Desired state

Positive outcomes

Success metrics

Customer-facing:

TAM-facing:

Competitive positioning

Our positioning

Competitor quick reference

| Competitor | What They Do | Our Advantage | Their Advantage |
|---|---|---|---|
| Sentry | Error tracking, performance monitoring, session replay | Deeper product analytics integration; business impact context; flag/experiment connection; better pricing | More mature error tracking features; broader language support; larger install base; dedicated performance monitoring |
| Datadog | Full observability: APM, logs, metrics, errors | Product analytics integration; session replay depth; much cheaper | Complete observability stack (APM, traces, metrics); enterprise-grade; massive ecosystem |
| New Relic | Full observability: APM, logs, errors, distributed tracing | Product analytics integration; session replay; simpler pricing | Complete observability stack; mature enterprise features |

Honest assessment: Our Observability story is credible but incomplete. Error Tracking + Session Replay + Logging is a meaningful starting point, and the connection to product analytics is genuinely differentiated. But we don't have APM or tracing yet. We can't position PostHog as a full Datadog replacement today. The honest pitch is: "For error tracking, we're better than Sentry because of the user context. For full observability, we're building toward it, and in the meantime, the product analytics connection gives you something no other observability tool offers." Be transparent about what's available today vs. what's on the roadmap.

Pain points & known limitations

| Pain Point | Impact | Workaround / Solution |
|---|---|---|
| No APM or tracing yet | Can't replace Datadog for teams that need full backend observability | Be honest about the roadmap. Position PostHog as complementary for now: errors + replay + analytics in PostHog, APM in their existing tool. The consolidation play gets stronger as we ship more. |
| Logging is beta | Teams expecting production-grade centralized logging may find gaps | Set expectations on maturity. For teams with existing logging (ELK, Papertrail), PostHog logging complements rather than replaces initially. |
| Error Tracking language/framework support may lag Sentry | Sentry supports a very wide range of languages and frameworks | Check Error Tracking docs for current support. For unsupported frameworks, generic exception capture via the API may work. |
| No built-in on-call/incident management | Teams wanting PagerDuty-style incident workflows won't find it here | PostHog alerts can trigger webhooks to PagerDuty, Slack, etc. Error Tracking is about detection and context, not incident management workflows. |
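For the last row, a small forwarder is usually all the glue required. A hedged sketch (Node 18+): the PagerDuty Events API v2 endpoint is real, but the shape of the incoming PostHog webhook payload is an assumption to adapt to whatever the alert actually sends:

```ts
// Tiny webhook bridge: receive a PostHog alert, re-raise it as a PagerDuty
// incident via the Events API v2. Incoming payload fields are assumptions.
import http from 'node:http'

const ROUTING_KEY = process.env.PAGERDUTY_ROUTING_KEY ?? ''

http
  .createServer(async (req, res) => {
    const chunks: Buffer[] = []
    for await (const chunk of req) chunks.push(chunk as Buffer)
    const alert = JSON.parse(Buffer.concat(chunks).toString()) // assumed shape

    await fetch('https://events.pagerduty.com/v2/enqueue', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        routing_key: ROUTING_KEY,
        event_action: 'trigger',
        payload: {
          summary: alert.message ?? 'PostHog alert fired', // assumed field
          source: 'posthog',
          severity: 'error',
        },
      }),
    })
    res.writeHead(204)
    res.end()
  })
  .listen(8080)
```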

Getting a customer started

What does an evaluation look like?

Onboarding checklist

Cross-sell pathways from this use case

| If Using... | They Might Need... | Why | Conversation Starter |
|---|---|---|---|
| Error Tracking only | Session Replay | They see stack traces but can't reproduce the user experience | "You can see the error. Want to see exactly what the user was doing when it happened?" |
| Error Tracking + Session Replay | Logging | They have frontend error context but need backend logs | "You can see the user's session. But what was happening on the server at the same time?" |
| Error Tracking + analytics correlation | Product Intelligence (for the product team) | They're connecting errors to user impact. The product team would benefit from the same analytics. | "You're measuring error impact on users. Has your product team seen what they can do with funnels and retention in the same platform?" |
| Error Tracking (engineering in PostHog) | Release Engineering (same engineering team) | Engineering is in PostHog for errors. Feature flags for safe releases is a natural add. | "You're tracking errors after releases. What if you could gate features behind flags and roll back without a deploy?" |
| Error Tracking for AI features | AI/LLM Observability | Traditional error tracking misses AI quality regressions | "You're catching exceptions, but are you catching when your model starts giving worse answers? That's a different kind of 'error.'" |

Internal resources

Appendix: Company archetype considerations

| Archetype + Stage | Framing | Key Products | Buyer |
|---|---|---|---|
| AI Native — Early | "You're shipping fast and breaking things. PostHog catches errors and shows you the user's experience when they hit a bug. No Sentry bill required." Error Tracking + Session Replay is the sweet spot. | Error Tracking, Session Replay | CTO, founding engineer |
| AI Native — Scaled | "Your AI features have failure modes that traditional error tracking misses: hallucinations, slow responses, quality regressions. PostHog catches the technical errors AND lets you evaluate output quality." Bridge to AI/LLM Observability. | Error Tracking, Session Replay, Logging, AI Evals | VP Eng, Platform Lead, SRE |
| Cloud Native — Early | "Stop finding bugs from user complaints. Error Tracking catches exceptions automatically, and Session Replay lets you see exactly what happened. 100K exceptions/month free." | Error Tracking, Session Replay | CTO, founding engineer |
| Cloud Native — Scaled | "Your team is juggling Sentry, Papertrail, and Datadog. PostHog consolidates error tracking, logging, and user context into the platform you already use for analytics." Consolidation pitch. | Error Tracking, Session Replay, Logging, Product Analytics | VP Eng, SRE Lead, Platform team |
| Cloud Native — Enterprise | "Multiple teams, multiple services, and incident context spread across 5 tools. PostHog gives you errors + logs + user sessions + business impact in one platform. No more switching between Sentry, Datadog, and Amplitude during an incident." | Full Observability stack + Enterprise package | VP Eng, Director of SRE, Platform leadership |

Canonical URL: https://posthog.com/handbook/growth/use-case-selling/observability

GitHub source: contents/handbook/growth/use-case-selling/observability.md
