Print-ready generated edition

PostHog Handbook Library

Generated 2026-05-04 from PostHog's public handbook source. The live handbook remains canonical.

Art, brand, and merch requests

Brand | Source: https://posthog.com/handbook/brand/art-requests

🎨 Need artwork or merch? Please request it using the request templates. Do not request art or merch over Slack or email.

All artwork and merch requests are handled by Lottie Coxon, Heidi Berton, and Daniel Hawkins on the Graphics team.

They can help you with things like:

They get a lot of work requests, so they use two separate project boards to organize work – one for merch and one for other art projects. This reflects that merch projects often have much longer timelines and need to be handled differently.

Whenever you want to request a new merch design or other artwork, you should use the relevant design request templates in the posthog.com repo – one template for merch, one for other art requests. Each template automatically assigns work to the correct project board.

Art board automations

The Art & Brand Planning board uses GitHub Actions to keep work moving:

To establish a clear connection between the task and the working file, designers will create a frame containing a link to the task. They should then add a link to that frame within the task for easy reference.

Lottie and Daniel usually ask for a minimum of two weeks' notice, but can often turn things around faster if needed. If your request is genuinely urgent, please share your request issue in the #team-marketing channel and mention Lottie, Daniel, and/or Cory.

Hedgehog library

For team members, we keep all of our currently approved hedgehogs in this Figma file. This enables us to browse the library of approved hogs and export them at the required sizes without relying on the design team.

Here's how:

  1. Open the Figma file. You can manually browse, or use Cmd + F to search based on keywords such as 'happy', 'sad', or 'will smith'.
  2. Select the hog you want. If needed, adjust the size using the 'Frame' menu at the top of the right-hand sidebar.
  3. At the bottom of the right-hand sidebar, select the file type you need in the 'Export' menu, choose @2x, then select 'Export [filename]' to download the image.

If you can't find a suitable hog, you can request one from the design team.

Non-team members can find some of the most-used hogs to download on our press page.

Designing posthog.com

Brand | Source: https://posthog.com/handbook/brand/designing-posthog-website

The is responsible for everything you see on posthog.com. We treat our website & docs as a product, which means we're constantly iterating on it and improving it.

Because our website has a well-defined aesthetic, we often skip the hi-fi design process and jump straight from wireframes into code. Having a designer who can code means we can reach the desired level of polish without _always_ having to produce hi-fi designs, leading to huge time savings.

Step 1: Wireframes [Balsamiq]

We often produce hi-fidelity wireframes, as this allows us to closely envision a design, which in turn helps us skip the hi-fi Figma process.

_Note: Balsamiq uses its own Comic Sans-style font. Don't get hung up on this!_

Step 2: Hi-fi designs [Figma]

Designs are scattered across a variety of unorganized Figma files, but here are some of the most recent iterations.

If there are multiple iterations of a single page, we typically work left to right.

Any mocks that appear to be faded out are considered _old_ and _out of date_ and can be ignored, as there is a better replacement nearby. We sometimes keep them around for easy reference and to preserve a comment trail, but they're easily identifiable because their artboards are set to 50% opacity.

Even with this loosely-documented process, things move quickly and we don't always follow it exactly. If you're looking for something in particular, it's worth pinging the #team-brand channel.

We're also working on creating a singular place for product screenshots, which are exported in light and dark mode using html.to.design.

Email comms

Brand | Source: https://posthog.com/handbook/brand/email-comms

Our email communications can be broadly divided into broadcasts (one-off emails to specific lists, like a newsletter), campaigns (repeatable workflows which users move through dynamically), and API triggered emails (self-explanatory).

This page doesn't deal with our Product for Engineers newsletter, which is sent through Substack and managed by the Content & Docs team.

Email broadcasts

We regularly send three types of email broadcasts.

  1. Changelog, a product announcement email sent bi-weekly via Customer.io.
  2. PostHog for Startups, an email to users of our startup program. Sent monthly, via Customer.io.
  3. Launch emails, which are 'random acts of marketing' but fairly consistent, given that every product team ships alpha, beta, and GA features each quarter.

Occasionally we send other ad-hoc email broadcasts for specific activities such as outages, reminders, announcements, or deprecations.

Changelog

The changelog email is part of the new release process and is used for product announcements.

Every month, we use Customer.io to share a broadcast which summarizes the highlights from the weekly changelog over the last month. We use our discretion to choose which updates to highlight, usually showcasing three or four of the most impactful changes. We usually reserve the top spot for making users aware of new beta features. A test is shared with the team before we send to users.

We tag these emails as Product updates in Customer.io, so users can manage their subscriptions. To maintain high deliverability, we target users in the Recently Engaged (4 months) segment, which includes everyone who has logged in within the last quarter.

PostHog for Startups

Each month, we send an email to users in our PostHog for Startup program. A test is shared with the team before we send to users. This email is targeted to users in the following segments, all at once: PostHog for Startups (Old), Users in the YC program, old and new, Old startup teams (Backfill only), and PostHog for Startups and YC (new).

The email usually comprises three sections, which cover new guides relevant to startup use-cases, new betas available for them to try, and a spotlight on a new org in the program. We end by asking for feedback.

We categorize these emails as Actually useful marketing emails in Customer.io, so users can unsubscribe if they wish. This email usually comes directly from Joe.

Launch emails

Most product and feature launch emails come from the Product Marketer who sends them -- but sometimes campaigns trigger from other addresses, such as billing@posthog.com.

The exceptions and other solutions are:

We specifically do not want emails we think people will reply to going into hey@posthog.com because it is sporadically monitored at best, and hard to collaborate through.

When we ask users to share feedback through email, it should either link to beta-feedback@posthog.com, the support modal, or to ourselves personally. Never hey@posthog.com.

Doing this lets us filter out the noise for everyone else while still giving good internal visibility on meaningful feedback. If a user sends you feedback, you should share it with the relevant product team or in https://posthog.slack.com/archives/C011L071P8U.

When adding yourself as a send-from address in Customer.io, be sure to edit the display name to '[your name] from PostHog'.

Other broadcasts

Any ad-hoc customer email broadcasts are owned by the , and are usually sent via Customer.io. These can include product updates, outage alerts, or other PostHog news if needed.

These emails are usually tagged as Service updates in Customer.io when they include important account or product information. They are given a dedicated unsubscribe option in the footer, making it clear that we do not recommend users unsubscribe from them.

Important service updates are the _only_ type of email we may send to unsubscribed users, and only if we feel it is warranted to do so.

Service updates emails are often part of an engineering incident. We handle comms for those too.

Whenever we need to send an email broadcast like this we begin by creating an issue in the Meta repo, unless it involves discussion of personal information - in which case it is discussed in Company Internal. This enables us to summarize information and seek approval from teams while also keeping our work open source, and without requiring everyone to log in to Customer.io. Issues are closed when an email is sent.

If you'd like to work with Marketing on an email activity, please begin by opening an issue in the meta repo.

Email campaigns

We maintain many email campaigns to help users get the most out of the product. The most developed and documented of these are our four onboarding campaigns.

Onboarding emails

Generally, when we talk about onboarding emails we refer specifically to the flow for PostHog Cloud sign-ups, but there are also other flows in use for other occasions.

PostHog Cloud onboarding emails

The latest revision is Onboarding 8. You can read about old revisions on the blog.

The onboarding flow regularly changes as we test new ideas. Any changes to it are, as with all other email campaigns, documented in the Meta repo.

We aim for all content in this flow to be relevant and helpful to users, without being salesy. All emails come directly from Joe, who triages replies daily, answering or redirecting as needed. The campaign is triggered when a user signs up for the first time, with the goal of users reaching billing product activation within 7 days of opening any email in the flow.

We tag all these email flows as onboarding in Customer.io and categorize them as Welcome emails so that users can easily manage their preferences.

Self-hosted and open source onboarding emails

We sunset our paid self-hosted product a long time ago, but some users still try to use the legacy version. For this reason, we run a dedicated self-hosted onboarding campaign of three emails sent over the course of six weeks. These emails come from the hey@posthog.com address.

The goal of this flow is to set expectations for what the self-hosted experience is like and to encourage users to move to the PostHog Cloud product for a better experience.

Our open source onboarding email is essentially identical to the self-hosted onboarding flow, but excludes information about the sunsetting of the self-hosted product.

Beta onboarding emails

When a user opts in to a beta via the feature preview menu we enter them into an email flow designed to help us collect feedback from users.

This flow currently comprises a single, personal email from either Joe or the team lead working on the beta feature. This email is sent one week after the user joins the beta and features tailored content based on which beta the user joined.

When responses come in, Joe generally triages replies and directs feedback to the relevant team, as well as rewarding users with merch as thanks for their feedback.

Launching a beta? It helps to let the Brand team know in the team Slack. The team can then add your beta to the beta onboarding flow, and plan ahead for marketing announcements as needed.

Onboarding - new hires

This is an internal email flow for new hires, which triggers whenever a new user signs up with a PostHog email address. We currently exclude most longer-tenured team members from this flow, to avoid clogging their inboxes.

This campaign runs for a new hire's first 30 days and sends them 7 emails with information to help them get set up at PostHog.

There's no way to unsubscribe from these emails, but if you're triggering them with test accounts then let the Brand team know and they can exclude you from the campaign.

Other email campaigns

We run a series of other small campaigns with smaller volumes. These include:

API triggered emails

We maintain a series of API-triggered emails by working with the . These are found in Customer.io's transactional tool and broadly encompass billing and security updates, such as an upcoming bill or a change to 2FA settings. They are triggered via API to keep them highly relevant and to maintain high deliverability.

Transactional emails feature Liquid code to help personalize their content. All transactional emails should include Liquid in the main body to clearly indicate which project or organization the email concerns, with suitable fallbacks. For example:

```liquid
We turned the free allowance for {{ trigger.product_name | default: "a product" }} on {% if trigger.team_name %}{{ trigger.team_name }}{% else %}your account{% endif %} off and on again, giving you another month of free usage.
```
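As a rough illustration of how the `trigger.*` fields above get populated, here's a minimal Python sketch of the JSON body an API trigger might send to Customer.io's transactional send endpoint. The message ID, endpoint constant, and helper are hypothetical illustrations, not our actual integration; real IDs and credentials live in Customer.io.

```python
import json

# Hypothetical endpoint constant for illustration; authentication and
# sending are omitted -- this sketch only builds the request body.
APP_API_URL = "https://api.customer.io/v1/send/email"

def build_transactional_payload(message_id, email, product_name=None, team_name=None):
    """Build the JSON body for an API-triggered email (illustrative sketch).

    Fields placed in message_data become available to Liquid as
    trigger.product_name / trigger.team_name, so the template's
    fallbacks only fire when a field is omitted here.
    """
    message_data = {}
    if product_name:
        message_data["product_name"] = product_name
    if team_name:
        message_data["team_name"] = team_name
    return {
        "transactional_message_id": message_id,  # hypothetical ID
        "to": email,
        "identifiers": {"email": email},
        "message_data": message_data,
    }

payload = build_transactional_payload(
    "free-allowance-reset", "user@example.com", product_name="Session replay"
)
print(json.dumps(payload, indent=2))
```

Because `team_name` is omitted in this call, the template's `{% else %}your account{% endif %}` branch would render, which is exactly the fallback behavior the guideline asks for.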

In-app comms

Brand | Source: https://posthog.com/handbook/brand/in-app

These are instructions for internal in-app comms tools at PostHog. To do in-app comms of your own, check out surveys.

Occasionally, we use in-app messages to tell users about certain things. We recognize that in-app messages can be intrusive and we want to avoid spamming our users with too many of them, too frequently. For that reason, we're judicious about the way in which we use them.

We currently don't have a separate system for tracking in-app messages, so Brand currently owns the channel and is responsible for ensuring that messages aren't used excessively.

Types of in-app message

Currently, there are three ways in which we can send in-app messages.

How we use in-app messages

We use each of the three channels above for different purposes, guided by the needs of a message and the level of intrusion.

Creating new in-app prompts

In-app prompts are intrusive to users, but can be used for a wide variety of reasons. Therefore, if you create one we ask that you...

This will enable others in the team to more easily keep track of what in-app messages are being shown, and what their content is. As a reminder:

If you have any questions, please ask in #ask-posthog-anything on Slack.

Brand overview

Brand | Source: https://posthog.com/handbook/brand/overview

The Graphics team creates all illustration and artwork for PostHog. As the team responsible for PostHog's visual identity, they have the final say on all such matters, including brand.

The Graphics team works closely with all teams at PostHog.

This team does not own product design or website design, which are handled by the engineering teams and the respectively.

Partners

Brand | Source: https://posthog.com/handbook/brand/partners

We're frequently contacted about revenue-sharing partnerships, or individuals and agencies that want to be listed as official partners. Also, technology integrations!

If someone contacts you about partnering with PostHog, refer them to our partnerships page and ask them to complete the survey there. This will directly alert relevant teams internally.

We recommend users who need implementation help explore our existing resources, purchase time with the onboarding team or contact us. If we can help, we will!

Helpful resources for users

Users who contact us about wanting support from a partner often want particular types of help. We've curated some resources below which we can give them so they can self-serve where possible.

Migration help

If a customer contacts us about migrating data into PostHog we should first refer them to the Sales & CS Team, who will triage them. We also have guides to help teams migrate data on their own.

Implementation help

Sometimes teams want help or advice on their event taxonomy, or creating specific insights. Users who look like they have the potential to pay >$20k should generally be referred to the Sales & CS team, otherwise they should go through the regular support flow. We also have a wide variety of dashboard templates and tutorials to help teams get started.

If the user is very new then we usually strongly advise enabling auto-capture and creating an AARRR dashboard as a first step.

Self-hosted help

We no longer provide support for self-hosted deployments. If users contact us for help with self-hosted deployments, we refer them to our legacy docs and strongly recommend they migrate to PostHog Cloud.

Getting more help (from someone else)

If users need more help than we can reasonably provide, they may ask for external support or partners. We do not have any official partners, and users should know that any suggestions we make are not vetted or accredited in any way.

That said, some users have found success working with the following external partners:

Sometimes teams are able to find success by posting on platforms such as Upwork.

Design philosophy

Brand | Source: https://posthog.com/handbook/brand/philosophy

Looking for our brand style guide? Look no further.

Different ~~by~~ _with_ design

Design at PostHog works differently than most companies. We fundamentally believe that we can differentiate ourselves with design – by thinking outside the box and pushing boundaries. This means we're not structured like a typical design org.

How our customers interact with product analytics (and other tools) has largely remained unchanged since these tools were created a couple of decades ago. There's nothing _wrong_ with how they work currently, but people were also very happy riding horses until cars were widely adopted.

Does that mean we're going to change how everything works? Not necessarily. It just means we have the freedom to try different things and see what sticks.

Our philosophy started with our website

Our first design hire was our graphic designer, Lottie. It's not every day you see a graphic designer in the first 5 hires! This was the result of a belief by our founders that having a bold, yet relatable brand would be a differentiator.

After a year of constant iteration on our website and docs over 2021-2022, we landed in a place where our website is now a reference for many other startups who are looking to do something innovative with their websites.

We're now extending this thinking into our product.

Press

Brand | Source: https://posthog.com/handbook/brand/press

Press enquiries

Any press-related enquiries should be directed to press@posthog.com - this includes any emails you receive personally. Only Joe, James, Tim or Charles should be talking to the press on PostHog's behalf.

With the exception of occasional major press releases (see below), PR is purely a reactive activity at PostHog. We do not invest in proactive PR yet, as we believe other channels are a higher priority.

Managing press releases

From time to time, we may have significant company news that we want to release via the press, in addition to our usual channels. This is usually for significant company milestones such as funding rounds.

We have a simple process to ensure that any press releases go smoothly.

First steps

Two weeks before release

We currently prefer working with a single media partner on an exclusive basis, as we believe a single, high-quality story is more impactful than taking a broad approach, given our current early stage.

One week before release

On the day of release

Press release template

Include media and quotes from James, Tim or influential people.

# Headline

News

## About PostHog

PostHog is an open source developer platform. PostHog enables software teams to understand user behavior – auto-capturing events, performing product analytics and dashboarding, enabling video replays, and rolling out new features behind feature flags, all based on their single open source platform. The product’s open source approach enables companies to self-host, removing the need to send data externally.

Founded in 2020 by James Hawkins and Tim Glaser, PostHog was a member of Y Combinator’s Winter 2020 batch, and has subsequently raised $12m in funding from GV, Y Combinator, and notable angel investors including Jason Warner (CTO, GitHub) and Solomon Hykes (Founder, Docker).

## About Y Combinator Continuity Fund

YC Continuity is an investment fund dedicated to supporting founders as they scale their companies. Our primary goal is to support YC alumni companies by investing in their subsequent funding rounds, though we occasionally invest in non-YC companies as well.

Like YC’s early-stage partners, the entire YC Continuity team has strong operating experience. We work to create opportunities for founders to continue their personal growth and scale their companies successfully.

We also run the YC Growth Program, which brings together founder-CEOs who are leading rapidly growing companies.

Startups & Y Combinator

Brand | Source: https://posthog.com/handbook/brand/startups

Want to apply for our startups program? Sign-up here, or apply on Bookface if you're in Y Combinator.

We run two special programs for early-stage teams. The primary place for discussing both programs is the #project-startups-and-yc channel in Slack.

| Feature | Startups | Y Combinator |
| --- | --- | --- |
| Eligibility | <2 years old, <$5M raised, not acquired | Must be in YC, <$25m raised |
| Credit | $50,000 for 12 months | $50k per year, whilst eligible |
| Can use credit for add-ons? | ⚠️ Yes, but cannot use credit for BAA in Boost add-on | ✅ Yes, and can use credit for BAA in Boost add-on |
| Founder merch | Welcome pack (max 1) | Different welcome pack (max 4) |
| Community | — | Tim's Whatsapp, priority support |
| Apply via… | Startup page | Secret YC page |

PostHog for Startups

Any company that is <2 years old and has raised less than $5M in funding is eligible to apply and claim the following:

❗Credits cannot be used toward a BAA under the Boost plan.

Small open source projects without corporate backing and less than $200k annual revenue can contact support to have the 12-month credit expiry waived.

All applications are automatically approved, then manually reviewed for eligibility.

We track all PostHog for Startups applications in this Zapier table and this Zap.

PostHog for Y Combinator

This program is similar to our startup program but has some key differences for YC teams. Teams can be in any YC batch, with any amount of funding raised, and can claim the following:

You can find the copy for the latest deal on Bookface in this doc. To post updates, you need to ask James or Tim to do it.

This deal is not available to YC alumni who have started another company – if they're eligible, they can apply for PostHog for Startups instead.

✅ Credits can be used to claim a BAA under the Boost plan.

YC teams must apply via our secret YC page, where we ask for a screenshot from Bookface to prove their eligibility.

We track all PostHog for YC applications in this Zapier table.

What happens after companies apply?

  1. Application

A company signs up to PostHog, adds billing details, and applies via the startup form.

  2. Credit

If they meet the basic criteria, we automatically apply the correct amount of Stripe credit.

  3. Welcome + merch

Shortly after, they receive an automated email from Joe Martin, in which we:

  4. Milestones

When teams reach 50%, 75%, or 100% of their credit usage — or when credits expire — they receive milestone emails. These come from Customer.io and are managed by Joe Martin.

  5. Post-credit

Once credit is fully used or expired, teams are moved to a standard paid plan automatically. We automatically email users to let them know and offer a one-time $500 credit bonus to help soften the transition.

Reviewing applications

Applications are automatically enriched with Clearbit and Clay, then approved. We then manually review all emails to ensure eligibility. If there's a mismatch (e.g. on founding date or funding raised), we’ll email the founder. If we don’t hear back within a week, or we confirm ineligibility, we remove the credits.

Merch

All merch is fulfilled through the PostHog store by Micromerch.

Issues? Reach out in #merch Slack. Founders can also email merch@posthog.com.

Monthly Newsletter

We send a short, founder-focused newsletter once per month to all program participants. This is handled as a Customer.io broadcast using a prebuilt template.

Credit usage

Credits can be used for almost all PostHog products and add-ons, including platform packages.

Credits are not transferable, and don’t carry over or convert to cash. They are valid for 12 months, and that timer begins at application. Once credits expire or are fully used, teams are moved to standard billing.
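As a sanity check on that timeline, here's a minimal sketch of the expiry rule, assuming expiry falls exactly 12 months after the application date (the helper names and the leap-day handling are our own illustration, not a documented billing rule):

```python
from datetime import date

def credit_expiry(application_date: date) -> date:
    """Credits expire 12 months after the application date (assumed reading)."""
    try:
        return application_date.replace(year=application_date.year + 1)
    except ValueError:
        # A Feb 29 application date rolls to Feb 28 of the following year,
        # since that year has no Feb 29.
        return application_date.replace(year=application_date.year + 1, day=28)

def credit_active(application_date: date, today: date) -> bool:
    """True while the 12-month credit window is still open."""
    return today < credit_expiry(application_date)
```

For example, under this reading a team that applies on 2025-03-14 would see its credits expire on 2026-03-14.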

Partners

We currently partner with:

Discount codes are sent in the welcome email after signup.

If users run into issues with redemption, we can help liaise — though all offers are ultimately at partner discretion.

Contacts:

We previously offered DigitalOcean credits ($25k) and a Mintlify partnership, but these were retired in Q2 2025.

Program extensions

We don’t usually extend credits — the 12-month window is intended to be firm and fair. However, we’re open to requests in exceptional cases.

Founders must clearly explain why they couldn’t use the credit in time and provide evidence of recent progress or changes. Requests are reviewed manually by the Customer Success team.

Reporting

We have a dashboard for this.

Troubleshooting

If the Slack invite isn't sent or you discover founders did not receive it, you can manually invite users to the #posthog-founders-club channel. Make sure to select that they are "An external organization" when prompted right after adding their email address. A Slack admin will need to approve them before they're fully added to the channel.

If they did not receive an automated coupon to order the YC Kit from the merch store, you can generate a new coupon code manually in the Shopify admin view. The easiest way is to duplicate an existing coupon, regenerate the coupon code, and save it. You'll have to repeat the process for every founder.

Credentials for Zapier, Shopify, etc. are available in the shared 1Password account.

Style guide

Brand | Source: https://posthog.com/handbook/brand/style-guide

This guide explains how the PostHog brand should appear across:

Brand != marketing

Brand is not the same as marketing. It's the sum of how people experience PostHog – from the homepage to documentation to support conversations.

Our goal is simple: earn the trust of developers.

Developers tend to distrust marketing. They prefer tools that feel human, honest, and thoughtfully built, not overly polished corporate brands.

Because of this, PostHog deliberately avoids sounding or looking like typical B2B SaaS companies.

Brand is everyone's job

Every interaction contributes to the brand.

Two ideas shape how we build and present PostHog.

Yes and…

We expand ideas instead of shutting them down. This mindset encourages creativity and experimentation.

It'd be better if we built it ourselves

The best products are built by people who care deeply about what they create.

In practice: everyone who ships something contributes to the brand.

Brand personality

| PostHog should feel | Avoid being |
| --- | --- |
| Opinionated | Corporate |
| Human | Generic |
| Slightly weird | Overly polished |
| Thoughtful | Forced or cheesy |
| Direct | |
| Honest | |

Developers connect with products that feel authentic, not brands trying to sound impressive.

Voice and tone

Write the way you would explain something to a smart friend.

Humor is welcome, but it should never feel forced.

The dating profile test

Most SaaS companies write like they're submitting a résumé. Safe. Formal. Generic.

PostHog writes more like a dating profile: authentic, memorable, slightly weird, showing personality.

Users don't connect with product specs. They connect with people and ideas.

Design philosophy

PostHog design focuses on thoughtfulness and clarity, not looking luxurious.

Taste > polish

Design should feel crafted, not corporate. A thoughtful design builds trust.

Care about the details

Small details communicate quality. Examples: typography, spacing, illustration, layout, visual balance.

Users may not consciously notice them, but they still feel them.

Be intentional

Every design element should serve a purpose. Design should help:

Avoid decoration without meaning.

Visual identity

The PostHog visual style is intentionally distinctive. It should feel: handcrafted, playful, slightly weird, thoughtful, recognizable.

If something could exist on any other SaaS website, it probably isn't PostHog enough.

Core visual elements

Hedgehogs

Hedgehogs are a core brand element (though not every asset needs one).

Guidelines:

The hedgehog is a creative vehicle for translating our personality into something expressive and alive.

Illustration

Illustrations help explain ideas and add personality. They should:

Avoid:

Simple is usually better.

Typography

Primary fonts:

Typography should prioritize readability, hierarchy, and clarity.

Color

Color should guide attention rather than dominate the page.

General approach:

Avoid:

PostHog vs. typical SaaS

| | Average SaaS | PostHog |
| --- | --- | --- |
| Headline | Buzzwords | Clear and direct |
| Visual style | Gradients and abstract shapes | Custom illustrations |
| Tone | Formal | Conversational |
| Design goal | Look professional | Look intentional |
| Brand | Generic | Distinctive |

Common mistakes

Generic SaaS design – Gradients, blobs, and stock illustrations make designs forgettable.

Decoration over clarity – Design should support the content, not distract from it.

Forced humor – Humor should feel natural.

Overly polished designs – Perfect designs can feel corporate.

Copying competitors – PostHog aims to be distinctive, not trendy.

Design checklist

Before publishing something, ask:

If the answer to most of these is yes, you're probably on the right track.

The goal isn't to look expensive. The goal is to make people think: "Someone clearly cared about making this."

Testimonials and G2

Brand | Source: https://posthog.com/handbook/brand/testimonials

Social reviews on G2

We collect reviews from users on G2, both as social proof and to gather feedback on our product. After trialling incentives, messaging, and processes throughout 2022, we established that:

As such, we have automated our review request process using Customer.io.

The automation currently invites users to leave an honest review in exchange for a $25 gift card, if they match the following criteria.

OR

OR

OR

AND

AND

This process is handled in Customer.io using the G2 Review Requests segment and the G2 Review Requester campaign workflow. Users are only asked to review PostHog once, with a 2-day delay after targeting confirms a match. This is important so that we avoid bombarding users with emails and don't nag them for reviews after the initial request.

More information about the G2 review process is available in the initial G2 automation RFC.

New reviews are automatically shared with team members in the internal #posthogfeedback Slack channel.

Testimonials

We speak to our users regularly and are often fortunate enough that they say nice things about our product or our way of working. Other times users talk about us in public, such as on social media or on review platforms and forums.

Not all of the feedback we receive can be used publicly. We don't assume that comments from product feedback calls can be used without explicit approval, for example, though approved customer stories, public reviews and social media comments certainly can.

If feedback can be used publicly then we collect it here, so that we can use it elsewhere to enhance our website or docs.

PostHog community

Community | Source: https://posthog.com/handbook/community

We want to build a self-sustaining and scalable community of engaged users because it will enable us to own our audience in a way that third party social media platforms do not. Like brand or content, building a thriving community is a (very) long term bet, so we will need to both invest a lot of time up front and then wait to see what works and what doesn't.

Our approach to building community at PostHog differs from most devtools in two ways:

Responsibility for community

This is shared across multiple teams and people - we (deliberately) do not have one person responsible for 'community':

Support should not be considered part of community at PostHog. Support is driven by the Customer Success team, primarily using in-app support and dedicated Slack channels. Good customer support helps build positive word of mouth, but replying to support queries is not an engaging or scalable way to build a thriving community.

Content hubs

We are in the process of building these out. We have created two hubs targeting our ICP:

We have a bunch of features we are building here – more details to come!

Community forums

Our community forums live at posthog.com/questions – but they come with a twist...

Anyone can ask a question within the forums, but they can _also_ ask a question at the end of any docs page (under the "Questions?" subheading). We've found this to be a great place for people to ask very specific questions after attempting to find an answer in documentation, as it acts as a mini-FAQ section.

Questions that are asked within the docs are also automatically aggregated to the correct category in the community forums.

Asking a question

A user can write a question, but they'll need to create a PostHog.com account before posting. (Note: This authentication system is currently separate from PostHog Cloud accounts, though we have plans to unify them.) Users can write Markdown and upload images to a question.

Once it's posted, a question permalink page is generated, which gets indexed in our site search (and tends to rank well in Google, too). The user is automatically subscribed to reply notifications by email.

Anyone can subscribe to thread replies by clicking the bell icon in a thread (after signing in).

Answering questions

If you're a PostHog team member, read the guidelines for responding to community questions.

Points & achievements

Community members can earn achievements for activities like asking questions, helping others, voting on the roadmap, and completing their profile. Each achievement awards points that can be redeemed for stickers, merch credits, and other rewards from the Points tab on their profile.

Points & rewards

Community | Source: https://posthog.com/handbook/community/points

PostHog community members can earn points by completing achievements and redeem them for stickers, merch credits, and other rewards from the Points tab on their profile.

Image: Points tab on profile

How points work

Points are earned by completing achievements. Each achievement has a point value based on the time and effort we expect someone to spend earning it. For example, voting on the roadmap, updating your bio, and asking your first question are each worth a few points – enough to earn a sticker.

Users can also receive points as one-off gifts from moderators for special contributions that don't fit neatly into an achievement category.

Earning points

Points are primarily earned by completing achievements. To see available achievements and plan which to tackle next, visit the achievements page.

Achievements are awarded for activities like:

Balance & transactions

Redeeming points

Redemptions are completely self-serve from the Points tab on your profile.

The points store offers two types of rewards:

Products (e.g. stickers)

When you redeem a product like a sticker:

  1. Click "Redeem" on the reward card
  2. Confirm the redemption
  3. A button appears letting you order it immediately
  4. Enter your name and shipping address to complete the order

Merch credits

When you redeem a merch credit (gift card):

  1. Click "Redeem" on the reward card
  2. Confirm the redemption
  3. You receive a discount code
  4. Click "Use in store" to open the merch store with your code pre-applied
  5. Shop for anything you'd like!

Merch codes are saved in your transaction history, so you can always find them again if needed.

---

For moderators

Gifting points

Moderators can gift points to users for contributions that don't fit into existing achievements:

  1. Navigate to the user's profile on posthog.com
  2. Click the gift icon (present button) in the profile header
  3. Enter the number of points and a reason for the gift
  4. Click "Send gift" and confirm

Monitoring redemptions

Redemptions are self-serve, but we have a Slack channel set up to monitor them while we're ironing out any kinks. This gives us visibility without requiring manual approval.

Creating achievements

See Achievements in the community profiles documentation for instructions on creating, assigning, and revoking achievements.

Community profiles

Community | Source: https://posthog.com/handbook/community/profiles

When a user signs up to ask a question, a community profile is created for them at /community/profiles/[id] where they can add a bio and links to social profiles.

Their profile page also aggregates any community discussions they've participated in. (As a byproduct, this is an easy way to track down a user who primarily creates a community profile for self-promotion!)

Team members have access to special profile features, like:

We also use data from these profiles in other areas of the site:

Creating a profile for a new team member

To reduce the onboarding steps, Lottie or Cory can help create a profile for a new team member. To do this:

  1. On posthog.com/questions, create an account with their @posthog.com email (first name, last name, email, and a random password).
  2. Via the PostHog website, open the newly created profile in our website CMS:
     - Click the newly created profile link in the right sidebar (below My profile and above Edit profile).
     - Click the "View in Strapi" link in the right sidebar.
  3. In their profile in our website CMS, update the following, then hit Save:
     - companyRole *
     - startDate *
     - location * - use a string, like “London, UK”
     - country - use a TWO-CHARACTER country code, eg: GB
     - avatar - can be a placeholder image until an illustration is drawn, but should be a PNG with a transparent background

*This information can be found in their onboarding checklist

  4. Update their user permissions:
     - Under user, click their email address to be taken to their user page.
     - Under role, change to Moderator and hit Save.
  5. Let the team member know their profile is created, and that they should add a bio! To access their account, they can use the password reset option on the login form at posthog.com/questions.
  6. When their profile illustration is ready, upload it on a square canvas as a @2x PNG, with the portrait filling as much of the canvas as possible. If an arm has to be clipped, clip the right side of the image so their arm on the left side doesn't get cut off.

Achievements

PostHog community members can earn achievements for various activities. Each achievement awards points that can be redeemed for stickers, merch credits, and other rewards. See Points & rewards for details on the points system.

Achievements are valued based on the time we expect someone to spend earning them. For example, someone who votes on the roadmap, updates their bio, and asks their first question has earned enough points for a sticker.

Assigning, revoking, and creating new achievements is handled in Strapi, our website CMS.

Creating a new achievement

  1. Login to our website CMS. (Request an account from Eli or Cory if you don't already have one.)
  2. Click "Content Manager" > "Achievements" > "Create New Entry"
  3. Fill in the achievement details
  4. Click "Save" > "Publish"

Manually assigning an achievement to a community member

  1. Click "Content Manager" > "Profiles"
  2. Find and click the desired profile
  3. Scroll to the "achievements" field > Click "Add an entry"
  4. Select the desired achievement from the "achievement" dropdown
  5. Click "Save"

Revoke an achievement from a community member

  1. Click "Content Manager" > "Profiles"
  2. Find and click the desired profile
  3. Scroll to the "achievements" field > Click the trash icon on the desired achievement
  4. Click "Save"

Answering community questions

Community | Source: https://posthog.com/handbook/community/questions

The Website & Docs team can help in configuring Slack notifications for small teams to receive alerts to questions in a team channel – usually the one designated for support.

Individually, you can also subscribe to topics of your choosing (with your PostHog.com account) by clicking the bell icon next to the topic's title. You'll receive a daily summary of new questions by email, and you'll find open threads for that topic in your personalized community dashboard (available when signed in).

Who should answer community questions?

We encourage all team members to watch for new community questions and answer them if they can. (Questions are sent into Zendesk for the support hero, but you can help ease the burden _while_ contributing to faster response times, which can lead to more positive interactions with customers and prospective customers.)

If a question needs a follow-up later on, tag it with Internal: follow-up and the Website & Docs team can make sure there's a resolution.

Guidelines

Phrasing & tone

When possible, respond in a way that doesn’t directly indicate you work for PostHog. (We can encourage community engagement by intentionally separating ourselves from the image that it's a support forum where only PostHog employees respond.)

Various cases you may come across...

Some questions don’t make sense to be public, and some answers should be more widely accessible. Here’s how to handle those:

Thread resolution

We want the OP (original poster) to mark a solution themselves. Never mark your response as a solution immediately, as it can look like we're too presumptuous in assuming we correctly answered a question, when there may be more nuance.

Context

Moderators can see additional info about a user when viewing a question. (If you're not yet a moderator, create an account, then ask your team lead to add you to your small team's page. Once you're added there, you'll instantly be upgraded to moderator status.)

Image: Moderator view

  1. Below the question is a moderator panel with the user's name and email, as well as a link to their record in PostHog Cloud.
  2. In the right sidebar is an embedded version of PostHog Sidecar, a yet-to-be-released Chrome extension that reveals the user's activity from PostHog Cloud wherever they can be identified across the web (usually by email). _Note: You don't need to install the Chrome extension as the pane is embedded directly within the community forums._

SOC 2

Company | Source: https://posthog.com/handbook/company/_snippets/soc2

Utilize our Trust Center powered by SafeBase to self-serve reports, policies, and certifications.

PostHog is certified as SOC 2 Type II compliant, following an external audit.

Our latest security report is publicly available (covering controls as of May 31st, 2025). Our reporting period runs from 01 June - 31 May each year.

Policies

We have a number of policies in place to support SOC 2 compliance. All team members have been invited to Drata to review these and to complete security training and background checks as part of onboarding.

All of our policies are available for viewing and upon request via our Trust Center.

Adding company-wide tools and vendors

Company | Source: https://posthog.com/handbook/company/adding-tools

What is it?

In the software section of our spending money page we say:

There needs to be a very significant upside to introducing a new piece of software to outweigh its cost.

This is our mechanism for making decisions where we need to assess the cost of introducing a new piece of software.

It is inspired by this post on "fad resilience" from Slack. We want to be able to introduce new tools and services, without introducing overlapping tools and unnecessary complexity.

What makes us fad resilient is that you are free (and encouraged) to try new things. But by introducing new things, you become responsible for rolling them out. And for replacing anything they make obsolete.

What is it not?

This doesn't apply to making "cheap decisions". A cheap decision is one that can be easily completed or reversed, or one that only affects your work, not other people's. For those types of decisions you should continue to follow the guidance in the software section of our spending money page. This is about the adoption of new company-wide tools, or implementation of vendors that are going to be used in the PostHog product.

How does it work?

If you find yourself saying something like:

Then you need to do the following:

1. Try the tool in a low-risk context

Use the tool in a context where it is easily replaced and does not involve sensitive data. If you have doubts about what information can be shared at this stage, check with #legal first. Similar to a spike.

The goal is to:

2. Open a pull request in Company Internal

At the same time open an issue describing why we should adopt the tool. Anyone proposing a new vendor should think about the impact on the whole company, not just their team or use case.

You should think carefully about, and your proposal should consider, the types of things described below:

What to think about?

Problem and motivation

Trial/Proof of concept

Data exposure and privacy

From least to most sensitive:

Also consider:

Vendor due diligence

Alternatives and competition

Internal impact

Customer defensibility

These are guidelines, not a rigid checklist. The goal is for everyone to be thinking about the overall impact of introducing a new tool, and to allow for a holistic review of the risks against the benefits.

Many proposals will not make it past this stage – that's good. We don't want a stack that changes constantly, but we also don't want one that never improves.

After a decision is made: Review process

Once a decision has been made to adopt a tool/vendor, the person proposing the tool is responsible for coordinating the next steps.

1. Finalize business terms

Work with the vendor to negotiate the commercial and business terms, such as:

Once the business terms are mostly settled, the vendor’s documents will need to go through legal review before anything is signed.

Typically, these include:

As soon as it looks like we intend to move forward with the vendor, post in #legal, and give a heads-up that:

As soon as documents are available for review, send the documents to #legal (in an editable format such as .docx).

2. Plan time for legal review

Legal review usually takes a few business days depending on bandwidth, priorities, and existing obligations, and negotiations may take longer depending on the use case, the vendor’s contract terms and how quickly they review and negotiate proposed changes.

Plan accordingly and involve legal early. If you have a deadline for implementing the tool or there is another reason the standard timeline above needs to be expedited, please make sure to let legal know ahead of time.

3. Additional requirements for Subprocessors

If a vendor qualifies as a subprocessor, the review process will usually be more involved.

Generally speaking, a subprocessor is a vendor or tool that processes customer or end-user data as a fundamental part of the PostHog product or infrastructure. For example, infrastructure providers (like cloud hosting) or services that process production data are clearly subprocessors.

Many internal tools used for productivity or operations (for example, documentation and productivity tools) are not necessarily subprocessors.

As a rule of thumb, any vendor that needs to have access to customer end-user data in order for a part of the PostHog product to function should raise alarm bells, but if you are unsure whether a tool/vendor qualifies as a subprocessor, always check with #legal early.

For new subprocessors:

4. Using the tool

Once:

...the tool can be used in production.

Logos, brand, hedgehogs

Company | Source: https://posthog.com/handbook/company/brand-assets

Looking for brand voice, design philosophy, and visual identity guidelines? Check out our Style guide.

Want to use our hedgehogs for your community event or article? We have a huge library of them you can use. Can't see what you need? Let us know! Please don't use AI art though. We're quite particular about our illustrations and AI just doesn't get it right.

Logo and brand usage for third-parties

We’re really happy people want to build on top of PostHog, but we want to keep it clear when something is made by us or made by someone else. If you've built a third-party app on top of PostHog or want to partner with us in some way, here is some high-level guidance for you to bear in mind.

We don't like doing it, but if we spot name, brand asset, or logo usage that is inconsistent with our guidelines or brand, we will reach out to try to get that sorted out, so please try to be thoughtful about branding and try to be consistent with the guidelines we've set out here. If you have questions, please reach out to us at marketing@posthog.com for clarification.

If you're looking for the PostHog logo, you came to the right place. Please keep the logo intact. SVG is always preferred as it will infinitely scale with no quality loss.

(Images shown below have transparent backgrounds but appear here with a solid background color.)

| Preview | Name | Vector | PNG | PNG w/ padding\* |
| ------- | ---- | ------ | --- | ---------------- |
| <div style="background:#EEEFE9;padding:5px 5px 0;margin-left:-5px;"><img src="/brand/posthog-logo@2x.png" width="157" /></div> | Standard logo | <a href="/brand/posthog-logo.svg" download>SVG</a> | <a href="/brand/posthog-logo.png" download>PNG</a> \| <a href="/brand/posthog-logo@2x.png" download>PNG @2x</a> | <a href="/brand/posthog-logo-padded.png" download>PNG</a> \| <a href="/brand/posthog-logo-padded@2x.png" download>PNG @2x</a> |
| <div style="background:#EEEFE9;padding:5px 5px 0;margin-left:-5px;">Image: /brand/posthog-logo-black.svg</div> | Dark logo | <a href="/brand/posthog-logo-black.svg" download>SVG</a> | <a href="/brand/posthog-logo-black.png" download>PNG</a> \| <a href="/brand/posthog-logo-black@2x.png" download>PNG @2x</a> | <a href="/brand/posthog-logo-black-padded.png" download>PNG</a> \| <a href="/brand/posthog-logo-black-padded@2x.png" download>PNG @2x</a> |
| <div style="background:#111;padding:5px 5px 0;margin-left:-5px;">Image: /brand/posthog-logo-white.svg</div> | Light logo | <a href="/brand/posthog-logo-white.svg" download>SVG</a> | <a href="/brand/posthog-logo-white.png" download>PNG</a> \| <a href="/brand/posthog-logo-white@2x.png" download>PNG @2x</a> | <a href="/brand/posthog-logo-white-padded.png" download>PNG</a> \| <a href="/brand/posthog-logo-white-padded@2x.png" download>PNG @2x</a> |
| <div style="background:#EEEFE9;display:inline-block;padding:5px 5px 0;margin-left:-5px;">Image: /brand/posthog-logomark.svg</div> | Logomark | <a href="/brand/posthog-logomark.svg" download>SVG</a> | <a href="/brand/posthog-logomark.png" download>PNG</a> \| <a href="/brand/posthog-logomark@2x.png" download>PNG @2x</a> | <a href="/brand/posthog-logomark-padded.png" download>PNG</a> \| <a href="/brand/posthog-logomark-padded@2x.png" download>PNG @2x</a> |
| <div style="background:#EEEFE9;display:inline-block;padding:5px 5px 0;margin-left:-5px;">Image: /brand/posthog-logo-stacked.svg</div> | Logo (stacked) | <a href="/brand/posthog-logo-stacked.svg" download>SVG</a> | <a href="/brand/posthog-logo-stacked.png" download>PNG</a> \| <a href="/brand/posthog-logo-stacked@2x.png" download>PNG @2x</a> | <a href="/brand/posthog-logo-stacked-padded.png" download>PNG</a> \| <a href="/brand/posthog-logo-stacked-padded@2x.png" download>PNG @2x</a> |

\*PNGs with padding are useful when uploading the logo to a third-party service where there is limited control over padding/margin around the logo.

When using the logo on a dark background, use the white-only version of the logo. _Never_ modify the colors in the logomark (like changing the hedgehog's face color to white when using on a dark background).

The @2x versions of PNGs are designed for hi-dpi (or "Retina") screens. When using the logo in third-party services that support uploading multiple versions (standard and hi-dpi), please be sure to include the @2x logo as it will appear crisper on newer devices, tablets, and high resolution mobile devices.

Important: We updated our logo in 2021. (Note the square font and sharp edges on the logomark in the old version.) Please be sure to use the _correct_ version. 👇🏼

Image: Logo usage examples

If you have any questions or need clarification about which version to use, ask Cory, or reach out in our community page and we'll be happy to help.

Typography

We use Displaay's typeface called _Matter SQ_. (SQ = square dots.)

On the website, we use this for all text. In-product, we only use it for titles and buttons.

Building for web

On posthog.com, we use the variable font version. This allows us to specify our own font weights, which we do for paragraph text.

Context: _Matter Regular_'s weight is 430 and the next step up is _Matter Medium_ at 570, so we use our own weight of 475 for paragraph text.
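As an illustrative sketch only (the font family name and file path here are placeholders, not the actual values in global.css), setting a custom paragraph weight with a variable font looks like this:

```css
/* Hypothetical example - the family name and path are placeholders */
@font-face {
  font-family: "Matter SQ VF";
  src: url("/fonts/matter-sq-variable.woff2") format("woff2");
  font-weight: 100 900; /* the variable weight axis range */
}

p {
  font-family: "Matter SQ VF", sans-serif;
  font-weight: 475; /* between Matter Regular (430) and Medium (570) */
}
```

Because the variable font exposes a continuous weight axis, any value inside the declared range works, which is what makes the in-between 475 weight possible.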

Developing locally

Fonts are hosted outside of our posthog.com GitHub repo (due to licensing reasons). To protect the font files, they are restricted to loading on posthog.com and are not currently used for local development. Contributors will see the system default font load in place of Matter.

Workaround for local development

For PostHog employees, it's possible to reference the font locally to see an exact replication of what will be published on posthog.com.

global.css contains some commented-out code which can be used in conjunction with the variable webfont files (restricted to PostHog organization members). Here's how to use them:

  1. Download the webfont files from the zip above
  2. Extract the files and place them in /public/fonts
  3. In global.css, comment out the src for both fonts with production (Cloudfront) URLs and uncomment the relative URLs.
  4. Optionally use .gitignore to keep the files locally without inadvertently checking them in

Note: When submitting a PR, be sure to revert any changes made to global.css.
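As a rough sketch of what step 3 looks like in practice (the family name and paths here are placeholders, not the real contents of global.css):

```css
@font-face {
  font-family: "Matter SQ VF";
  /* Production source - comment this out for local development: */
  /* src: url("<production CloudFront URL>") format("woff2"); */
  /* Local source - uncomment after placing the files in /public/fonts: */
  src: url("/fonts/matter-sq-variable.woff2") format("woff2");
}
```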

Designing on desktop

We use 4 cuts of Displaay's _Matter SQ_ typeface (SQ stands for square dots):

  1. _Bold_ (titles and section headers)
  2. _Semibold_ (paragraphs accompanying headers and paragraph links)
  3. _Regular_ & _Regular Italic_ (paragraph text)

Note that _Regular_ and _Regular Italic_ are lighter than the font-weight we use on the web, so paragraph text in Figma mockups will look noticeably thinner than how it appears on posthog.com.

When designing ads or other content with non-paragraph text, use _Semibold_ instead of _Regular_.

We have a handful of licenses for desktop use of Matter. Contact Cory if you need the desktop fonts (OTFs).

| Name | Weight | Size | Letter spacing | Line height |
| ---- | ------ | ---- | -------------- | ----------- |
| h1 | Bold | 64px | -1% | 100% |
| h2 | Bold | 48px | -1% | 120% |
| h3 | Bold | 30px | -2% | 140% |
| h4 | Bold | 24px | -2% | |
| h5 | Semibold | 20px | -2% | |
| h6 | Semibold | 16px | 0 | |
| Paragraphs accompanying large headers | Semibold | 20px | -1% | 125% |
| p | Regular | 17px | | 175% |
| p (small) | Regular | 15px | | 150% |

Other fonts

We use two other fonts for special purposes. Please adhere to their usage guidelines listed below.

Squeak

Squeak is used in informal settings, generally accompanied by hedgehog artwork.

Usage guidelines
Examples

Image: Squeak font example

Image: Squeak font example

Image: Squeak font example

Loud Noises

Loud Noises is used for quotes in hedgehog artwork.

Usage guidelines
Example

Loud Noises is used in the sign the hedgehog is holding:

Image: https://res.cloudinary.com/dmukukwp6/image/upload/loud_noises_5919818659.png

---

If you have questions about which font to use, please ask in #team-website - don't just do what feels right to you!

Colors

We have two color schemes (light and dark mode), but primarily use light mode.

We use the same set of colors, and only swap out a couple hues depending on the color scheme.

Colors denoted with an asterisk (\*) are the same between palettes.

| Name | Light mode | Dark mode |
| ---- | ---------- | --------- |
| Text color (at 90% opacity) | <span style="color:#151515; font-size: 20px">■</span> #151515 | <span style="color:#EEEFE9; font-size: 20px">■</span> #EEEFE9 |
| Background color | <span style="color:#EEEFE9; font-size: 20px">■</span> #EEEFE9 | <span style="color:#151515; font-size: 20px">■</span> #151515 |
| Accent | <span style="color:#E5E7E0; font-size: 20px">■</span> #E5E7E0 | <span style="color:#2C2C2C; font-size: 20px">■</span> #2C2C2C |
| Dashed divider line | <span style="color:#D0D1C9; font-size: 20px">■</span> #D0D1C9 | <span style="color:#4B4B4B; font-size: 20px">■</span> #4B4B4B |
| Red\* | <span style="color:#F54E00; font-size: 20px">■</span> #F54E00 | |
| Yellow | <span style="color:#DC9300; font-size: 20px">■</span> #DC9300 | <span style="color:#F1A82C; font-size: 20px">■</span> #F1A82C |
| Blue\* | <span style="color:#1D4AFF; font-size: 20px">■</span> #1D4AFF | |
| Gray\* | <span style="color:#BFBFBC; font-size: 20px">■</span> #BFBFBC | |
| Links | Use Red | |
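One way the two palettes could be expressed is with CSS custom properties that swap per color scheme. This is a hypothetical sketch using the hex values from the table above (the variable names and the `.dark-mode` class are made up, not posthog.com's actual implementation):

```css
:root {
  --text: #151515;
  --bg: #EEEFE9;
  --accent: #E5E7E0;
  --divider: #D0D1C9;
  --yellow: #DC9300;
  /* Shared between light and dark mode */
  --red: #F54E00;
  --blue: #1D4AFF;
  --gray: #BFBFBC;
}

/* Dark mode only overrides the hues that differ */
.dark-mode {
  --text: #EEEFE9;
  --bg: #151515;
  --accent: #2C2C2C;
  --divider: #4B4B4B;
  --yellow: #F1A82C;
}
```

Keeping the shared colors in `:root` means toggling a single class swaps only the handful of hues that actually change.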

Use opacity over more colors

When possible, use opacity to modify colors. This allows us to use fewer colors in our palette, which is light years easier when working with two color schemes.

| Paragraph text | rgba($value, 90%) |
| -------------- | ----------------- |
| Links | rgba($value, 95%) (and semibold) |
| Links:hover | rgba($value, 100%) (and semibold) |
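Translated into plain CSS, the table above might look like the following sketch (the selectors and the exact semibold weight value are illustrative):

```css
p {
  color: rgba(21, 21, 21, 0.9);  /* #151515 text color at 90% */
}

a {
  color: rgba(245, 78, 0, 0.95); /* Red #F54E00 at 95% */
  font-weight: 600;              /* semibold */
}

a:hover {
  color: rgba(245, 78, 0, 1);    /* full opacity on hover */
}
```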

Presentations

We use Pitch for polished presentations (like when giving a talk). Read more about this in our communication guidelines.

Illustration guide

Our hedgehog mascot is called Max and we're quite particular about how he (or any of his hoggy pals) are illustrated. We're exploring AI tools for internal use, but currently ask that you don't use AI tools to create your own hedgehog art. Instead, you can follow the guidelines below, or create a new art request.

Image: How to draw a hedgehog

If Max is drawn in color he should always have a beige body with brown spines, arms, and legs. His arms should only bend once in the middle and he doesn't have fingers unless swearing or pointing. His feet are stubby by design and his snout lines should be visible unless obscured by a mask or beard. His expression comes mainly from his eyebrows.

Image: Draw the rest of the hedgehog

He should be outlined with a strong, black monoline with consistent thickness. He should always face left, right, or straight-on but shouldn't be drawn with a side profile or from behind as he's self-conscious.

A more detailed version of this guide is available on Figma for team members.

Hedgehog library

For team members we keep all our currently approved hedgehogs in this Figma file. This enables us to look through the library of approved hogs, and to export them at required sizes without relying on the design team.

Here's how:

  1. Open the Figma file. You can manually browse, or use Cmd + F to search based on keywords such as 'happy', 'sad', or 'will smith'.
  2. Select the hog you want. If needed, adjust the size using the 'Frame' menu in the top of the right-hand sidebar.
  3. At the bottom of the right-hand sidebar, select the file type you need in the 'Export' menu, choose @2x, then select 'Export [filename]' to download the image.

If you can't find a suitable hog, you can request one from the design team.

Non-team members can find some of the most-used hogs to download on our press page.

Communication

Company | Source: https://posthog.com/handbook/company/communication

Introduction

With team members across many countries, it's important for us to practice clear communication in ways that help us stay connected and work more efficiently.

To accomplish this, we use asynchronous communication as a starting point and stay as open and transparent as we can by communicating on GitHub through public issues and pull requests, as well as in our PostHog User and internal Slack.

Our communication values

  1. Assume positive intent. Always come from a position of positivity and grace.
  2. Form an opinion. We live in different locations and often have very different perspectives. We want to know your thoughts, opinions, and feelings on things.
  3. Feedback is essential. Help everyone up their game in a direct but constructive way.

Golden rules

  1. Use asynchronous communication when possible: pull requests (preferred) or issues. Announcements happen on the appropriate Slack channels and people should be able to do their work without getting interrupted by chat.
  2. Discussion in GitHub issues or pull requests is preferred over everything else. If you need a response urgently, you can Slack someone with a link to your comment on an issue or pull request, asking them to respond there. However, be aware that they still may not see it straight away (and that's OK in our book). That said, casual conversations in Slack are completely normal — it’s our main space for day-to-day communication.
  3. You are not expected to be available all the time. There is no expectation to respond to messages outside of your planned working hours.
  4. It is 100% OK to ask as many questions as you have - please ask in public channels! If someone sends you a handbook link, that means they are proud that we have the answer documented - they don't mean that you should have found that yourself or that this is the complete answer. If the answer to a question isn't documented yet please immediately make a pull request to add it to the handbook in a place you have looked for it.
  5. When someone asks for something, reply back with a deadline or by noting that you already did it. Answers like: 'will do', 'OK', or 'it is on my todo list' are not helpful. If it is a small task for you but will unblock someone else, consider spending a few minutes to do the task so the other person can move forward.
  6. By default, avoid creating private groups for internal discussions.

Public by default

We make things public by default because transparency is core to our culture. The kinds of information we share fall into one of three buckets:

Information that is not publicly shared is in areas with complex signals that can impact our ability to sell, raise money or are inappropriate to share more widely for personal privacy reasons.

We have two repos to centralize and document private internal communication. These are the source of truth for any internal information, and anything that should be written down (as established in these guidelines) should live in these repos or (better) in this Handbook, not on Slack. This makes it easier to search for older discussions, to share context between public and internal repos, and for newcomers to have all the information they might need readily available.

Company Internal

The repository can be found at https://github.com/PostHog/company-internal

It documents any company-wide information that can't be shared publicly across People, Ops, Legal, Finance, or Strategy.

Examples of information that should go here:

For company-related issues that _can_ be discussed publicly, use the meta repo, which can be found at https://github.com/PostHog/meta/

Examples of information that should NOT go here:

Product Internal

The repository can be found at https://github.com/PostHog/product-internal

This repository contains internal information related to the PostHog product. It documents any non-public information (as established in these guidelines) that specifically relates to engineering, product, growth, or design.

This repository was introduced to aid maintenance and day-to-day usage of internal repositories. Having these discussions together with the company-wide information proved unwieldy. More context on this decision.

Please be sure to read the README of the repo for guidelines on how to file specific issues.

Examples of information that should go here:

Examples of information that should NOT go here:

Written communication

GitHub

Everything starts with a pull request

It's best practice to start a discussion, where possible, with a pull request (PR) instead of an issue. A PR is associated with a specific proposed change that is transparent for everyone to review and openly discuss, and its nature facilitates discussion around an actionable solution to a problem. An issue, by contrast, will inevitably lead to a longer period before the problem is addressed.

Always open a PR for things you are suggesting and/or proposing. Whether something is not working right or we are iterating on new internal process, it is worth opening a pull request with the minimal viable change instead of opening an issue encouraging open feedback on the problem without proposing any specific change directly. Remember, a PR also invites discussion, but it's specific to the proposed change, which facilitates focused decisions.

By default, pull requests are non-confidential. However, for things that are not public please open a confidential issue with suggestions to specific changes that you are proposing. When possible, consider not including sensitive information so the wider community can contribute.

Not every solution will solve the problem at hand. Keep discussions focused by _defining the problem first_ and _explaining your rationale_ behind the Minimal Viable Change (MVC) proposed in the PR. Have a bias for action and don't aim for consensus - some improvement is better than none.

Issues

GitHub Issues are useful when there isn't a specific code or document change that is being proposed or needed. For example, you may want to start an issue for tracking progress or for project management purposes that do not pertain to code commits. This can be particularly useful when tracking team tasks and creating issue boards.

However, it is still important to maintain focus when opening issues by defining a single specific topic of discussion as well as defining the desired outcome that would result in the resolution of the issue. The point is to not keep issues open-ended and to prevent issues from going stale due to lack of resolution. For example, a team member may open an issue to track the progress of a blog post with associated to-do items that need to be completed by a certain date (e.g. first draft, peer review, publish). Once the specific items are completed, the issue can successfully be closed.

Note: If you're new to using GitHub, check out this handy primer - it's specific to how we use GitHub at PostHog. You'll learn the key concepts and how to manage notifications. It's important, as this is where the bulk of our company-wide communication happens. (Think of GitHub notifications as a replacement for your work email.)

Keeping on top of reviews, issues, and notifications

Keeping track of everything that's happening in GitHub can be daunting, but it's important to make sure your team receives reviews and feedback in a timely manner.

To keep on top of this, we suggest regularly going through issues where you've been mentioned. Some tricks which can help are:

Tip for easy searching through everything

To search all code, PRs, and issues ever written at PostHog, you can search everything in the PostHog organization on GitHub. To do that, go to github.com/posthog and search in the top left corner.

For extra convenience, you can also add this search as a 'search engine' in Chrome. That way you can type in ph <tab> and instantly find anything. To do that, follow these steps:

  1. Hit command + , in your browser
  2. Type search, find "manage search engines"
  3. Click "add" next to "other search engines"
  4. For "Search engine" type in github posthog organization
  5. For "keyword" type in ph
  6. For "url" copy in https://github.com/search?q=org%3Aposthog+%s&type=issues

You can now type ph + tab into your browser and search issues directly.
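The `%s` in that URL is where Chrome substitutes your URL-encoded query. As a rough illustration of what Chrome does behind the scenes (the helper below is hypothetical, not a PostHog tool), the substitution works like this:

```python
from urllib.parse import quote_plus

# The search-engine template from the steps above; {query} stands in for
# Chrome's %s placeholder.
TEMPLATE = "https://github.com/search?q=org%3Aposthog+{query}&type=issues"

def posthog_search_url(query: str) -> str:
    """Return the URL Chrome would open for `ph <tab> <query>` (illustrative)."""
    return TEMPLATE.format(query=quote_plus(query))

print(posthog_search_url("feature flags"))
# → https://github.com/search?q=org%3Aposthog+feature+flags&type=issues
```

Searching for "feature flags" this way finds every matching issue and PR across the whole organization, not just one repo.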

Slack

Slack is used for more informal communication, or where it doesn't make sense to create an issue or pull request. Use your judgment to determine the appropriate channel, and whether you should be chatting publicly (default) or privately.

Also keep in mind that, as an open source platform, PostHog has contributors who don't have access to Slack. Having too much context in a private location can be detrimental to those who are trying to understand the rationale for a certain decision.

Slack canvases are useful for storing information like schedules, bookmarks, personal to-do lists, scratch notes, etc. However, things like quarterly goals, runbooks, sprint plans, and FAQs should live in the Handbook, Docs, or in a GitHub RFC or issue by default. If you find yourself documenting something useful in Slack, it's much better to put it in GitHub instead and link to it from Slack so that PostHog AI can include it in future search results. Slack canvases are terrible for searchability!

Slack recap is a great way to learn from others - add channels like #ask-max and #today-i-learned to your recap. You can also use it to keep tabs on teams you may not directly work on, but still want to know what's being discussed.

Slackbot is a handy AI agent that can search across the PostHog workspace in Slack to help answer your questions. If you're looking for information or trying to find a past conversation, Slackbot is a great place to start.

Slack etiquette

Slack is used differently in different organizations. Here are some guidelines for how we use Slack at PostHog:

  1. Keep #general open for company-wide announcements.
  2. @channel or @here mentions should be reserved for urgent or time-sensitive posts that require immediate attention by everyone in the channel. (Examples: changing a meeting invite URL just before a meeting, or soliciting urgent help for a service disruption, where you're not sure who is immediately available)
  3. Make use of threads when responding to a post. This allows informal discussion to take place without notifications being sent to everyone in the channel on every reply.
  4. When possible, summarize multiple thoughts into a single message instead of sending multiple messages sequentially.
  5. You don't need to tell people if you're away from your computer, especially on no-meeting days. There's no general expectation people are available to reply to messages in real time, including in Slack.
  6. Keep your Slack profile up to date with the right information, including the appropriate name, e.g. with a surname or surname initial if you share a name with a colleague.

Channel naming conventions so people don't get confused:

On the very rare occasions you need to create a private channel for some reason - most commonly hiring-related - then it's probably worth sticking #private-xxxxx in front so people don't accidentally add external parties who shouldn't be in there.

Google Docs and Slides

Never use a Google Doc / Slides for something non-confidential that has to end up on the website or this handbook. Work on these edits via commits to a pull request. Then link to the pull request or diff to present the change to people. This prevents duplicated effort and/or an out-of-date handbook.

We mainly use Google Docs to capture internal information like meeting notes or to share company updates and metrics. We always make the doc accessible so you can comment and ask questions.

Please avoid using presentations for internal use. They are a poor substitute for discussion on an issue: they lack depth and don't add enough context to enable asynchronous work.

When giving a talk which requires a presentation, use Pitch to build your slides. (It offers more control over design than Google Slides.) They also have a desktop app. We don't (yet) have templates configured, but you can draw from existing slides in other presentations - just copy/paste into your own presentation and modify accordingly. If you'd like assistance with slide design (or using Pitch), talk to Cory.

James (H) and Cory are admins on the Pitch account. Because Pitch charges per seat, we remove users who only need periodic access, but can easily re-add them when needed.

Email

  1. Internal email should be avoided in nearly all cases. Use GitHub for feature / product discussion, use Slack if you cannot use GitHub, and use Google Docs for anything else.
  2. The only uses we have for internal email are:

Writing style

  1. We use American English as the standard written language in our public-facing comms, including this handbook. This extends to date formats (September 4, 2021) and defaulting pricing to the US Dollar ($42).
  2. Do not use acronyms when you can avoid them. Acronyms have the effect of excluding people from the conversation if they are not familiar with a particular term.
  3. Common terms can be abbreviated without periods unless absolutely necessary, as it's more friendly to read on a screen. (Ex: _USA_ instead of _U.S.A._, or _vs_ over _vs._)
  4. We use the Oxford comma.
  5. Do not create links like "here" or "click here". All links should have relevant anchor text that describes what they link to. Using meaningful links is important to both search engine crawlers (SEO) and people with accessibility issues.
  6. We use sentence case for titles.
  7. When writing numbers in the thousands to the billions, it's acceptable to abbreviate them (like 10M or 100B - capital letter, no space). If you write out the full number, use commas (like 15,000,000).
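The number-formatting rule above can be sketched as a small formatter. This helper is hypothetical and purely illustrative of the convention (capital letter with no space for clean abbreviations, commas otherwise), not anything PostHog ships:

```python
def style_number(n: int) -> str:
    """Format a number per the writing style guide (illustrative only):
    clean multiples of a million/billion abbreviate to e.g. 10M or 100B;
    everything else is written out with comma separators, e.g. 15,500,000."""
    for threshold, suffix in ((10**9, "B"), (10**6, "M")):
        if n >= threshold and n % threshold == 0:
            return f"{n // threshold}{suffix}"
    return f"{n:,}"

print(style_number(10_000_000))      # → 10M
print(style_number(100_000_000_000)) # → 100B
print(style_number(15_500_000))      # → 15,500,000
```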

Requests for comment (RFCs)

We use RFCs to communicate and gather feedback on a decision. RFCs are useful because they help us stay transparent, and the process of writing them forces you to clearly articulate your thoughts in a structured way.

Here are the steps for an RFC:

  1. Identify a problem and a decision to be made.
  2. Create an RFC as a pull request using one of the RFC templates.
  3. Share the RFC.
  4. If an RFC is cross-team and is causing a large amount of disagreement, it might be worth having a sync meeting to reach a decision.
  5. Once a decision is made, include the decision in the pull request, merge it in, and share this in the relevant channel and #github-rfcs again.

When does it work best to write an RFC?

Writing an RFC may be helpful when any of the following is true:

When does a meeting / another approach work better than an RFC?

An RFC is likely to be unhelpful as a first step in other circumstances - specifically, when you want to ship or suggest a change that significantly affects teams outside your own. In these cases, we've seen RFCs lead to 10 to 25+ comments, which can feel antagonistic (teams having to explain all the context around their strategy, down to why they perhaps disagree with this decision), and creates a lot of work. A single call is likely much faster than lots of frustrated people talking about it in 1:1s _and_ the energy/time needed to respond to everything in a long thread.

_However_, please write notes on such a call to ensure everyone _is_ on the same page. These can then be copy-pasted into an RFC for transparency's sake and future reference.

Top tips for RFCs

Internal meetings

PostHog uses Google Meet for video communications. For large meetings, use CMD + minus key to zoom out and see everyone - you'll usually need to do this in All Hands.

Use video calls if you find yourself going back and forth in an issue/via email or over chat. Sometimes it is still more valuable to have a 40+ message conversation via chat as it improves transparency, is easy to refer back to, and is friendlier to newcomers getting up to speed.

  1. Most scheduled meetings should have a Google Doc linked or a relevant GitHub issue. This contains an agenda, including any preparation materials.
  2. Please click 'Guests can modify event' so people can update the time in the calendar instead of having to reach out via other channels. You can configure this to be checked by default under Event Settings.
  3. Try to have your video on at all times because it's much more engaging for participants. Having pets, children, significant others, friends, and family visible during video chats is encouraged - please introduce them!
  4. As a remote company, we are always striving to have the highest-fidelity, collaborative conversations. Use of a headset with a microphone is strongly recommended - use your company card if you need one.
  5. Always advise participants to mute their mics if there is unnecessary background noise to ensure the speaker is able to be heard by all attendees.
  6. You should take notes of the points and to-dos during the meeting. Being able to structure conclusions and follow-up actions in real time makes a video call more effective than an in-person meeting. If it is important enough to schedule a meeting, it is important enough to have taken notes.
  7. We start on time and do not wait for people. People are expected to join no later than the scheduled minute of the meeting, and we don't spend time bringing latecomers up to speed.
  8. It can feel rude in video calls to interrupt people. This is because the latency causes you to talk over the speaker for longer than during an in-person meeting. You should not be discouraged by this, as the questions and context provided by interruptions are valuable.
  9. We end on the scheduled time. Again, it might feel rude to end a meeting, but you're actually allowing all attendees to be on time for their next meeting.
  10. It is unusual to smoke or vape in an open office, and the same goes for video calls - please don't do this out of respect for others on the call.

For external meetings, the above is also helpful.

Indicating availability

  1. Put your planned away time including holidays, vacation, travel time, and other leave in your own calendar.
  2. Set your working hours in your Google Calendar - you can do this under _Settings_ > _Working Hours_. This is helpful as we work across different timezones.

Google Calendar

We recommend you set your Google Calendar access permissions to 'Make available for PostHog - See all event details'. Consider marking the following appointments as 'Private':

  1. Personal appointments
  2. Particularly confidential & sensitive meetings with third-parties outside of PostHog
  3. 1-1 performance or evaluation meetings
  4. Meetings on organizational changes

Calendly

We use Calendly for scheduling external meetings, such as demos or product feedback calls. If you need an account, ask Simon in #sales to invite you to the PostHog team account.

Communication methods

PostHog employees are frequent targets of scams and phishing. Expect all communication to occur over Slack. Phone calls, SMS, and WhatsApp are never used for initiating requests, approvals, or asking for sensitive info. With few exceptions, email is never used for this either.

If someone contacts you outside of Slack, treat it as untrusted until verified. Message them on Slack to confirm, and only continue the conversation over Slack. Other communication methods are susceptible to phishing, but our Slack instance is locked down and generally well protected from phishing and impersonation.

Best practices

James, Tim, and other execs will never ask for wire transfers, gift cards, MFA codes, or access changes over email/SMS/WhatsApp/phone. Treat such requests as phishing and report them to #phishing-attempts.

By email: Only trust @posthog.com senders. Verify via the company directory. Be cautious of look-alike domains (e.g., posthog.co vs posthog.com), unexpected attachments, and “urgent” requests.

Phone/SMS/WhatsApp: Never used for initiating requests, approvals, or asking for sensitive info.

If something feels off, it probably is. When in doubt, slow down and verify in Slack.

Culture

Company | Source: https://posthog.com/handbook/company/culture

So, what's it like working at PostHog?

We're 100% remote

Our team is 100% remote, and distributed across more than 20 countries.

Being all remote has a bunch of advantages:

In addition to all the equipment you'll need, we provide a budget to help you find coworking space, or to cover coffee shop expenses. Everyone also has a $1,500 quarterly travel budget for ad-hoc meetups.

We're extremely welcoming

This is so important to us that it has its own dedicated page.

We're extremely transparent

As the builders of an open-source product, we believe it is only right that we be as transparent as possible as a company.

This isn't just a meaningless corporate statement. Most of our communication happens publicly on GitHub, our roadmap is open for anyone to see, and our open-source handbook explains everything from how we hire and pay team members to how we email investors!

Almost everything we do is open for anyone else to edit. This includes things like the contents of this very Handbook. Anyone can give direct feedback on work they think could be improved, which helps increase our responsiveness to the community.

We're committed to much more than just public code.

We write everything down

We're an all-remote company that allows people to work from almost anywhere in the world. With team members across many countries, it's important for us to practice clear communication in ways that help us stay connected and work more efficiently.

To accomplish this, we use asynchronous communication as a starting point and stay as open and transparent as we can by communicating through public issues, pull requests, and (minimally) Slack.

Putting things in writing helps us clarify our own ideas, as well as allow others to provide better feedback. It has been key to our development and growth.

We give direct feedback early and often

Everyone should help everyone else raise their game. After completing difficult work, fatigue tends to set in. It is challenging to maintain objective views of the quality of your own work when you are fatigued. It's easier for outsiders with fresh eyes and energy to raise the level of others around them.

We are direct about the quality of work. That doesn't always mean work needs to be completely polished, as it depends on the speed and impact of a task. Being great at giving and receiving feedback is a key part of our culture.

We bias for action

If given a choice, go live. If you can't go live, reduce the task size so you can.

Default to _not_ asking for permission to do something if you are acting in the best interests of PostHog. It is ok to ask for more context though.

We're on the maker's schedule

We're big believers in the importance of the maker's schedule. If we have meetings at all, we'll cluster them around any stand-ups, so our day doesn't get split up.

Image: Screenshot of an engineer's calendar at PostHog

On Tuesdays and Thursdays, we don't have internal meetings at all. Occasionally an external meeting will slip in on those days, such as interviews, but we try to keep those to an absolute minimum.

We're structured for speed and autonomy

Hiring high-performing and self-sufficient team members means we don't need the typical corporate processes that are designed to slow teams down. Instead, we're organized into small teams, which prioritize speed by delegating decision-making autonomy as much as possible.

Our management approach is super simple - small teams report to their team leader, and each of the team leaders reports to one of our four execs. We don't want to create a fancy hierarchy of titles, as we believe this can lead, consciously or not, to people feeling less empowered to make changes and step on toes, especially if they are not in a 'senior' role.

It's up to you how to get things done. If you want to make a change, feel free to just create the pull request. If you want to discuss something more widely for a bigger piece of work, it might make sense to use an RFC for a change inside your team.

If your RFC could significantly impact other teams as well, it usually works best to book a call with them, as this saves time – "fewer meetings" doesn't mean "no meetings", just that they should be meaningful and intentional, not routine.

Read How you can help to understand how you can contribute to this culture.

Do more weird

Company | Source: https://posthog.com/handbook/company/do-more-weird

For some (all?) at PostHog, do more weird isn't just a benign corporate value - it's a way of life. A craft, to be honed. This page will help you navigate do more weird at PostHog so that you too may do... weird things.

What do more weird is

PostHog's competition for attention is not with other boring B2B SaaS companies - it's with the internet as a whole. Memes. TikTok. HackerNews. We bring a consumer mindset to a B2B context.

Things to bear in mind:

Weird ideas are fragile, so if you see a weird idea, the most important feedback you can give is 'do I think this is a good idea?', not 'can we do it?'.

What do more weird isn’t

Weirdness isn't _purely_ vibes based - it has to be good and vaguely relevant to our users.

Some of the pitfalls we've learned to avoid:

Weirdness is relative

Sometimes the thing just isn't _that_ weird, but that may still be ok depending on the context. For example, transparent pricing is weird in the context of how we bill customers, but it wouldn't make sense to do something truly weird like bartering grain for PostHog credit or something.

On the other hand, the bar for weirdness in a marketing campaign is extremely high, because the world is full of marketing teams trying to do the same thing. A wry smile in response to the idea will not cut it.

Got a weird idea?

Depending on the idea, you have a couple of options:

Making bigger weird happen

Sometimes weird ideas take a lot of money and/or people's time. We have a monthly do more weird marketing budget that we can put towards such things. Lottie and Charles, together with the Council of Weird, meet monthly to pull from the frozen locker of weird things to see what we want to invest in next.

Fuzzy ownership

Company | Source: https://posthog.com/handbook/company/fuzzy-ownership

As we continue to grow, it can be hard to figure out who is the owner of something at PostHog. This is especially difficult if you are new to the team and don’t have a lot of historic context.

There are also some things that don’t have an owner at all, so we've created a simple process to deal with these that you might find helpful. Ideally, the default assumption should be _not_ to a) hire a new person, or b) escalate to James/Tim.

Figuring out who owns a thing at PostHog

An owner can be a single person, or a small team - either is fine. There are several places you can figure out who owns something at PostHog:

If you spot anything out of date or not obviously clear, please raise a PR!

Figuring out a new owner for a thing we’ve identified

Generally, we will keep hiring people who have an ownership mentality and are willing to put their hand up when they see a thing with no clear owner. This is better than PostHog asking people to do it, which should be a last resort.

Setting quarterly goals

Company | Source: https://posthog.com/handbook/company/goal-setting

We plan objectives every quarter. These set the direction and overall objectives for PostHog, and then small teams set their own objectives that feed into them. Longer-term planning by the Blitzscale team is covered separately in the annual planning guide.

How quarterly planning works

  1. ~3 weeks before the end of each quarter, the Blitzscale team meets to come up with larger goals for the company, which sometimes (but not always) trickle down to individual teams.
  2. ~2 weeks before the end of the quarter, team leads should schedule planning meetings to go through these - these will be run by the team lead and include the relevant Blitzscale team member, following the template below. Each small team can change or propose alternate objectives, goals, and/or key results (we are not prescriptive about the exact terms used here - use these as a starting point).
  3. After the planning meeting, the team lead creates a PR on their small team page with the new goals. Make sure you tag the relevant member of the Blitzscale team for review at a minimum.
  4. Goal PRs need to be merged before the next quarter starts. We usually then run through the objectives in the first all hands of the next quarter.

In terms of accountability, Scott Lewis will notify all the small teams and make sure that the quarterly meetings happen (and that each small team has a PR), but he will not schedule the meetings for you.

If you prep properly (see below), planning meetings should take 1 hour max. The meeting is not the end of the process - you may still have some back and forth on the PR, but the meeting should give the team lead enough info to write a good PR.

Planning template

Teams should fill in the previous quarter and HOGS sections async in the doc before the meeting starts. The meeting itself should be 20% reviewing the past, and 80% talking about goals for next quarter. Don't fall into the trap of spending most of your time reviewing and then rushing the goals right at the end.

## Last quarter objectives reflection (5 mins - do as a team)

[Paste in previous quarter objectives from the team page]

For each objective, write up a reflection - usually the person who was the lead on the objective should do this, but some might be shared.

What else did we get done? List items from the changelog or PRs merged if they were significant items that deviated from the original goals (changing goals mid-quarter is okay!)

You may also want to write some overall thoughts about how the quarter generally went.

## HOGS (10 minutes - all the content should be here **before** the meeting starts)

This should be done BEFORE the meeting: everyone fills in their answers independently in this doc. Then spend 10 minutes in the meeting silently reading through everyone’s HOGS. Add any common themes or otherwise important items to the section below as you read, then discuss only the themes (there generally isn't time to discuss every line of the HOGS).

Add entries under each question with your initials/name like: "- (your name): My comment"

- Hope
  - What are you most excited about this quarter?
  - What exploration do you want to do?
  - For Error Tracking/Session Replay/Experiments/Conversations/Product Analytics/LLM Analytics:
    We want to make PostHog proactive. How do we programmatically emit signals - i.e. research prompts for an agent with code access & PostHog MCP – that will result in quality concrete code changes?
- Obstruction
  - Is there anything embarrassing about your product?
  - What’s stopping you from shipping 2x what you’re shipping now?
  - Can your product be used by agents (Claude Code/Cursor/etc.) via our MCP server? What MCP tools or skills are missing right now?
- Growth
  - What single thing would move the needle the most this quarter?
    - _If it’s more people, please expand on how many and exactly what type of hire you’d be looking for._
  - What are users asking for? Which of these are we ignoring?
    - _Check Vitally for feature requests (if applicable for your team/product)_
  - How will users interact with your product in 1 year's time and what can we do now to get this ready?
- Sneak attack
  - Say a competitor beats your team’s product, what would that product do differently?
  - What are we not talking about enough?

## Themes (20 minutes - do as a team)

What themes can we distill from the above HOGS list? What are categories of things we should consider working on? What are other things we might want to consider?

## New goals (15 minutes - do as a team)

This is an example - feel free to adapt as you need. Generally, it is a good idea to have at least one person's name against each item for accountability, even if multiple people work on it - shared goals usually result in less getting shipped.

_For engineering teams - did you do anything cool last quarter that other engineers could learn from? Consider adding a goal to write about those things on our company blog!_

Objective 1: PostHog in the EU
Motivation: Unblock 1,000s of customers [link to data] who need to keep data in the EU but are not capable of self-hosting.
What we'll ship:
  - This thing (Name)
  - Another thing (Name)
  - Maybe another thing (Name)

If you aren't on a product team, replace 'product' with the equivalent 'thing' on your team - e.g. if you do recruitment, your 'product' is how we do recruitment at PostHog and your users are job applicants.

Publishing and viewing goals

When a team has set their quarterly goals it is the responsibility of the team lead to document the goals on their team page, publicly. This enables teams to see what each other are working on, helps us hold teams accountable to their goals, and creates a shared sense of urgency and direction.

You can easily see what goals teams have set on the WIP page, which pulls all goals from the respective team pages.

Teams can choose to document their goals publicly in a number of formats, but below is a useful template for getting started. Individuals may also choose to create planning issues to track their work in greater detail.

<details>
<summary>Write the goal here (owner)</summary>

- **Rationale:** Why is this important?
- **What we'll ship:** Keep this brief
- **We'll know we're successful when:** Metric or outcome

</details>

Good goal setting

Bear the following in mind:

FAQ

What if I don't have time to do work towards my objectives because of customer support/urgent board reporting/something else?

Picking up the occasional thing that isn't technically going to help your goal is ok. This is because we're small and may not set 100% perfect goals. As ever, prioritize as you see fit. However, spending a bunch of time on a pet project is not - this means the planning process has failed.

If my team repeatedly misses objectives, what happens?

Objectives should be ambitious but achievable - you should be able to hit them by challenging yourself, but not to the point of burnout. If your team is consistently missing objectives, they are too hard or possibly the wrong objectives for PostHog/your team.

A grown-up company

Company | Source: https://posthog.com/handbook/company/grown-ups

We’re very proud to have a genuinely welcoming environment where PostHog treats you, and we treat each other, like grown-ups.

We’re an international bunch of weirdos, but one thing us weirdos have in common is that everyone is kind, courteous, and professional towards each other - and that’s something we’re really proud of. And we all ship, of course!

Things we do to create a welcoming environment

We have tried many different tactics over the years, and these are the things we have found _actually_ make a difference.

Things we don’t do that other companies might

We care about doing what works for PostHog’s culture, rather than worrying too much about what other companies are doing or how they judge this. These are some of the things that we don’t do as a result.

Are you a potential candidate reading this? Excited to join a grown-up company? Get in touch!

Kudos

Company | Source: https://posthog.com/handbook/company/kudos

Kudos

Definition of 'kudos'

(kjuːdɒs IPA Pronunciation Guide, US kuːdoʊz IPA Pronunciation Guide)

_Kudos is admiration or recognition that someone or something gets as a result of a particular action or achievement._

As an all-remote team, we need to put extra effort into celebrating each other's achievements, as not being in the same physical location can often make good work less visible.

We use Monday All-Hands as an opportunity to acknowledge cool things that people have done in the previous week. It can be _anything_ - shipping a new feature, a great piece of content, fixing an issue, or just generally doing something nice for someone else.

How to give kudos

You can use /kudos @person for [reason] to give someone kudos in Slack whenever you want. The kudos gift won't be visible in the chat for anyone else, but the gifted person will probably enjoy seeing themselves in All-Hands on Monday.

To list all kudos from the last week, use /kudos show 7. This shows the previous 7 days of submissions.
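Putting the two together, here's roughly what the commands look like when typed into Slack (the name and reason below are made up for illustration):

```text
/kudos @joe for shipping the new billing page a week early
/kudos show 7
```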

Alternatively, you can just write directly into the All-Hands doc, though this relies on you remembering things that happened the previous week.

Lore

Company | Source: https://posthog.com/handbook/company/lore

Lore of PostHog / inside jokes

A beginner's guide to some of our custom Slack emojis and various anecdotes you'll see and hear about.

Managers and management

Company | Source: https://posthog.com/handbook/company/management

A manager at PostHog has a short list of responsibilities:

  1. Setting the right context for your direct reports to do their jobs
  2. Making sure your direct reports are happy and productive
  3. Acting as the hiring manager for new roles in your team
  4. Creating good plans for new person onboarding and small team offsites
  5. Collaborating with execs on team performance concerns that need early intervention

That's it.

A manager at PostHog is _not_ responsible for:

  1. Deciding compensation - we have a compensation calculator and the process is managed by the exec team
  2. Setting tasks for your direct reports - that is not how small teams work
  3. Providing a career progression plan for your team
  4. Figuring out team structure - today that is all handled by the exec team
  5. "Approving," whether that's projects, expenses, days off, or accounts - people should have admin access by default to most things
  6. Dealing with HR issues - you should escalate these to Fraser or Charles
  7. Anything legal-related, e.g. someone wants to quit or thinks they did something illegal - route this to the exec team
  8. Deciding to hire or fire people - the exec team do this

This guidance applies to all teams, irrespective of whether you manage an engineering or non-engineering team.

Part-time managers

Because of the relatively short list of tasks that managers have, management at PostHog is a part-time job. That means nearly everyone still spends the majority of their time on practising what they do best. For most managers, this isn't actually management!

As an engineer, you want the opinion of someone who can actually code. As a designer, you really want your manager to have an eye for design. As an operator, you want to be managed by someone who has scaled a business. That's why it's important for managers to keep practising their craft.

However, management tasks do come _first_, as giving context to your team tends to have a multiplying effect vs. getting one more PR out. After that though, it's back to work.

Management is intentionally spread thin at PostHog. This is a forcing function for making sure that teams and ICs continue to have high levels of autonomy. Bored managers are micromanagers. By working across several teams, people like #team-blitzscale and product managers are forced to only give their attention where it's truly needed, and give space & autonomy everywhere else.

You'll sometimes hear us use the term "team lead". A team lead is the leader of a small team. By default they also manage the individuals that are part of their team, though very occasionally they don't, such as when a new small team has just been created.

How do I set context?

At PostHog, we hire highly experienced people for 99% of roles. That means managers won't need to spend time telling their direct reports what to do.

However, for those people to make the best decisions, they need context. The things a manager can do to set context include:

Pitfalls to avoid

The biggest difference between PostHog and other places is that in the end it is up to the _individual_ to make the decisions. All you can do as a manager is set context. From there, you'll have to trust that we've made the right hiring decisions and that the individual is able to execute on that. If they can't, we have a generous severance policy.

Decisions aren't just about buying a piece of software or choosing a color for a button. It's also about what to work on, what to invest time in, or where to take entire parts of our product.

As a manager, it's tempting to see yourself as the sole owner of all the information, and give it out sparingly. People will come to you often with questions (because they don't have the context) and when they do you'll get more validation that holding all the context yourself makes you an Important Person. What managers should aim for at PostHog is to make themselves obsolete. Share as much context as possible, in written form and in a public channel. That way everyone will be able to do their best work.

Ways to burn yourself out:

How do I make sure my direct reports are happy and productive?

First, make sure you are setting the right context. Next, the most useful thing you can do here is to schedule regular 1-1s. Typically we find that you should have higher frequency 1-1s with your reports when they join PostHog and reduced frequency over time as they settle in. There are some types that we've found useful:

The key thing here is to be pragmatic - 1-1s should feel useful and not like a waste of time. Everyone should see it as their own responsibility to raise important feedback or issues as they happen and not wait unnecessarily for a scheduled meeting.

Talking about long-term career goals every now and again is also important but easy to let slip when things get busy. If you can help people achieve long term goals while at the same time hitting PostHog's short term needs - whether at PostHog or not - you'll get people's best work!

We have a set of handy templates to use - feel free to adapt these for each team member. These are not to be followed strictly if you don't want to - this is to just save you having to create something from scratch.

Performance

We care about having a consistent, transparent, and fair way to handle recurring performance issues. We don’t want this to be a source of stress for you - it’s not your core responsibility as a team lead, and we want you to feel supported. The People & Ops team will prompt you to consider performance within your team at key moments to make this easy and straightforward, but you should proactively give feedback and raise concerns with your exec as they arise.

  1. Is the person a driver or a passenger?
  2. Does this person get things done proactively?
  3. Are they optimistic by default?

The keeper test

As PostHog grows, it's increasingly important that all team leads help us keep the bar for performance high - we can't centralize this with the founders. To help us scale this, each team lead will be asked to do a keeper test on their team members throughout the year. This will be sent as an automated form by Deel, through Slack. The format is as follows:

  1. Ask the team lead 'if X was leaving for a similar role at another company, would you try to keep them?' - the answers should be derived from our values, similar to the questions above.
  2. Dig in where the answer is 'no' - what would it take for this to be a 'yes'? Is this just temporary, or is there a deeper issue to resolve?
  3. Make sure the manager is sharing all of this feedback with their team to help them improve.

That form will be shared with the relevant team Blitzscale member, so they can help where necessary.

Side note: anyone can ask their manager 'how hard would you work to change my mind if I were thinking of leaving?'. It's a great way to solicit valuable feedback!

Weave

We use a tool called Weave to collect stats for engineers. Engineers can log in to see their numbers and those of other engineers.

We understand that all the work an engineer does can't be properly represented in a tool that just looks at PR output. Data in Weave is _not_ the decision-maker for whether someone is succeeding in their role at PostHog. It can be, however, a part of the conversation.

We use Weave to:

We don't use Weave to:

We have compared statistics in Weave against other (imperfect!) metrics that can be used to gauge productivity, such as number of commits, number of PRs, total GitHub activity, etc., and see similar patterns amongst them. Weave gives us more detail and a nice UI for evaluating output across all the engineers we have, which we don't have any other good interface for. In addition, it gives engineers access to the same information we have about them, so using it increases transparency.

What does being a hiring manager entail?

Two things:

See the #technical-interviewers channel for more info here.

What you can expect as a manager

Management roles at PostHog are often (but not always) temporary. That's because as the company changes, our needs for different people in different roles will change as well. Because all of our managers are also strong ICs (individual contributors), sometimes putting someone back into an IC role makes sense if that's what's best for the company. This has happened many times with people at PostHog, some who have gone back and forth between being a manager and not being a manager multiple times (hi Marius Andra!).

As such, management roles are paid on the same pay scale as other ICs. Becoming a manager does not mean you get a pay raise, and going from a manager role back to an IC role does not mean you get a pay decrease.

Management is a skill of its own, and it's not any more important than any other skills that make someone a great IC. It's possible that you may be a manager for a short time, but it becomes clear that your strengths lie primarily in the other skills that are involved with being an IC. In this case we might move you back to a pure IC position, where your skills can really shine, and move someone else from your team or from around the company into the manager / team lead role.

Additionally, managers who are excelling with their teams may have limited interaction with their own manager. This is because, as discussed above, management is intentionally spread thin. If you feel like your manager is mostly ignoring you, this isn't necessarily a bad thing and usually means you and your team are doing a fine job!

These have been recommended by multiple managers on the team:

Engineering-specific:

Merch store

Company | Source: https://posthog.com/handbook/company/merch-store

We have a merch store where our community can purchase high quality PostHog-branded merch. The People & Ops team is responsible for managing merch inventory, fulfillment, etc., even though multiple people contribute, and Kendal is the point person.

We use Micromerch to manufacture and fulfill our merch. Anyone can suggest a product for us to sell or give away.

The Brand team ultimately decides what items we wish to sell or give away (including how many and sizes), and Lottie provides assets to produce and order these items into stock.

We generally try to launch new products in line with the typical fashion cycle (spring/summer and fall/winter). However, this doesn't mean we can't do fun side quests! If you are looking to do an off-cycle merch run, just make sure you keep Kendal in the loop so the admin side goes smoothly.

How to reorder merch

All of our permanent merch items are reordered via Micromerch. To do this you need to:

  1. Request a restock quote for the item(s) in the Slack channel and enter the quantity you need
  2. Approve the estimate that will be sent from Micromerch
  3. Pay the invoice via Brex once it comes in (usually in 1-2 days after estimate approval)

It's really important that we do not allow stock levels to run low as restocking items can take a couple of weeks, so the Ops team will regularly check inventory levels. However if you happen to see anything looking amiss, or you know you want to place a big order for a customer that may affect our stock levels a lot, just let Kendal know ahead of time!

Adding new items

Micromerch is integrated with our Shopify store, so all orders are made and processed through there. To add new products to Shopify, follow these instructions.

Shipping

Shipping is also done through Micromerch (in partnership with Shiphero) - they can ship to over 200 territories worldwide:

Merch giveaways

Customers

Create a discount code in Shopify admin. You don't need to be invited to Shopify; instead, the login details are stored in 1Password.

When creating the discount, select "amount off products" then choose if it is a percentage off or a fixed amount - usually we do fixed amounts of $30, $50, or $100 depending on the purpose. Then you can choose "specific collections" and choose "All Products".

Limit the use to one use only (_not_ one use per customer), otherwise it's unlimited free stuff for them, unlimited high cost for us!

For feedback or general rewards we typically give users $30, which is enough for a t-shirt. For code contributions we tend to do $50, which is enough for a bigger selection of things. We don't put expiration dates on the codes, typically.

If you need any help just send a message to the #merch channel and somebody will be happy to help. Merch codes can also be generated directly from within Zendesk.

If you want to send physical merch to a customer instead of a merch code, this can be done in Shopify by creating an order, selecting the chosen merch, and applying a discount for the whole price of the item (don't forget this step, otherwise it'll try to charge the customer!).

PostHog team

If you want more, here's how to get it!

As always, we expect you to use this with restraint and with your own good judgement. The merch store should not become your sole source of clothing for your wardrobe, nor where you go any time a friend has a birthday. But sure, go ahead and buy your mom (or yourself) a hat or a hoodie!

YC Deal

You can find instructions for this on the dedicated YC Deal page.

Troubleshooting customer orders

Sometimes customers get in touch with us because their order hasn't arrived. There are a couple of things you can do:

  1. Check the order in Shopify. This will show you the status of the order; if something looks amiss, please mention it in the #merch channel immediately so Kendal can look into it.

Note: There have been some issues with fulfilling orders to Brazil due to the country's customs policies.

If for some reason their second order attempt doesn't make it through, refund their money and apologetically let them know that unfortunately our supplier is having issues shipping to their address. It's better to stop the back-and-forth at that point, rather than having a frustrated customer placing multiple orders that don't work. We aren't an e-commerce business, so ensuring a flawless merch store experience for a handful of edge case orders is not a priority!

If the customer was given a merch code to thank them for submitting a PR, you can offer to make a donation on their behalf for the equivalent amount to a company of their choice on Open Collective instead.

A primer on using GitHub at PostHog

Company | Source: https://posthog.com/handbook/company/new-to-github

If you’re new to GitHub, it can be a little confusing. (Heck, I’ve been using GitHub for years and it’s _still_ confusing.) It doesn’t have the best search and notifications can get out of hand — and in general, it can be really intimidating to join a company that uses a tool you’ve never used before as its primary means of communication.

I wrote this guide to help explain how we work, and how to stay on top of the volume of information that flows through our team's organization on GitHub.

— Cory Watilo

P.S. Have questions? Feel free to file an issue on GitHub - I explain how to do this later in the article!

Key concepts

At its core, GitHub hosts code and helps keep everyone in sync. Each team member can download this code, make changes, and upload their changes back to GitHub.

Code is stored in a “repository” (or “repo” for short) - it’s like a folder for code. (As of writing this, PostHog has 401 repos - like the code for posthog.com and even a repo for internal company discussions that doesn’t actually contain any code.) Even a code-free repo is useful because each repo comes with a handful of collaboration tools. Here’s a list of the key concepts on GitHub:

  1. Discussions
  2. Issues
  3. Projects
  4. Pull requests
  5. Actions

You can take any task linearly from start to finish using this set of tools, though you don’t have to use them all. (For example, PostHog doesn’t really use Discussions, and Projects are only used by certain teams.) But if you wanted to use the whole suite, here’s how it would work:

  1. If you decide you want to change something in the product or website, you could start a _discussion_ about it. This is like a casual forum-style conversation. (Again, we don't use these.)
  2. A discussion can be converted to an _issue_, which is a formalized proposal of the discussion. People can reply to these posts with feedback.
  3. In my workflow, this is a good time to add the issue to a _project_, because it’s something you want to track through to completion. Project boards are a great way to stack-rank tasks (issues), because you can order them in a way that makes sense based upon the project and see everything in one place. This helps keep a team in sync.
  4. A _pull request_ (also known as a _PR_) references the code that’s changed to solve an _issue_. It’s a way to summarize the changes in code and explain them so others can review them.
  5. _Actions_ usually run after you commit code. They make sure things are working as expected (and that whoever wrote the code didn’t break anything). (Don't worry about these for now.)

You can use any of these features on their own, or use them together. Primarily, PostHog uses issues, pull requests, and actions. If you’re not super familiar with GitHub, just focus on issues and pull requests, as that’s where the bulk of the interesting work happens.

Note: The PostHog handbook covers GitHub issues and pull requests, and suggests everything should start with a pull request because it represents one of our values, "Why not now?".

Notifications

The best way to stay up-to-date with what happens on GitHub is by subscribing to (following) the areas that are most relevant to what you do. This sends updates to your GitHub notifications.

By default, you’ll receive email notifications for everything you subscribe to. There are a few ways this happens:

  1. Creating an issue or pull request
  2. Commenting on an issue or pull request
  3. “Watching” a repository

As I’m not a huge fan of email, I prefer to visit a centralized place for my GitHub notifications, although many engineers prefer email notifications. Personally, I don’t like GitHub’s /notifications page, as it feels cumbersome (slow) to read through updates. Here are two much better ways to consume GitHub notifications (entirely my opinion):

  1. GitHub’s iPad app - provides an email-like interface that feels a lot more natural to reading notifications than github.com/notifications. (If only GitHub had this UI on the web...)
  2. octobox.io - uses the same email-like interface, but in a browser

I have octobox.io set to my homepage in Chrome, so anytime I want to see my notifications, I just click the Home button and I have one-click access to my work “inbox”.

Install the GitHub app in Slack

A great way to get realtime updates about what’s happening in GitHub is to install the GitHub Slack app and subscribe to repos. After linking with your Slack account, type /github subscribe posthog/posthog.com (org/repo-name) in Slack, for example, to get updates when things happen in the posthog.com repo.
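For example (repo names as above; the feature filters shown are standard GitHub Slack app options, but run /github help to confirm the current list):

```text
/github subscribe posthog/posthog.com
/github subscribe posthog/posthog reviews comments
/github unsubscribe posthog/posthog.com commits
```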

Finding issues or pull requests

Given the volume of issues and PRs, search will be your best friend. Unfortunately GitHub’s global search leaves something to be desired, so usually the easiest way to find something is to visit a repo, then click either Issues or Pull requests (depending on what you're looking for) and search from there. Type a few keywords, and if you know who authored the issue or PR, apply an author search. (You’ll see GitHub pre-populate search syntax (e.g. is:open is:issue author:corywatilo), similar to how Gmail’s search works.)
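A few example queries you can type into a repo's Issues or Pull requests search box (the label and keyword here are just illustrative):

```text
is:open is:issue author:corywatilo
is:pr is:merged label:bug
is:issue in:title onboarding
```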

Filing an issue

Issues are the primary method of getting a message in front of the team. Think of it like creating a ticket in a typical project management system. (We prefer issues over Slack messages because they're public and sync with the rest of our code workflow. You can use Slack if you’d like to bump an issue to a group of people, but link to the issue (or PR), as GitHub acts as our source of truth.)

Issue templates

Some repos have issue templates set up to make issue creation faster. However, if the issue you’re going to create doesn’t fit into one of these templates, don’t worry about these! Just create a new blank issue.

Referencing another issue

This isn’t mandatory, but if your issue is related to other (previous) issues, it’s worth cross-linking so others have full context. To cross-link in an issue or PR, type a # and either part of an issue’s/PR’s name or number and GitHub will populate a list of items that match.

You can find an issue’s or PR’s number in the URL.

Writing Markdown

It can take some getting used to if you’ve never written Markdown syntax. Fortunately GitHub makes it easy by providing WYSIWYG buttons. When you press a button like B or I, GitHub will insert the Markdown code required to format your text accordingly.
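If you'd rather type the syntax yourself, these are the most common patterns (the same ones the toolbar buttons insert for you):

```markdown
**bold** and _italic_ text
[a link](https://posthog.com)
- a bullet point
1. a numbered item
> a quoted line
`inline code`
```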

Tips for faster writing

Creating a pull request

If you see something minor on posthog.com (in Handbook or Docs) that needs to be updated, you can easily propose the change by creating a pull request _without_ having to run the full codebase on your computer. (This is a great way to contribute if you're in a less-technical role.) To make a small change, find the _Edit this page_ link within the Handbook or Docs which will take you to GitHub where you’ll see the source file. From there, click the pencil icon. (Our Handbook and Docs use the same Markdown format as GitHub’s issue and PR editor, so this should look familiar!)

When you’re done making your changes, be sure to preview what the changes look like (to make sure formatting is accurate). At the bottom of the page, you’ll see a section called _Commit changes_. Here’s how to use it:

Clicking _Propose changes_ will create a pull request!

"Closing keywords"

If you’re changing code to address an open issue, you can tell GitHub to automatically close the issue when the PR is merged by using a closing keyword. For example, in your PR description, you can write “Closes #123” (where #123 is an issue number).
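For instance, a minimal PR description using a closing keyword might look like the following (#123 is a placeholder issue number; GitHub also recognizes variants like "Fixes" and "Resolves"):

```markdown
Corrects a typo on the pricing page.

Closes #123
```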

Requesting a review

Now that your PR is created, you can request a review (best practice) from someone relevant so they can make sure everything looks good and that they agree the change is ready to go live. They’ll be notified of your request. (By the way, you can filter to reviews that others request from you by going to your notifications, then choosing the Review requested filter.)

Previewing changes

If you're making changes to posthog.com, you'll be able to see your changes on a "preview" version of the website. It takes 10-20 minutes for this preview to be ready.

(Remember when I said we also use GitHub Actions? It basically runs some automated tests to make sure everything is spelled correctly and that nothing else broke.)

Near the bottom of a pull request page, you'll see a box like this:

Image: Checks

(Note: This box only appears if you're a member of the PostHog GitHub org - it's not available to the public.)

You can click the _Visit Preview_ link in the Vercel bot comment to see the preview.

Merging changes

Once a team member approves your pull request, you (or they) can publish the changes by clicking the _Squash and merge_ button. It will take another 10-20 minutes for your changes to appear on the site, but they'll go live automatically. At that point, you can send a link to your friends and family and tell them you're a coder now!

Next steps

This was a primer on using GitHub for communication at PostHog. If you’re interested in making more substantial changes to the website, you can follow our instructions on how to develop the website. It can take a little work to get set up to run the site locally, so don't hesitate to reach out for help if you get stuck – or don't even know where to begin. That's what we're here for!

Offsites

Company | Source: https://posthog.com/handbook/company/offsites

While we’re async by default, there’s a very real upside to being in the same room - we’ve consistently found that a lot of our best ideas come from actually building things together in real life.

We understand organizing travel can be a challenge when you have personal/family commitments to manage, so we try to take a balanced approach to meetups:

All-company offsites

Curious about our all-company offsites? Check out these links:

- 2021 We shot this video at our Portugal offsite

- 2023 What we built at our sun-kissed Aruba hackathon

- 2024 What we built at our windswept Mykonos hackathon

Once a year, the entire company will get together somewhere in the world for a week. Usually we'll all fly on Sunday, have an opening dinner, spend the week doing a mix of hard work, strategy, culture and fun activities and we then all fly back home on Friday. Our past offsites have been in Italy, Portugal and Iceland. We try to ensure that everyone has their own bedroom.

These are organized by the Ops & People team, and we budget up to $3,000 per person in total for these.

Typical agenda:

Small team offsites

We encourage small teams to get together once a year. These offsites are more focused on work and on creating strong bonds within teams. Ideally they are spaced appropriately through the year relative to the all-company offsite.

Planning a small team offsite? Kendal’s got you covered. Here’s how it works:

If there’s anything ad hoc you’d like Kendal to take point on, just let her know - she’s happy to help! Each team member is still responsible for booking their own flights.

Some guidelines:

Ideas for the agenda:

Don't run a hackathon during an onboarding offsite. Other offsites normally do have a hackathon. Participation should be _very_ strongly encouraged but not mandatory - if not everyone is taking part, make sure working spaces are available to accommodate the different styles of work. It is super important that people taking part are fully available and focused on participating. Since the offsite is an opportunity to work together, there should be no teams of one - this applies beyond initial team formation to the hackathon itself, in cases where people switch teams.

Here's a real-world example: Product Analytics team's Munich offsite agenda (internal Slack link). Feel free to take inspiration – though your team's needs and wants might be quite different!

The budget for these trips is up to $2,000 per person in total. We ask team members to use their best judgement for these and try to be thrifty where possible - these should be enjoyable, but not feel like a holiday. Generally it's easier to hit budget if you have people travel in on a Monday and out on a Friday - they don't need to be as long as a whole team offsite.

You should assign someone on the small team to be responsible for planning the offsite (doesn't have to be the lead), and they will be supported by the Ops & People team to ensure a successful experience.

During busy hiring periods, we recommend that any team member involved in the interview process dedicate at least a one-hour block per day during the offsite to candidate interviews, so that hiring on your team isn't delayed while you're away. Please coordinate directly with #team-talent if you have additional questions.

Hedge House

PostHog runs two Hedge Houses in the UK - a small one in Cambridge and a larger one in London. They are actual houses (yes, with a few bedrooms attached!) designed for small teams to run their offsites, host in-person onboardings, or come together for larger internal events like hackathons. Anyone at PostHog is welcome to use them as much as they like. We'd recommend using the Hedge House for small team offsites if you are in Europe as it removes a lot of the friction of finding somewhere new, and they're genuinely great places to get work done at a very high standard.

Cambridge

Message Kendal Ijeh to check availability or make a booking at the Cambridge Hedge House.

London

Our light-filled, studious office is a reliable home base between Farringdon and Barbican. It’s entirely ours, open 24/7, and the perfect place to stay if you're visiting from abroad. Use the Hedge House London Slack tool to see the full address, book a room and/or desk, and see who else will be there during the week you visit. This means you can easily self-serve, but ask Kendal Ijeh if you have any questions.

If you're planning an offsite or onboarding in San Francisco, Hogpatch is the perfect spot to focus, talk to users and get product feedback.

London hotel recommendations

For offsites and onboardings in London, below is a list of hotels recommended in our #London Slack channel by folks who have stayed there:

If hotel prices are above £200 per night, it's worth quickly looking for alternatives, as ~£170 per night should be achievable midweek in London. If prices are high, optimise travel for total cost (flights & accommodation) - if you can get cheaper flights or a hotel by moving dates +/- 1 day, look into those options.

Border Control

Quite often you will be required to travel to places where some kind of visa or travel authorization is required, even if just a visa waiver like an ESTA. When entering places like the US for work purposes, border control agents may ask the purpose of your trip. In these instances it's best to avoid using PostHog terms like "onboarding", as this can be confusing. It's much better to describe the purpose of your trip more generally. In nearly all circumstances this will be to hang out with your colleagues and take part in team-building exercises. It's usually good to emphasize that you'll be on a short trip and that the company is paying for everything. You should be prepared with the exact addresses of where you are staying and the details of your flight out of the country.

A successful strategy is usually to start with a high-level purpose for your trip, such as "hanging out with colleagues" or "I am here for a business meeting with colleagues". It is also usually advisable to respond minimally, saying only what is necessary. If the agent asks for more details, it's good to go into a bit more detail about the company structure: "I work for a US tech company and I am based in [insert your country], where I work remotely. I am here to do some in-person meetings with my colleagues for the next few days and I fly back on [insert date]." Sometimes the border agent will ask more about the business; it's fine to give these details and be as honest about that as you would with anybody else. If further details about the content of the trip are required, you can again give some context on how we like to lean into the benefits of in-person working: since most of your colleagues are based in the US, you are travelling there for a few days to meet in person and will return home afterwards.

For all company offsites, it's best to describe this as a company gathering where you will be hanging out with colleagues for the week. Generally, it is best to avoid using the phrase "training" as this can also be confusing.

Travel insurance

Many of our company offsites involve team members traveling abroad, and although we hope that these trips are uneventful and safe for all, in the event of an accident or medical emergency we carry travel insurance, as well as general & auto liability policies, through our partner Embroker.

In the event of an emergency, please cover any related expenses (ideally on your company card) and keep receipts, and then reach out to Kendal as soon as possible. We will assist with making a claim based on our policy binders.

Flight delays

If your travel plans are affected due to a flight delay or an airline-induced missed connection and you are forced to stay somewhere unplanned overnight, push the airline to cover the cost of your accommodations (including meals). It's not uncommon for them to initially tell you they no longer offer free hotel rooms for delays that were caused by the airline, but with a little bit of polite coaxing, they will likely give in.

Partners / family joining offsites

Sometimes at PostHog you will be asked to travel to places you've never been before, and it can be a good opportunity to travel with your partner or family. Because in-person time at PostHog is infrequent, we want to maximise it and keep your focus on PostHog, without distraction. This is why we don't allow partners or family to join you for the dates the offsite / onboarding takes place. If timing allows, you can tack holiday onto either side and have your partner or family join you for those dates. For the dates of the offsite itself, however, you should be staying alone and focusing on your time with your teammates.

How to plan an offsite in 8 weeks - a checklist

Below is a rough timeline for planning your next offsite, as well as links to templates and resources that you can repurpose and customize as needed. Here's a spreadsheet template you can use with your team to democratically vote on the meetup location; in other tabs, include travel information (in case someone's flight gets delayed or cancelled), the schedule, project ideas, team activities, etc. To use any of the templates, make a copy in your own Drive and edit as you see fit.

8 weeks out

7 weeks out

  1. Preemptively create the new team member a Google account
  2. Issue them a Brex card to their work email with a sufficiently high temporary balance to cover travel costs
  3. Add them as a guest to any planning Slack channels and/or share any necessary itinerary information such as arrival dates/times and airports
  4. Have the new team member book travel as usual

6 weeks out

5 weeks out

4 weeks out

3 weeks out

2 weeks out

1 week out

1 day before

1 week after

All company offsite hackathon

The hackathon is always a highlight of the offsite. We tend to run them like this:

Session 1: ideation dinner. The day before the start of the hackathon we do a casual 'ideation' dinner where we encourage people to chat about ideas.

Session 2: hackathon kick-off. The hackathon kick-off is 1.5 hours at the end of the day. Ideally we do this in a conference room with beers and wine.

Session 3: presentations

This should be the last work-related session of the offsite. Again, ideally in a conference center with beer and wine provided.

Each group gets 5 minutes to demo and present their idea.

2 weeks after

Feature flags service outage

Company | Source: https://posthog.com/handbook/company/post-mortems/2025-09-29-flags-is-down

Internal post-mortem: <https://github.com/PostHog/incidents-analysis/pull/120>

On September 29, 2025, the PostHog Feature Flags service experienced an outage lasting 1 hour and 48 minutes, from 16:58 to 18:46 UTC. During this period, approximately 78% of flag evaluation requests in the US region failed with HTTP 504 errors.

Summary

A database connection timeout reduction from 1 second to 300 milliseconds coincided with elevated load on our writer database from person ingestion. This combination triggered cascading failures in our connection retry logic, resulting in a service-wide outage. Recovery was significantly delayed by hardcoded configuration values and procedural failures in our incident response.

Timeline

{"timestamp":"2025-09-29T16:54:16.154136Z","level":"ERROR","fields":{"message":"Failed to create database pools","error":"Database error: pool timed out while waiting for an open connection"},"target":"feature_flags::server","threadId":"ThreadId(1)"}
thread 'main' panicked at feature-flags/src/main.rs:119:5:
internal error: entered unreachable code: Server exited unexpectedly
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

Full service degradation timeline

Root cause analysis

The outage resulted from three compounding factors:

  1. Configuration change timing: A connection timeout reduction deployed during a period of database stress created conditions where pods could not establish connections within the new timeout window.
  2. Retry amplification: Our retry logic lacked circuit breakers and exponential backoff, causing failed connection attempts to multiply rapidly. This transformed a manageable database load issue into complete service unavailability.
  3. Health check configuration: Kubernetes continued routing traffic to pods in crash loops for up to 45 minutes due to improperly configured liveness and readiness probes.
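The missing backoff described in point 2 is straightforward to add. Below is a minimal sketch of capped retries with exponential backoff and full jitter; this is illustrative Python (the actual service is written in Rust) and all names and parameters are hypothetical:

```python
import random
import time

def with_backoff(operation, max_attempts=5, base_delay=0.1, max_delay=2.0):
    """Retry `operation` with capped exponential backoff and full jitter.

    Illustrative sketch only -- not the flags service's actual code.
    """
    for attempt in range(max_attempts):
        try:
            return operation()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise  # give up instead of retrying forever
            # Full jitter spreads retries out so a fleet of pods that fails
            # at the same moment does not retry in lockstep.
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(random.uniform(0, delay))
```

The attempt cap acts as a crude circuit breaker: a persistent database problem surfaces as a fast failure rather than an ever-growing pile of concurrent connection attempts.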

The incident duration was extended by operational failures: timeout values were hardcoded in the application rather than externalized as configuration, requiring a full deployment cycle to modify. Additionally, our standard ArgoCD rollback procedure failed due to misconfigured permissions.
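Externalizing such values avoids the full deployment cycle described above. A hedged sketch of the pattern, with a hypothetical environment variable name rather than the service's real configuration:

```python
import os

def connect_timeout_ms(default=1000):
    """Read the DB connection timeout from the environment, with a safe default.

    FLAGS_DB_CONNECT_TIMEOUT_MS is a hypothetical variable name; the point is
    that operational knobs are read at startup rather than compiled in.
    """
    raw = os.environ.get("FLAGS_DB_CONNECT_TIMEOUT_MS")
    if raw is None:
        return default
    try:
        value = int(raw)
    except ValueError:
        return default
    # Guard rail against dangerously low values, like the 300 ms cut
    # that helped trigger this outage.
    return max(value, 500)
```

With this shape, rolling back a bad timeout is a config change and a pod restart, not a rebuild and redeploy.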

Impact

Remediation

Immediate actions (completed)

Short-term improvements (Follow along here)

Long-term improvements (Q4 2025 – Q1 2026)

Lessons learned

This incident highlighted critical gaps in our defensive architecture and operational procedures. The coupling of read and write operations created unnecessary failure domains, while our retry logic lacked basic protective mechanisms against amplification. Most significantly, our incident response was hampered by inflexible configuration management and untested rollback procedures.

The architectural improvements underway will provide proper isolation between different operational modes of the feature flags service. This separation, combined with improved circuit breaking and configuration management, will prevent similar cascading failures in the future.

Surveys SDK bug

Company | Source: https://posthog.com/handbook/company/post-mortems/2025-10-03-surveys-sdk-bug

On October 3, 2025, a backwards compatibility issue in the PostHog Surveys SDK (version 1.270.0) caused widespread JavaScript exceptions for customers using SDK versions older than 1.257.1. The issue lasted 5 hours and 26 minutes, affecting 305 teams and disrupting both survey functionality and error tracking metrics.

Summary

A backwards compatibility break in SDK version 1.270.0 introduced a dependency on the isDisabled function from the PostHogPersistence class, which was only added in version 1.257.1 (July 2025). The issue manifested when the asynchronously-loaded survey extension attempted to call this function on older SDK versions where it didn't exist, causing JavaScript exceptions in customer applications. The incident was initially detected through customer support tickets rather than automated monitoring, leading to a 4+ hour detection delay and extended customer impact.

Timeline

All times in UTC.

Total impact duration: 5 hours 26 minutes (10:45 – 16:11 UTC)
Detection delay: 4 hours 14 minutes

Root cause analysis

The culprit PR introduced the backwards compatibility issue.

The technical problem

The PR modified the surveys SDK to use posthog.persistence instead of accessing localStorage directly – a reasonable architectural improvement. To ensure backwards compatibility, the code needed to check whether posthog.persistence was available before attempting to use it.

The implementation used the isDisabled function from the PostHogPersistence class, adding a utility in survey-utils.ts to verify persistence availability. However, this function was only introduced in a PR merged on July 11 and first made available in SDK version 1.257.1.
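The safe pattern here is to probe for the capability at runtime instead of assuming the host SDK exposes it. The SDK is TypeScript; this Python analog (all names illustrative) shows the shape of the guard:

```python
def persistence_is_usable(posthog):
    """Return True only if the loaded SDK exposes a usable persistence layer.

    Python analog, for illustration, of the guard the extension needed:
    never call a method like `isDisabled` unless it actually exists on
    the SDK version the customer happens to have loaded.
    """
    persistence = getattr(posthog, "persistence", None)
    if persistence is None:
        return False
    is_disabled = getattr(persistence, "is_disabled", None)
    if not callable(is_disabled):
        # Older SDK: the method doesn't exist, so degrade gracefully
        # instead of raising the equivalent of a TypeError at runtime.
        return False
    return not is_disabled()
```

In TypeScript the same idea is a `typeof posthog.persistence?.isDisabled === 'function'` check before the call.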

Why it failed

When PR #2355 was merged, both the main SDK code (posthog-surveys.ts) and the extension code (extensions/surveys.tsx) relied on the isDisabled function.

For the main SDK bundle, this worked correctly – customers on older versions never loaded the new code containing the reference to isDisabled.

However, the survey extension creates an asymmetric loading scenario:

  1. The customer's application loads the SDK at whatever version they have installed (potentially months or years old)
  2. The survey extension is loaded asynchronously from our CDN and always downloads the latest version

This created a version mismatch where:

Why it wasn't caught

  1. No version compatibility testing: We lack automated tests that verify new extension code works with older SDK versions
  2. Code review gaps: We don't have a process to flag when new APIs are added to main SDK files that might be called by extensions
  3. No static analysis: No linter rules prevent extensions from calling functions that may not exist in older SDK versions
  4. Detection gaps: No monitoring alerted us to the spike in customer-side exceptions – we learned about it from support tickets
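Beyond the process fixes above, one defensive option is for the always-latest extension to gate new code paths on the host SDK's version. A rough sketch (the helper is hypothetical; the `1.257.1` constant is the release that introduced the API, per this post-mortem):

```python
def supports_feature(sdk_version, introduced_in="1.257.1"):
    """True if the host SDK is new enough for an API the extension calls.

    Hypothetical guard for an asynchronously-loaded extension that is
    always served at the latest version but runs against whatever SDK
    version the customer has installed.
    """
    def parse(v):
        # Naive semver parse; assumes plain "major.minor.patch" strings.
        return tuple(int(part) for part in v.split("."))
    return parse(sdk_version) >= parse(introduced_in)
```

A static-analysis equivalent of the same rule (flag any extension call into an SDK symbol newer than the minimum supported version) would catch this class of bug at review time.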

Impact

Severity: Major (High Impact, Service Degradation)

Affected customers: 305 teams running SDK versions < 1.257.1

User-facing impact:

Duration: 5 hours 26 minutes of active impact

Error tracking billing impact:

Remediation

We reverted the problematic changes and released SDK version 1.270.1, which restored compatibility with all SDK versions.

Immediate actions

Action item: Start incidents earlier. We should declare incidents as soon as we confirm an issue (around 14:59), not almost two hours after mitigation. This enables proper coordination and customer communication. Owner: @lucasheriques

Short-term improvements

Long-term improvements (Target: Q4 2025 – Q1 2026)

Lessons learned

What went well

What went poorly

Key takeaways

This incident revealed a critical architectural weakness in how our asynchronously-loaded extensions interact with versioned SDK code. The assumption that extensions can safely call any SDK function breaks down when we have customers on old SDK versions but always serve them the latest extension code.

We also hit a similar issue in an earlier incident.

The 4+ hour detection delay highlights gaps in our observability for client-side errors. We lack visibility into exceptions occurring in customer applications using our SDK.

The improvements outlined above will address both the immediate technical issue and the systemic gaps in testing, monitoring, and deployment practices that allowed this to reach production and persist for over 5 hours.

Feature flags recurring outages

Company | Source: https://posthog.com/handbook/company/post-mortems/2025-10-21-feature-flags-recurring-outages

Between October 21 and October 30, 2025, the PostHog Feature Flags service experienced four separate incidents, exposing systemic architectural weaknesses that required comprehensive remediation. This post-mortem documents all four incidents and our path to stability.

Summary

Over a 10-day period in October 2025, the feature flags service experienced four separate incidents totaling over 14 hours of cumulative major impact (errors or severe latency). While each incident had different surface-level symptoms, three of the four incidents shared the same root cause: improper CPU resource sizing. Our nodes were too small relative to pod resource requests, causing Kubernetes to pack too many pods per node and saturate CPU capacity. This CPU saturation led to connection pool exhaustion, excessive parallelism (too many concurrent operations), and ultimately cascading failures. The fourth incident was a rate limiting misconfiguration unrelated to resource sizing.

Incidents:

Incident timeline

October 21, 2025 – Redis overload

Duration: 21:45 to 23:28 UTC (103 minutes)
Impact: ~38% of evaluation requests returning errors in US datacenter

A deployment intended to reduce timeout errors (PR #39821) incorrectly addressed symptoms rather than root causes. While rolled back within 2 minutes, it triggered excessive parallelism and connection pool exhaustion, which manifested as massive data transfer from Postgres to Redis and a surge in concurrent connections that overwhelmed our cache layer. Redis memory exhaustion followed, leading to prolonged service degradation.

What "excessive parallelism" means: Under CPU pressure, degraded requests triggered Envoy retries between the load balancer and service. Each retry spawned new concurrent requests, and each request performed multiple concurrent Redis reads. A single degraded request could fan out to dozens of concurrent Redis operations. Combined with cache misses (on cache miss, we synchronously loaded full flag and team state from Postgres and wrote it into Redis), this created bursty write storms that overwhelmed Redis.
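The fan-out described above compounds multiplicatively. A back-of-envelope sketch, with hypothetical numbers:

```python
def redis_fanout(requests, envoy_retries, reads_per_request):
    """Worst-case concurrent Redis operations when every attempt is retried.

    Numbers are hypothetical; the post-mortem notes a single degraded
    request could fan out to dozens of concurrent Redis operations.
    """
    attempts = requests * (1 + envoy_retries)  # original try plus each retry
    return attempts * reads_per_request

# One degraded client request, 3 Envoy retries, 10 concurrent Redis reads
# per attempt -> 40 concurrent Redis operations from a single request.
```

At fleet scale the same arithmetic turns a modest error rate into thousands of concurrent cache operations, which is why bounding retries matters as much as fixing the underlying slowness.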

Connection pool mechanics: Each pod maintains its own Postgres connection pool. Creating a pool involves TLS handshakes, authentication, and initial connection establishment—operations that are computationally expensive, especially when pods are CPU-bound. Under CPU pressure exceeding 90%, new pods struggled to initialize these pools within the 20-second startup timeout, leading to crash loops and reduced healthy pod capacity.

Critical issue: The Redis overload from the flags service also impacted the main PostHog application, demonstrating dangerous coupling through shared infrastructure. The flags service can operate without Redis but falls back to heavier database queries, making responses slower.

Root causes:

Timeline:

October 24, 2025 – Rate limiting misconfiguration

Duration: 18:00 to 19:12 UTC (72 minutes)
Impact: ~97% of evaluation requests returning 429 (rate limit) errors worldwide

Deployed IP-based rate limiting (PR #40074) as a protective measure following Tuesday's incident. The tower-governor library (our Rust rate limiting middleware) saw all traffic as coming from a single IP (our load balancer) rather than actual client IPs, immediately triggering rate limits for all legitimate traffic.
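The standard fix is to key rate limits on the forwarded client IP rather than the socket peer (tower-governor supports pluggable key extractors for this). The sketch below shows the idea in Python; header handling is simplified, and trusting X-Forwarded-For is only safe when a trusted load balancer sets it:

```python
def client_ip(headers, peer_addr):
    """Pick the rate-limit key: the real client IP, not the load balancer's.

    Illustrative sketch. Behind a trusted load balancer, the left-most
    entry of X-Forwarded-For is the original client; without the header,
    the socket peer address really is the client.
    """
    xff = headers.get("x-forwarded-for")
    if not xff:
        return peer_addr
    # "client, proxy1, proxy2" -> take the client entry
    return xff.split(",")[0].strip()
```

Keying on the raw peer address behind a load balancer collapses every client into one bucket, which is exactly the failure mode seen here.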

Root causes:

Timeline:

October 28, 2025 – Connection pool exhaustion and excessive parallelism

Duration: 19:28 to 21:31 UTC (123 minutes)
Impact: ~34% of evaluation requests failing in US datacenter

A routine deployment with no changes directly related to the flags service triggered a rollout of feature flag pods in the US region. New pods couldn't connect to Postgres within the 20-second startup timeout, entering crash loops due to excessive parallelism and connection pool exhaustion—the same root cause as October 21. Under CPU pressure, pods couldn't initialize Postgres connection pools (TLS handshakes, authentication, connection establishment) within the timeout. Simultaneously, a massive spike in Redis writes caused key evictions, effectively making the cache unavailable. While the flags service can operate without Redis (falling back to heavier database queries), with both cache unavailable and database under pressure, a significant portion of US traffic failed.

Critical issue: The Redis overload from the flags service also impacted the main PostHog application, highlighting dangerous infrastructure coupling. Unrelated deployments shouldn't trigger feature flags rollouts.

Root causes:

Timeline:

Note: We initially attempted the same remediation approach from October 21 before implementing other solutions to decrease parallelism.

October 29-30, 2025 – CPU-bound latency

Duration: 22:30 UTC on October 29 to 05:39 UTC on October 30 (7 hours 9 minutes)
Impact: Slow queries and degraded performance due to node CPU pressure

Query performance was impacted for over 7 hours. While queries to both Redis and Postgres were slow, metrics for both dependencies confirmed they were healthy. The slow queries were due to CPU pressure on the nodes, which exceeded 90%. This impacted connections and slowed the service's response times to several times their usual level.

Root causes:

Timeline:

Resolution: After identifying connectivity issues due to resource exhaustion on feature flags nodes, we applied changes that resolved this resource exhaustion. Increasing pod resource requests for the flag service resulted in a healthier distribution of pods per node, which caused per-node CPU usage to go down and the service to return to a healthy state.
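The resolution is ultimately bin-packing arithmetic: the Kubernetes scheduler packs pods onto nodes by CPU request, so raising requests caps pods per node. A sketch with hypothetical numbers:

```python
def pods_per_node(node_cpu_cores, pod_cpu_request):
    """How many pods the scheduler can pack on a node, by CPU request alone.

    Hypothetical numbers: raising each pod's CPU request forces the
    scheduler to spread pods across more nodes, lowering per-node
    saturation. Real scheduling also accounts for memory and system pods.
    """
    return int(node_cpu_cores // pod_cpu_request)

# On an 8-core node, 0.5-CPU requests pack 16 pods; raising the request
# to 2 CPUs caps it at 4 pods, leaving real headroom per pod.
```

The same math explains why "add more pods" made things worse: more pods on saturated nodes meant more CPU-starved connection pools, not more capacity.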

Root cause analysis

While each incident had specific triggers, three of the four incidents shared the same fundamental root cause:

  1. CPU resource undersizing (primary root cause): Our nodes were too small relative to pod resource requests, causing Kubernetes to pack too many pods per node and saturate CPU capacity (exceeding 90%). This CPU saturation was the root cause of the October 21, 28, and 29-30 incidents:
  2. Connection pool management complexity: Each pod maintains its own Postgres connection pool. Creating a pool involves TLS handshakes, authentication, and connection establishment—operations that are computationally expensive, especially when pods are CPU-bound. This complexity, combined with CPU saturation, exacerbated connection pool exhaustion issues.
  3. Shared Redis is a critical single point of failure: Redis overload from the flags service impacted the main PostHog application, demonstrating dangerous coupling through shared infrastructure. Isolation is critical despite implementation complexity.
  4. Critical monitoring gap: CPU alerting was completely absent throughout these incidents, preventing early detection of the CPU saturation that caused three of the outages. This was a fundamental gap in our monitoring strategy that allowed CPU pressure to escalate unnoticed.
  5. Unbounded retries: Unbounded retries in Envoy (between load balancer and endpoint) amplified failures (now fixed with retry limits).
  6. Rate limiting misconfiguration (October 24 only): The October 24 incident was unrelated to CPU sizing—it was caused by rate limiting configuration that didn't account for load balancer architecture.
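For the monitoring gap in point 4, an alert along these lines would have fired early in each incident. This is a hypothetical Prometheus alerting rule for illustration; metric and label names vary by cluster setup:

```yaml
# Hypothetical alerting rule -- names will differ per cluster; the point
# is paging well before nodes reach the ~90% saturation seen here.
groups:
  - name: feature-flags-cpu
    rules:
      - alert: FlagsNodeCPUSaturation
        expr: |
          1 - avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) > 0.80
        for: 10m
        labels:
          severity: page
        annotations:
          summary: "Feature flags node {{ $labels.instance }} CPU above 80% for 10m"
```

Alerting at 80% sustained utilization leaves room to act before connection pool initialization starts failing.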

Impact

Remediation

Immediate actions (completed)

Short-term improvements (Tracked in GitHub Issue #40885)

In progress (next 2 weeks):

To complete before re-enabling ArgoCD sync:

Medium-term improvements

Incident response and monitoring:

Architectural improvements:

Long-term improvements

Lessons learned

What went well

What didn't go well

Key takeaways

  1. CPU right-sizing is fundamental – The biggest takeaway: nodes were too small relative to pod resource requests, causing Kubernetes to pack too many pods per node and saturate CPU capacity. This CPU saturation led to excessive parallelism (Envoy retries → concurrent requests → concurrent Redis reads), connection pool exhaustion (pods couldn't initialize Postgres pools under CPU pressure), and slow queries. Right-sizing (fewer pods per node, better-resourced pods) addressed the underlying issues that caused the October 21, 28, and 29-30 incidents. This must be a primary consideration for any service deployment.
  2. Connection pool management architecture matters – Each pod maintains its own Postgres connection pool. Creating a pool involves TLS handshakes, authentication, and connection establishment—operations that are computationally expensive, especially when pods are CPU-bound. This complexity, combined with CPU saturation, exacerbated connection pool exhaustion. Better approach: reduce concurrency and run smaller fleets with better-resourced pods rather than larger fleets with CPU-bound pods.
  3. Shared Redis is a critical single point of failure – When the flags service overloads Redis, it takes down the main app too. This was evident in the October 21 and 28 incidents, where Redis overload from the flags service impacted the main PostHog application. Isolation is critical despite implementation complexity.
  4. CPU alerting was completely missing – CPU alerting was absent throughout these incidents, preventing early detection of the CPU saturation that was the root cause of three outages. This was a fundamental gap in our monitoring strategy. CPU metrics must be monitored and alertable from day one.
  5. Monitor data flow patterns – Postgres-to-Redis transfer spikes should trigger alerts. Watch for unusual data movement.
  6. Test under load – Overload patterns only appeared under production traffic. Load testing is non-negotiable.
  7. Progressive rollouts save lives – Gradual deployments limit blast radius and enable rapid detection. We're implementing rollout/annotation controls to disable staged rollouts and enable "force-merge" for rolling changes.
  8. Configuration must be flexible – Critical settings must be adjustable without full deployment cycles.
  9. Unbounded retries amplify failures – Retries without bounds in Envoy (between load balancer and endpoint) can cascade failures. We've implemented retry limits to prevent this.

Moving forward

These four incidents highlighted critical gaps in our defensive architecture and operational procedures. The compounding failures demonstrated that our service needed fundamental improvements, not just quick fixes. The primary root cause—CPU resource undersizing (nodes too small relative to pod requests, causing too many pods per node)—manifested differently across three incidents (October 21, 28, and 29-30), requiring us to recognize that excessive parallelism, connection pool exhaustion, and slow queries were all symptoms of the same underlying issue. The recurrence of these symptoms between October 21 and 28 showed that we needed to address the root cause (CPU sizing) rather than the symptoms. We initially attempted the same remediation approach from October 21 before implementing CPU right-sizing, which resolved the underlying issues.

We've implemented immediate remediations and are executing a comprehensive review of the entire service architecture. Our strike team is systematically identifying and addressing remaining bottlenecks. Once we complete the short-term improvements tracked in GitHub Issue #40885, we'll have confidence that the service is durable against future outages.

The architectural improvements underway—including Redis isolation, connection pool management, and comprehensive monitoring—will prevent similar cascading failures in the future. We're committed to ensuring the feature flags service meets the reliability standards our customers expect.

Persons database migration

Company | Source: https://posthog.com/handbook/company/post-mortems/2025-11-15-persons-db-migration

Between November 11 and November 15, 2025 we hit a Postgres limit that required us to migrate our Persons database for US Cloud. This led to ingestion delays which had a knock-on effect for products relying on person data, including feature flags and experiments.

This post-mortem document examines the root cause of the issue, steps taken, and our future plans derived from the lessons learned.

Incident summary

From November 11, 4:02 PM UTC to November 14, 3:02 PM UTC, the performance of PostHog's ingestion pipeline was severely degraded, resulting in event processing delays of up to 2 days for all customers in the US region.

The root cause of the performance degradation was that the Postgres database responsible for storing our Person information hit a previously unseen Postgres limit related to the JSONB field we use to store person properties. This led to a state where writes to the database kept waiting for OIDs in the TOAST table to become available, which could take multiple seconds per update, slowing the ingestion pipeline to the point where no standard scaling options were available. See Root cause analysis below for more technical details.

The root cause was not identified until November 12 at 22:17 UTC, a day and a half after the issue arose. We enlisted help from engineers on the AWS RDS team and external consultants to identify the cause. Diagnosis proved difficult even with specialist support, but we eventually concluded that we were left with only one option: migrate to a new partitioned table.

By November 14, 2025, 15:02 UTC, ingestion was healthy again and we shifted focus to the accumulated backlog. During this recovery phase, we hit a secondary issue with AWS MSK (Kafka): local disk usage reached 85% because tiered storage keeps the most recent 4 hours of data on disk before offloading to S3. The backlog catchup created an unusually dense last-4-hours window, driving up local disk usage. We temporarily paused ingestion, reduced the topic's local retention window, confirmed disk headroom, and then resumed ingestion.
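The MSK disk pressure follows directly from the retention math: a fixed time window holds more bytes when throughput spikes during catch-up. A rough sketch with hypothetical numbers:

```python
def local_disk_gb(throughput_mb_s, retention_hours, replication=3):
    """Approximate local disk used by a Kafka topic's hot set under tiered storage.

    Hypothetical numbers for illustration: with tiered storage, brokers
    keep the most recent `retention_hours` of data on local disk before
    offloading to S3, across `replication` replicas.
    """
    seconds = retention_hours * 3600
    return throughput_mb_s * seconds * replication / 1024

# At a steady 50 MB/s, a 4-hour window is ~2.1 TB across replicas.
# Catch-up at 5x throughput packs 5x the bytes into the same 4-hour
# window -- the kind of jump that pushed broker disks to 85%.
```

This is why reducing the topic's local retention window during catch-up freed headroom immediately: it shrinks the hot set without touching the data already offloaded to S3.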

After that, catchup progressed smoothly. We moved our backfill into Dagster to gain better visibility and stability for long-running backfill jobs, knowing remediation would take at least the weekend. By the morning of November 15, all events since the start of the incident had been processed, all systems were fully operational, and we began a background backfill of older Person data purely for housekeeping. No data was lost.

By November 15, 6:20 AM UTC we had worked through the backlog of events and fully recovered.

Incident impact

Scope

Customers experienced ingestion delays ranging from 10 minutes to up to 2 days. During this period recent events sent to PostHog did not appear, leading to the following potential per-product impact:

Data integrity

No data was lost.

Once the backlog was cleared, all reporting tools indicate accurate values were processed for the delayed period.

Timeline

A full timeline of updates is available on the PostHog status page.

Root cause analysis

Primary issue: Postgres TOAST OID exhaustion

PostgreSQL stores large column values (>2 KB compressed) in a separate, out-of-line table called the TOAST table. PostHog's Persons table has a properties column that frequently exceeds this 2 KB threshold, so the table has many TOASTed values.

Each value that is moved "out of line" is assigned a unique OID (object identifier) from a finite 32-bit space (~4 billion values), which the main table uses to track it in its associated TOAST table.

From the PostgreSQL wiki:

The OIDs used for this purpose are generated from a global counter that wraps around every 4 billion values, so that from time to time an already-used value will be generated again. Postgres detects that, and tries again with the next OID.

When the space of used OIDs approaches the limit, there will be longer and longer sequential runs of used OIDs. This means the database engine has to do an enormous number of reads (checking each OID the counter produces to see whether it's free) to complete a single INSERT or UPDATE of a TOASTed row.

It is important to note that well before the table hits the hard limit of 4 billion OIDs, write performance for TOASTed rows degrades severely, because free OIDs become so sparse. If there were just a single free OID left, the database engine would, on average, have to read through billions of used OIDs, checking each for availability, before finding the free one to complete the write. This OID exhaustion increased the disk reads per write query from about 10 KB to 15 MB, increasing latency for those queries by 100x and grinding event ingestion to a halt.
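Under a simplified model where each probed OID is independently used with probability p, the expected number of probes per write is 1 / (1 − p), which explodes as the space fills up. A small sketch (illustrative only; real OID usage is clustered into long runs, which makes the worst case even more severe):

```python
def expected_probes(used_fraction):
    """Expected OID-counter probes per TOAST write, assuming each probed
    OID is already used with independent probability `used_fraction`.

    Geometric distribution: on average 1 / (1 - p) probes to find a
    free OID. Simplified model for illustration.
    """
    free_fraction = 1 - used_fraction
    if free_fraction <= 0:
        raise ValueError("no free OIDs left: writes can never complete")
    return 1 / free_fraction

# At 50% used: 2 probes per write. At 99.9999% used: ~1,000,000 probes,
# each potentially a disk read -- the kind of read amplification that
# produced the ~100x write latency seen in this incident.
```

The model also shows why the degradation is non-linear: the database looks healthy until the space is nearly full, then write cost climbs steeply over a short window.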

Secondary issue: AWS MSK disk pressure during catch-up

During backlog processing, we hit a separate but related operational issue:

To mitigate this we paused ingestion, reduced the local retention configuration for the relevant topic, and resumed ingestion once disk usage returned to a safe level.

Appendix: OID exhaustion diagrams

Stage 1: Healthy database state

OID Space (each block = used OID, each dash = free OID)
[----------------------------------------------------] 0-1M OIDs
[-----XX-----------------------XX---------------------] 1M-2M OIDs
[---------XX--------------------------XX--------------] 2M-3M OIDs
[----------------------------------------------------] 3M-4M OIDs
 ↑ Next OID Counter (finds free OID immediately)

Stage 2: OID exhaustion

OID Space (each block = used OID, each dash = free OID)
[XXXXXXXXXXXXXXX--XXXXXXXXX--XXXXXXXXXXXXXX--XXXXXXX] 0-1M OIDs
[XXXXXXXXXXXXXXXXXXXXXXXX----XXXXXXXXXXXXXXXXXXXXXX--XX] 1M-2M OIDs
[XXXXXXXXXX--XXXXXXXXXXXXXXXXXXXXXXXXXXX--XXXXXXXXXXXXX] 2M-3M OIDs
[XXXXXXXXXXXXXXXXXXXX----XXXXXXXXXXXXXXXXXXXXXXXXXXX--X] 3M-4M OIDs
 ↑ Next OID Counter (must skip over many used OIDs, each skip requiring disk reads)

Why was this hard to detect?

PostgreSQL's OID exhaustion behavior is rare and not commonly encountered even at large scale. Additionally, standard dashboards (CPU, memory, IOPS, lock contention) did not immediately point at OID exhaustion.

Diagnosis of the issue was also frustrated by a lack of dedicated observability on:

Eventual diagnosis was only possible due to the dedicated effort of our engineers working in tandem with external experts and AWS engineers to connect:

Remediation

Immediate actions (completed)

  1. Root cause discovery and isolation

We identified TOAST OID exhaustion as the root cause by engaging internal teams, external consultants, and AWS engineers to analyze:

  2. Migration to a new partitioned Persons table
  3. Careful deployment of application changes

Given the risk of introducing new issues during an incident, we chose a manually controlled deploy to production for the web app rather than a fully automated rollout. Multiple engineers worked in shifts throughout the weekend and made changes across:

  4. Scaling ingestion to clear backlog

Once ingestion performance was restored on the new Persons table, we scaled ingestion workers to process the accumulated backlog while monitoring:

  5. MSK disk pressure mitigation
  6. Dagster-based backfill

We moved the backfill process into Dagster to provide:

  7. Final cleanup and confirmation

Communicated final resolution and announced a small upcoming maintenance window to consolidate on the new tables. We verified that:

Planned actions (planned or in-progress)

  1. Deeper Postgres engine monitoring

We plan to add metrics and alerts around:

  2. Improved runbooks for engine-level limits

We plan to document symptoms and diagnostics for similar Postgres engine-level issues. This will include clear decision trees for when to migrate vs repair-in-place.

  3. Improved and new runbooks for customer comms

We have begun creating new customer communication runbooks which clarify how and when to communicate with customers about the issue and provide a clear escalation path and redundancies.

  4. Exploring other data stores

We've been exploring using other data stores for the persons database and will continue to evaluate those.

Lessons learned

What went well?

We preserved all incoming events and persons data. While delayed, data was not lost.

Engineers across ingestion, infrastructure, and application teams, plus external consultants and AWS engineers, collaborated effectively to diagnose a rare engine-level problem.

We successfully migrated to a new Persons table using triggers and backfill while the system remained live, minimizing additional downtime.

We provided regular engineer-led status updates and committed to a public post-mortem.

What could have gone better?

It took roughly a day and a half to conclusively identify OID exhaustion as the root cause. We had no dedicated monitoring for TOAST growth, OID usage, or disk read amplification per write.

Many core features (analytics, flags, replay filters, CDP) rely heavily on timely updates to the persons table. When that became unhealthy, a wide surface area of the product was affected.

Our initial backfill approach lacked the visibility and robustness needed for a prolonged, large-scale migration. We had to move this logic into Dagster during the incident.

While secondary to the main cause, disk pressure on MSK during catch-up highlighted that our tiered storage configuration and alerting were not tuned for large backlog scenarios.

All of the team members who are normally responsible for customer communications were unavailable for the duration of this incident and we had to scramble to identify fallbacks.

Moving forward

This incident surfaced a rare but serious interaction between our data model and a low-level PostgreSQL engine limit. It also highlighted how central the Persons data model is to the rest of PostHog: when the persons table slowed down, a wide range of features – from analytics and feature flags to replay filtering and CDP – were indirectly impacted.

We've taken immediate steps to recover by migrating to a new partitioned Persons table, stabilizing ingestion, and clearing the backlog of events. We are now focused on:

We're committed to continuing to invest in the resilience of our ingestion and Persons infrastructure so that incidents like this become less likely, easier to detect early, and faster to remediate when they do occur.

Shai-Hulud supply chain attack

Company | Source: https://posthog.com/handbook/company/post-mortems/2025-11-26-shai-hulud-attack

At 4:11 AM UTC on November 24th, a number of our SDKs and other packages were compromised by Shai-Hulud 2.0, a malicious self-replicating worm. New versions were published to npm containing a preinstall script that:

  1. Scanned the environment the install script was running in for credentials of any kind using Trufflehog, an open-source security tool that searches codebases, Git histories, and other data sources for secrets.
  2. Exfiltrated those credentials by creating a new public repository on GitHub and pushing the credentials to it.
  3. Used any npm credentials found to publish malicious packages to npm, propagating the breach.

By 9:30 AM UTC, we had identified the malicious packages, deleted them, and revoked the tokens used to publish them. We also began the process of rolling all potentially compromised credentials pre-emptively, although we had not at the time established how our own npm credentials had been compromised (we have now, details below).

The attack only affected our JavaScript SDKs published on npm. The most relevant compromised packages and versions were:

What should you do?

Our recommendations are to:

  1. Look for the malicious files locally, in your home folder, or in your document roots:
find . -name "setup_bun.js" \
  -o -name "bun_environment.js" \
  -o -name "cloud.json" \
  -o -name "contents.json" \
  -o -name "environment.json" \
  -o -name "truffleSecrets.json"
  2. Check npm logs for suspicious entries:
grep -R "shai" ~/.npm/_logs
grep -R "preinstall" ~/.npm/_logs
  3. Delete any cached dependencies:
rm -rf node_modules
npm cache clean --force
pnpm cache delete

Pin your dependencies to known-good versions (in our case, all of the latest published versions are known-good, as they were published after we identified the attack), and then reinstall your dependencies.

We also suggest you make use of the minimumReleaseAge setting available in both pnpm and yarn. By setting this to a high enough value (like 3 days), you can make sure you won't be hit by these vulnerabilities before researchers, package managers, and library maintainers have had the chance to wipe the malicious packages.
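As a sketch, in recent pnpm versions this can be configured in pnpm-workspace.yaml (the value is in minutes; the exclude key and exact key names are assumptions here – check your package manager's documentation for your version):

```yaml
# pnpm-workspace.yaml – illustrative sketch, verify keys/units for your setup
minimumReleaseAge: 4320          # only install versions published >= 3 days ago
minimumReleaseAgeExclude:
  - "@your-org/*"                # hypothetical: packages you trust immediately
```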

How did it happen?

PostHog's own package publishing credentials were not compromised by the worm described above. We were targeted directly, as were a number of other major vendors, to act as a "patient zero" for this attack.

The first step the attacker took was to steal the GitHub Personal Access Token of one of our bots, and then use that to steal the rest of the GitHub secrets available in our CI runners, which included our npm publishing token. These steps were taken days before the attack on the 24th of November.

At 5:40 PM on November 18th, now-deleted user brwjbowkevj opened a pull request against our posthog repository, including this commit. This PR changed the code of a script executed by a workflow we were running against external contributions, modifying it to send the secrets available during that script's execution to a webhook controlled by the attacker. These secrets included the GitHub Personal Access Token of one of our bots, which had broad repo write permissions across our organization. The PR itself was deleted along with the fork it came from when the user was deleted, but the commit was not.

The PR was opened, the workflow run, and the PR closed within the space of 1 minute (screenshots include timestamps in UTC+2, the author's timezone):

Image: initial PR logs

At 3:28 PM UTC on November 23rd, the attacker used these credentials to delete a workflow run. We believe this was a test, to see if the stolen credentials were still valid (it was successful).

At 3:43 PM, the attacker used these credentials again to create another commit that, by chance, masqueraded as this report's author (we believe the branch was chosen at random, and the author simply happened to be its last legitimate contributor – the author has no special privileges on their GitHub account).

This commit was pushed directly as a detached commit, not as part of a pull request. In it, the attacker directly modified an arbitrary Lint PR workflow to exfiltrate all of our GitHub secrets. Unlike the initial PR attack, which could only modify the script called from the workflow, and as such could only exfiltrate our bot PAT, this commit was made using the ultra-permissive PAT's full write access to our repository, which let the attacker run arbitrary code within the scope of our GitHub Actions runners.

With that done, the attacker was able to run their modified workflow, and did so at 3:45 PM UTC:

Image: Follow up commit workflow runs

The principal associated with these workflow actions is posthog-bot, our GitHub bot user, whose PAT was stolen in the initial PR. Because the attacker deleted the workflow run after it completed, we were only able to identify this specific commit as the pivot after the fact, using GitHub audit logs.

At this point, the attacker had our npm publishing token, and 12 hours later, at 4:11 AM UTC the following morning, published the malicious packages to npm, starting the worm.

As noted, PostHog was not the only vendor used as an initial vector for this broader attack. We expect other vendors will be able to identify similar attack patterns in their own audit logs.

Why did it happen?

PostHog is proudly open-source, and that means a lot of our repositories frequently receive external contributions (thank you).

For external contributions, we want to automatically assign reviewers depending on which parts of our codebase the contribution changed. GitHub's CODEOWNERS file is typically used for this, but we want the review to be a "soft" requirement, rather than blocking the PR for internal contributors who might be working on code they don't own.

We had a workflow, auto-assign-reviewers.yaml, which was supposed to do this, but it never really worked for external contributions since it required manual approval – defeating the purpose of automatically tagging the right people without manual intervention.

One of our engineers figured out this was because it triggered on: pull_request, which means the workflow would not run automatically for external contributions (which come from forks, rather than from branches in the repo like internal contributions). The fix was to change the trigger to on: pull_request_target, which runs the workflow _as it's defined in the PR's target repo/branch_ and is therefore considered safe to auto-run.

Our engineer opened a PR to make this change, and also make some fixes to the script, including checking out the current branch, rather than the PR base branch, so that the diffing would work properly. This change seemed safe, as our understanding of on: pull_request_target was, roughly, "ok, this runs the code as it is in master/the target repo".

This was a dangerous misconception, for a few reasons:

These pieces together meant it was possible for a pull request which modified assign-reviewers.js to run arbitrary code, within the context of a trusted CI run, and therefore steal our bot token.
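The dangerous pattern can be sketched as a workflow that runs with the base repository's secrets while checking out and executing the untrusted PR head. This is illustrative only – the file names, step layout, and secret name are assumptions, not PostHog's actual workflow:

```yaml
# DANGEROUS – illustrative sketch of the vulnerability class.
name: Auto-assign reviewers
on: pull_request_target        # definition comes from the base branch,
                               # but the run has the base repo's secrets
jobs:
  assign:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          # Checks out the untrusted PR head instead of the base branch...
          ref: ${{ github.event.pull_request.head.sha }}
      # ...so this executes attacker-controlled code with secrets in scope.
      - run: node .github/scripts/assign-reviewers.js
        env:
          GITHUB_TOKEN: ${{ secrets.BOT_PAT }}   # hypothetical secret name
```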

Why did this workflow change get merged? Honestly, security is unintuitive.

  1. The engineer making the change thought pull_request_target ensured that the version of assign-reviewers.js being executed, a script stored in .github/scripts in the repository, would be the one on master, rather than the one in the PR.
  2. The engineer reviewing the PR thought the same.
  3. None of us noticed the security hole in the month and a half between the PR being merged and the attack (the PR making this change was merged on the 11th of September). This workflow change was even flagged by one of our static analysis tools before merge, but we explicitly dismissed the alert because we mistakenly thought our usage was safe.

Workflow rules, triggers, and execution contexts are hard to reason about – so hard that GitHub is actively making changes to make them simpler and closer to the intuitive model described above. In our case, however, those changes would not have protected us against the initial attack.

Notably, we identified copycat attacks on the following day attempting to leverage the same vulnerability, and while we prevented those, we had to take frustratingly manual and uncertain measures to do so. The changes GitHub is making to the behaviour of pull_request_target would have prevented those copycats automatically for us.

How are we preventing it from happening again?

This is the largest and most impactful security incident we've ever had. We feel terrible about it, and we're doing everything we can to prevent something like this from happening again.

I won't enumerate all the process and posture changes we're implementing here, beyond saying:

PostHog is, in many of our engineers' minds, first and foremost a data company. We've grown a lot in the last few years, and for that time, our focus has always been on data security – ensuring the data you send us is safe, that our cloud environments are secure, and that we never expose personal information. This kind of attack, being leveraged as an initial vector for an ecosystem-wide worm, simply wasn't something we'd prepared for.

At a higher level, we've started to take broad security a lot more seriously, even prior to this incident. In July, we hired Tom P, who's been fully dedicated to improving our overall security posture. Both our incident response and the analysis in this post-mortem simply wouldn't have been possible without the tools and practices he's put in place, and while there's a huge amount still to do, we feel good about the progress we're making. We have to do better here, and we feel confident we will.

Replay SDK fetch wrapper incident

Company | Source: https://posthog.com/handbook/company/post-mortems/2026-01-17-replay-sdk-fetch-wrapper-incident

Date: January 14-19, 2026

Severity: Critical

Status: Resolved

Summary

A customer reported a critical issue in the PostHog SDK's Fetch API wrapper that rendered their site unusable. The issue had started a week earlier and was caused by the wrapper failing to pass the RequestInit object through to window.fetch, resulting in a TypeError for requests with a ReadableStream body.

Two SDK releases were made in an attempt to fix this bug. However, both introduced different but related regressions affecting other customers. Because the changes were in the lazy-loaded Replay extension rather than the core SDK, the issues also impacted customers who had not updated their SDK during this period.

After these regressions were reported, a bugfix release (1.327.0) was prepared. When this failed to resolve the issues, a follow-up release (1.328.0) rolled back all fetch wrapper changes.

Due to a recently introduced manual step in the process for publishing SDKs to the PostHog CDN, neither the bugfix nor the rollback was actually deployed. As a result, even customers pinned to version 1.328.0 continued to receive the broken lazy-loaded script from the CDN. The issue persisted for an additional two days until the missing deployment was identified and completed.

The original customer issue remains unresolved, but the customer has been provided with a temporary workaround.

Impact

Timeline

| Time (UTC) | Event |
|------------|-------|
| Jan 14, 10:18 AM | A customer reached out to inform us that they had removed the PostHog SDK from their site as it was causing fetch requests to fail with a TypeError. They had first become aware of this issue a week prior and this was a high-severity issue for them that rendered their site unusable. |
| Jan 15, 5:33 PM | A new version (1.323.0) of the PostHog SDK was released with an attempted fix for the TypeError issue. |
| Jan 16, 5:44 AM | The customer informed us that the fix in version 1.323.0 was not effective. |
| Jan 16, 12:52 PM | A new version (1.325.0) of the PostHog SDK was released with an amended fix for the TypeError issue. |
| Jan 17, 3:31 AM | Another customer reached out to inform us that their site was down due to issues with the integrity of fetch requests and that disabling PostHog immediately caused the issues to resolve. |
| Jan 17, 6:43 AM | A GitHub issue was submitted describing an issue with the fetch wrapper in the PostHog SDK causing mismatched FormData boundaries. |
| Jan 17, 8:43 AM | A new version (1.327.0) of the PostHog SDK was released with a fix for the mismatched FormData boundaries. |
| Jan 17, 7:38 PM | More customer reports of critical issues with the fetch wrapper in the PostHog SDK surfaced and the decision was made to roll back all recent changes. |
| Jan 17, 8:13 PM | A new version (1.328.0) of the PostHog SDK was released, undoing all recent changes to the fetch wrapper, restoring it to the last known working version. |
| Jan 19, 4:04 PM | We became aware that the SDK version bump had not been merged into the primary PostHog repository, meaning that even for customers who had pinned their SDK version to 1.328.0, the faulty lazy-loaded script was still being served by our CDN and was still causing issues. |

Root cause

The initial issue was caused by the SDK fetch wrapper being too simplistic and not passing on request options that are sometimes required – specifically, in all modern browsers, any request with a body of type ReadableStream must include the request option duplex: half. Even if the customer site _does_ provide this option, the fetch wrapper does not pass it down to the original window.fetch method, resulting in a TypeError.
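As a minimal sketch of this bug class (illustrative only, not PostHog's actual wrapper code): any wrapper that rebuilds the call without forwarding the RequestInit object loses options such as duplex.

```javascript
// Illustrative sketch, not PostHog's actual code. A fetch wrapper must
// forward the RequestInit object; dropping it loses options like
// `duplex: 'half'`, which modern runtimes require for ReadableStream bodies.
function wrapFetch(originalFetch) {
  return function wrappedFetch(input, init) {
    // ...instrumentation would go here...
    // BUG variant: `return originalFetch(input)` silently drops `init`,
    // so a streaming body later fails with a TypeError.
    return originalFetch(input, init); // fix: forward init untouched
  };
}

// Demonstration with a stub standing in for window.fetch:
const seen = [];
const stubFetch = (input, init) => { seen.push(init); return Promise.resolve('ok'); };
const wrapped = wrapFetch(stubFetch);
wrapped('https://example.com', { method: 'POST', duplex: 'half' });
console.log(seen[0].duplex); // 'half' – the option survives the wrapper
```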

The fixes that were introduced to attempt to address this caused another issue. The updated fetch wrapper was creating a new Request object and passing both this object _and_ the request options to downstream wrappers and window.fetch. This caused the request body to be consumed multiple times, which is not a problem for most requests but results in mismatched boundaries for FormData requests. When the FormData boundaries do not match, the request is typically rejected by the server.

A fix for this issue was released (1.327.0), but due to the missing manual deployment step, it was never actually deployed to the CDN. Customers continued to report problems, and, believing the fix to be ineffective, we decided to roll back all changes to the fetch wrapper rather than attempt to diagnose further.

The decision to roll back was delayed due to a lack of understanding of the scope of the issue – ultimately we had to rely on reports from customers to get the full picture. This also contributed to the incident not being handled within the usual incident response process.

The incident was prolonged because a recently introduced process requiring manual intervention to publish SDK releases to the PostHog CDN was not followed. There was no verification step to confirm that releases were successfully deployed to the CDN. As a result, the bugfix (1.327.0) and rollback (1.328.0) releases were never actually served to customers.

Resolution

All recent changes to the fetch wrapper were reverted, including on the CDN, restoring it to the simple, original implementation:

const req = new Request(url, init)
return originalFetch(req)

The original TypeError issue remains.

Insights

  1. Fetch wrapper changes are high-risk: The fetch API is fundamental to web applications; changes require extensive testing across diverse use cases, which our current test suite does not fully cover.
  2. Use feature flags for high-risk changes: Feature flags would have enabled rapid rollback without requiring a new release.
  3. Issues with the SDK can be hard to detect: Unlike issues with our backend systems, we do not get alerts when the SDK fails, so we relied entirely on customer feedback.
  4. There should never be a mismatch between the latest SDK version deployed to npm and the latest version served by our CDN: Such a mismatch is a sign that something has gone wrong in the release process.
  5. Multiple fix attempts signal deeper issues: When fixes don't resolve the problem, step back to understand the root cause rather than iterating on partial solutions.
  6. Changes to lazy-loaded SDK extensions affect all customers: We only test SDK extension changes against the latest version of the core SDK, but then release them to customers running (much) older core SDK versions without testing those combinations.

Action items

We are committed to:

Feature flags cache degradation

Company | Source: https://posthog.com/handbook/company/post-mortems/2026-02-06-feature-flags-cache-degradation

Between February 2-6, 2026, PostHog's feature flags cache workers experienced escalating memory pressure, resulting in degraded cache update reliability. The issue was stabilized on February 6 at 22:34 UTC.

Summary

When a feature flag is updated, PostHog kicks off two Celery tasks: one to update the cache used by the /flags evaluation endpoint, and another to update flag definitions fetched by SDKs using local evaluation. Both tasks run on the same pool of Celery workers.

These workers experienced escalating out-of-memory (OOM) kills over a 4-day period, causing both caches to fall behind. Teams that updated flag rollout conditions or targeting rules would see those changes reflected in the PostHog UI but not propagated to the /flags endpoint or SDKs using local evaluation until the cache backlog cleared.

The root cause was an internal test automation system that had accumulated excessive test data over several months, creating cache update tasks that exceeded worker memory limits.

Timeline

All times in UTC.

Root cause analysis

Accumulated test data

An internal test automation system had been running against production for several months. Due to a bug in test cleanup logic, failed test runs left behind test data that accumulated over time. This created an internal account with far more data than any typical customer workload.

No batching in cache updates

The cache update task loads all data into memory at once — flag definitions, cohorts, serialized representations, and the final JSON payload. For typical workloads this is fine, but the accumulated test data created tasks that exceeded the 8GB worker memory limit on a single execution. Each task for this account required holding all the data in memory simultaneously, causing immediate OOM kills regardless of worker age or prior memory state.
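The principle behind the fix is language-agnostic; the sketch below (in JavaScript for illustration – the real task is a Celery worker) shows how processing records in fixed-size batches bounds peak memory by the batch size rather than by an account's total data:

```javascript
// Illustrative sketch only: never hold every record in memory at once.
function* batches(items, size) {
  for (let i = 0; i < items.length; i += size) {
    yield items.slice(i, i + size);
  }
}

// Build a cache payload incrementally instead of loading everything:
const flagIds = Array.from({ length: 10 }, (_, i) => i);
const chunks = [...batches(flagIds, 4)];
console.log(chunks.length);             // 3
console.log(chunks.map(c => c.length)); // [ 4, 4, 2 ]
```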

Impact

Detection

The incident was detected through monitoring showing OOM kills escalating on the feature-flags Celery workers. The 116k task backlog was discovered during investigation.

OOMs were observable in the days prior, but the root cause wasn't investigated deeply at first because the numbers were low and seemed intermittent. Initial mitigation attempts focused on isolating the task to a dedicated queue and optimizing memory usage, but these didn't address the underlying issue.

It wasn't until a colleague noticed a team with an abnormal amount of data that the root cause was identified.

Recovery

  1. Increased memory limits for workers
  2. Enabled worker recycling (max_tasks_per_child=100) to give workers more headroom
  3. Reduced worker load by pausing non-critical tasks
  4. Purged backlogged tasks from the queue
  5. Cleaned up stale test data (the actual fix)

Remediation

Completed

In progress

| Follow-up | Priority |
|-----------|----------|
| Better alerts, metrics, and visualizations for Celery queue backlogs | High |
| Add metrics for anomalous workloads | Medium |
| Task deduplication for cache updates | Medium |
| Improve memory usage of cache update task | High |
| Merge flag caches into a single cache build | Medium |

Lessons learned

What went well

What went poorly

Key takeaways

  1. Unbounded data loading is risky: Operations that load all data into memory work fine for typical workloads but can fail catastrophically for outliers. Consider batching or streaming for tasks that scale with customer data.
  2. Correlate OOMs with pod health: A drop in OOM kills might mean workers are healthy, or it might mean they're stuck in crash loops and not processing anything. Always check pod status alongside OOM metrics.
  3. Isolate test environments: Even with cleanup logic, test automation against production will eventually accumulate artifacts. Use isolated environments for integration testing.
  4. Monitor queue backlogs: We had visibility into OOMs but not the growing task backlog. Better queue monitoring would have surfaced the issue sooner.

Logs data loss

Company | Source: https://posthog.com/handbook/company/post-mortems/2026-02-20-posthog-us-logs-data-loss

On February 19th, PostHog's Logs product experienced a major incident, which caused the loss of Logs data older than three days in our US region. This data loss only impacted the Logs product; all other PostHog data is intact.

Summary

As with most queryable data in PostHog, we store data for Logs in a ClickHouse cluster. When we started building Logs, we decided to use a new, dedicated cluster, rather than building it on top of our main ClickHouse cluster, which is shared across most other PostHog products. This had a few advantages, allowing us to:

This new cluster uses S3 disks in ClickHouse, with data parts being automatically uploaded to S3 after 24 hours – this is what enables us to handle the significant data volume required for Logs (in PostHog, we alone produce about 500MB/s of logs from across our systems, or about 1PB/month uncompressed).

A bug in ClickHouse caused it to unexpectedly attempt to delete almost all of the data parts in S3. The Logs database is replicated across two replicas; however, very early on in the project we had enabled "Zero Copy Replication" on the Logs cluster nodes. This is an experimental feature that ClickHouse does not recommend in production, for exactly this reason: a bug that should have caused data on a single replica to be deleted instead deleted the data everywhere.

Timeline

All times in UTC.

Root cause analysis

Zero Copy replication bug

The decision to use zero-copy replication was taken extremely early in the Logs product development when it was an experimental internal-only tool.

Once Logs was released to external users this decision should have been revisited, but wasn't. Because we experienced no issues at all during several months of internal usage, settings that had been chosen at the beginning went largely unrevisited and unchanged.

Zero-copy replication has been largely unmaintained for the last 4 years, and still contains critical bugs, including the one we hit here. Because Zero Copy replication uses a shared storage medium (S3) for multiple replicas, when the logic on one node failed and issued delete commands for the underlying S3 objects, those files were removed for the entire cluster immediately. There was no redundancy layer between the database application logic and the storage layer.

Lack of detection

We lacked specific monitoring for the integrity of "cold" data stored in S3. Our alerts are optimized for ingestion lag, query latency, and error rates on active queries. Since users rarely query logs older than 24 hours, and the deletion process happened silently in the background without throwing application-level errors, the system remained "green" on our dashboards until the node restart forced a consistency check.

Lessons learned

What went well

What went poorly

Key takeaways

  1. Immediate Configuration Audit: Disable Zero Copy Replication on all clusters immediately. Conduct a full audit of the Logs ClickHouse configuration and ensure no experimental features are used in production.
  2. Implement S3 Object Protection: Enable S3 Versioning on the underlying storage buckets. This ensures that even if the database application issues a destructive command due to a bug, the underlying data objects can be recovered.
  3. Audit Before GA: Before a product is made Generally Available, we will spot check configurations and our data integrity strategies to find and correct potential single points of failure.
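For the S3 object protection in the second item, versioning can be enabled on a bucket with the AWS CLI (the bucket name below is a placeholder):

```
aws s3api put-bucket-versioning \
  --bucket <logs-clickhouse-bucket> \
  --versioning-configuration Status=Enabled
```

With versioning enabled, a delete issued by a buggy application becomes a recoverable delete marker rather than a permanent removal of the object data.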

Workflow "Wait until condition" steps silently failing

Company | Source: https://posthog.com/handbook/company/post-mortems/2026-04-27-workflow-wait-until-condition

Between March 30 and April 22, 2026, a bug in our workflow engine caused workflows using "Wait until condition" steps to silently stop resuming. Affected workflows appeared to complete normally in the UI but never executed their downstream actions — such as delivering emails or sending Slack notifications. 48 workflows across 33 customer organizations were impacted, with 11,920 invocations silently blocked. The issue has been fully resolved; affected customers have been contacted and will see a banner on each impacted workflow with a self-serve option to review and replay the silently blocked runs. Importantly, 99.7% of all workflows triggered during this period executed normally.

Summary

PostHog's workflow engine allows customers to build multi-step automations. Some steps, like "Wait until condition," pause execution and periodically re-check whether a condition has been met before continuing.

On March 30, we deployed a deduplication mechanism to fix an earlier incident where ghost workflow runs were causing customers to receive duplicate emails and notifications. The dedup logic worked by comparing the invocation ID of a workflow when it first entered a step against the ID it carried when it resumed. If the IDs didn't match, the resume was treated as a duplicate.

This only affected "hold-state" actions — steps that pause and re-enter themselves ("Wait until condition"). Steps like "Delay" advance to the next action before pausing, so the dedup check on the next action started fresh and never hit the mismatch.

Unfortunately, the issue went undetected far longer than usual because we lacked observability for the dedup code path.

Timeline

All times in UTC.

Root Cause Analysis

Invocation ID format mismatch across subsystem boundary

The workflow engine generates invocation IDs using PostHog's UUIDT format. The V1 job queue (job-queue-postgres.ts) validates incoming IDs using the npm uuid package's isUuid check, which rejects UUIDT-format IDs and silently substitutes a fresh UUIDv7.

When a "Wait until condition" step paused and was re-queued through the Postgres V1 path, the invocation ID was rewritten. On resume, the dedup logic compared the stored UUIDT against the new UUIDv7, saw different IDs, and concluded the resume was a duplicate — silently terminating the workflow.

Both sides of this boundary were tested in isolation: the dedup tests called the executor directly (never round-tripping through the queue), and the queue tests used uuidv4() instead of the UUIDT generator that production actually uses. Both passed, but neither caught the mismatch that only surfaces when the two subsystems interact.

Missing observability on a critical code path

The dedup logic was deployed without metrics tracking how many invocations were being filtered. Although legitimate deduplications were expected — thousands of ghost runs were still being correctly blocked — having a baseline would have made the anomalous spike in filtered invocations visible and drawn attention to the issue much sooner.

Lessons Learned

What went well

What went poorly

Key takeaways

  1. We've reverted the dedup logic and are investing in building a solution that fully mitigates this class of problems. The new architecture will also allow us to write more robust end-to-end tests to prevent issues like this from happening again.
  2. We have deployed additional alerting that will notify the teams immediately for this class of failure case in the future.

Public post-mortems

Company | Source: https://posthog.com/handbook/company/post-mortems

For PostHog employees, see the post-mortem guidance for how and when to write a post-mortem.

This page contains public post-mortems for significant incidents at PostHog. We publish these because we believe transparency builds trust, and because we think the wider engineering community benefits from shared lessons.

For security-specific incidents, see our security advisories. For real-time status updates, check our status page.

Our approach to post-mortems

We write post-mortems to understand what happened, not to assign blame. Every incident is an opportunity to improve our systems and processes. Our post-mortems typically cover:

Not every post-mortem is made public. Minor incidents that partially affect services are documented internally. We publish a public post-mortem when an incident results in permanent impact on user data (such as data loss), directly disrupts customers' own services (such as SDK bugs breaking customer sites), or results in extended unavailability of PostHog services for customers (e.g. dashboards failing to load for multiple hours).

For internal guidance on how we handle incidents, see handling an incident.

Public post-mortems

Security advisories

Company | Source: https://posthog.com/handbook/company/security-advisories

This page contains security advisories and Common Vulnerabilities and Exposures (CVEs) related to PostHog. We maintain this page to ensure transparency and help our users stay informed about any security issues that may impact them. In the event that a security incident leads to a confirmed exposure or requires action from users, we will always contact users proactively.

For coverage of other, non-security incidents, please check our status page.

Our approach to security advisories

At PostHog, we take security seriously. Not as a checkbox, but with hardware security keys and healthy paranoia. We have a robust security program that includes:

For more information about our security practices, see our main security page.

Reporting security issues

Security vulnerabilities and other security related findings can be reported via our vulnerability disclosure program or by emailing security-reports@posthog.com. Valid findings will be rewarded with PostHog swag.

Updating this page

PRs to this page which update advisories or CVEs should only occur as part of an incident and should follow all our usual processes for an incident. If you need to issue an advisory or CVE and an incident is _not_ declared, you should declare one.

Declaring an incident will ensure that there is good internal visibility and that members of relevant teams, including our Support team, are aware. Once an advisory is posted to this page, you should also update other teams by posting in the #tell-posthog-anything Slack channel.

Security best practices

Security is everyone's responsibility, so we encourage all our users and staff to follow some basic best practices within their own organizations.

We will always proactively reach out to affected users in the event of an advisory requiring attention or action. However, if you'd like to stay updated about future incidents or advisories, please subscribe to our status page. If you want to drink updates from the firehose, you can also follow our GitHub repos for real-time updates about everything we do, as we're committed to working in the open wherever possible.

Current advisories

No active advisories

Currently, there are no active security advisories or CVEs. All is well.

Past advisories

<summary>August 15, 2025 / PSA-2025-00001</summary>

<p><strong>Date:</strong> August 15, 2025<br /> <strong>Advisory:</strong> PSA-2025-00001<br /> <strong>Severity:</strong> Medium<br /> <strong>Status:</strong> Resolved</p>

<h4>Description</h4> <p>An overly permissive table was available in the SQL editor that allowed users to see queries performed by other users in unrelated teams. The results of those queries were <em>not</em> accessible, but the queries themselves were visible.</p>

<h4>Affected users</h4>

<ul>
  <li>Our logs confirm that this feature was never used in our EU cloud.</li>
  <li>Our historical query log for the US cloud only contains data going back to July 3, 2025, and we can confirm the feature was not used during that period.</li>
  <li>We do not have query logs between December 12, 2024, and July 2, 2025. While we cannot fully confirm usage during this window, we believe it is very unlikely the feature was used in our US cloud, as it was never advertised.</li>
</ul>

<h4>Resolution</h4> <p>Once discovered, we immediately removed the ability to query this table. We then reintroduced the feature with queries properly scoped to each user’s team.</p>

<h4>What we learned</h4>

<ul>
  <li>We have a logic guard to ensure that all queries contain a properly authorized <code>team_id</code> when the queried table includes a <code>team_id</code> field.</li>
  <li>This logic did not help in this case because the query log table did not contain a <code>team_id</code> field.</li>
  <li>We have since added a <code>team_id</code> field to this table and audited all other tables to verify that they contain a <code>team_id</code> field where appropriate.</li>
  <li>Going forward, we will introduce automated tests to ensure that all new tables also include a <code>team_id</code> field.</li>
  <li>Our historical query log contains a longer dataset in the EU cloud simply because it was deployed there first. Going forward, our US cloud logs will continue to accumulate historical data for future incident response.</li>
</ul>
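The automated test described above could take a shape like the following minimal sketch. Everything here is hypothetical (the table names, the schema registry, and the exemption list are illustrative, not PostHog's real schema): the idea is simply to fail CI when any table lacks a `team_id` column.

```python
# Hypothetical sketch of a "every table must have team_id" check.
# TABLE_SCHEMAS stands in for whatever schema registry the codebase
# actually exposes; the tables listed here are made up for illustration.
TABLE_SCHEMAS: dict[str, list[str]] = {
    "events": ["uuid", "team_id", "timestamp"],
    "query_log": ["query", "user_id", "team_id"],  # team_id added post-incident
}

# Tables that legitimately hold no per-team data could be allow-listed here.
EXEMPT_TABLES: set[str] = set()

def tables_missing_team_id(schemas: dict[str, list[str]]) -> list[str]:
    """Return names of non-exempt tables that lack a team_id column."""
    return [
        name
        for name, columns in schemas.items()
        if name not in EXEMPT_TABLES and "team_id" not in columns
    ]

if __name__ == "__main__":
    missing = tables_missing_team_id(TABLE_SCHEMAS)
    assert not missing, f"tables missing team_id: {missing}"
```

Run as a test, this turns the post-incident audit into a standing guarantee: a new table without `team_id` breaks the build instead of shipping.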

<h4>Timeline</h4>

<ul>
  <li><strong>Vulnerable code shipped:</strong> December 12, 2024, 14:45 UTC</li>
  <li><strong>Discovered:</strong> August 13, 2025, 11:32 UTC</li>
  <li><strong>Reported:</strong> August 13, 2025, 11:39 UTC</li>
  <li><strong>Fixed:</strong> August 13, 2025, 12:33 UTC</li>
  <li><strong>Disclosed:</strong> August 15, 2025, 09:00 UTC</li>
</ul>

Advisory template

  <p><strong>Date:</strong> August 15, 2025<br />
  <strong>Advisory:</strong> PSA-2025-XXXXX<br />
  <strong>Severity:</strong> Low / Medium / Critical<br />
  <strong>Status:</strong> Reported / Fixed / Resolved</p>

  <h4>Description</h4>
  <p>Brief description of the vulnerability and its potential impact.</p>

  <h4>Affected users</h4>
  <ul>
    <li>Confirm if the advisory is limited to specific products.</li>
    <li>Confirm if the advisory is limited to either US or EU customers, or both</li>
  </ul>

  <h4>Resolution</h4>
  <p>Where possible, add a link to a PR. Be clear on any next steps.</p>

  <h4>What we learned</h4>
    <p>If there's a lesson we took to prevent this happening again, document it briefly.</p>

  <h4>Timeline</h4>
  <ul>
    <li><strong>Vulnerable code shipped:</strong> January 10, 2024, 00:00 UTC</li>
    <li><strong>Discovered:</strong> January 10, 2024, 00:00 UTC</li>
    <li><strong>Reported:</strong> January 10, 2024, 00:00 UTC</li>
    <li><strong>Fixed:</strong> January 10, 2024, 00:00 UTC</li>
    <li><strong>Disclosed:</strong> January 10, 2024, 00:00 UTC</li>
  </ul>

Security & Privacy

Company | Source: https://posthog.com/handbook/company/security

It is critical that everyone on the PostHog team follows these guidelines. We take failure to follow these rules very seriously - not doing so can put the entire company and all of our users at risk.

Overview

We maintain a robust security program that follows best practice in order to meet the needs of our PostHog Cloud customers, making PostHog the ideal solution for customers who have GDPR, SOC 2, or CCPA obligations themselves. PostHog Cloud customers own the data they send to us for processing. We collect and analyze data about the use of PostHog Cloud by our customers, but that data does not include the user data that customers send to us to process on their behalf.

This page covers SOC 2, GDPR, and CCPA compliance.

For information about security advisories and CVEs, see our advisories & CVEs page.

Multi-factor authentication

All team members are required to enable multi-factor authentication (MFA) on their accounts. Passkeys are the preferred method for securing all accounts — they are phishing-resistant, easy to use, and supported by most major services including Google Workspace, GitHub, 1Password, and macOS.

Please set up passkeys for Google Workspace and GitHub at the very least. If you are new, please do this within your first week so you don't get locked out.

We recommend saving most passkeys in 1Password itself, which lets you use them from your phone.

YubiKeys for infrastructure accounts

YubiKeys are required for certain infrastructure-specific accounts as determined by the security team. If your role requires access to these accounts you will be told by the team - if in doubt ask in #team-security.

We recommend purchasing:

Setting up your YubiKeys
  1. Register your YubiKeys with each required service. The security team will let you know which accounts need YubiKey authentication.
  2. Always register both keys with every service so the second acts as a backup if you lose one.
  3. Disable OTP mode so that accidentally touching your YubiKey doesn't spray one-time passwords into whatever window has focus. Install the YubiKey Manager, or run `brew install ykman && ykman config usb --disable OTP`
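On macOS with Homebrew, the OTP-disabling step above looks like this in the terminal (repeat the `ykman` command with each of your two keys plugged in):

```shell
# Install the YubiKey Manager CLI via Homebrew (macOS)
brew install ykman

# Disable OTP mode over USB so an accidental touch of the key
# doesn't type a one-time password into the focused window.
# Plug in each YubiKey in turn and run:
ykman config usb --disable OTP
```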

SOC 2

These policies are also relevant for GDPR (see below).

GDPR

For the purposes of GDPR, customers use PostHog in one of two ways:

If a customer is using PostHog Cloud, then PostHog is acting as Data Processor and the customer is the Data Controller. We have some GDPR obligations to the customer's end users here.

If a customer is self-hosting PostHog then they are both the Data Processor _and_ the Data Controller because they are responsible for their PostHog instance. We do not have access to any of their user data, so we do not have specific GDPR obligations to the customer's end users here.

PostHog's obligations as a Data Processor

We have reviewed our architecture, data flows and agreements to ensure that our platform is GDPR compliant. PostHog Cloud does not directly interact with our customers’ end users, nor does the platform automatically collect personal data. However, our customers might collect and send personal data to PostHog for processing.

PostHog does not require personally identifiable information or personal data to perform product analytics, and we provide extensive controls for customers wishing to minimize personal data collection from their end users. We provide separate guidance for our customers on how to use PostHog in a GDPR-compliant way in our Docs.

Technical and Organizational Measures ('TOMs')

Charles Cook (VP Operations) is our assigned Data Protection Officer and is responsible for overseeing compliance. Customers can email privacy@posthog.com for any questions relating to GDPR or privacy more generally.

CCPA

Under the California Consumer Privacy Act (CCPA), PostHog acts as a Service Provider to PostHog Cloud customers only. This is similar to the Processor definition under GDPR. We include a CCPA Addendum in our Privacy Policy.

We give all PostHog customers the tools to easily comply with their end users' requests under CCPA, including deletion of their data. We provide separate guidance for our customers on how to use PostHog in a CCPA-compliant way in our Docs.

We receive data that our customers collect from their end users and help them understand usage metrics for their products. We don't access customer end-user data unless instructed by a customer, and customer data is never sold to third parties. We have no access at all to end-user data collected by customers running a self-hosted version of PostHog, unless they give us access to their instance.

Pen tests

We conduct these annually, most recently in May 2025 - you can find the report in our Trust Center.

Responsible disclosure

Security vulnerabilities and other security-related findings can be reported via our vulnerability disclosure program or by emailing security-reports@posthog.com. Valid findings will be rewarded with PostHog swag.

For information about current and past security advisories and CVEs, see our advisories & CVEs page.

Reporting phishing

If you receive a phishing email, text, or WhatsApp message, it's useful to report it to the security team so that they can make other employees aware. Take a screenshot and post it in #phishing-attempts. You may be asked to forward the email to security-internal@posthog.com for further inspection.

Secure communication (aka preventing social engineering)

We follow several best practices to combat social engineering attacks. See Communication Methods for more information.

Impersonating users

To provide a great customer experience, PostHog employees may occasionally need to access customer data or log in as a user (i.e. impersonate them). We allow this access when it's necessary to deliver our service, following these guidelines:

  1. Only impersonate when there’s a clear, demonstrable benefit for the customer.

For example, to investigate an incident, resolve a support issue, or review a customer’s setup to give recommendations on how to use PostHog more successfully.

  2. Do not make any changes to a customer’s setup without explicit consent.

Exceptions are cases where we are reacting to incidents or bad configurations that are negatively impacting PostHog services, in order to protect ourselves _and_ the customer.

  3. Ask for permission whenever possible.

While this isn’t always feasible, such as during an active incident, it’s best practice to inform the customer before accessing their account. When a customer raises a support ticket, we take this as consent to impersonate their account and investigate based on the contents of the ticket. Our support engineers will not actively ask for permission when investigating a ticket; the customer should tell us in the ticket if they explicitly do not want our support engineers to access their account.

  4. Use good judgment.

If you’re unsure whether impersonation is justified, or if a customer might object, either seek their consent or find another way to get the information (for example, by checking our internal PostHog instance).

Small teams

Company | Source: https://posthog.com/handbook/company/small-teams

PostHog is structured for speed, autonomy and innovation.

Many traditional organizations have big, separate functions. You have a product team, an engineering team, customer support, and so on. This slows things down when you scale because there are more layers of communication and complex approval chains. This stifles innovation because you have to get your boss to talk to someone else's boss to get work done. It also means that people can't really see the impact of their work.

PostHog started off as a completely flat company with one big goal: to increase the number of successful products in the world.

As we are getting bigger, we anticipate that it will get harder for people to see the direct impact of their work, which reduces the sense of ownership.

We have therefore introduced small teams. These are designed to each operate like a startup. We maintain our full org chart in our ops platform.

How it works

What does owning an area of the product mean?

The product small team is responsible for everything related to their area, particularly:

  1. Usage
  2. Quality
  3. Revenue

What actions should the small teams be doing for their area?

Each quarter:

  1. Create good quarterly goals

During the quarter:

  1. Maintain a prioritized roadmap to help them achieve their objectives
  2. Speak to customers
  3. Monitor relevant metrics including those covering Usage, Quality and Revenue
  4. Triage and fix related bugs
  5. Assist the support hero in answering related questions
  6. Collaborate with other small teams such as marketing
  7. Become power users of their area of PostHog and use PostHog in their processes

What is the role of the team lead?

Overall, the team lead is responsible for ensuring the above happens. They should focus on enabling the team to solve these tasks together rather than trying to do it all themselves.

Team leads do not necessarily = managers. Read more about how we think about management.

Once a new team lead is appointed, or a small team is created, team leaders take on additional responsibilities, along with a checklist of actions. To kick off the process, run /org-change in Slack and select the relevant change type – it'll create a tracked issue in company-internal with the right checklist.

Team leads also take on a range of broader responsibilities that revolve around releasing new features and communicating with other teams. Some helpful guidelines on what team leads should be taking responsibility for are listed below.

Setting up support processes

Setting up support processes is a team lead responsibility, but if you need any assistance just contact the Support team directly.

Team leads are responsible for creating Slack channels for their support function and ensuring integration with Zendesk, so that the team can be alerted to support issues. Once the support process is set up, team leads are responsible for ensuring a sustainable and fair support rotation and setting up SLA and support hero notifications.

To kick off any org change, run /org-change in Slack.

Launching new products and features

It's the responsibility of the team lead to keep Marketing and Billing teams informed about product progress, so that product marketers can coordinate launches and the Billing team can implement pricing.

For a complete walkthrough of the product lifecycle (from initial setup through GA), see releasing new products and features and use the new product RFC template.

Some guidelines on how to do this are below, but if in doubt team leads should always aim to overcommunicate with Marketing and Billing teams.

Adding ideas to the roadmap

Launching a new beta

Launching a new product

Typically, you must give at least 2-3 weeks' notice of a product launch, and you should reach out directly to marketing team leads if this is not possible.

Leading quarterly goal setting

Team leads are responsible for organizing quarterly goal setting within their team, leading the goal-setting session, and documenting the goals on their team page.

How do small teams and product managers work together?

With our engineering-led culture, the engineers on the small team are normally responsible for their area of the product.

We have a small number of product managers who support the product small teams in achieving their goals. This includes helping with prioritization, creating/updating dashboards, competitor analysis, speaking to customers, etc. However, having product managers doesn't mean the engineers can abdicate these responsibilities. The engineers should be the experts on the product they are building and on their customers.

Additionally, the product managers should pay particular attention to cross-team alignment.

How do small teams and designers work together?

Similar to product, designers support small teams. Read our guide for engineers on how to work with design.

Managing larger cross-team projects

Each project should be owned by an individual within a single small team. However, some projects affect multiple other teams and require their support. For example, the performance work owned by Karl in product analytics requires support from the pipeline and infrastructure team.

For these projects, we recommend the owning individual write a "Status update" every 2 weeks on Slack and add a link to this update in the "Updates on bigger projects that affect multiple teams" section of the all hands doc. These status updates might include: what's been done since the last update, any blockers, and the next steps.

Small teams intros

Every small team should have an agreed charter which should include:

These should all be visible in the Handbook, updated when changes are made & confirmed ahead of each quarter so everyone is on the same page.

List of small teams

See the list of all small teams.

Forming new small teams

We have a defined process for proposing changes to teams, or creating a new team.

Once a decision is made, the following happens:

The small teams template contains a list of tasks for the Ops team and the team lead.

These include standard tasks, such as creating Slack groups and a team page to ensure the team can communicate efficiently.

FAQ

Who do small teams report to? How does this work with management?

The team lead has the final say in a given small team's decision-making - they decide what to build / work on.

Each person's line manager is their role's functional leader (if possible). For example, engineers, no matter which small team they're in, will report to an engineer.

It's important to note that management at PostHog is very minimalistic – it's critical that managers don't set tasks for those in small teams.

Think of the small team as the company you work for, and your line manager as your coach.

Can someone be in multiple small teams?

Only if they're in some kind of supportive role. For example, product managers and designers can be attached to more than one team, but product engineers should never be in more than one team because this acts against proper ownership.

Who is in a small team?

No more than 6 people, but that's the only rule. It could be any group of people working together.

Will this lead to inconsistent design?

Eventually, yes. Other companies have a UX team that builds components for everyone to use. Since we currently use Ant Design, we don't need this just yet.

Can I still step on toes?

Yes. In fact, it's actively encouraged.

We still expect people to have an understanding of the entire company and what various people are working on. In engineering, we still expect you to understand how the entire system works, even if you're only working on infrastructure. You can only do your job well if you understand how it fits in with other parts of the system.

You're actively encouraged to raise pull requests or propose changes to stuff that doesn't have anything to do with your small team.

Can people change teams?

We try to keep moves infrequent and only make them when needed. We anticipate moving people roughly every 3-9 months. We'd rather hire new people than create gaps by shifting people around.

There are two scenarios that will trigger a move:

It is very important to raise any desire for a team change with your relevant team leads/Blitzscale member early. Any changes are at their discretion, as their job is to ensure that our small teams continue to function and that any moves fit into our current hiring plans. They will also have the best context about which teams you may be a good fit for, based on your skillset as well as each team's needs. Please don't talk to other teams directly first, as it makes it harder to manage everyone's expectations.

Aren't most small teams way too small?

In general, no – it's surprisingly great how much just 2-6 people can get done.

If more mature product areas cannot cope with the workload, small teams will clarify where we need to hire too. In fact, it'll make sure we keep the scrappy fun side of working here as we get bigger. A team doesn't _have_ to be six people.

How does hiring in the small team work?

The small team is responsible for creating roles for those that they need.

We have a centralized team that will then help you hire.

James and Tim used to interview every candidate because it's a standard startup failure mode for founders to get too removed from hiring. We've relaxed this so that someone from Team Blitzscale always interviews candidates, normally whichever team member sponsors the team the candidate will be joining.

Regardless of the team, we aim to retain a high bar for new hires. In the words of James Greenhill: "If it's not a hell yes, it's a hell no." See how we hire for more on this.

How do we create new teams, or make changes to existing teams?

See how we make team changes for a more detailed breakdown of the process.

Does a small team have a budget?

Spend money when it makes sense to do so. See our general policy on spending money.

How do you keep the product together as a company?

James and Tim are ultimately responsible for us having (i) no gaps in product (ii) eliminating duplicate work (iii) making sure all small teams are working on something rational. This is how we manage the product.

How do you stop duplicate work?

James and Tim have the ultimate responsibility to make sure we don't build the same thing in two different teams, or that we don't accidentally compete with each other internally.

Keeping communication asynchronous and transparent makes this much easier than it typically is at other organizations.

Can a small team "own" another small team?

Not for now, no. Perhaps when we're much larger this is something to think about.

Sprints

Company | Source: https://posthog.com/handbook/company/sprints

PostHog works based on Sprints. These are when a Small Team meets to discuss how the last Sprint went, and what the plan is for the next one.

Sprints are shared transparently inside the company, for every team – including the Executive Team. This means people can coordinate work without having to do meetings.

Team changes

Company | Source: https://posthog.com/handbook/company/team-changes

There are three key principles here:

  1. Anyone can propose a change by creating an issue suggesting the change.
  2. Decisions should be made quickly – i.e. less than a week.
  3. Team Blitzscale ultimately own the decision to make a change or not.

Complete consensus isn't necessary, but there should always be time for people to share feedback, and alternative solutions, before a decision is made.

We should never run lengthy consultations, or individual meetings with all those affected by a proposed team change, but a group meeting to make a final call can be useful provided you follow the process below first.

How to propose a team change

Follow this process whether you're proposing creating a new team, splitting up an existing team, or even closing down a team.

1. Create a team change proposal issue

You can use the team change proposal template in company internal to do so. A good proposal should:

2. Share your proposal widely

Please share the issue in the relevant team Slack channels, the #team-blitzscale Slack channel, and any relevant public channels, requesting feedback.

It's generally best to post once and then forward that message to other relevant channels to keep things tidy.

Include the deadline for the decision in your message and tag the directly affected people.

Our goal is to make the best possible decision as fast as possible. When giving feedback, consider the following:

3. Share the final decision in Slack

The final decision should always be made by the relevant member(s) of the Blitzscale Team in a timely fashion.

Once made, they should share their decision in #tell-posthog-anything and the relevant team channels, alongside a short summary of why we're making that change.

4. Execute the change

Once the decision is shared, the team lead kicks off execution by running /org-change in Slack and selecting the relevant change type. This creates a tracked issue with the right checklist, assigned to those involved.

FAQ

What if I want to move teams?

This process exists purely for making larger changes to existing teams, or forming new ones, that impact multiple people. If you are personally looking to change team, see the small teams handbook page.

What happens after a decision is made?

This is covered on the small teams handbook page.

List No Nextline Md

Content | Source: https://posthog.com/handbook/content/_snippets/list-no-nextline-md

- **Feature flags:** PostHog offers robust, multivariate feature flags which support JSON payloads. This enables you to push real-time changes to your product without needing to redeploy. Visit our feature flag page for more information. LogRocket doesn’t have any in-built feature flag functions.
- **Experiments:** PostHog offers multivariate experimentation, which enables you to test changes and discover statistically relevant insights. Visit the experimentation page for more information. LogRocket doesn’t have any in-built experimentation features.
- **Open source:** PostHog is entirely open source, under a permissive MIT license. The biggest advantage for users is the ability to build on top of PostHog and to access the source code directly. Our team also works in the open. LogRocket is not an open source company, nor is the product available under an open source license.

List No Nextline

Content | Source: https://posthog.com/handbook/content/_snippets/list-no-nextline

List With Nextline Md

Content | Source: https://posthog.com/handbook/content/_snippets/list-with-nextline-md

- **Feature flags:** PostHog offers robust, multivariate feature flags which support JSON payloads. This enables you to push real-time changes to your product without needing to redeploy. Visit our feature flag page for more information. LogRocket doesn’t have any in-built feature flag functions.

- **Experiments:** PostHog offers multivariate experimentation, which enables you to test changes and discover statistically relevant insights. Visit the experimentation page for more information. LogRocket doesn’t have any in-built experimentation features.

- **Open source:** PostHog is entirely open source, under a permissive MIT license. The biggest advantage for users is the ability to build on top of PostHog and to access the source code directly. Our team also works in the open. LogRocket is not an open source company, nor is the product available under an open source license.

List With Nextline

Content | Source: https://posthog.com/handbook/content/_snippets/list-with-nextline

Content brand guidelines and messaging

Content | Source: https://posthog.com/handbook/content/brand-message

What should we be trying to communicate about PostHog?

PostHog is a developer platform that helps people build successful products. We provide a suite of dev tools to help them do this.

Beyond literally communicating what PostHog is and what it does, we want to equip developers to build successful products. We do this by communicating the following:

Who is our audience?

Ideally, our ICP: the people building products at high-growth startups.

The primary persona of our audience is product engineers, product-minded full-stack engineers with a slight bias towards the frontend.

An important subset of this persona is technical founders. Great product engineers sort of act like technical founders anyway.

When we are working on content (like blogs, docs, and tutorials) for a specific product, we should write it for the persona of that product, which might be different from our primary persona.

Learn more in Who are we building for.

Why do developers, product engineers, and technical founders pick PostHog?

We help them debug and ship their product faster.

We have all the apps in one. This means less time spent patching these tools together and paying for them all separately. When engineers need a new tool, they can just use PostHog.

Our team is technical and speaks the language of developers. Our engineers talk with customers to figure out what to build. Our support team are all former engineers and get into the nitty-gritty of issues. Our sales and CS teams are very technical too. They focus more on your use cases and implementation than steak dinners.

We want engineers to self-serve. They can sign up and use all of the features of PostHog for free. We also work hard to have world-class docs and technical content that enables them to solve their own problems and come up with their own solutions.

See Why buy PostHog and How we make users happy.

Things PostHog is not

PostHog could be a lot of things. We also have a lot of terms for the same things. This creates cognitive load and confusion. We'd rather our audience use their energy elsewhere. To help them, avoid the following:

  1. PostHog is not just an analytics platform or tool. Although we started with analytics, PostHog has grown well beyond this. We're not a product analytics or session replay tool either. Nor a "product improvement platform."
  2. We are not a dev tool platform. This makes it seem like we are just dev tools to use.
  3. We are not a collection, group, set, bunch or any other collective noun of tools or products. We are not “product and data tools” as this isn't developer-focused enough. Product and data should refer to our customers' products and data.
  4. It's not “product analytics product”, it's “product analytics app” or just “product analytics” whenever possible.
  5. We are not focused on non-developer roles by default. We should assume our audience is developers, or technical enough to be one. More people than you think are engineers too, especially thanks to AI coding tools and automation platforms.

Overview

Content | Source: https://posthog.com/handbook/content

The Content team has two core goals:

  1. Increase awareness of PostHog, especially among people in our ideal customer profile
  2. Help developers and PostHog users be more successful through great content, videos, and docs

We do this by:

Content is the main pillar of our marketing strategy. Our strategy is to go _deeper_ and create better content as we grow. We don't rely on AI. We don't take content in exchange for links. We don't have arbitrary volume goals.

Our latest goals can be found on the page. You can share ad-hoc ideas in our #content-ideas Slack channel.

Who is our audience?

It should be the same as who we are building for. Specifically:

What kind of content do we produce?

  1. Opinionated advice: Articles where we offer a strong point of view on a topic that impacts our audience. Examples include The Product-Market Fit Game, Burning money on paid ads for a dev tool, and How to design your company for speed.
  2. High intent SEO comparisons: Articles for people actively considering PostHog, or searching for a product like ours. Examples include comparisons between PostHog and competing products, guides on the best alternatives to popular tools, and guides to the most popular tools in our segments.
  3. Helpful evergreen guides: Articles on topics of interest to our users and potential users. They generally target popular search terms. Examples include How to measure product-market fit, The AARRR pirate funnel explained, and 8 annoying A/B testing mistakes every engineer should know.
  4. Engineering tutorials: Guides on how to do specific things in PostHog. These can be for existing PostHog users, or aimed at potential users trying to solve a specific problem. Some, like How to set up Python A/B testing, are SEO focused. Others focus on specific PostHog user pain points.
  5. Newsletters: Our newsletter, Product for Engineers, is both a distribution channel and its own content category. Issues often curate or summarize our existing content, or that of others, into an easy-to-digest, snackable format.

How we work

We work autonomously. You don't need permission or approval to write something or make a change. You're the driver.

It is often helpful to share ideas in our #content-ideas Slack channel or as a GitHub issue. The GitHub issue template provides a structure to help you think through your idea.

You can ask for feedback whenever, but it's often better when it's:

  1. Clear what feedback you're looking for. Ask if specific points or examples work, how you could word something better, where do you get bored, etc.
  2. You have something solid to give feedback on. Again, pull requests are better than issues.

For specific details about writing, see our style guide. When you're ready to get writing reviewed, create a pull request for your Markdown file(s) (*.md or *.mdx) in the posthog.com repo. See developing the website for more.

Once you've gone through the pull request checklist and got an approval from the relevant person on the content team, you're ready to merge (aka publish).

Content distribution

So you've written a great piece of content. Now what? Here are various ways to spread the word:

1. SEO

If we can capture search traffic, we should try to do it. Start by identifying the keywords most relevant to your article, and aim for a mix of short-tail terms (broad, high-volume) and long-tail ones (specific, lower-volume but higher intent). Use them naturally throughout the piece – especially in headings, intros, and anchor text – and make sure your target keyword appears in the meta title and meta description.

Good SEO doesn’t just help your content rank in search, it also improves your chances of being cited by LLMs (aka AEO).

Follow our SEO best practices guide for more on structure, formatting, and linking.

2. LinkedIn

Share a post using either your own account or the company account, but note that the company account will have dramatically less reach than your personal one. To post using the company account, use Buffer (ask Andy Vandervell to add you to it if you don't have access).

See our LinkedIn posting advice for more.

3. Twitter / X

Again, use Buffer to post from the company account.

Tips for writing a good post:

4. Share internally

Internal teams, especially sales, CS, and the relevant product team, can often make use of the content you write if they know about it. They can share it with customers and use the ideas and examples in their conversations.

It's worth sharing in their Slack channels directly as they don't see everything we publish. Asking them to smash the like button, subscribe, and share with their friends and family is a good tactic too.

5. Paid ads + newsletters

You can promote your post by buying sponsored slots in newsletters. Ian Vanagas has a list of newsletters and booked slots we can use to promote content. See sponsorships for more.

If you want to run a paid ad campaign on Reddit, Google, or Twitter, see the paid ads page.

It's a good idea to create an issue highlighting what you'd like to achieve in your campaign. Here's an example.

LinkedIn

Content | Source: https://posthog.com/handbook/content/linkedin

Yes, I realize LinkedIn has a bad reputation, but it’s a popular channel with our ICP, important for recruiting, and our posts often do well there. We're posting more there, so here's what we've learned about doing it well so far.

Advice on LinkedIn posting

Thank you to Lucas Faria for many of these tips.

LinkedIn posters and their newsletters

A primary way we use LinkedIn is to promote our newsletter, so here are some examples of people doing the same:

Writing metadata

Content | Source: https://posthog.com/handbook/content/metadata

Every piece of writing we do has metadata included in it.

URLs and content folders

The URL is defined by the folder it's placed in and the filename of the Markdown file – e.g. a post in the founders folder with the filename i-love-hedgehogs.md would have the URL /founders/i-love-hedgehogs.

Folders also decide _where_ on the website articles appear.

The main folders are:

Important: Some articles can rightfully belong in both the founder hub _and_ the product engineers hub. In this case, choose the most appropriate hub folder and then add the crosspost: field to your frontmatter so it appears in both. So, add crosspost: product-engineers to post a founder's hub article in product engineers as well, and vice versa. You can also add tags from either hub like normal.
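
For instance, a founders hub article crossposted to the product engineers hub might use frontmatter like this (illustrative placeholder values only):

```yaml
---
title: "Your headline here"
date: YYYY-MM-DD
author: ["your-name"]
crosspost: product-engineers  # also lists this post in the product engineers hub
tags:
  - Content tag here
---
```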

Frontmatter

This is the default frontmatter for most posts:

```yaml
---
title: "Your headline here"
date: YYYY-MM-DD
author: ["your-name"]
featuredImage: IMAGE URL
featuredImageType: full
tags:
  - Content tag here
  - Content tag here
  - Content tag here
  - Content tag here
  - Content tag here
---
```

The frontmatter for tutorials is similar, but they don't require a featured image:

```yaml
---
title: "Your headline here"
date: YYYY-MM-DD
author: ["your-name"]
tags:
  - Content tag here
  - Content tag here
---
```

Note: Each handle in the author field must match a handle in the authors.json file. If you're a first-time author, add yourself to the authors.json data file using this format:

```json
{
  "handle": "your-handle", // This is what you'll use in the author field
  "name": "Your name",
  "role": "Your role at PostHog",
  "link_type": "linkedin", // Or "twitter", "github", etc.
  "link_url": "https://www.linkedin.com/in/yourprofile",
  "profile_id": 12345 // Your community profile ID from posthog.com/community/profiles/[ID]
}
```

Tags

Below is a complete list of tags, organized by section.

You can use tags from the Founder's hub in product engineer posts, and vice versa, if you're crossposting the article.

Founder's hub

Product engineer's hub

Blog

Guides & tutorials

Note, there are other tags we've used in the past here, but they're largely optional.

Creating new tags

Creating a new tag is as simple as adding the text to a post – it also means typos can generate new tag pages, so please be observant.

It's best to avoid a proliferation of tags, so please raise an issue before creating a new one.

Newsletter ads

Content | Source: https://posthog.com/handbook/content/newsletter-ads

We promote our newsletter across a variety of different channels. This page covers the paid options.

Budget

Budget plans can be viewed in this spreadsheet.

Uploading emails to Substack

Annoyingly, Substack doesn't have an API, so we have to manually upload the emails we capture via our website, InstantForm ads, and most other paid marketing channels. Andy Vandervell does this once a week.

The emails are sent to PostHog via an event called newsletter_subscribed, and forwarded to the #newsletter-sub-alerts channel in Slack.

If you wish to avoid doing this, an alternative is to send users directly to our Substack so they can sign up directly there. The drawback is that we're unable to send conversion events to 3rd parties (e.g. Meta, Reddit), so their algorithms won't know if their targeting is working (and we cannot send them the emails of people who signed up, because privacy).

You can still track how campaigns on Substack perform by using this loophole Ian Vanagas found:

  1. Sign up for Substack with another email (lior+refer@substack.com or something, can create multiple).
  2. Subscribe to Product for Engineers.
  3. Go to https://newsletter.posthog.com/leaderboard and get your ref code/link. You can add the ?r=1tb4kk bit to any newsletter link and it will track who signs up using it.
  4. See results here.

Meta Ads

In Q2+Q3 2025, we're testing Meta ads as a way to increase newsletter subscribers. You can view our ads in Ads Manager.

For access to the Ads Manager, please contact Lior Neu-ner or Brian Young.

We do not have Meta's pixel installed, as we do not allow any third-party cookies on our site. For tracking conversions, we use Meta's Conversions API via the PostHog destination.

🚨 Important: We must be extremely careful not to send any personally identifiable information to Meta, such as name or email. We should only include the fbclid parameter and the client_user_agent.

Our ad creative can be accessed in Figma.

This issue has some information on learnings from previous ad campaigns.

Instant Form ads

Meta has something called InstantForm ads, which enable users to sign up to our newsletter directly in the FB+IG apps without needing to open our website. Facebook then sends us these emails via Zapier.

We're not running paid placements anymore due to the high cost per conversion. More details are in Slack.

As mentioned above, Substack's attribution sucks. Historically, we created a custom link for each campaign using Dub.co and calculated cost per click to measure success. However, we should now be able to track signups using the leaderboard + referral code workaround mentioned above.

We generally prefer a pay-per-sub model, which performs better and is easier to track. This issue outlines our current partnerships with other newsletters as of June 2025.

We look for newsletters that focus on software development and engineering. We don't care about list size or reach as much as we care about clickthrough rate (you can ask for their average CTR). Some we like working with and sponsoring include:

Smaller newsletters that we also have supported:

Newsletter sponsorship content

Titles that work well include:

The main copy is some variation of:

Product for Engineers is PostHog's newsletter dedicated to helping engineers improve their product skills. Learn how to talk to users, build new features users love, and find product market fit. Subscribe for free to get curated advice on building great products, lessons (and mistakes) from building PostHog, and deep dives into the strategies of top startups.

We have also found that linking to an article directly converts better than just a generic "subscribe to our newsletter" link.

If you need images, there is a collection of many sizes of them in Figma.

LinkedIn and Reddit Ads

We tried to run LinkedIn and Reddit ads for the newsletter but both were unsuccessful. Here's what we found:

Tips for new writers

Content | Source: https://posthog.com/handbook/content/newsletter-tips

PostHog has unusually high editorial standards (especially for a B2B SaaS company). When I first started writing here, I struggled quite a bit. The feedback was good, but it was a lot, and for a while I couldn't tell if I was actually getting better or just spinning my wheels.

This handbook page is all the stuff I would tell myself from back then if I could. I wrote it for any future writers who join our editorial team. (It might also be useful for anyone trying to understand our unique developer content marketing and style more deeply.) It won't make the learning curve disappear, but hopefully makes it easier!

Before you write anything

Remember that you are new. Learning how to do any creative skill in a specific style takes time. Even the most talented writer in the world would make mistakes adopting PostHog's voice. As the wise old saying goes, "sucking at something is the first step towards being sorta good at something."

Don't compare your ramp-up speed to peers or people who've been here longer. Comparison is almost meaningless because everyone here has wildly different backgrounds. It's why you were hired; it makes us better as a team. You have your own strengths and experiences, so make use of them.

While things are still relatively chill early on, read and absorb as much PostHog content as you can: blogs, newsletters, the handbook. Fix typos or update things as you go. Small PRs are genuinely appreciated and noticed.

Bonus tip: Keep a daily work journal. The random thoughts, questions, and observations you have as you're onboarding will make for great material later on. Even this guide is based on my notes from onboarding.

---

Timeline expectations for your first newsletter

A lot of people underestimate how much work it takes to write a genuinely good newsletter.

Experienced writers (i.e., people who've been doing this at PostHog for years) take about 2 weeks end-to-end to ship a great newsletter. That includes outlining, drafting, and 2–3 rounds of review to get to the finished piece (plus time juggling other projects in between).

As someone new, expect it to take longer. My first newsletter took me about 4–6 weeks:

At the end, I didn't personally feel like my first newsletter was amazing, even though I was told it was very solid. Like I said earlier, remember it takes time to adapt to a new style for creative work.

To keep from going insane, have 1–2 smaller shippable projects running alongside the newsletter. In my first month, I tunneled on just the newsletter because it felt like The One Thing I had to do to prove to myself I could do this. In hindsight, putting all my confidence eggs in one basket added a lot of unnecessary pressure and honestly dampened my creativity. A blog refresh, a smaller SEO post, or some handbook edits in parallel give you breathing room. The early wins and visibility on the team are a real bonus, too.

---

The #1 guiding principle when writing here

Make it second nature to constantly ask yourself: "What do I want the reader's reaction to this piece to be?"

For example: "I want to challenge their assumptions and make them feel surprised."

Charles' Collaboration sucks article is a great example. It was right on the edge of clickbaity, enough for someone to comment "I was expecting to be annoyed but then I read it and was like, okay."

That's the goal. This one question should drive your title, your headings, your tone, your pacing — everything, really. A "How to do X" title can almost always be turned into something that makes a person feel something.

This matters most for newsletters, but it applies to everything we write. Our distinguishing factor is that we always have an opinion, a flair, a point of view. Without that, we'd become so bland, so fast.

---

Coming up with ideas

Ideas come from anyone and anywhere. A lot of times, conversations that happen in all-hands or Slack can be the inspiration for a blog. Basically, whatever you think might be interesting is game – you were hired as a developer who likes writing, so you have the advantage of already somewhat knowing our ICP's interests.

Even when you are new, don't let content ideas live inside your head. Turn them into a GitHub issue as soon as possible. Before you commit to writing something (including your first newsletter), have 2–3 issues with some preliminary research already done so you can make a more informed choice about what to actually write.

Not all ideas are good. Many ideas will die, fizzle out, or get picked up again later.

---

How to write for our readers

Writing nonfiction is a user-centered design problem. You have to start with: who is this for?

Our newsletter has three main reader groups:

Every article should naturally appeal to at least one of these groups — ideally all three, but that's not always possible.

Once you know your reader, make sure the intro and headline appeal to them immediately. The hook should answer: why should an engineer care?

Don't narrow the audience too much, but don't be so broad that you capture nobody. And note that the most compelling hook for your audience might not be the most interesting one for you to write and that's okay.

For example, when I was writing my first newsletter, 10x job posts for 10x engineers, I first kept gravitating toward hooks like "recruiting is so hard" or "we've all read boring job ads." Those were fun and interesting to write, but none of them were actually that targeted for any of the three audiences.

I realized that the best audience was technical founders who want to hire great talent early on, so I ended up opening with "your company is only as good as your people". It's a line that's been said a million times and I personally found a little dull. But it was highly effective for technical founders.

(Think of choosing a hook like choosing a character in a fighting game: sometimes your second-highest DPS character is the best pick because they do physical damage, and the enemy has a lot of magic resistance.)

You can also angle for one group first but weave in relevance for others. For example, the 10x job posts piece was aimed at founders, but by telling them what to look for when hiring great engineers, it was also implicitly signaling to the product engineers and SWEs reading it what traits they should aspire to have themselves while job searching and interviewing.

A few smaller principles worth keeping in mind:

---

How to actually write a newsletter

The biggest lesson I learned was that writing a newsletter is 80% research, 20% writing.

What sets PostHog content apart is that we actually do the work. We don't just say what we think, we actually go and find out.

In other words, we gather evidence – usually in the form of (1) real-world examples or (2) first-hand experience – to establish credibility.

Real-world examples are observed data from other companies, blogs, and people — and this is what you'll lean on most as a writer. Research examples first. They don't just support your outline; they are the foundation of it. You might have a strong opinion and feel certain it's right, but you don't actually know until you go find out.

For example, for the 10x job posts newsletter, the "data" I used to develop my opinion were literally job posts from other companies.

You should include real examples even during your outline phase because without them, you don't actually have an informed opinion to build on yet.

Where to find good examples:

Avoid using other blog posts as your primary source material. Basic digital literacy. "I saw it on the internet" or "I made that shit up" is not a valid source.

First-hand experience is more like "things we've learned at PostHog." Charles can write a piece that's mostly just his perspective because he has the experience and credibility as an exec — he'll almost always open with a personal anecdote or something from PostHog's history to establish that.

As a writer, you probably don't have that kind of experience to lean on yet, but you can do this too by framing things as "what we've learned at PostHog". I do this by reading all past PostHog content on a topic before I start writing, and then pulling out the real internal perspectives and examples. (Conveniently, you can save those to put in as internal links later!)

---

When your writing doesn't feel good

A useful gut-check: would someone who knows a lot about this topic share this with someone else? Worth noting that this person might not be the same as your target reader. For the 10x job posts piece aimed at technical founders, the person who'd actually share it is more like a seasoned recruiter or someone on a talent team.

If the answer is no, here's a quick diagnostic list:

---

When you hit writer's block

Writer's block is real and it will happen. It usually isn't actually about the writing — for me it's almost always anxiety or self-doubt in disguise. Things that helped me:

---

Things click eventually, but might just take longer than you'd expect. As always, don't hesitate to ask for help and feedback from the rest of the editorial team along the way. We want you to succeed!

– Jina Yoon

Newsletter

Content | Source: https://posthog.com/handbook/content/newsletter

Our newsletter is called Product for Engineers. It's owned by Andy.

The newsletter is sent and managed via Substack. We put together a planning issue for each installment; one person writes it and Andy edits and publishes it.

The newsletter is long-form, original copy, often based on blog posts we already wrote. It focuses on product and business lessons and information for engineers.

We run ads to drive subscriptions for this newsletter.

Art from previous newsletters is in Figma and diagrams are in FigJam.

How to write a good newsletter

These aren’t rules, just things that have worked well in the past. They provide some guidance on writing a successful newsletter.

Topic

Title

Intro

Structure

Style & tone

Publishing details

Style guide

Content | Source: https://posthog.com/handbook/content/posthog-style-guide

This style guide explains our guidelines for contributions to PostHog's documentation, tutorials, and blog.

Be sure to familiarize yourself with our library of MDX components that are supported in Markdown to make your article more scannable and engaging.

General principles

Assume almost nothing

As you gain mastery of a product or feature, some things become second nature, but remember they weren't always so obvious. Call these out, and provide links to relevant docs or websites.

Make it easy for your reader to implement their feature or solve their issue, whether they are an expert or just starting out with PostHog.

Get to the point

If you're explaining something, don't wait three paragraphs to do so. Start with the explanation and expand later. Almost all articles can be improved by shortening (or removing) the intro.

Don't be boring.

Make it easy to read

Most readers will scan a page before committing to reading it. They're looking for signs of quality, and signs it'll answer their question(s).

Use clear headings, diagrams and tables to demonstrate thoroughness.

Avoid hedging

We are opinionated at PostHog. That means avoiding hedging like saying "it's complicated" or "it depends." This is frustrating for the reader and doesn't add value. Instead:

  1. Have an opinion.
  2. Provide an example.
  3. Do the research until you can do 1. or 2.

Style rules

Use American English

PostHog is a global company. Our team and our customers are distributed around the world. For consistency, we use American English spelling, grammar, date, and time formatting.

Use sentence case for titles

Write "Documentation style guide", not "Documentation Style Guide" and "PostHog has product analytics and session replay apps", not "PostHog has Product Analytics and Session Replay apps".

But...

Capitalize product names and proper nouns as appropriate

When using a product's name, capitalize it as a proper noun, like: "PostHog's second product was Session Replay." When referring to the general industry term while _not_ referencing a product name, you'd use it lowercase, like: "how many companies now offer product analytics."

Capitalize acronyms and define where needed

Write "URLs", not "urls".

Many acronyms, like that one, will be familiar to developers. When in doubt, link the first use of an acronym to a definition, or provide one.

Use the Oxford comma

Write "bananas, apples, and oranges", not "bananas, apples and oranges".

Why does this matter? Consider the old joke:

"There are two hard problems in computer science: naming things, cache invalidation, and off by one errors."

That doesn't work without the Oxford.

Use "enable", not "allow"

Allow is another way of saying permit.

Example: Your partner allows you to stay up late and play video games.

Enable means providing the means or opportunity.

Example: PostHog enables you to understand user behavior.

In most cases, PostHog _enables_ users to do things.

Add extra line breaks between long bullet points

Sections with long bullet point items are hard to read in raw Markdown without extra line breaks between them. Adding a blank line between long items doesn't change how the list renders, but it makes the source much easier to read. This isn't necessary for shorter bullet-point lists.
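
A minimal illustration of the two forms (hypothetical list items):

```markdown
<!-- Harder to read in source: long items packed together -->
- First long bullet point item that runs on for several lines and covers a lot of ground before finally ending.
- Second long bullet point item that also runs on for several lines.

<!-- Easier to read in source: a blank line between long items -->
- First long bullet point item that runs on for several lines and covers a lot of ground before finally ending.

- Second long bullet point item that also runs on for several lines.
```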

Use straight apostrophes and quote marks

Many writing tools, such as Google Docs, Notion, and Word, add curly quotes and apostrophes. Please avoid using these. They can normally be turned off in the settings.

"Open source" vs "open-source"

Both can be correct depending on usage.

Open source should be hyphenated when it appears before a noun.

Example: "The open-source community is awesome"

But should be written without a hyphen in other contexts.

Example: "PostHog loves being open source."

Use British-style en dashes

While we default to American English in most things, we prefer using the British-style en dash ( – ) with a space on either side rather than the longer em dash with no spaces (—) used in American English.

Example: "Don’t up vote your own content, and don’t ask other people to – post it and pray."

Please don't use a hyphen instead of en dash. On Macs, holding down Option and the hyphen key will give you an en dash.

<strong>A short public service announcement from Andy Vandervell:</strong>

As an editor, readability / aesthetics are more important to me than following grammar and style rules to the letter. British-style en dashes are a case in point.

Don't get me started on using hyphens instead (like - this) – that's just wrong. Here's that last sentence with an em dash instead... "Don't get me started on using hyphens instead (like - this)—that's just wrong". Doesn't that em dash look cramped and nasty?

Honestly, though, I don't care that much, but I will find and replace every em dash and orphaned hyphen on the website. It's fine. It's not a big deal. I'm cool about it.

Adding media

Images, gifs, and short videos

Most media for your article should be uploaded to Cloudinary (under 20 MB).

You can do this from posthog.com by signing in, clicking on your avatar in the top right, then clicking Upload media in the dropdown menu (available to moderators only).

Our uploader supports images, gifs, mp4 and mov, PDFs, and SVGs.

Image: Upload

Copy the link and paste it where you want the image or movie to appear in your file. A max of 1600px is usually good, as this is double the typical display width of an article. Using an image twice the size of the display resolution will make screenshots look crisp on hi-DPI/Retina screens.

Use the orig (optimized) size when adding a featuredImage to an article in Markdown frontmatter, as Cloudinary's resize strategy isn't supported by our Markdown parser.

See more details in the uploading assets with Cloudinary handbook page.

There are MDX components available for embedding images or gifs, and for [videos](/handbook/engineering/posthog-com/markdown#product-videos).

Videos

Short videos (like screen recordings) should be uploaded to Cloudinary.

There are two other places we host videos:

YouTube embeds

When embedding YouTube videos, use YouTube's iframe embed code with the "Enabled privacy-enhanced mode" box ticked. This ensures Google doesn't drop a cookie on our website. You'll know it's enabled if the code includes "https://www.youtube-nocookie.com" in the URL. Also add the allowfullscreen attribute to the iframe so users have the option to watch the video in fullscreen (useful for reading code snippets).
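
A minimal sketch of what such an embed looks like, where VIDEO_ID is a placeholder for the actual video:

```html
<!-- Privacy-enhanced embed: note the youtube-nocookie.com domain
     and the allowfullscreen attribute -->
<iframe
  width="560"
  height="315"
  src="https://www.youtube-nocookie.com/embed/VIDEO_ID"
  title="YouTube video player"
  frameborder="0"
  allowfullscreen>
</iframe>
```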

Wistia

Cory Watilo or Jordo Dibb can upload videos to Wistia. It's best to also have a thumbnail image which can be uploaded to Wistia as well. Videos can be embedded on the site using our <a href="/handbook/engineering/posthog-com/markdown#embedding-wistia-videos">Wistia component</a>.

Best practices for images and videos

Technical and docs writing

See our docs style guide.

Screen recording guide

Content | Source: https://posthog.com/handbook/content/screen-recording-guide

If you plan on recording a demo, a screen share, or the PostHog UI for use on the PostHog website and/or YouTube channel, the PostHog YouTube team kindly asks you to watch the video below and follow the corresponding instructions for your recordings.

Important! While Loom is great for personal-use videos, it does not meet our quality standards for videos that will be going on the PostHog website and/or YouTube channel. Please use Screen Studio for such recordings; the steps are listed below.

Feel free to ask any questions in the #team-youtube Slack channel.

Video:

You like cookies? Then watch this video. Plus, you’ll learn about how to properly set up Screen Studio and your recording area aspect ratio for PostHog videos:

<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/UCsfwjlcBbc" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>

1. Download Screen Studio

By default, Screen Studio is free and there’s no need to upgrade to a paid plan.

2. Set Recording Settings

Before recording, make sure you correctly set up the three Screen Studio settings listed below:

Important! Set the “Max Camera Resolution” setting to 4K or the highest available option. If you’d like to record your camera but don’t want to see yourself as you record, enable the “Hide camera preview” option. If you don’t wish to record yourself, then select “Don’t record camera”.

3. Set Up 16:9 recording area

You’ll need to ensure your recording area is a 16:9 aspect ratio. Most displays are not 16:9 by default. If yours is not or you are unsure, follow the instructions below:

Do not record:

4. Prepare to record

Run through this checklist before recording to make sure things go as smoothly as possible:

5. Do a final test

Before you start your actual recording, run through these quick steps to ensure everything is set to go:

6. Record

Now simply record and do your thang! Use the Pause option to take breaks, answer the doorbell, compose yourself, etc. Re-enable when you’re ready, or use the restart function if you want to give it a fresh go.

Don’t be afraid to start lines over if you mess up or say “Cut”. Our post team loves direction vs having to assume things, so feel free to give us direction in the recording, do multiple takes, etc.

Click the red record icon to finish recording.

7. Save as a Screen Studio Project

Once you’re done recording, it’ll either open the recording up into a new editable project or you’ll see the “Edit” option in the preview box, which you should click.

Now simply go up to the “File” menu and select “Save as...”.

Use your name or the project’s name for the file name, and ensure the file ends in the .screenstudio file extension.

Drag that file into the “ScreenStudio Projects” Google Drive and ping the YouTube team/member in the corresponding GitHub issue.

Important! If your recording contains any customer-sensitive information that needs to be blurred or removed, please leave exact timestamps of where the information appears in the recording project. Please triple check your work, as our post team will not always be able to catch this on our own.

And voila! You’re done! Thanks for following these steps, and feel free to ask any questions in the #team-youtube Slack channel.

SEO best practices

Content | Source: https://posthog.com/handbook/content/seo-guide

General principles

1. Start with search intent

Don't obsess over exact-match keywords. Instead, ask: What is the searcher really trying to accomplish? For example, a user searching "difference between retention rate and churn" will likely also benefit from actionable insight on improving customer retention, not just definitions.

We craft our content to address those underlying needs. Cover the main topic thoroughly, include related sub-questions and themes, and anticipate next steps a reader might take.

Here’s what that might look like for "Retention rate vs churn":

2. Make it easy to digest

When answering a question, lead with the answer first, then expand with supporting details. This helps impatient readers and aligns with how AI tools select responses.

We keep our structure simple and scannable:

3. Headlines matter

Our headlines are the front door to our content – they’re what convince someone (or an AI) to pick us. They should stand out in search results, be enticing enough to click, and still make it clear what the page is about.

We should be bold, creative, and opinionated – but not so clever that we lose relevance. If every result has a nearly identical headline, we win by being different, but if we get too abstract or too witty, we risk missing the actual query intent and dropping out of search entirely.

Quick rule of thumb: If it sounds like every other search result, sharpen it. If it sounds clever but hides what the article’s about, clarify it.

4. Demonstrate expertise and authority

The internet has never been so full of words. The friction for content creation has dropped to nearly zero (thanks, ChatGPT), which means the bar for quality has shot up. The only way to win attention is to raise the bar: create content that actually teaches, clarifies, and adds something new.

Establish yourself (and PostHog) as a subject matter expert. We do this by:

5. Be conversational

Our tone is friendly, focused, and human – especially now that voice search and AI chat engines are shaping how people consume information. Content that sounds natural and answers questions simply is more likely to show up in featured snippets, People Also Ask boxes, and AI overviews.

That said, conversational doesn’t mean rambling. Stay on topic and be clear and direct. Think of how you’d explain the topic if speaking to a colleague – friendly but focused.

A more dialog-like tone can also help capture featured snippets or People Also Ask boxes, as the content directly addresses how users phrase questions.

Bad Q&A example

- Heading: Strategies for reducing customer attrition

- Body copy: Customer attrition is a key challenge for many businesses and must be addressed with a comprehensive set of initiatives. Companies should consider improving their product offering, implementing proactive customer success programs, and monitoring engagement metrics over time.

Good Q&A example

- Heading: How do we reduce churn?

- Body copy: Start by identifying where customers are dropping off – look at cancellation reasons, churn cohorts, and feedback surveys. Then tackle the biggest issues first, like onboarding problems or missing features. Even small fixes (e.g. a clearer onboarding flow) can reduce churn quickly. Follow-up questions we could answer: What’s a “good” churn rate for SaaS? What metrics should we track to spot churn early? How do we measure if our retention efforts are working? How can we build a feedback survey?

6. Don’t put all our eggs in one keyword basket

Good SEO articles always target more than one search term. While you may start with a core query (or prompt) in mind, remember there are always multiple ways to search for the same information. Sometimes it's better to target a similar but lower volume search term than the big obvious one.

For example, the parent search term "user persona" (27,000/mo) has numerous derivations:

We target clusters of intent, not just one keyword. Long-tail variations are often easier wins and build topical relevance. Over time, our page can rank for multiple terms and even capture the broad head term as authority grows.

7. Write for our ICP

The more specific we make our content, the more likely it is to resonate – and perform. This matters more than ever with AI-driven search and tools like ChatGPT's Deep Research, which don’t just answer the initial query but often fan out into follow-up questions and related recommendations.

For example, a generic “Best session replay tools” list might compete with thousands of others. But “Best open source session replay tools for startups” positions us as the exact match for a highly qualified search.

When we write, we should ask ourselves:

8. Updates work / are important

Publishing a great article is not the end of the story. SEO is an ongoing process, and one of the best ways to maintain or boost rankings is to keep content up-to-date.

How often this should happen is very subjective, but the more traffic a page gets the more often it should be updated. When updating, don’t just change a few words or the date; search engines are smart about detecting meaningful updates versus superficial ones. Add genuinely valuable content: new stats, a new tip, clearer structure, recent developments, etc. And if your last update was a while ago, consider adding an "Updated on [Date]" notice to show readers (and Google) that the page is maintained.

Likewise, updating and improving a page that isn't ranking is often the best way to get it to rank successfully. Just because something didn't rank on the first attempt doesn't mean it never will.

9. Internal linking isn't optional

Internal linking is a vital part of successful SEO. It helps Google find our content and understand how pages relate to each other. It can also help prevent internal conflicts (where Google is unsure which article to list for a term) by signaling to Google what specific term we think a page should rank for.

Here are some best practices for internal linking:

10. Optimize for LLMs

We’re no longer just writing for Google – we’re writing for the answer engines too. ChatGPT, Perplexity, Claude, and Google’s AI Overviews are pulling from our content to build answers. To win those spots, we need to make our pages easy to retrieve, easy to quote, and obviously authoritative.

The goal is to make our content the easiest, clearest, most trustworthy answer in the room, for both humans and machines. How we do that:

11. Steelman competitors

Many other companies "straw man" their competitors. They claim their competitors are worse than reality, focus on differences that don't matter, and make hyperbolic claims about how much better they are. We don't do this.

When writing about competitors, be honest about their capabilities. Assume they are reading and will dunk on you for being dishonest. PostHog may not have all the features competitors have today, that's okay. Our reputation and trust with readers is more important than whatever "marketing win" being dishonest gives us.

It's also okay to make mistakes here. Competitors change faster than we can keep up. Whenever we find a mistake, we fix it as soon as we realize. We also happily accept updates from competitors if they make our post more accurate.

Additional tips

Good metadata is like a handshake – it’s the first impression users (and AI tools) get before they ever see the page. Well-crafted titles and descriptions can improve click-through rates and help AI engines understand context.

Quick metadata checklist:

Useful SEO tools

We use and recommend the following tools to all writers.

Ahrefs

Ahrefs is an all-in-one tool. It's useful for:

Keywords Everywhere

Keywords Everywhere is a very useful Chrome extension that adds keyword research context to Google searches and other popular SEO tools. It's a great way to do quick bits of keyword research and find related terms.

It's only ~$15 annually.

Google Search Console

While the data is somewhat sampled, Search Console is a useful tool for analyzing the top-level numbers, or specific pages. Especially useful for seeing exactly which search terms are driving traffic to a particular page – sometimes the results will surprise you.

Mangools Google SERP Simulator

A free tool that lets you test how your headline will look in Google search results. This is useful for seeing:

  1. Whether Google will clip the headline because it's too long – Google has a 600px width limit on headlines.
  2. How your headline compares to other results – ideally we want headlines that stand out and are more enticing than the rest.
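A simulator like this is the reliable way to check, but you can also approximate the check yourself. The sketch below estimates a headline's rendered width against the 600px limit mentioned above; the per-character pixel widths are rough assumptions, not Google's actual font metrics, so treat results near the limit as "go check the simulator".

```python
# Rough heuristic for the 600px title clipping check. The per-character
# widths below are assumptions for illustration, not real font metrics.

NARROW = set("iljtf.,'!|: ")   # characters that tend to render narrow
WIDE = set("mwMW@")            # characters that tend to render wide

def estimated_width_px(headline: str) -> int:
    """Very rough pixel-width estimate for a Google result title."""
    total = 0
    for ch in headline:
        if ch in NARROW:
            total += 5
        elif ch in WIDE:
            total += 15
        elif ch.isupper():
            total += 12
        else:
            total += 9
    return total

def risks_clipping(headline: str, limit_px: int = 600) -> bool:
    return estimated_width_px(headline) > limit_px

short = "The best GA4 alternatives"
long_title = ("The absolutely definitive, comprehensive guide to every "
              "single GA4 alternative available today")
# The short headline fits comfortably; the long one almost certainly clips.
```

In practice a title under roughly 60 characters usually fits; anything longer is worth running through the simulator.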

AlsoAsked

A useful little tool with a decent free tier – 3 searches per day. It generates "people also asked" questions based on search terms.

It's useful for deciding what subheadings to include in articles, though exact matches aren't really necessary.

YouTube

Content | Source: https://posthog.com/handbook/content/youtube

See also how we do video at PostHog.

We experimented with YouTube from November 2022 to July 2023, but have paused creation and publishing for now. We may try again in the future.

Although videos were driving X00s of views each (some hit X000s), and we received some positive feedback, we didn't see an increase in signups, traffic, or mentions from the videos. For example, the video on why and how we use GitHub as our CMS got 3,000 views in one week, but made no noticeable impact on signups.

We also were starting to run out of obvious tutorial and SEO blog content to turn into videos. Basically, we ran out of low-hanging fruit. New videos would have taken increasing amounts of time.

Learnings

Types of videos we made

  1. Tutorials like How to bootstrap feature flags
  2. SEO-ish content like The best GA4 alternatives for apps and websites
  3. "Essay" videos like The modern data stack sucks

YouTube comments

YouTube comments are posted to Slack using Make. It's a tool similar to Zapier, except Zapier doesn't support YouTube comments.
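Make handles this without code, but for illustration here's a hypothetical sketch of the transformation step it performs: reshaping a comment thread (as returned by the YouTube Data API's `commentThreads` endpoint) into a Slack incoming-webhook payload. The field names follow the public YouTube API; the target channel name is an assumption.

```python
# Hypothetical sketch of the Make scenario's transformation step:
# YouTube Data API commentThreads item -> Slack webhook payload.

def comment_to_slack_payload(thread: dict) -> dict:
    # Top-level comment fields per the YouTube Data API resource shape.
    snippet = thread["snippet"]["topLevelComment"]["snippet"]
    text = (
        f"New YouTube comment on *{snippet['videoId']}* "
        f"from {snippet['authorDisplayName']}:\n> {snippet['textDisplay']}"
    )
    return {"channel": "#team-youtube", "text": text}  # channel is assumed

example = {
    "snippet": {
        "topLevelComment": {
            "snippet": {
                "videoId": "abc123",
                "authorDisplayName": "Max Hedgehog",
                "textDisplay": "Great tutorial, thanks!",
            }
        }
    }
}
```

The actual Make scenario does the polling, deduplication, and delivery; this only shows the shape of the data passing through.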

For access to Make, ask anyone in the . They're all admins and so they can add you.

Thumbnails

Thumbnails can be accessed in Figma.

Common churn reasons

CS and Onboarding | Source: https://posthog.com/handbook/cs-and-onboarding/churn-reasons

There are a number of recurring themes on why a customer might churn. Below is a non-comprehensive list of reasons we've encountered and some ideas around how we could mitigate churn risk in each scenario where applicable.

Situations where we have more control

Champion is gone

When a champion leaves, the impact depends on the size of the company and, more importantly, the number of active users in the account – it can be major if PostHog was used primarily by the champion. The best way to combat this risk is to increase PostHog's footprint across the organization: more users adopting PostHog, more teams using it, and more products adopted by the company. The more people using PostHog, the less risk a departing champion poses.

We can create value by engaging with different team members, building relationships with more than one champion at a given company, and helping deliver value across different teams, decreasing the risk that a single champion's departure has a significant impact.

Champion was not key decision maker

Sometimes we connect and build a great relationship with a champion who truly loves PostHog, but they are not a key decision maker in their org. This can be great for building a relationship and getting updates, but tough for ensuring there is influence over how PostHog is adopted at their company. In situations like this, try to leverage the contacts we do have to build relationships with key decision makers at the org. Note that a decision maker is not necessarily in a leadership or management role – it just means they have the capacity to make decisions and influence PostHog's adoption within the org.

Customer replaces PostHog (whether internally or with a competitor)

Customers may churn for a number of reasons; some examples are:

Whatever the reason may be, the best way to combat churn in this situation is to increase the number of products a customer is using and the overall value the customer gets from having all their data live in one app. It doesn't necessarily prevent this from happening completely, but it does help decrease the risk.

Ideally we try to resolve the situation while customers are considering alternatives, and advocate for changes we think could decrease churn risk. Where that isn't possible, having customers adopt PostHog's other product offerings may mean they churn from a specific product rather than as a customer entirely.

Customer experience has been poor

If you notice there has been a pattern where a customer has really struggled to get help or quick responses in the past, or if they've communicated this openly in your discussions, it is vital we turn this impression around by staying on top of things for the customer moving forward.

When there are opportunities to help a customer, provide the solution where possible, and explain what we did so they have a clear understanding of the fix and how they can solve it themselves next time.

In situations where it requires us to advocate for feature requests, follow up on bug fixes, or stay on top of something for the customer, it is incredibly helpful to be proactive and frequently circle back to the customer to keep them up to date when possible rather than wait for them to follow up again.

Staying on top of things on behalf of the customer and communicating proactively helps develop a sense of trust, especially when a customer's past poor experience involved a lack of communication and follow-up.

Customer hasn't been able to extract value out of PostHog

If a customer has communicated this with you, offer to work with their team to set them up for success. Make yourself available to understand their team needs, offer to set up regular meetings if they're open to it, and help them get the specific stats that would move the needle for them.

It is also possible their team may not understand how PostHog could be helpful. Offer workshops, training calls, and other things to give them concrete examples of how PostHog can help them accomplish their goals.

Lack of features or speed of delivery for specific needs

If the customer is an ICP fit, and there is risk of churn due to lacking critical features or speed in which we deliver certain results, it might be worth looping in the relevant engineering team and product manager to discuss what our options are for each specific situation that arises from this. Sometimes the key PM will want to jump on a call with the customer to learn about their specific needs.

Openly communicate in the relevant teams channel that this is a churn risk if this feature is something we can't ship. Use this opportunity as a way to help our PMs get direct feedback. We never want to lose a customer because we failed to deliver a key feature they need but these kinds of discussions are helpful to our team to learn what matters to our customers and helps us figure out if we can prioritize them or not.

Lack of trust for using PostHog as source of truth

We've heard the feedback that sometimes customers can't rely on PostHog as a source of truth because the data we collect is at odds with data they see elsewhere. This is a great opportunity to dive deeper on understanding what kind of stats they're seeing, what could be wrong with their implementation, and how we can possibly correct this so they have more trust in their PostHog data.

If a customer is relying on a different source of truth, and possibly moving PostHog data to another external source, it poses a long-term risk that they're not as tied to using PostHog. Fixing this so customers can rely on their PostHog data is important even if it doesn't pose an immediate threat.

Privacy, compliance, or data governance reasons

Some customers require strict privacy, compliance, or data governance controls. In some situations, providing a workable solution may be out of our control, e.g. some customers can't store specific data with third-party services and must keep all data on-prem. It's important we clarify all the data controls customers do have with PostHog, so they can make as informed a decision as possible about where and how PostHog can be used.

PostHog is anonymous by default, and some of our products, such as Session Replay, mask certain data to protect privacy. Some customers may not be aware of this and assume they can't use certain products. Helping them understand what privacy controls are available will make them more confident adopting those products.

Even if we don't control local laws, industry rulings, etc., we can help our customers better understand how to optimize their data collection, mask information, add privacy controls, or follow key compliance practices such as cookieless tracking or GDPR.

As much as we can, we should help customers better understand what they can and can't do with regards to privacy when using PostHog, and what data deletion methods are available.

Situations where we have less control

Customer has been acquired

An acquisition doesn't necessarily pose an immediate risk or mean the customer will churn, but we've seen many cases where a company gets acquired and eventually moves off for any number of reasons. When you learn that a company in your book of business has just been acquired, it's worth working out what risks exist.

Customer ceases operations (for any number of reasons)

This unfortunately is completely out of our control. If a company ceases operations for any number of reasons, there's not much we can do here.

Lack of ICP fit

This is a more recent development and it can be a tough situation. In situations like this, it would be good to understand where we underserved the customer and why it was difficult or wasn't a good fit given they don't match our ICP, and help relay feedback to our team.

Learn from churn

CS and Onboarding | Source: https://posthog.com/handbook/cs-and-onboarding/customer-churn-retros

Churn retros

When a human-managed account churns from PostHog, we share learnings in #customer-churn-retros. The goal is simple: learn from what happened so we can prevent it next time.

Who does this

The CSM or AE who managed the account writes the retro. Post it as soon as possible after the churn (or even when the risk is first surfaced as a possible churn) - while the details are fresh.

What to include

Keep it concise. We're looking for signal, not noise.

Basic info

What we did well

Bullet points. Be specific about what actually worked:

What we could do better

This is the important part. Be honest:

Don't sugarcoat it. If we screwed up, say so.

Product learnings

What did this churn teach us about the product?

Tag relevant product teams if needed.

Process learnings

What do we need to change in how we work?

Example retro

Customer name: HogFlix
ARR at churn: $42,000
Tenure: 14 months
ICP fit: 8/10 - B2B SaaS, 75 employees, solid PMF
Primary reason for churn: Switched to Amplitude due to advanced analytics needs we couldn't meet

What we did well:

What we could do better:

Product learnings:

Process learnings:

---

Tips for writing these

Be direct. This isn't a CYA exercise. If you missed something, own it.

Focus on prevention. Every retro should have at least one concrete "we should change X" takeaway.

Tag people. If product or process changes are needed, @ the relevant teams.

Don't make excuses. "They were never a good fit" isn't helpful. Why did we take them on? What should we have done differently?

Keep it readable. Use bullets. Be concise. Respect everyone's time.

Customer industry segments

CS and Onboarding | Source: https://posthog.com/handbook/cs-and-onboarding/customer-industry-segments

We have thousands of customers in PostHog, many of which are in similar industries. As CSMs, understanding our customers' industries helps us be experts on how PostHog works best for their specific use cases. This page is a resource for collecting and sharing industry-specific vocabulary, important metrics, PostHog best practices, etc., so we can quickly ramp up on an industry and better engage with those customers.

Industry segment list

These segments can change as our customer data evolves, but the following serve as a starting point:

Template for industry playbook

Eventually each industry listed above will be linked to its own playbook detailing its specifics. The following is a template that can be used to create the playbook:

### Description (general overview of what the industry is and the businesses it consists of)
### What they care about (i.e. what is most important to their business success)
### Industry terminology
### Common software used
### Important business metrics and data
    #### Metrics
    #### Data (event taxonomy, person profiles, groups)
### PostHog products they should be using
    #### Product
    	##### Best practices
    	##### Common challenges
    	##### Cross product use cases

Segments in Vitally

Industry segment is a custom account trait in Vitally. You can find and edit your customer's industry on the side panel of their account page as a pinned trait. You can add a value or edit current value directly on the account page or add the industry segment as a column to any custom tables you have in Vitally.

E-commerce playbook

Ecommerce description

Online retail businesses including direct-to-consumer brands, marketplace platforms, and omnichannel retailers selling physical or digital goods through web and mobile.

What they care about

Industry terminology

Common software used

Important business metrics and data

Metrics
Data
Event taxonomy
Person profiles

PostHog products they should be using

Product analytics
Best practices
Common challenges
Cross product use cases

Customer success

CS and Onboarding | Source: https://posthog.com/handbook/cs-and-onboarding/customer-success

This is our playbook for new customer success engagement. These are customers who have been with us for a while and we are ready to establish an ongoing relationship with them.

The core job of a Customer Success Manager (CSM) is to ensure the longevity of the customer by ensuring their overall success and that they are getting the most value out of using PostHog. This may include helping the customer with onboarding, training, support, strategies, cost-saving, and more. Ultimately, the CSM serves as the customer's champion within PostHog, advocating on their behalf and ensuring they are successful.

Four principles to bear in mind:

Maximizing your chance of success

As a CSM, you’ll spend most of your time managing your book of business and investigating churn signals, so that there are zero surprises if a business churns. Your first initiative should be establishing a relationship with your book of business and prioritizing your understanding of their business, how they use PostHog today, and where you can add the most value. It helps to approach this from the viewpoint of how you can be most helpful as you learn what drives their success.

In order of priority, your objectives should be:

Tips on success engagement

Feature request tracking

CS and Onboarding | Source: https://posthog.com/handbook/cs-and-onboarding/feature-requests

When working with our customers, they will occasionally ask for features which aren't in the product yet. We won't build a niche feature for a single big customer, but if we can see a request being of benefit to multiple customers, we should capture, track and feed it back to our product teams.

Urgent vs Non-urgent requests

If a customer is at risk of churn, or otherwise unhappy about a missing feature, we should communicate this to the relevant team in their Slack channel (usually #team-xyz). Adding the urgency and ARR, and tagging the team lead, is a good way to get focus. Remember that you still own the customer and may need to follow up with product teams to get the right level of focus, as they don't have all of the customer context that you do. Don't create false urgency where there is none - we only want to use this approach when things are _actually_ urgent.

For non-urgent requests we should capture them in Vitally using the process on this page, and then share them with the teams in their Slack channels ahead of quarterly planning.

Tracking feature requests in Vitally

Current Feature Request List

We track feature requests as a custom object in Vitally. You can see the current list of feature requests here. It's filterable by team, and shows the accounts and combined ARR of those accounts who have asked for the feature. There's also a Kanban board view which helps you track the progress of requests.

Adding a customer to an existing request

  1. Open up the request by clicking on the title of it
  2. Under Accounts near the top of the request click to Select an Account
  3. If the customer has specific context or a link to a Slack discussion then add it into the text area at the bottom of the request UI. Also add in the contact information of the person asking for it, if it's not a Slack thread.

Creating a new request

If you've checked the list above and can't see an existing request then you should create a new one. You can do this in two ways:

  1. When looking at the list of features, there is a Create new button in the top left of the UI.
  2. When looking at an account, you can see the feature requests they are connected to in the related objects section of the UI. There's a Create new button at the top of that UI as well.

Most of the fields are self-explanatory, and the status should almost always be set to Requested if it's a new one, unless the team is actively working on it. Make sure you add as much context in the text area at the bottom as possible, with links to Slack/Zendesk tickets.

Basic account review

CS and Onboarding | Source: https://posthog.com/handbook/cs-and-onboarding/foundation-check

When working with our customers, it is important to do a basic account review to get a better understanding of whether we think the customer has things set up correctly. Below is a simple checklist of things to look for, and address on your discovery call with the customer, to get a better idea of whether they are set up for success or not. It's important to note that we can only check for certain things and some things (like backend implementation) will rely on us speaking to the customer to get a better understanding of their implementation.

Events and event properties

A good starting point is verifying what events the customer is tracking, whether they have custom events set up, whether autocapture is enabled, and whether they're collecting any event properties. These could be custom event properties, or autocapture attributes that have been added. Two good starting places to look are:

One thing to look for is whether the customer has any custom actions set up. Actions are quite useful, and a customer who hasn't made use of them may well have their setup wrong. Some common actions to look for are _renamings of events_ to something more useful to the customer, or bundlings of common events such as user signups or purchases.

Additionally, if a customer has autocapture enabled but no actions set up, that's probably worth raising with the customer.

Reverse proxy configured

Another good place to start is to see if the customer has a reverse proxy set up. There are two ways you can do this:

If both methods are unavailable to you because the customer isn't using session replay and the hosted site isn't publicly accessible, you can simply ask the customer on the discovery call.
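The network-level version of this check boils down to looking at which host events are sent to. As a sketch of that heuristic, the function below flags an `api_host` as a likely reverse proxy when it isn't one of PostHog Cloud's default ingestion domains; the domain list is a snapshot and may be incomplete, so treat a "proxy" result as a prompt to confirm with the customer.

```python
# Sketch of the "check where events are sent" heuristic: events going to a
# default PostHog Cloud host suggest no reverse proxy; events going to the
# customer's own domain suggest one is configured. Domain list is a
# possibly incomplete snapshot, kept here as an assumption.
from urllib.parse import urlparse

DEFAULT_POSTHOG_HOSTS = {
    "app.posthog.com",
    "us.i.posthog.com",
    "eu.i.posthog.com",
}

def looks_like_reverse_proxy(api_host: str) -> bool:
    """True if the ingestion host is NOT a default PostHog Cloud domain."""
    host = urlparse(api_host).hostname or api_host
    return host not in DEFAULT_POSTHOG_HOSTS
```

For example, events going to `https://us.i.posthog.com` indicate no proxy, while `https://e.customer.com` suggests one is in place.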

Person properties, group properties, and cohorts

Next, we want to see if customers are making use of person properties, check for signs they may be over-identifying, and see if they are making use of cohorts. It helps to understand what sort of person properties the customer is adding, look for signs of properties they might be missing based on what you understand of their business, and note the kinds of cohorts, if any, they are using.

If group analytics is enabled, it's worth checking whether they have group properties set and whether the types of properties make sense. Because group types are limited to five, it's important that the group types are set up sensibly, and that the way person profiles are associated with group properties also makes sense.

Ecommerce events

For ecommerce customers specifically, PostHog has a useful guide on its ecommerce events specification. It's worth checking whether these customers were aware of it and have implemented custom event tracking with the kinds of properties we'd normally like to see (such as sku, product_id, or category).
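A quick way to spot-check this during an account review is to pull a sample purchase event and see which of those properties are missing. The sketch below uses only the property names mentioned above (sku, product_id, category) as the expected set; the full expected set in the spec guide is broader, so treat this as an assumption-laden starting point.

```python
# Spot-check sketch: which of the expected ecommerce properties are absent
# from a captured event? The expected set below only covers the properties
# named in this handbook page, not the full spec guide.

EXPECTED_PRODUCT_PROPERTIES = {"sku", "product_id", "category"}

def missing_ecommerce_properties(event: dict) -> set:
    """Return expected product properties absent from an event's properties."""
    return EXPECTED_PRODUCT_PROPERTIES - set(event.get("properties", {}))

good = {
    "event": "product_purchased",
    "properties": {"sku": "HOG-01", "product_id": "p_42", "category": "plush"},
}
bad = {"event": "product_purchased", "properties": {"sku": "HOG-01"}}
# `good` is missing nothing; `bad` lacks product_id and category.
```

If a customer's events come back with gaps like `bad` above, that's a concrete, specific recommendation to bring to the discovery call.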

SDK or library version

Make sure customers are using an up-to-date SDK or library version. Click on Activity, then Configure columns, and add Library and Library Version so you can see the SDK versions they are using. You can then reference this against our GitHub repos to check whether they are on the latest versions.

Alternatively, you can go to Metabase, look up the Library version audit table and see SDK versions there.
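The comparison itself is just an ordered check of version components. As a minimal sketch, assuming plain MAJOR.MINOR.PATCH version strings (no pre-release tags), the versions in the example are made up for illustration:

```python
# Sketch of the version check: compare the "Library Version" seen on a
# customer's events against the latest release. Assumes plain
# MAJOR.MINOR.PATCH strings; real SDK versions may carry suffixes that
# need extra handling.

def parse_version(v: str) -> tuple:
    """'1.96.0' -> (1, 96, 0), so tuples compare component-by-component."""
    return tuple(int(part) for part in v.split("."))

def is_outdated(customer_version: str, latest_version: str) -> bool:
    return parse_version(customer_version) < parse_version(latest_version)

# Tuple comparison handles multi-digit components correctly, where naive
# string comparison would wrongly rank "1.96.0" above "1.140.2".
```

This is why comparing version strings lexicographically is a trap: `"1.96.0" > "1.140.2"` as strings, but 1.140.2 is the newer release.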

Sign up for an account when possible

If the customer's product offers a free account you can sign up for, do it and go through the workflow. This is a great way to see if events are firing properly, what events are being tracked, and get a rough idea of what might be missing so that you can make recommendations on your discovery call with the customer.

What types of dashboards does the customer have set up?

Every customer – and more specifically, every team – will have a different set of goals they deeply care about. What we want to see is whether they've spent time setting up custom dashboards or insights to track specific trends, engagement, conversion metrics, or other key measures that indicate they're measuring the right things beyond the basic dashboards included by default. This could be things like user sign-ups, retention dashboards, or free-to-paid upgrades. Get a feel for the kind of tracking the customer has set up so that, on the discovery call, you can tell whether it aligns with their immediate goals or whether there are key metrics they should be looking at but haven't set up.

Are customers using data pipelines for event notifications?

This idea came from our own team's use of data pipeline destinations to get notified in Slack when specific events occur. It could be helpful to companies that haven't considered this use case for data pipelines, and it's an easy upsell to suggest.

Query failure rate

This is a good one to check and is available in Vitally. A high query failure rate usually means the customer is attempting to do something that isn't working – great material to expand on during the discovery call itself.

Getting started with customers

CS and Onboarding | Source: https://posthog.com/handbook/cs-and-onboarding/getting-started-with-customers

As a CSM, it is your responsibility to be the expert on each of your customers, whether or not they choose to engage with you. Obviously, you’ll learn more about customers that you actually talk to, but there are still plenty of ways to get to know an account, learn their use cases, and track their journey from all of the data available.

Many customers have never spoken to PostHog – some happily welcome our help, others are strongly independent. In order to be successful as a CSM, we want to understand our customers and be helpful.

The Get people to talk to you page also has good, helpful tactics.

Newly assigned accounts

When you're assigned a new customer account, your approach will vary depending on the existing relationship between the customer and PostHog. This guide walks you through some key steps you can take when welcoming new customers to your book.

Determine which category your customer falls into

No/low interaction with PostHog humans

These are customers who have been using PostHog but haven't had much direct contact with our team. You should conduct a more thorough assessment before your first call with them. If you haven't done your initial outreach yet, you can also use the assessment to customize your message with a specific tip from what you've learned.

Account and business audit

Start by gathering context about who they are and how they're using PostHog:

Understanding their business:

Reviewing their PostHog setup:

Data management assessment: In their project(s), check the data management tab:

Answering these questions helps you identify the most important things to focus on in your initial engagements. Take a look at our basic account review page for additional things to check.

Introduce yourself

Once you've completed your audit, start reaching out. If this is an account that's being handed over from an existing contact:

If you don't have an established contact, introduce yourself to the widest blast range: org owner, org admin, users who have recently raised tickets, and users who have logged in in the last month. Even if there _seems_ to be a point of contact, things probably changed – multi-thread!

Your intro message should:

Examples of a value nugget

Take a look at your customer's account in Vitally and Metabase to identify ways you can be helpful. Some examples include:

If there's an established Slack channel you are inheriting, send your intro message there.

Example subject lines:

You should find what you're comfortable with whilst keeping a sense of PostHog's tone of voice. Some examples include:

In Vitally, you can see how other team members have reached out to customers in the past by going to an account's Active conversations tab for inspiration.

If there is no response, follow up after 2-3 business days, targeting individuals in the engineering, product, or data team. Emphasize the purpose of your outreach - you're not trying to sell them something; you want to understand their use case and help optimize their PostHog integration.

Connect with champion

1-1 email or Slack message

Aim: start the relationship with a champion, ideally in the engineering, product, or data team

Content: Acknowledge that their time is valuable and that you will not be selling or pitching. You want to understand how to better serve the customer by understanding how they use PostHog. Would they be open to a 15-minute call? Offer to do this async as well.

Pro tip: If they're not already in Slack, don't ask; add them to Slack by sending them a direct invitation. If this is an account without an established Slack channel, you can follow our guide on shared Slack channels to set one up.

Getting-to-know-you discovery call

This is one of the most effective ways to learn what you need to know about a customer, as you can ask direct questions and spend a lot of time listening to their responses. A quick call upfront is often better than a month of back-and-forth in Slack.

Typically, this is a 15-30 minute conversation aimed at establishing rapport, understanding pain points, and beginning to formulate how you can best assist them.

Your discovery call should help you determine the level of engagement you'll have with the customer going forward. Think through the following questions:

Preparation before your call

Some things to consider before your call:

  1. Understand the customer’s PostHog usage:
     - What products are they using? How are they using them? What metrics do they care about from those products?
     - What products are they not using? This means products that make sense for them to use, where you want to understand why they aren’t using them.
     - For example, product analytics and web analytics are closely coupled. If the customer is using product analytics but not web analytics, understand why. Is there a reason for that? What’s the objection?
  2. Call out feature preview ✨
     - Explain what feature preview is and how to enable it.
     - Recommend PostHog AI, as it's usually relevant regardless of customer use case.
     - Otherwise, recommend new products that the customer likely already has (e.g., Messaging, CRM). Position it as: 'You probably already have [product], this is a product we’re trying to launch and would love to see how you would use it / any feedback you have. Keen to relay or rope in the engineering team directly with your feedback.'
  3. Q&A on product
  4. Next steps and ideal catch-up cadence.

Additional questions to consider for your call

Here are some recommended questions you could use. Please do not simply interrogate a customer with each of these questions; this is more of a question bank to use for inspiration!

Customer Prioritization

Analyzing product usage

While PostHog itself is (obviously) the gold standard for understanding how customers are using our product, we also make it very easy to view this information within the account context in Vitally and in Metabase.

We use the PostHog CDP to send product events to Vitally so that we can see which specific users are most active, MAUs on an account, and how many paid products they use. We can see more specifics in the Metabase dashboard, as well.

These sources will both help you identify potential cross-sell and upsell opportunities, in the name of helping customers maximize their value in the product.

Past conversations, tickets, and Slack channels

A very valuable part of account research is also reviewing past conversations. This will give you an idea of what level of contact we’ve had, who the main contacts may be, what issues they’ve faced, and so on.

The key places to look for this information:

Get notifications

We use Watch Tower to monitor news about companies in our book of business and surface what matters. To get started, create an account with your PostHog email. Once in, you can create a list and select to import your book of business from either Vitally or Salesforce.

Once your book has been imported, it's important to go through and make sure the names are correct and each entry has the correct domain. Watch Tower uses the domain to understand context about the company you're monitoring, and neither Vitally nor Salesforce reliably includes accurate domain data.

Finally, set up how you'd like to be notified. Email is the default, but you can set up Slack notifications as well. Once your list is correct and notifications are configured, Watch Tower will scan the news every day for information relating to any of your companies. If there's a match, you'll get a notification.

The best recommendation is to find your own rhythm for how you, as an individual, prefer to learn about your customers. There's not a strict playbook. This is a compilation of the most reliable sources of knowledge to use for researching an account.

Handling customer issues

CS and Onboarding | Source: https://posthog.com/handbook/cs-and-onboarding/handling-customer-issues

As a dedicated PostHog human for customers, you're the first point of contact for customer issues. This helps build your relationship as a technical point of contact, plus you have the most context on the customer and can help with proper escalation.

The support team and engineering teams are always available to help, but you should try to solve issues yourself before handing off to other teams. This also helps you level up your product knowledge.

Raising issues

Zendesk

We use Zendesk Support as our internal platform to manage support tickets. For specifics on how we use Zendesk, look here.

Tickets created from Slack

Customers can create tickets from Slack by adding the 🎫 emoji reaction to their message. This is useful as customers can receive help even when you're asleep or on holiday. Make sure you let your customer know about this capability and it's also worth periodically reminding them about it.

If this isn't working as expected, make sure you've invited Pylon to the channel. Fill out the automated message asking for Group and Severity so the ticket is routed to the right team (customers sometimes forget so help fill it for them). Check feature ownership if you're unsure which team is responsible for a product area.

If you're investigating a ticket that your customer raised in Slack, let support know you're on it to avoid duplicate effort. You can do this by leaving an internal note directly in Zendesk.

Tip: Customer messages from channels with Pylon also go to #support-customer-success. You can find the ticket in the channel and leave a message in the thread. This also creates an internal note in Zendesk.

Investigating issues

When investigating customer issues, it's helpful to ask for specifics – e.g. links to the insight, feature flag, or dashboard; a screenshot of the error or the specific error message.

If helpful, you can log in to the customer's PostHog org as them. Clicking a link from a customer's PostHog instance will sometimes give you the option to log in as the customer. Alternatively, log into US admin (EU admin), search for the org or user, and click "Log in as user". If you're not seeing this option, ask Dana Zou to add you as a staff member in admin.

When investigating, use our docs, look at troubleshooting tips, search through Slack, Zendesk, GitHub, or Pylon for similar issues. If you've just joined, try to spend 30 mins to 1 hour investigating by yourself before asking for help. Onboarding is the best time to learn about PostHog products! Obviously, balance this with the urgency of the issue and use common sense.

While investigating, keep the customer in the loop by communicating progress, blockers, next steps etc.

Escalating tickets

You can escalate tickets to either the support team or the relevant engineering team. The decision depends on:

Our support team are technical engineers and can answer the majority of tickets. If in doubt, escalate to support.

If you're escalating to support, you don't need to do anything special - the ticket will stay in the support queue.

If you're escalating to engineering, set the esc. dropdown in the left sidebar in Zendesk to escalated and double-check that the group assignee makes sense. You might need to upgrade your Zendesk role to full agent; just remember to downgrade afterwards.

When escalating tickets, leave an internal note saying whether you're escalating this to engineering or support (and why) – so it's clear who should pick it up. Also include details about the investigation you've done and observations you've made. Even if it's confirming that you followed the customer's reproduction steps and saw the same issue, that context is incredibly valuable.

Auditing impersonations

Customers sometimes ask who from PostHog has accessed their account. You can use the following SQL query on project 2 to get a log of impersonations for a specific organization. You can get the organization ID from Vitally.

-- Get all user emails for an organization from persons table
WITH org_users AS (
    SELECT DISTINCT
        properties.email as user_email,
        properties.org__name as org_name
    FROM persons
    WHERE properties.organization_id = 'ORGANIZATION_ID'
        AND properties.email IS NOT NULL
)
SELECT
    e.timestamp,
    ou.org_name,
    e.properties.target_user_id as target_user_id,
    e.properties.target_user_email as target_email,
    e.event,
    e.properties.staff_user_email as staff_email,
    e.properties.mode as mode,
    e.properties.reason as reason
FROM events e
JOIN org_users ou ON e.properties.target_user_email = ou.user_email
WHERE
    e.event IN ('impersonation_started', 'impersonation_upgraded')
    AND e.timestamp >= now() - INTERVAL 30 DAY
ORDER BY e.timestamp DESC

Checking the health of a customer's deployment

CS and Onboarding | Source: https://posthog.com/handbook/cs-and-onboarding/health-checks

In a world where many of our high-paying customers have self-served without ever speaking to a PostHog human, there is scope for them to implement PostHog in a less-than-optimal way. This could result in people spending more than they need to, or having inaccurate reporting data available to them. Ultimately, if left unchecked, these things will lead to avoidable churn.

Are they paying for things they don't need?

Group analytics

Group Analytics can be a real value-add for B2B companies, allowing them to track analytics at the company or workspace level rather than at the individual person level. They do, however, need to implement group tracking in their PostHog SDK. Customers who haven't done this may end up paying for Group Analytics without being able to use it.

We have a Vitally risk indicator added to customers who are paying for Group Analytics but not using it.

To help the customer you should figure out whether they are B2B or could otherwise benefit from sending group information. If so, reach out with guidance. If not, reach out telling them that they can save by removing the Group Analytics add-on from the billing page.

Autocapture

Autocapture is a great way for users to get up and running with event capture without a huge engineering effort. Autocapture can, however, get very noisy very quickly, and if users aren't leveraging these events they may not be getting value out of them.

You can understand a customer's Autocapture event volume from their Metabase customer usage dashboard (instructions above on how to get there). The Key event volume Last 30 days breakdown shows the number and percentage of Autocapture events they are sending across all projects. If that is high (>50%), check the Actions (by type) visualization on the same dashboard to see whether they have any Autocapture actions defined. If not, they are likely not benefitting from Autocapture events.

If they aren't benefitting from Autocapture you should reach out to let them know how best to use it. Alternatively, they can tune or turn it off by following the Autocapture configuration docs.

Session replay targeting

When Session replay is enabled it will capture all sessions by default. As every session is counted for billing purposes, customers may end up with a bunch of low value short recordings and still be paying for them.

If a customer has Session replay enabled, log in as them and look at their session replay settings. At a minimum we recommend setting the minimum duration to 2 seconds or more but there are other tuning options which they may also benefit from.

Are they running up-to-date SDKs?

Outdated SDKs miss out on bug fixes, performance improvements, and new features. A customer using a three-year-old SDK will hit issues we've already solved, which can silently erode trust over time.

Check SDK versions using SDK Doctor or in Metabase via the Library version audit table. At minimum, the SDK sending the bulk of their event volume shouldn't be more than 3 months behind the latest release. Monthly updates are the best-practice habit to encourage. Some SDKs have breaking changes between versions; if so, make sure the customer is aware of the breaking change.

A light nudge on this also doubles as a natural re-engagement touchpoint for customers you haven't spoken to in a while.

Have they implemented tracking incorrectly?

Calling identify too often

A common pattern is for users to call posthog.identify() on every page, or in an endless loop. Whilst this won't break their tracking (unless they use different distinct IDs in the identify call), they will end up with a drastically inflated event volume. You can diagnose this by looking at the Key event volume visualization in their Metabase usage dashboard. If the volume of either $identify or $set events is higher than 5%, something has likely gone wrong in the implementation.

You should get in touch and let them know that they only need to call posthog.identify() once per session.
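The guidance above can be sketched as a simple guard. This is an illustrative pattern rather than official PostHog code; the storage object is injected here (in a browser it would be window.sessionStorage), and the property names are hypothetical:

```javascript
// Illustrative sketch: only call identify() once per session, not on every page.
// `posthog` is an initialized client and `storage` a sessionStorage-like object.
function identifyOncePerSession(posthog, storage, user) {
  if (storage.getItem('ph_identified_id') === user.id) {
    return false; // already identified this session, skip the call
  }
  posthog.identify(user.id, { email: user.email });
  storage.setItem('ph_identified_id', user.id);
  return true;
}
```

Calling this on every page load is then harmless: only the first call per session actually triggers identify().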

Calling groupidentify too often

As with identify() above, users may also end up calling posthog.group() more often than they should. In the Key event volume visualization in Metabase, if the $groupidentify count is higher than 5%, they've likely set it to be called once per page.

You should get in touch and let them know that they only need to call posthog.group() once per group per session, or when the group changes.
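The same guard idea applies to groups: only call posthog.group() when the active group actually changes. An illustrative sketch (function and state names are hypothetical):

```javascript
// Illustrative sketch: skip the group() call when the active group is unchanged.
// `state` holds the last-seen group key per group type, e.g. { company: 'acme' }.
function setGroupIfChanged(posthog, state, groupType, groupKey) {
  if (state[groupType] === groupKey) {
    return false; // same group as before, no $groupidentify event needed
  }
  posthog.group(groupType, groupKey);
  state[groupType] = groupKey;
  return true;
}
```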

To see where duplicate groupidentify calls are being generated, you can use the following SQL:

SELECT properties.$lib AS lib, count() AS groupidentify_event_count
FROM events
WHERE event = '$groupidentify'
  AND $session_id IN (
    SELECT $session_id
    FROM events
    WHERE event = '$groupidentify'
    GROUP BY $session_id
    HAVING count() > 1
  )
  AND timestamp >= now() - INTERVAL 30 DAY
  AND timestamp < now()
GROUP BY lib
ORDER BY groupidentify_event_count DESC

Calling posthog.reset() before identifying the user

posthog.reset() will generate a new anonymous distinct ID. If it is called before a user is identified, two unlinked anonymous users may be created. There is no easy way to proactively diagnose this, but if a customer says their tracking between web and app is off, this is a common culprit.

We have guidance on when to call posthog.reset() in the JavaScript library features guide.
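A sketch of the correct ordering, with a stubbed client: identify() links the anonymous session on login, and reset() is only called on logout. Handler names are illustrative:

```javascript
// Illustrative sketch: reset() belongs in the logout path, never before identify().
function onLogin(posthog, user) {
  posthog.identify(user.id); // links the anonymous distinct ID to the known user
}

function onLogout(posthog) {
  posthog.reset(); // generates a fresh anonymous distinct ID for the next visitor
}
```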

Reverse Proxies

It is best practice for a customer to use PostHog's Managed Reverse Proxy or to configure their own for events to be sent from their own domain.

When using either PostHog's managed reverse proxy or a self-deployed one, events should populate the "Library custom API host" property. Host mappings and domains can potentially be seen in Metabase. You should verify the setup with the customer.

Cookieless tracking

If a customer mentions their user/event count seems to be missing a lot of data from their website, ask them if they have implemented cookie opt-in and to share the part of their code where PostHog is initialized. Some customers may not be aware that we have specific recommendations for how to initialize PostHog for cookieless tracking.

For example, if they initialize PostHog on their website as follows:

posthog.init(..., {
    opt_out_capturing_by_default: true,
})

if (cookiePreference === 'accepted') {
    posthog.opt_in_capturing()
}

They will not be capturing anything for visitors who opt out of cookies or ignore the cookie banner completely. We recommend they instead use the cookieless_mode parameter in their initialization, as outlined in the cookieless tracking tutorial. If the customer wants to move forward with implementing cookieless mode, ensure they enable "Cookieless server hash mode" in their project settings under Project Settings > Web analytics.

Cookieless mode can give them more accurate tracking totals because, when using cookieless tracking, the PostHog SDK generates a privacy-preserving hash that is calculated on our servers.

Are feature flags resilient?

Falling back to working code

It is important that hitting the flags endpoint does not block an application from otherwise functioning correctly. If the flag fails to load or returns an unexpected value for any reason, such as None, an empty string, or false, you should always fall back to working code.
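A minimal sketch of this fallback pattern, assuming a hypothetical flag whose expected variants are 'control' and 'new-checkout' (all names are illustrative):

```javascript
// Illustrative sketch: anything other than a known variant (undefined when the
// flags endpoint is unreachable, '' or false on unexpected payloads) falls back
// to the known-good default path.
function resolveFlag(flagValue, knownVariants, fallback) {
  if (typeof flagValue === 'string' && knownVariants.includes(flagValue)) {
    return flagValue;
  }
  return fallback;
}
```

With this in place, the application keeps working on its default path even when the flag request fails entirely.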

Server side local evaluation

Implementing Server-side local evaluation will ensure that flags continue to return values regardless of the network status of the flags endpoint. By default, PostHog will attempt to evaluate the flag locally using definitions it loads on initialization and at the poll interval. If this fails, PostHog then makes a server request to fetch the flag value.
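The evaluation order described above can be modeled as follows. This is an illustrative sketch of the behavior, not the SDK's actual implementation; localDefinitions and fetchFromServer are hypothetical stand-ins for the SDK's cached flag definitions and its network request:

```javascript
// Illustrative sketch: try locally cached definitions first, and only fall back
// to a server request when the flag can't be evaluated in-process.
function evaluateFlag(flagKey, localDefinitions, fetchFromServer) {
  if (Object.prototype.hasOwnProperty.call(localDefinitions, flagKey)) {
    return localDefinitions[flagKey]; // evaluated locally, no network round trip
  }
  return fetchFromServer(flagKey); // network fallback
}
```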

As a note, server side local evaluation is billed differently than other flag requests.

Customer health tracking

CS and Onboarding | Source: https://posthog.com/handbook/cs-and-onboarding/health-tracking

We use Vitally as a customer success platform. You can log in via Google SSO to view customer data but will need Mine or Simon to grant you admin access to let you manage your accounts. It integrates with our other systems such as PostHog, Salesforce and Zendesk to give you a complete view of what's going on with your customers.

Health scoring

Overview

Health scores are a great way to assess whether your customer is at risk of churn or in a good state and are a common pattern in Customer Success tracking. We compute an overall health score out of 10 based on the following factors and weighting. You can read more about how Vitally health scores work in their docs here.

Health score metrics are divided into two categories: Customer Engagement (25%) and Product Engagement (75%).
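Mechanically, the overall score is a weighted sum of the factor scores. A sketch with illustrative numbers, assuming each factor is scored 0-10 and the weights sum to 1:

```javascript
// Illustrative sketch: each factor score (0-10) is multiplied by its fractional
// weight, and the weighted contributions are summed into the overall score.
function overallHealthScore(factors) {
  return factors.reduce((sum, f) => sum + f.score * f.weight, 0);
}
```

For example, a factor scored 10 at 25% weight plus one scored 8 at 75% weight works out to an overall score of 8.5 out of 10.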

Customer engagement

| Score Name         | Measuring                                        | Weighting |
|--------------------|--------------------------------------------------|-----------|
| User engagement    | Are they using PostHog regularly?                | 15%       |
| Product experience | Are there negative experiences with the product? | 5%        |
| Company engagement | Are they engaging with PostHog humans?           | 5%        |

Product engagement

| Score Name                  | Measuring                                                                          | Weighting |
|-----------------------------|------------------------------------------------------------------------------------|-----------|
| Product Analytics           | Event volume and users analyzing insights                                          | 33%       |
| Session replay              | Replay volume and users analyzing replays                                          | 20%       |
| Feature flags & Experiments | Flag requests, users creating feature flags, users creating or viewing experiments | 17%       |
| Surveys & Data warehouse    | Users creating and viewing surveys, volume of rows synced                          | 5%        |

Customer engagement

Non-product metrics, looking holistically at: Are customers using PostHog? Do they have friction when using PostHog? Are they engaging with PostHog humans?

User engagement

This tracks whether users are logging in to PostHog. It can tell us if customers are getting value from PostHog (regardless of the products they're using). Customers that have a low active user percentage, or only have 1-3 users engaging with PostHog are at risk of churn.

| Measure                                       | Poor    | Concerning | Healthy |
|-----------------------------------------------|---------|------------|---------|
| Last seen in product                          | >5 days | 1-5 days   | ≤1 day  |
| Active user percentage                        | <20%    | 20-40%     | ≥40%    |
| Percentage decrease in active user percentage | >20%    | 5-20%      | ≤5%     |
| Users engaging with features                  | <3      | 3-10       | ≥10     |

Product experience

This looks at the experience of using PostHog.

Creating a lot of tickets can mean users are not satisfied with PostHog, haven’t implemented PostHog correctly or aren’t using the product correctly (opportunity to offer training)! Similarly, visiting docs can mean users are trying to do something and could need help.

We also look at query failure rate. Failed queries are common (users can cancel a query, there can be SQL syntax errors, etc.), however, a high failure rate means users aren't getting the data they need from PostHog. You should help investigate and provide recommendations.

| Measure                               | Poor | Concerning | Healthy |
|---------------------------------------|------|------------|---------|
| Tickets created in last 30 days       | >10  | 5-10       | ≤5      |
| Urgent tickets that remain unresolved | >2   | 0-2        | 0       |
| Docs visited in last 7 days           | >100 | 20-100     | ≤20     |
| Query failure rate in last 7 days     | >13% | 5-13%      | ≤5%     |

Company engagement

This looks at a customer's engagement with PostHog as a company. Most of PostHog's customers are happily self-served, so this is weighted very lightly in the overall health score.

| Measure             | Poor     | Concerning | Healthy  |
|---------------------|----------|------------|----------|
| Most recent meeting | >90 days | 30-90 days | ≤30 days |
| Most recent ticket  | >90 days | 30-90 days | ≤30 days |
| Total product count | <3       | 3-6        | >6       |

Product engagement

Across PostHog's products, we look at 2 factors – data volume & user engagement.

Data volume

This tracks _percentage decrease_ in data volume over the last 30 days. We use success metrics to track billable usage over the last 30 days and compare it with the previous 30 days on a rolling basis. The percentages you see in the tables below are the _decrease_ between the previous and current period.
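The rolling comparison is simple arithmetic. An illustrative sketch (a positive result is a decrease; a negative result means usage grew):

```javascript
// Illustrative sketch: percentage decrease between the previous 30-day window
// and the current one, guarding against a zero baseline.
function percentageDecrease(previousPeriod, currentPeriod) {
  if (previousPeriod === 0) {
    return 0; // no baseline to compare against
  }
  return ((previousPeriod - currentPeriod) / previousPeriod) * 100;
}
```

For example, 1,000 billable events in the previous window and 800 in the current one is a 20% decrease.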

User engagement

Data volume is a lagging indicator; by the time it drops, customers may have already decided to churn. We combine data volume with product-specific user engagement, measuring the percentage of _active users_ interacting with product features over the last 14 days.

There are products we don't include in the health score. Vitally has a limit of 20 health metrics, so we are excluding other products for now as the overall ARR from them is still very low compared to the others.

Product analytics

| Measure                                        | Poor | Concerning | Healthy |
|------------------------------------------------|------|------------|---------|
| Event count last 30 days (percentage decrease) | >20% | 5-20%      | ≤5%     |
| Active users analyzing insights                | <20% | 20-40%     | ≥40%    |

Product analytics usage includes: analyzing insights or dashboards, creating or saving insights, and creating or updating dashboards.

Session replay

| Measure                                         | Poor | Concerning | Healthy |
|-------------------------------------------------|------|------------|---------|
| Replay count last 30 days (percentage decrease) | >20% | 5-20%      | ≤5%     |
| Active users watching replays                   | <20% | 20-40%     | ≥40%    |

Feature flags & experiments

| Measure                                            | Poor | Concerning | Healthy |
|----------------------------------------------------|------|------------|---------|
| Decide requests last 30 days (percentage decrease) | >20% | 5-20%      | ≤5%     |
| Active users creating feature flags last 30 days*  | <5%  | 5-20%      | ≥20%    |
| Active users using experiments**                   | <5%  | 5-20%      | ≥20%    |

Feature flag usage includes: creating or updating feature flags. We look at this over 30 days instead of the usual 14 as feature flags provide value over a longer time frame.

Experiments usage includes: creating experiments, viewing experiments, and launching experiments.

Surveys & data warehouse

| Measure                                        | Poor | Concerning | Healthy |
|------------------------------------------------|------|------------|---------|
| Active users viewing surveys                   | <5%  | 5-20%      | ≥20%    |
| Rows synced last 30 days (percentage decrease) | >20% | 5-20%      | ≤5%     |

Account indicators

Health scores are useful for tracking long-term trends in an account, but occasionally there are more immediate point-in-time events that we should react to. These are tracked as indicators in Vitally and fall into one of two categories:

Risk indicators

These are automatically applied via Vitally playbooks (see the Risk category here):

Forecasted MRR decrease

Applied if the Forecasted MRR Change is less than -10%, indicating a drop in MRR. We should look into the account to understand whether it is just a reduction in usage, or they are trending towards churn.

Increased billing page visits

Applied if there have been more than one visit to the billing page in the previous 7 days. This can be a good indicator that the customer needs help understanding or reducing their bill.

Query failure rate > 10%

Applied if the Query failure rate over the last 7 days (Success metric) is greater than 10%. Use Vitally to see which user was impacted and see if you can help optimize their queries or flag to our team for investigation.

Sudden decrease in event volume

Applied if the Event count last 7 days (Success metric) decreases more than 20% versus the previous 7 days. Indicates that they may have turned event tracking off.

No insights analyzed past week

Applied if insight analyzed was last seen greater than 7 days ago. Indicates that they may have stopped using PostHog to track analytics data.

Payment failed

Applied if there is a failed payment on their Stripe account. We should reach out to get this resolved ASAP.

Startup credit will run out this billing cycle

Applied if they are currently in the Startup plan segment but also have Forecasted MRR, meaning that they are on track to make a payment this month.

Organization owner recently removed

Applied if the Owner role has been removed from a user in the last 14 days. May be a sign that you've lost a champion.

Opportunity indicators

These are automatically applied via Vitally playbooks (see the Opportunity category here):

Forecasted MRR growth

Applied if the Forecasted MRR Change is more than 10%, indicating an increase in MRR. We should look into the account to understand whether it is likely to be deliberate or an accidental spike.

Organization owner recently added

Applied if the Owner role has been added to a user in the last 14 days. This is a good opportunity to reach out to a potential champion if you've not met them before.

How we use automation in Customer Success

CS and Onboarding | Source: https://posthog.com/handbook/cs-and-onboarding/how-we-use-automation

How we use automation

Customer Success at PostHog means managing ~30 accounts per CSM while maintaining deep, meaningful relationships with each customer. Automation and AI tools can help surface important signals and streamline repetitive tasks, allowing CSMs to focus on strategic guidance and relationship building.

Automation should never be used as a replacement for human connection and interaction, but mainly as a tool to help a CSM be better prepared, informed, and effective.

Current automation stack

PostHog CS leverages several integrated tools to monitor account health and identify opportunities:

Core monitoring systems:

Key automated workflows

Account monitoring triggers include:

Human-first automation philosophy

Every automated workflow includes deliberate human decision points. For example, when an account begins using session replay, Vitally creates an indicator suggesting outreach about their use case - but the CSM determines whether and how to engage based on the account relationship and context.

This approach ensures automation enhances rather than replaces the human elements of customer success.

Working effectively with automations

Best practices:

What remains purely human:

Requesting new automations

CSMs are encouraged (as are all PostHog employees) to experiment and surface new ideas frequently in Slack or team stand-up. Examples of areas where automations could be useful include, but are not limited to:

How we work

CS and Onboarding | Source: https://posthog.com/handbook/cs-and-onboarding/how-we-work

This page covers more of the operational detail of how our team generally works - for a broader overview of roles and responsibilities, visit the customer success team page.

Main metrics for each role

Book of business

Customer Success Managers

Each CSM is assigned customer accounts totaling ~$1.5m ARR to work with. We use the CSM Managed Segment in Vitally to track this against goals; CSMs should not assign accounts themselves (that's up to Dana or Charles).

Weekly Customer Success standup

In addition to the weekly sprint planning meeting on a Monday, we do an account review standup on Wednesday to discuss any at-risk accounts.

The objective of the meeting is to hold each other to account, provide direct feedback, and also support each other. It is a great place to ask for help from the team with thorny problems - you should not let your teammates fail.

How contractual bonus works - Technical CSMs

CSMs are responsible for ensuring that a larger book of existing customers - both annual and monthly - continue to use PostHog successfully. They nurture customers and are product experts - this isn't a role of just going back and forth between customers and support engineers, or collecting feedback.

This plan will _also_ almost certainly change as we scale up the size and complexity of our success machine! As above, we will always ensure folks are treated fairly when we make changes.

Variables

Account allocation

Working with engineering teams

We hire Technical CSMs. This means you are responsible for dealing with the vast majority of product queries from your customers. However, we still work closely with engineering teams!

Product requests from large customers

Sometimes an existing or potential customer may ask us to fix an issue or build new features. These can vary hugely in size and complexity. A few things to bear in mind:

Finally, if you are bringing engineers onto a call, brief them first: what the call is about and who will be there. Afterwards, summarize what you talked about. This goes a long way to ensuring sales <> engineering happiness.

Complicated technical questions

You will run into questions that you don't know the answer to from time to time - this is ok! Some principles here:

Working with customers in Slack

Most of our customers use Slack, and it's a great way for us to be responsive to them. Everyone has the permission in Slack to create a Connect channel with a customer, and you should do this as early as possible in your relationship with them.

When you've created the channel you should also add Pylon, which is used to sync Slack conversations with Zendesk so that our Support and Engineering teams can work on customer issues in a familiar context.

To add Pylon to your customer channel:

  1. In the Slack desktop app, click the channel name.
  2. On the Settings tab, click Add apps.
  3. Type Pylon and click Add.
  4. In the popup that appears in the Slack channel, select Customer Channel.
  5. Add yourself as the Account Owner.
  6. Click Enable.
  7. Add Tim, Charles, and Abigail to the channel.

Once enabled, you can add the :ticket: emoji to a Slack thread to create a new Ticket in Zendesk. Customers can also do this. Make sure that a Group and Severity are selected or the ticket won't be routed properly.

It's your job to ensure your customer issues are resolved; follow up with Support and Engineering if you feel an issue isn't getting the right level of attention.

Tools we use

Gmail: We use Gmail for our email and the team uses many different clients from Superhuman to Spark to the default Gmail web interface. Find something that works well for you. To get your own email signature, copy the signature from someone else on the team (like Simon) and then fill in your own details.

Calendly: We use Calendly for scheduling meetings. In order to schedule a meeting between a customer and multiple members on the PostHog team, click on "Event types" in the left hand navigation, then click "+ New Event Type" button in the top right, and select "Group" from the dropdown. This will allow you to create a group meeting and add multiple team members to the event and create a link you can share with the customer.

BuildBetter: We use BuildBetter for call recording and notetaking. You will need to integrate BuildBetter with your calendar in order for it to automatically join your calls. To do so, click on settings and look for the integrations link under account (not the one under organization) and follow the steps from there.

Zoom: We use Zoom for sales calls, and if you have Calendly properly integrated, calls that are booked through the tool will default to Zoom. You can find backgrounds to use for the calls here: This is fine (and other awesome PostHog wallpapers).

Lifecycle of CSM engagement

CS and Onboarding | Source: https://posthog.com/handbook/cs-and-onboarding/lifecycle-csm

This page covers more of the operational details of how our team generally works - for a broader overview of roles and responsibilities, visit the customer success team page.

Introduction

When starting out as a Technical Customer Success Manager (CSM) at PostHog, you are assigned a book of business with ~30 accounts to work with. It is helpful to think of customer engagement in stages to help us identify how we should connect with customers at each stage.

Stage 1: Getting started with customers

We've written a lengthy guide on how to get started with customers, so rather than rehash some of that information here, go read that guide instead.

Stage 2: Establishing trust

Once you've gone through your entire book of business and have completed stage 1, the next stage in our journey is to develop deep trust with our champions. Trust is built over time, based on our interactions and how we manage and nurture those relationships.

Some key examples that really help with building trust with your customers:

Establishing trust can take time, and your communication style and actions can play a significant role. It may be worth offering recurring calls with your champion to establish more face-to-face contact, as this can help you maintain an ongoing pulse on what's happening.

Stage 3: Getting deeply embedded with customers

At this stage, we're interested in conducting a deep dive and becoming more deeply embedded with the customer's team to work through some of their goals. This could help establish new workflows or setups that give them deeper insights beyond what they've achieved so far.

Here are a couple of examples that have come up previously:

The goal at this stage is to help our customers succeed by getting them the key metrics they care about, and this often requires us to connect with their team to implement custom code changes at a deeper level.

If your champion is in a key decision-making position and can get these changes through, that's great. If not, this is a great opportunity to ask your champion for an introduction to the key decision maker so you can work closely with them to ensure changes are prioritized. Another method is to reach out to the team lead, such as the head of engineering or head of product, armed with their quarterly goals, and offer your assistance directly. You may establish another strong connection this way.

Companies have conflicting priorities, but by demonstrating that you understand their core goal, showing how PostHog can solve the problem, and finding the key decision maker, you have a much better chance of convincing the team to prioritize the changes now rather than waiting to add value.

New hire onboarding exercise

CS and Onboarding | Source: https://posthog.com/handbook/cs-and-onboarding/new-hire-onboarding-exercise

This exercise can help you learn more about your customer’s usage of PostHog while helping you ramp up on your own PostHog skills!

Tactical questions

To get started you'll need all the organization IDs for your accounts. You can get those via SQL query:

SELECT DISTINCT posthog_org_id_c, NULL AS empty_column
FROM salesforce.account
WHERE owner_id = 'your_salesforce_id'

You can find your Salesforce ID by going to your profile and copying the text in the URL after "/User/", then export the results via CSV. (The empty column gives you a comma after each org ID, which lets you copy and paste all the org IDs directly into a filter input text field.)
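If you'd rather not rely on the empty delimiter column, a short script can join the exported IDs for you. A minimal sketch, assuming the export is saved as accounts.csv with the org ID in the first column:

```python
import csv

def org_ids_from_export(path):
    """Read a Salesforce CSV export and return the org IDs as one
    comma-separated string, ready to paste into a filter input field.
    Assumes the org ID (posthog_org_id_c) is in the first column."""
    with open(path, newline="") as f:
        rows = csv.reader(f)
        next(rows)  # skip the header row
        return ", ".join(row[0] for row in rows if row and row[0])

# e.g. print(org_ids_from_export("accounts.csv"))
```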

Cohorts

  1. Who are all the users in your accounts?
  2. Who are all the admins / owners in your accounts? (Hint: check the current_organization_membership_level property)
  3. Who are the new users in your account this week?
  4. Who are the power users in your account? (Power users can be across multiple products, or you can split it by product. Define a power user as you see fit!)

Activation

  1. For the new users on your accounts, how many came back to analyze an insight, watch a recording, create a feature flag, etc. within their first week?
  2. What are the monthly activation rates across all your accounts for product analytics? (Hint: read this activation metric post and these insights)
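For intuition on what an activation rate measures, here's a small standalone sketch. This is illustrative only; inside PostHog you'd build this as a funnel or trends insight rather than writing code, and the function name and data shape here are made up:

```python
from datetime import timedelta

def activation_rate(first_seen, key_actions, window_days=7):
    """Share of new users who performed a key action within
    `window_days` of first being seen. `first_seen` maps user -> datetime;
    `key_actions` is a list of (user, datetime) events."""
    activated = set()
    for user, ts in key_actions:
        start = first_seen.get(user)
        if start and start <= ts <= start + timedelta(days=window_days):
            activated.add(user)
    return len(activated) / len(first_seen) if first_seen else 0.0
```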

Retention / Usage

  1. Which of your new users have retained their usage after their first 3 months?
  2. Which of your organizations have viewed /docs/ pages more than once in the past week? How many /docs/ pages views have there been across accounts for the past week?

Churn

  1. Have any of your accounts churned from a specific product within the last 3 months? If so, how many across all your organizations?

Strategic questions

  1. Are any of your users getting stuck setting up a product?
  2. What alerts / CDP destinations can you set up to help you monitor drastic changes in your account metrics in PostHog?
  3. What analysis would help understand why accounts take so long to convert from first login to consistent usage?

Example answers

If you get stuck or want to verify your implementation against an example, below are existing cohorts, insights, etc. to match each question.

Cohorts:

  1. Example cohort
  2. Example cohort
  3. Example cohort
  4. Example cohort

Activation:

  1. Example insight

Retention / Usage:

  1. Example insight
  2. Example insight

Churn:

  1. Example insight

Strategic questions:

  1. Example funnel - Where would you dive deep next from here?

New starter onboarding

CS and Onboarding | Source: https://posthog.com/handbook/cs-and-onboarding/new-hire-onboarding

Your first few weeks

Welcome to the PostHog Customer Success & Onboarding team! We only hire about 1 in 400 applicants, so you've done well to make it here! Unlike a lot of companies, we don't have a super-long onboarding process and would prefer you to be up and running with your customer base as quickly as possible. Here are the things you should focus on in your first few weeks at PostHog to help you achieve that.

Ramping up is mostly self serve - we won't sit you down in a room for training for 2 weeks. If you're not sure who is supposed to make something below happen, the person responsible is almost certainly you!

Also look at the sales team's onboarding page for guidance on what _not_ to do when you start. In general, there are a lot of good resources within sales to reference (we were previously one team!)

Day 1

Rest of week 1

Week 2

In-person onboarding

This typically happens in Week 2 or 3 and runs 3-4 days with a few existing team members, covering:

Weeks 3-4

How do I know if I'm on track?

By the end of month 1:

By the end of month 2:

By the end of month 3:

PostHog curriculum

PostHog has a lot of products! To help you figure out how to start and continue building your knowledge, here's a recommended list of topics to work through. Do not feel as though you need to learn all the products in your first few weeks. Learning is best done by working through customer use cases and requests.

Add and modify this list as you work through it! Products are added frequently, likely making this list outdated.

Fundamental

Product analytics
  1. Quick primer on Product analytics
  2. Creating insights:
  1. Persons
  1. Groups – what are they? What's the use case? How are they charged?
  2. Session replay – masking, cutting costs, filtering
  3. Toolbar – heatmaps, actions
Implementation
  1. How is PostHog implemented?
  2. Autocapture – how do you customize autocapture? How do you leverage autocapture?
  3. What are custom events? How do you set custom properties?
  4. What is identify? How do you set custom person properties? How do you merge users? What is alias?
  5. What are groups? How do you set group properties?
  6. What are cohorts? How do you create cohorts (static and dynamic)? How are they different from groups?
  7. Projects, organizations and access controls
  8. More advanced use cases:
Billing
  1. How to estimate costs
  2. Pre-paid credits
  3. Billing Limits

Intermediate

Feature flags
  1. Creating and using them in code
  1. Locally testing feature flags using toolbar
  2. Insights based on feature flags:
  1. Local evaluation
  2. Client-side bootstrapping
  3. Troubleshooting
Experiments
  1. Creating an experiment from PostHog UI
  2. Understanding MDE, primary metrics, secondary metrics, interpreting results
  3. Traffic allocation - configuring it and validating it. What are some reasons why 80/20 split may not be an 80/20 split?
  4. Returning users: user sees variant A in session 1, does not convert; user sees variant B in session 2, does convert
  1. No-code web experiments
LLM Analytics
  1. Implementing with your LLM SDK
  1. Generations vs traces vs spans vs sessions
  2. LLM Cost Analysis
  1. Insight analysis
Error Tracking
  1. Implementing error tracking
  2. Stack traces
  1. Exceptions vs issues
Other Products and Features
  1. Platform add-ons (Boost/Scale/Enterprise/Teams)
  2. Data pipelines
  1. Surveys
  1. Workflows
  2. Logs

Advanced

  1. SPA (single page apps)
  2. API

Alerting setup (for team leads)

We have certain automations in Vitally that your team lead needs to add you to. Please ask your team lead to add you.

Template for onboarding success plan

CS and Onboarding | Source: https://posthog.com/handbook/cs-and-onboarding/onboarding-success-plan

Introduction

Each customer is going to be a bit unique when it comes to onboarding and implementation; especially with such a broad product surface area! There are still some best practices we can follow to collaborate with the customer and plan for their success. It really helps customers to build engagement if we can collaborate with them on a plan for their first 30 days at PostHog.

Customize the below template

Template

PostHog 30-Day success plan

Customer: [Customer Name] | CSM: CSM or AE NAME | Start Date: [Date]

Our shared goal

Week 1: You're getting actionable insights from PostHog
Week 3: You've identified specific opportunities to improve your key metrics
Week 4: You're confident PostHog is driving measurable business value

---

Week 1: Quick setup & first insights

Goal: See value within 7 days

Your commitments
PostHog commitments

Weekly check-in: [Day/Time] - 30 minutes

---

Week 2-3: Feature Adoption & Optimization

Goal: Expand usage across your team

Together We'll:

Bi-weekly Check-in: [Day/Time] - 30 minutes

---

Week 4: Value Confirmation & Next Steps

Goal: Quantify business impact and plan expansion

Success Review:

Business Review: [Day/Time] - 45 minutes

---

Post-Launch: Optimization Check-in

Goal: Maximize ROI and identify expansion opportunities

6-8 Weeks After Launch:

Optimization Review: [Day/Time] - 60 minutes

---

Success Metrics

| Milestone | Target Date | Success Criteria | Status |
|-----------|-------------|------------------|--------|
| SDK Validated | Day 2 | Data flowing correctly into PostHog | ⏳ |
| Custom Events Defined | Day 3 | Business-specific tracking configured | ⏳ |
| Feature Flags & Groups Setup | Day 5 | B2B tracking and flags operational | ⏳ |
| Team Trained | Day 7 | 3+ people actively using | ⏳ |
| Actionable Insights | Day 10 | 2+ specific findings identified | ⏳ |
| Feature Expansion | Day 20 | 2+ products actively used | ⏳ |
| Business Impact | Day 28 | Measurable metric improvement | ⏳ |

---

Key Contacts & Communication

PostHog Team
Your Team
Meeting Schedule

---

What You Can Expect From Us

✅ Rapid Response: Same-day replies to questions/issues from your PostHog Human
✅ Proactive Guidance: We'll suggest optimizations based on your usage
✅ Custom Resources: Tailored documentation and best practices
✅ Issue Resolution: PostHog human will resolve issues directly and escalate internally as needed
✅ Product Feedback: Feedback calls or user interviews with product managers or engineers

What We Need From You

✅ Clear Objectives: Tell us the specific metrics you want to improve
✅ Custom Event Planning: Help us understand your business-specific tracking needs
✅ Stakeholder Engagement: Keep key people involved and responsive
✅ Honest Feedback: Let us know what's working and what isn't
✅ Success Definition: Help us understand what ROI looks like for you

---

Questions or concerns? Reach out anytime - our success is measured by your success.

This plan is our shared roadmap. We'll adjust it based on your specific needs and progress along the way.

Renewals

CS and Onboarding | Source: https://posthog.com/handbook/cs-and-onboarding/renewals

Renewal principles

Being on a prepaid credit plan - usually annual - is a win-win solution for both PostHog and the customer. Customers get discounts on the credits they purchase and PostHog gets confirmed revenue.

When estimating renewal amount, we want to make sure we accurately determine the amount of credits the customer will need in the next 12 months (or equivalent period, e.g. if they prepaid for 6 months). This is not an opportunity to upsell - do that later by encouraging product usage.

This page walks through recommendations for approaching and handling renewals. Contract rules and how to create contracts are covered in relevant pages under our shared processes.

When to start

Start renewal conversations at least 2 months before the contract renewal date for customers you are already in frequent contact with. For customers who are quiet, start renewal discussions 3 months out to allow more time for re-engagement.

Vitally and Slack will keep you on track with automated reminders. When a customer hits the 2-month mark, they'll automatically enter the Upcoming renewal segment, you'll get a task assigned to you in Vitally, and Slack will send a notification.

Start by sending a message in the shared Slack channel. Things will change in a year – the person you worked with previously may not be the right person this time. Mention when the customer is set to renew and ask if they have any preferred next steps.

As you make progress in the renewal discussions, update the renewal opportunity in Salesforce.

Unique Renewal Cases

Customers who are projected to run out of credit before renewal

You will get notified by credit bot in Slack if a customer is set to run out of credits before their renewal date. This is considered an early renewal and follows the same process. If the customer will likely run out of credits before the renewal is done, make sure they have a credit card on their account so any overage bills will be paid.

Customers who are projected to have expiring credits at the end of their contract

It is sometimes the case that a customer will have a credit balance on their account when their contract term ends. This balance will expire, and they will be moved to monthly payments. We have rules in place for this situation that allow customers to carry over credits on a flat (or higher) renewal. If you notice a customer trending towards this, engage with them to explain the credit expiry and the options available. Use this call to explore projected growth and other use cases and features. In these cases, renewal discussions should start 3 months out, to give time to explore new features and determine whether carrying over the credits is valuable to the customer.

Customers with irregular contracts

Many customers are on legacy contracts that do not adhere to our contract rules. This could include non-Net 30 payment terms, unique discounts, legacy pricing, or monthly/quarterly payments. It should be a priority to migrate these customers to standard pricing and discounts. Although the conversations may be difficult, we should, whenever reasonable, stick to the pricing in our handbook, and freely share the handbook with the customer to support our position. Trust your judgement on when irregular terms are deal-breakers and worth keeping.

Renewal discussions

Renewal conversations are best done on a call. There can be a lot of moving parts so talking through it is usually a good idea.

Before the call, review your customer's usage and start a quote in Quotehog. If you need to look at data beyond the last 6 months, you can use this PostHog dashboard and edit the variables. Check if your customer is on any legacy pricing tiers – either talk to them about moving to standard pricing, or take it into account when building a quote.

This call can be an opportunity to explore your customer's PostHog experience so far and upcoming initiatives that you can build on in the future. It's also a good idea to explain how contracts, credits, and discounts work at PostHog – our pricing philosophy and contract rules are handy pages to bring up.

When you walk through the quote, start by looking at their past usage – try to anchor to the main products they're using, as there can be a lot of numbers to look at. Explain how you estimated the usage for each product to arrive at the final number. Check in with your customer throughout to sense-check that you're on the right track.

After the call, share the public quote link with your customer along with any usage information you shared on the call.

Hogref Schema

Docs and Wizard | Source: https://posthog.com/handbook/docs-and-wizard/_snippets/hogref-schema

The hierarchy of the HogRef JSON schema is as follows:

Root
├── info (metadata)
├── categories[] (string labels)
├── classes[]
│   ├── functions[]
│   │   ├── params[]
│   │   ├── returnType
│   │   ├── examples[]
│   │   ├── throws[]
│   │   └── overloads[]
│   ├── properties[]
│   ├── staticMethods[]
│   └── events[]
└── types[]
    ├── properties[]
    ├── enumValues[]
    └── generic[]
Root Object

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| id | string | ✓ | Unique identifier for the SDK (e.g., 'posthog-js', 'stripe-node') |
| schemaVersion | string | | Version of this schema format being used |
| info | Info | ✓ | Metadata about the SDK |
| classes | Class[] | ✓ | Main classes/modules exposed by the SDK |
| types | Type[] | | Type definitions, interfaces, and enums |
| categories | string[] | | List of functional categories for organizing methods |

---

Info

Metadata about the SDK.

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| id | string | ✓ | Package/library identifier |
| title | string | ✓ | Human-readable name of the SDK |
| version | string | ✓ | Current version of the SDK |
| description | string | | Brief description of what the SDK does |
| slugPrefix | string | | URL-friendly prefix for documentation links |
| specUrl | string (uri) | | URL to the source specification or repository |
| docsUrl | string (uri) | | URL to the official documentation |
| license | string | | License type (e.g., 'MIT', 'Apache-2.0') |
| platforms | string[] | | Supported platforms (e.g., 'browser', 'node', 'react-native') |

---

Class

Main classes/modules exposed by the SDK.

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| id | string | ✓ | Unique identifier for the class |
| title | string | ✓ | Display name of the class |
| description | string | | Overview of what this class provides |
| functions | Function[] | ✓ | Methods and functions available on this class |
| properties | Property[] | | Instance properties of this class |
| staticMethods | Function[] | | Static methods on this class |
| events | Event[] | | Events emitted by this class |

---

Function

Methods and functions available on a class.

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| id | string | ✓ | Unique identifier for the function |
| title | string | ✓ | Function name as it appears in code |
| description | string | | Brief description of what the function does |
| details | string \| null | | Extended explanation, usage notes, or caveats |
| category | string | | Functional category (e.g., 'Initialization', 'Capture') |
| releaseTag | ReleaseTag | | Stability/visibility status of the function |
| showDocs | boolean | | Whether to display in public documentation |
| params | Parameter[] | | Function parameters |
| returnType | TypeReference | | Return type of the function |
| examples | Example[] | | Code examples showing usage |
| throws | ThrowsClause[] | | Exceptions/errors that may be thrown |
| since | string | | Version when this function was introduced |
| deprecated | string \| boolean | | Deprecation notice or true if deprecated |
| seeAlso | string[] | | Related functions or documentation links |
| path | string | | Source file path for this function |
| async | boolean | | Whether this is an async function |
| overloads | FunctionOverload[] | | Alternative function signatures |

---

Parameter

Function parameters.

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| name | string | ✓ | Parameter name |
| type | string | ✓ | Type annotation for the parameter |
| description | string | | What this parameter is for |
| isOptional | boolean | | Whether this parameter is optional |
| defaultValue | string | | Default value if not provided |
| isRest | boolean | | Whether this is a rest parameter (...args) |

---

TypeReference

Reference to a type, used for return types and property types.

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| id | string | | Reference ID to a type definition |
| name | string | ✓ | Display name of the type |

---

Type

Type definitions, interfaces, and enums.

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| id | string | ✓ | Unique identifier for this type |
| name | string | ✓ | Type name |
| description | string | | What this type represents |
| kind | TypeKind | | Kind of type definition |
| properties | Property[] | | Properties for object types |
| enumValues | EnumValue[] | | Values for enum types |
| example | string | | Inline type definition or usage example |
| path | string | | Source file path |
| extends | string[] | | Types this type extends |
| generic | GenericParameter[] | | Generic type parameters |

---

Property

Properties on types or classes.

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| name | string | ✓ | Property name |
| type | string | ✓ | Type of the property |
| description | string | | What this property represents |
| isOptional | boolean | | Whether this property is optional |
| isReadonly | boolean | | Whether this property is read-only |
| defaultValue | string | | Default value |
| deprecated | string \| boolean | | Deprecation notice |

---

Example

Code examples demonstrating usage.

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| id | string | | Unique identifier for the example |
| name | string | | Title describing what the example demonstrates |
| code | string | ✓ | The example code |
| language | string | | Programming language (e.g., 'javascript', 'typescript') |
| description | string | | Additional explanation of the example |

---

Event

Events emitted by a class.

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| name | string | ✓ | Event name |
| description | string | | When this event is emitted |
| payload | string | | Type of data passed to event listeners |

---

ThrowsClause

Exceptions/errors that may be thrown by a function.

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| type | string | | Type of error thrown |
| description | string | | When this error is thrown |

---

EnumValue

Values for enum types.

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| name | string | ✓ | Enum member name |
| value | string \| number | ✓ | Enum member value |
| description | string | | What this enum value represents |

---

GenericParameter

Generic type parameters.

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| name | string | ✓ | Generic parameter name (e.g., 'T') |
| constraint | string | | Type constraint (e.g., 'extends string') |
| default | string | | Default type |

---

FunctionOverload

Alternative function signatures for overloaded functions.

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| params | Parameter[] | | Parameters for this overload |
| returnType | TypeReference | | Return type for this overload |
| description | string | | Description specific to this overload |

---

Enumerations

ReleaseTag enum:

| Value | Description |
|-------|-------------|
| public | Stable, public API |
| beta | Beta feature, may change |
| alpha | Alpha feature, likely to change |
| internal | Internal use only |
| deprecated | Deprecated, avoid use |

TypeKind enum:

| Value | Description |
|-------|-------------|
| interface | Interface definition |
| type | Type alias |
| enum | Enumeration |
| class | Class definition |

Example JSON
{
  "id": "example-sdk",
  "schemaVersion": "1.0",
  "info": {
    "id": "example-sdk",
    "title": "Example JavaScript SDK",
    "version": "1.0.0",
    "description": "Example SDK for tracking events.",
    "slugPrefix": "example-sdk",
    "specUrl": "https://github.com/example/example-sdk"
  },
  "categories": ["Capture"],
  "classes": [
    {
      "id": "ExampleClient",
      "title": "ExampleClient",
      "description": "The main client for tracking events.",
      "functions": [
        {
          "id": "capture",
          "title": "capture",
          "description": "Captures an event with optional properties.",
          "details": "You can capture arbitrary object-like values as events.",
          "category": "Capture",
          "releaseTag": "public",
          "showDocs": true,
          "params": [
            {
              "name": "event_name",
              "type": "string",
              "description": "The name of the event (e.g., 'Sign Up', 'Button Click')",
              "isOptional": false
            },
            {
              "name": "properties",
              "type": "Properties | null",
              "description": "Properties to include with the event",
              "isOptional": true
            }
          ],
          "returnType": {
            "id": "CaptureResult | undefined",
            "name": "CaptureResult | undefined"
          },
          "examples": [
            {
              "id": "basic_event_capture",
              "name": "basic event capture",
              "code": "\n\n// basic event capture\nclient.capture('button-clicked', {\n    button_name: 'Get Started',\n    page: 'homepage'\n})\n\n"
            }
          ],
          "path": "lib/src/client.d.ts"
        }
      ]
    }
  ],
  "types": [
    {
      "id": "Properties",
      "name": "Properties",
      "properties": [],
      "path": "lib/src/types.d.ts",
      "example": "{\n    $timestamp: '2024-05-29T17:32:07.202Z',\n    $os: 'Mac OS X',\n    $browser: 'Chrome'\n}"
    }
  ]
}
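A HogRef document like the example above can be consumed with ordinary JSON tooling. As a minimal sketch, this walker lists a signature for every function in the document (the field names come from the schema tables above; the signature format itself is made up):

```python
import json

def list_signatures(hogref):
    """Yield 'Class.function(param, param?)' strings for every function
    in a HogRef document. Optional params are marked with '?'."""
    for cls in hogref.get("classes", []):
        for fn in cls.get("functions", []):
            params = ", ".join(
                p["name"] + ("?" if p.get("isOptional") else "")
                for p in fn.get("params", [])
            )
            yield f'{cls["title"]}.{fn["title"]}({params})'

# Usage, assuming the example JSON is saved as example-sdk.json:
# with open("example-sdk.json") as f:
#     for sig in list_signatures(json.load(f)):
#         print(sig)  # e.g. ExampleClient.capture(event_name, properties?)
```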

API specifications

Docs and Wizard | Source: https://posthog.com/handbook/docs-and-wizard/api-specifications

PostHog's API specifications are (mostly) generated automatically from the OpenAPI spec. We have tooling to generate the API specification markdown files from the OpenAPI spec.

Where we publish the API specifications

Whenever you run the app locally, the API specification is available at /api/schema/, and you can view it using Swagger UI.

On the website, the API specification is available at /docs/api/. Some of these pages are hand-rolled, and some are generated from the OpenAPI spec.

| Page | Type |
|------|------|
| Overview | hand-rolled |
| Capture | hand-rolled |
| Flags | hand-rolled |
| Queries | hand-rolled |
| Actions | generated |
| Alerts | generated |
| Activity log | generated |
| Annotations | generated |
| Batch exports | generated |
| Cohorts | generated |
| Dashboards | generated |
| Dashboard templates | generated |
| Early access features | generated |
| Endpoints | generated |
| Environments | generated |
| Event definitions | generated |
| Events | generated |
| Experiments | generated |
| Feature flags | generated |
| Groups | generated |
| Groups types | generated |
| Hog functions | generated |
| Insights | generated |
| Invites | generated |
| Members | generated |
| Notebooks | generated |
| Organizations | generated |
| Persons | generated |
| Projects | generated |
| Property definitions | generated |
| Query | generated |
| Roles | generated |
| Session recordings | generated |
| Session recording playlists | generated |
| Sessions | generated |
| Subscriptions | generated |
| Surveys | generated |
| Users | generated |
| Web Analytics | generated |

How the website ingests the OpenAPI spec

The website ingests the OpenAPI specification during the Gatsby build process in two stages:

  1. During sourceNodes: The OpenAPI spec is fetched and parsed using OpenAPIParser and MenuBuilder from the redoc library. This creates a structured menu of API endpoints that's used for navigation. The menu groups endpoints and handles pagination for groups with more than 20 items.
  2. During onPostBuild: The build process fetches the OpenAPI spec from https://app.posthog.com/api/schema/ (or from the POSTHOG_OPEN_API_SPEC_URL environment variable if set). The spec is then passed to generateApiSpecMarkdown(), which:

The generated markdown files are then available at /docs/open-api-spec/{operationId}.md and are included in the documentation site's API reference section.
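Conceptually, the markdown-generation step maps each operationId in the spec to a file. Here's a minimal Python sketch of that mapping (the real implementation lives in the posthog.com Gatsby build; these helper names are illustrative, not the actual code):

```python
# Sketch of the markdown-generation stage, assuming a parsed OpenAPI spec
# as a plain dict. Function and file names are illustrative only.
HTTP_METHODS = {"get", "post", "put", "patch", "delete"}

def generate_api_spec_markdown(spec: dict) -> dict:
    """Map each operationId to a markdown page for /docs/open-api-spec/."""
    pages = {}
    for path, operations in spec.get("paths", {}).items():
        for method, op in operations.items():
            if method not in HTTP_METHODS:
                continue  # skip path-level keys like 'parameters' or 'servers'
            op_id = op["operationId"]
            body = (
                f"# {op.get('summary', op_id)}\n\n"
                f"`{method.upper()} {path}`\n\n"
                f"{op.get('description', '')}\n"
            )
            pages[f"{op_id}.md"] = body
    return pages
```

The key point is that the operationId doubles as the output filename, which is why the generated pages live at /docs/open-api-spec/{operationId}.md.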

How to update the OpenAPI spec

All of the automatically generated pages are sourced from the OpenAPI spec. To update the content of an automatically generated page, update the OpenAPI spec by making changes to the PostHog/posthog repository.

Updating the page title and description

These updates happen in the PostHog/posthog.com repository.

Page title: Update the titleMap object in src/templates/ApiEndpoint.tsx. For example, to change the "Actions" page title, modify the actions entry in the map.

Page description: Create or update an overview.mdx file in the corresponding API folder. The file should be located at contents/docs/api/{name}/overview.mdx, where {name} matches the API endpoint name (e.g., events, feature-flags).

Example: contents/docs/api/events/overview.mdx contains the description that appears at the top of the Events API page.

Updating the endpoint title and description

These updates happen in the PostHog/posthog repository.

Endpoint title: The title is auto-generated from the operationId in the OpenAPI spec using the generateName() function in src/templates/ApiEndpoint.tsx. To customize it, update the operationId or description in the Django viewset in the PostHog repository. In practice, updating the endpoint's URL path is what changes the title.

Endpoint description: Create an MDX file named after the endpoint's operationId in the appropriate API folder. The file should be located at contents/docs/api/{name}/{operationId}.mdx.

Example: contents/docs/api/feature-flags/feature_flags_list.mdx adds custom content that appears under the "List all feature flags" endpoint. The content from this file is rendered above the endpoint's description from the OpenAPI spec.
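The path conventions above are mechanical enough to express as code. A small illustrative sketch (these helpers are not part of the posthog.com build; they just encode the naming rules):

```python
# Illustrative helpers encoding the contents/docs/api/ path conventions
# described above. Not part of the real posthog.com build.

def overview_path(name: str) -> str:
    """Page description file for an API section, e.g. 'events'."""
    return f"contents/docs/api/{name}/overview.mdx"

def endpoint_description_path(name: str, operation_id: str) -> str:
    """Per-endpoint description file, named after the endpoint's operationId."""
    return f"contents/docs/api/{name}/{operation_id}.mdx"
```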

Updating the endpoint parameters and responses

The endpoint request body parameters, query parameters, path parameters, response body, response headers, API key scopes, etc. are all defined in the Django serializers and viewsets in the PostHog repository.

Generally, there are two types of "views" in Django and they require different annotations to generate accurate OpenAPI specs.

  1. Model-based CRUD views: These views are backed by models defined in the Django ORM. They map directly to Django model fields and generally don't need additional annotations for accurate request and response definitions.
  2. Function-based views: These views are backed by Python functions rather than models, and are generally annotated with @action decorators. For these views, we need to manually annotate request and response definitions.

If an endpoint needs additional annotation, you can use the @validated_request decorator to annotate the view. This decorator will use the serializers passed in for both validation and annotation of the request bodies, query parameters, and response bodies, ensuring the OpenAPI spec stays accurate (or we know when they're not).

Basic usage

The @validated_request decorator wraps a view function and provides validation for request and response data:

import uuid

from posthog.api.mixins import validated_request
from drf_spectacular.utils import OpenApiResponse
from rest_framework import serializers, status
from rest_framework.response import Response

# Django uses the serializer to validate request body data; validated_request infers the request and response schemas from the serializer definitions.
class EventCaptureRequestSerializer(serializers.Serializer):
    event = serializers.CharField(max_length=200, help_text="Event name")
    distinct_id = serializers.CharField(max_length=200, help_text="User distinct ID")
    properties = serializers.DictField(required=False, default=dict)

class EventCaptureResponseSerializer(serializers.Serializer):
    status = serializers.ChoiceField(choices=["ok", "queued"])
    event_id = serializers.UUIDField()
    distinct_id = serializers.CharField()

@validated_request(
    request_serializer=EventCaptureRequestSerializer,
    responses={
        200: OpenApiResponse(response=EventCaptureResponseSerializer),
    },
    summary="Capture an event",
    description="Sends an event to PostHog for tracking",
)
def capture_event(self, request):
    # Access validated request body data
    event_name = request.validated_data["event"]
    distinct_id = request.validated_data["distinct_id"]

    # Process the event...

    return Response(
        {
            "status": "ok",
            "event_id": str(uuid.uuid4()),
            "distinct_id": distinct_id,
        },
        status=status.HTTP_200_OK,
    )

Validating query parameters

Use query_serializer to validate query parameters:

class QueryParamSerializer(serializers.Serializer):
    page = serializers.IntegerField(required=False, default=1)
    limit = serializers.IntegerField(required=False, default=10, max_value=100)
    include_deleted = serializers.BooleanField(required=False, default=False)

@validated_request(
    query_serializer=QueryParamSerializer,
    responses={
        200: OpenApiResponse(response=ListResponseSerializer),
    },
)
def list_items(self, request):
    # Access validated query parameters
    page = request.validated_query_data["page"]
    limit = request.validated_query_data["limit"]

    # Use validated query params...
    return Response(...)

Multiple response status codes

Declare multiple possible response status codes:

@validated_request(
    request_serializer=EventCaptureRequestSerializer,
    responses={
        200: OpenApiResponse(response=EventCaptureResponseSerializer),
        400: OpenApiResponse(response=ErrorResponseSerializer),
        500: OpenApiResponse(response=ErrorResponseSerializer),
    },
)
def capture_event(self, request):
    try:
        # Process event...
        return Response(..., status=status.HTTP_200_OK)
    except ValidationError as e:
        return Response(
            {"type": "validation_error", "code": "invalid", "detail": str(e)},
            status=status.HTTP_400_BAD_REQUEST,
        )

No response body

Declare status codes with no response body using None:

@validated_request(
    responses={
        204: None,  # No response body
    },
)
def delete_item(self, request, pk):
    # Delete the item...
    return Response(status=status.HTTP_204_NO_CONTENT)

Validation modes

By default, @validated_request uses strict validation for requests (raises on invalid data) and non-strict for responses (logs warnings in DEBUG mode). You can control this:

@validated_request(
    request_serializer=MySerializer,
    responses={200: OpenApiResponse(response=MyResponseSerializer)},
    strict_request_validation=False,  # Log warnings instead of raising
    strict_response_validation=True,   # Raise on invalid responses
)
def my_endpoint(self, request):
    # ...

Which endpoints have validated request and response definitions

The @validated_request decorator is new and many endpoints have not been annotated yet. The following endpoints have been annotated:

We plan on slowly annotating all endpoints with the @validated_request decorator through Q1 2026.

The special case for Capture

Ingestion is effectively a separate service and isn't included in the OpenAPI spec. It also has special behaviors like batching and rate limiting that need to be documented separately, and it doesn't fit the classic RESTful API patterns as neatly as other endpoints do.

The ingestion team and docs team will need to work together to update the OpenAPI spec for the Capture endpoint.

How to publish changelog

Docs and Wizard | Source: https://posthog.com/handbook/docs-and-wizard/changelog

We have one of the coolest changelogs on the internet. It's also one of the busiest.

As a company that ships weirdly fast, it's important to share what we're working on with as many people as possible, as often as possible. The changelog is a great way to do that.

<ProductVideo videoLight="https://res.cloudinary.com/dmukukwp6/video/upload/changelog_handbook_1_8038f2d9d4.mp4" autoPlay={false} muted={false} loop={false} background={false} alt="The /changelog page on the website" classes="rounded" />

The /changelog page on the website

Changelog content and ownership

Technically speaking, the changelog is a stream of content that's published across multiple channels.

From start to finish, a changelog entry is:

  1. Posted in the #changelog Slack channel
  2. Published on the website by
  3. Produced into a video by
  4. And then sent in an email by

The engineer is responsible for making sure their feature appears in the #changelog Slack channel and writing the initial draft (details below).

This page mainly covers the first two steps.

The changelog code and data (stored in our Strapi CMS) is maintained by the . To learn more about how the features work, check out their roadmap and changelog handbook pages.

What gets included

New features! But changelog entries can also include beta launches, UX improvements, or performance improvements.

For engineers, here's the rule of thumb: if you think an update (small or big) is worth sharing with users, it's probably worth posting about in the changelog.

A published changelog entry

How the publishing process works

We have an end-to-end process for moving shipped features into the website changelog.

  1. An engineer merges a feat PR into the monorepo or rolls out a feature flag.
  2. Relay workflows are triggered, which classify and summarize the PR or flag.
  3. The feature is automatically posted in the #changelog Slack channel if classified as "impactful" by the workflow.
  4. The PR author or engineer is tagged in the Slack thread.
  5. The engineer writes the initial changelog draft (2-3 sentences and screenshots) and replies to the thread.
  6. At the end of every week, the team reviews the #changelog channel, compiles the entries, edits them, and then publishes to the /changelog page.

Anyone can manually post in the #changelog Slack channel if something is worth sharing but isn't captured by the automated workflow.

The #changelog Slack channel

How to publish changelog yourself

People are encouraged to self-serve and publish changelog entries. Here's how.

You must be logged into your posthog.com account. Only website moderators (a.k.a. PostHog employees) are permitted to publish changelog entries.

Option 1: The main changelog

Go to the /changelog page and click the + button in the top right corner.

<ProductVideo videoLight="https://res.cloudinary.com/dmukukwp6/video/upload/changelog_form_c7f3d3a351.mp4" alt="Changelog form" classes="rounded" />

Fill out the changelog form and click Create to publish. The changelog entry will appear on the website on the next website build, which is usually when a PR is merged into the master branch.

| Field | Required | Recommended value |
|-------|----------|-------------------|
| Title | Yes | The title of the changelog entry. Keep it short and sweet. |
| Description | Yes | The description with native Markdown support. Add screenshots or gifs here. |
| Hero image | No | We leave this empty. We add images in the description field for more control. |
| Status | Yes | It must be set to Complete to appear in the changelog. |
| Date | Yes | The completed date of the changelog entry. |
| Team | Yes | The team that shipped the feature. |
| Author | No | We normally leave this blank because we pull in GitHub PR metadata which includes author and reviewers. |
| Product or feature | Yes | The category or product area of the feature. Select Uncategorized if nothing fits. |
| Type | Yes | Set to New feature for most changelog entries. |
| GitHub URLs | Yes | It's technically optional, but the GitHub URL populates the changelog entry with the feature's PR metadata. |
| Category | Yes | The product category of the changelog entry. |
| Show on homepage | No | Always set the toggle to off or no. |

Option 2: The product changelogs

Each product has a dedicated changelog page in their docs. You can also publish from there using the + Add changelog button.

Each product should have a changelog page in their docs

| Product | Changelog page |
|---------|----------------|
| PostHog AI | /docs/posthog-ai/changelog |
| Product Analytics | /docs/product-analytics/changelog |
| Session Replay | /docs/session-replay/changelog |
| Error Tracking | /docs/error-tracking/changelog |
| LLM Analytics | /docs/llm-analytics/changelog |
| Feature Flags | /docs/feature-flags/changelog |
| ... | ... |

Option 3: Automated drafting via Slack reaction

The team uses an automated flow to draft changelog entries directly from Slack.

  1. React with ✅ on an entry in the #changelog Slack channel. We recommend reacting to the top-level message.
  2. This kicks off a Relay workflow where Claude writes the draft in the required format – YAML frontmatter for the Strapi fields, and Markdown for the body.
  3. The draft opens as a new issue in the changelog-drafts repo.
  4. Review the draft in the GitHub issue, edit as needed, then add the publish label when it's ready to go.
  5. A GitHub Action POSTs the entry to Strapi. It adds a success or failure label (synced or sync-failed) and comment to the GitHub issue, and self-closes on success.
  6. The changelog entry will appear on the website on the next website build.

Because Claude is writing the initial draft, make sure you review the draft in detail before you add the publish label – don't rubber-stamp it.
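Because the drafts pair YAML frontmatter (the Strapi fields) with a Markdown body, the publish step has to split the two before POSTing. A minimal sketch, assuming flat key: value frontmatter (the real field names and Strapi payload shape aren't shown here):

```python
# Sketch of splitting a changelog draft into Strapi fields and a Markdown
# body. The '---'-delimited frontmatter format matches the description above;
# the exact fields and the Strapi endpoint are assumptions.

def parse_changelog_draft(draft: str):
    """Return (fields, body) from a frontmatter-plus-Markdown draft.

    Only flat 'key: value' pairs are handled in this sketch; real YAML
    would need a proper parser.
    """
    _, frontmatter, body = draft.split("---", 2)
    fields = {}
    for line in frontmatter.strip().splitlines():
        key, _, value = line.partition(":")
        fields[key.strip()] = value.strip()
    return fields, body.strip()
```

In a real GitHub Action, the resulting fields dict would become the JSON payload POSTed to Strapi, and the body would be the entry's Markdown description.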

How to use the content writer agent

Docs and Wizard | Source: https://posthog.com/handbook/docs-and-wizard/content-writer-agent

We built an agent that automatically drafts docs PRs in posthog.com when your code changes are merged into the posthog monorepo.

The content writer agent leverages the Inkeep platform to index and reference the PostHog website, codebase, and our docs style guide, so its drafts are usually a solid starting point — but they still need your review for technical accuracy.

Who owns what

Product engineers own the docs for their products. When the agent opens a docs PR based on your merged code, you're responsible for reviewing it for technical accuracy, iterating on it until it's right, and merging it. You don't need docs team sign-off — treat it like any other PR for your product.

The docs team does not review every docs PR. Engineers loop us in when they want a second opinion. We're responsible for building the system, monitoring its output quality over time, and tuning and steering the agent.

Agent system for the content writer

The workflow

When you merge a PR in the posthog monorepo, the Inkeep bot automatically opens a docs PR on posthog.com and tags you as a reviewer. From there:

  1. Review the draft for technical accuracy, completeness, code examples, and links.
  2. Iterate until the docs are correct. See how to make changes.
  3. (Optional) Loop in the docs team if you want a second opinion on style, structure, or information architecture — tag @team-docs-wizard as reviewers.
  4. Approve and merge when the docs are ready.

If you tagged @inkeep or made changes to the PR, a feedback form is posted after merge. This helps us understand where the agent fell short — please fill it out so we can continue improving the agent.

How to make changes

You can iterate on an Inkeep docs PR in a few ways:

What to check

When to loop in the docs team

You don't need our approval to merge a docs PR. But do loop us in when:

Tag @team-docs-wizard as reviewers on the PR, and we'll help out.

🌾 Context mill

Docs and Wizard | Source: https://posthog.com/handbook/docs-and-wizard/context-mill

The context mill repo gathers up-to-date context from multiple sources, packaging developer docs, prompts, and working example code into a versioned manifest, which can be shipped anywhere.

The PostHog MCP server currently fetches the context mill repo manifest and exposes it to any MCP-compatible client as resources and slash commands. This is what currently powers the PostHog AI wizard.

<ProductScreenshot imageLight="https://res.cloudinary.com/dmukukwp6/image/upload/q_auto,f_auto/context_mill_5b0f0323b7.png" alt="Context mill" />

The context mill effectively acts as an assembly line for turning disparate PostHog knowledge into something portable, something AI systems can reliably consume.

You can break its context engineering flow into three main stages.

  1. Context sourcing: The context mill can pull from the entire PostHog developer docs, with pages delivered from posthog.com as raw Markdown. It also includes curated, hand-crafted prompts and working example apps.
  2. Context assembly: The context mill transforms and packages the sourced context into a zip file manifest, which is meant to be portable and self-contained. We can structure and shape the manifest however we need.
  3. Context delivery: The context mill creates a versioned release for the manifest, which can be consumed by any agent or MCP server as a skill or resource.

Getting the best results requires some hand-cranking and refining. Context mill packages are created using a simple declarative YAML spec, so it's worth spending some time experimenting and tuning things until they feel right.
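The assembly stage above can be sketched with the standard library. The file layout and manifest fields here are assumptions for illustration, not the real context mill format:

```python
# Minimal sketch of the assembly stage: bundle docs, prompts, and example
# code into a versioned zip with a manifest. Layout and manifest fields
# are illustrative assumptions.
import io
import json
import zipfile

def assemble_package(version: str, sources: dict) -> bytes:
    """Bundle {archive_path: content} plus a manifest.json into a zip."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for path, content in sources.items():
            zf.writestr(path, content)
        manifest = {"version": version, "files": sorted(sources)}
        zf.writestr("manifest.json", json.dumps(manifest, indent=2))
    return buf.getvalue()
```

The versioned manifest is what makes the package portable: any consumer (an MCP server, an agent skill loader) can list the files it contains without unpacking everything.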

AI wizard

Docs and Wizard | Source: https://posthog.com/handbook/docs-and-wizard/developing-the-wizard

Developers love the wizard: it's the fastest way to get a deep, correct integration of PostHog, with none of the hallucinations that come from naive agent-based attempts.

For users, it is a one-line CLI command which runs an AI agent that automatically instruments PostHog into their codebases.

npx -y @posthog/wizard@latest

The wizard's architecture

The wizard is a CLI tool that runs locally against developers' projects.

It wraps the Claude Agent SDK to perform the integration, reviewing project code and making edits as needed.

To direct the agent, the wizard uses the PostHog context mill repository as a context provider. The context mill provides the agent with skills packages for great integrations, which include workflow prompts, documentation, and example code to maximize correctness and completeness.

The context mill repo generates a zip file and manifest that determines the structure of the skills packages.

Developing the wizard

Use the wizard workbench for local, end-to-end development of the wizard. The workbench can run the full wizard stack in local development mode, with hot reload where supported.

The workbench is also responsible for CI and testing the wizard across a matrix of test applications.

<ProductScreenshot imageLight="https://res.cloudinary.com/dmukukwp6/image/upload/q_auto,f_auto/wizard_workbench_flow_38a895e0bb.png" alt="Wizard workbench" />

Setting up the workbench

Clone these repos:

Next, configure the workbench to run the wizard and its local dependencies. Read the README.md file in the wizard-workbench repository to get started and create a .env file with the paths to the dependent repos.

Open a terminal at the workbench root and run:

phrocs

Using the MCP inspector

You'll want the link that looks like this from the mcp-inspector phrocs panel:

http://localhost:6274/?MCP_PROXY_AUTH_TOKEN=97e0ba...

Access the link in your browser, set the transport type to Streamable HTTP, and set the URL to http://localhost:8787/mcp for local development. (Alternatively, you can also inspect the production MCP by setting the URL to https://mcp.posthog.com/mcp).

<ProductScreenshot imageLight="https://res.cloudinary.com/dmukukwp6/image/upload/q_auto,f_auto/mcp_inspector_auth_c3f67d4db9.png" alt="MCP inspector" />

You'll need a PostHog API key to access the MCP server. Get one from the user API keys settings page. Open the Authentication tab and paste the key into the Bearer token field.

Hit connect and you'll see the MCP server's contents.

Handling wizard drama

First, identify the cause of failure: run npx -y @posthog/wizard@latest. You can find target projects known to work in the wizard-workbench repository.

Review the logs at /tmp/posthog-wizard.log. This log can be quite verbose, so agent-driven analysis may be helpful to quickly pinpoint where things are going wrong. Include the details below to help the agent diagnose the issue.

Potential points of upstream failure:

The wizard has the above upstream dependencies. It is also a bundle of client code, subject to various bugs and distribution mishaps. If upstream services are healthy but the wizard is still failing, it's likely a bug in the wizard itself.

Find a previous release version number and run npx @posthog/wizard@<version> against your example project. If the wizard runs successfully, you can compare the logs to the current release to see what changed. This will also confirm a safe rollback path.

To roll back, submit a PR that reverts the bad commits. The PR title must use a conventional commit prefix (e.g. revert: rollback to pre-X.Y.Z). Once merged, release-please will auto-create a release PR with the version bump. Merge that release PR, and the publish workflow will publish the reverted code to npm.

Remember to do a quick sanity check after release with npx -y @posthog/wizard@latest to confirm your fix actually worked.

Declare an incident

If an upstream dependent PostHog service like OAuth or the LLM gateway is down, an incident may already be in progress. Check the #incidents channel for any related alerts. If not, declare an incident, describing the highest-level issue that's causing the wizard to fail.

If the wizard client code itself is failing, that's an incident as well.

Docs ownership

Docs and Wizard | Source: https://posthog.com/handbook/docs-and-wizard/docs-ownership

Product engineering teams are responsible for writing docs and ensuring they are up-to-date. This means:

Read writing docs as an engineer – it's really important!

The Docs & Wizard team is responsible for improving the docs. This means:

Ownership within the Docs & Wizard team

We've previously assigned ownership of areas of the PostHog platform and product docs to individuals, but we're presently more project-oriented.

You can view what we're working on right now by:

  1. Reading our goals on the page
  2. Dropping in on our #team-docs-and-wizard Slack channel

You can share ideas / requests for new docs in the #team-docs-and-wizard Slack channel, or by creating an issue on the posthog.com repo.

As ever, though, PRs > issues. ;)

Sources for inspiration

There are lots of places you can go to find inspiration for what to work on during your stint, such as:

FAQ

I'm really busy, can the team write docs for me?

We can help, but we can't do it all for you. We lack the context necessary to document new features. First drafts of documentation must always come from the relevant product team.

If you need help updating documentation:

Bottom line: It's much easier for the content team to improve a draft than write completely new documentation, especially when documenting new features. Pull requests > Issues.

Who should review docs updates?

Tag the docs reviewers team on GitHub and someone will come running.

How do I add images to my docs?

If you need to add images to your docs, please upload them to Cloudinary first and then embed them into the document.

You can embed light mode and dark mode versions of the image using this code snippet:

<ProductScreenshot
  imageLight = "https://res.cloudinary.com/dmukukwp6/image/upload/add_holdout_light_ce0827be42.png"
  imageDark = "https://res.cloudinary.com/dmukukwp6/image/upload/add_holdout_dark_cc687f7688.png"
  classes="rounded"
  alt="Screenshot of the form to create a new holdout"
/>

Style guide for writing docs

Docs and Wizard | Source: https://posthog.com/handbook/docs-and-wizard/docs-style-guide

First, you should start with two assumptions about our users.

  1. They're busy and have limited time.
  2. They're not experts and don't know what we know.

This style guide helps you write docs based on these assumptions.

These are guidelines, not rules. They exist to keep our docs consistent and polished, but good judgment matters more than strict adherence when you're writing. If something makes the docs clearer, more helpful, or just plain better, do it.

See the style guide from the for additional writing guidelines.

---

Tools to enforce style

Tools like prose linters and LLMs are effective at catching style guide violations that humans often miss. Use them to check your writing against this entire guide.

We apply style guides through multiple tools.

---

Voice and tone

Address the reader directly

Address the reader directly using "you" instead of "the user", "developers", or "we".

Do: "You can create an insight by clicking New insight."

Don't: "Users can create insights."

Use the imperative form and drop the "you" when giving instructions, commands, or guidance.

Do: "Create an insight by clicking New insight"

Use active voice

Active voice makes it clear who or what performs an action.

Do: "PostHog captures events automatically."

Don't: "Events are captured automatically by PostHog."

Exception: Use passive voice when the actor is unknown or unimportant.

Acceptable: "The data is encrypted at rest."

Use present tense

Write in present tense. Avoid future tense unless you are explicitly describing future behavior or outcomes.

Do: "The insight displays your data."

Don't: "The insight will display your data."

Be concise

Remove unnecessary words. Every clause should add either value or clarity.

Do: "Click Save"

Don't: "Now you can go ahead and click the Save button to save your changes"

Avoid unexplained jargon

When you introduce technical terms or acronyms, explain them on first use or link to a definition. Don't assume the reader knows what you're talking about.

Do: "Create a cohort to analyze behavior. A cohort is a group of users who share common properties."

Do: "Create a cohort — a group of users who share common properties — to analyze behavior."

Don't: "Enable LTV analysis by configuring your CDP and syncing cohort data to the warehouse."

Contractions

Use contractions to maintain a conversational tone.

Do: "That's it. The experiment is running."

Don't: "That is it. The experiment is running."

---

Product terminology

Capitalize product names

Always capitalize PostHog product names as proper nouns. Use "Product Analytics", not "product analytics".

Do: "Use Session Replay to understand user behavior."

Don't: "Use session replay to understand user behavior."

However, if you're referring to the general industry term or a feature that isn't specific to PostHog, use lowercase. For example: "many companies offer product analytics."

Keys and tokens

| Term | Description |
|------|-------------|
| Project token | The public identifier (starts with phc_) used in SDKs and the snippet to send events. This is NOT an API key. Never call it project API key. |
| Personal API key | A private key (starts with phx_) used for server-side API access. This IS an API key. |
| Feature flags secure API key | A separate key used for local evaluation of feature flags. |

Do: "Add your project token to the PostHog initialization code."

<!-- vale PostHogBase.ProjectToken = NO -->

Don't: "Add your project API key to the PostHog initialization code."

<!-- vale PostHogBase.ProjectToken = YES -->

PostHog platform

| Platform term | Description |
|---------------|-------------|
| PostHog | Use by default. Refers to our cloud platform. Most users are on cloud, so do not specify "Cloud" unless differentiating from self-hosted. |
| PostHog Cloud | Only use when explicitly differentiating cloud features from self-hosted deployments. |
| Self-hosted PostHog or hobby deployments | Use when referring to self-hosted installations. |

Do: "Go to Insights in the PostHog app and click New insight."

Do: "This feature is only available on PostHog Cloud."

Don't: "To create an insight on PostHog Cloud, go to the Insights tab."

---

Grammar and mechanics

Use American English

PostHog is a global company. Our team and our customers are distributed around the world. For consistency, we use American English spelling, grammar, date, and time formatting.

Do: color, analyze, behavior, license

Don't: colour, analyse, behaviour, licence

Sentence case for headings

Use sentence case for all headings. Capitalize only the first word and proper nouns like our products.

Do: "## How to create a feature flag"

Do: "## Get started with PostHog Feature Flags"

Don't: "## How To Create A Feature Flag"

Oxford comma

Always use the Oxford comma.

Do: "PostHog offers analytics, session replay, and feature flags."

Don't: "PostHog offers analytics, session replay and feature flags."

Numbers

Spell out numbers one through nine in prose; use numerals for 10 and above and for units of measurement.

Do: "You can create three dashboards" or "You can create 15 dashboards."

Do: "Set the timeout to 30 seconds."

Use straight apostrophes and quote marks

Many writing tools, such as Google Docs, Notion, and Word, add curly quotes and apostrophes. Please avoid using these. They can normally be turned off in the settings.

Use British-style en dashes

While we default to American English in most things, we prefer the British-style en dash ( – ) with a space on either side, rather than the longer em dash with no spaces (—) used in American English.

Please don't use a hyphen instead of en dash. On Macs, holding down Option and the hyphen key will give you an en dash.

Do: "Don't up vote your own content, and don't ask other people to – post it and pray."

Don't: "Don't up vote your own content, and don't ask other people to—post it and pray."

---

Word choice

Acronyms

Use all caps for acronyms and initialisms.

Do: SQL, API, HTML, CSS, JSON, REST, HTTP, URL, SDK, CLI, UI, UX

Don't: Sql, Api, Html

Follow official capitalization for branded technologies.

Do: GraphQL, WebSocket, PostgreSQL

Choose simple words

Choose simple, common words over complex alternatives.

| Instead of | Use |
|------------|-----|
| utilize | use |
| facilitate | help |
| commence | start, begin |
| subsequent | next |
| prior to | before |

Use precise verbs

Use precise verbs that clearly describe the action being performed.

| Vague | Specific |
|-------|----------|
| use the API | call the API |
| work with data | query data, analyze data |
| handle errors | catch errors, log errors |
| manage users | add users, remove users, assign roles |

Inclusive language

Prefer neutral, inclusive terms.

| Instead of | Use |
|------------|-----|
| blacklist/whitelist | denylist/allowlist |
| sanity check | validation, verification |
| master/slave | primary/secondary |

Avoid phrases that trivialize

Avoid words or phrases that trivialize the work. They can sound dismissive or minimize the reader's efforts.

Don't use words like "simply", "just", "easily", "obviously", "of course", and "clearly".

Do: "Add the SDK to your project."

Don't: "Simply add the SDK to your project."

---

Formatting and structure

Use descriptive headings

Headings should clearly and explicitly describe what's in the section. Prefer action-oriented titles over nouns and gerunds.

Do: "## How to create a feature flag"

Don't: "## Feature flag creation"

Do: "## Customize styles and layouts"

Don't: "## Customization"

Use short paragraphs

Avoid paragraphs longer than 3-4 lines. Break up longer content with line breaks, subheadings, lists, or visual elements as needed.

Bulleted lists

Use bullets for unordered items of equal importance. Default to prose when 1-2 items would be clearer as a sentence.

Do:

PostHog offers several products:

- Product Analytics

- Session Replay

- Feature Flags

- Experiments

Don't:

Feature flags let you:

- Control feature rollouts

Numbered lists

Use numbered lists when ordering, ranking, or hierarchy matters.

Do:

1. Click New insight

2. Select your event

3. Click Save

Definition-style lists

When listing items with descriptions, use a dash ( - ) to separate the item from its description. Don't use a colon.

Do:

- Product Analytics - Track user behavior and measure conversions

- Session Replay - Watch real user sessions to debug issues

- Feature Flags - Control feature rollouts and run experiments

Don't:

- Product Analytics: Track user behavior and measure conversions

- Session Replay: Watch real user sessions to debug issues

- Feature Flags: Control feature rollouts and run experiments

Punctuation in lists

Use a period when each item is a complete and standalone sentence (has a subject and verb and is an independent thought).

Don't use a period when items are phrases or fragments that complete an introductory phrase.

Be consistent within a single list. If one item is a partial sentence, make all items partial sentences.

Do:

PostHog offers several products:

- Product Analytics

- Session Replay

- Feature Flags

Do:

Use feature flags to:

- Control rollouts to specific users

- Run A/B tests on new features

- Disable features without redeploying

Do:

There are multiple ways to fetch data from PostHog.

- You can use the API.

- You can use the SDK.

- You can use webhooks or data pipelines.

Don't:

To set up PostHog:

- Install the SDK.

- Configure your project token.

- Start capturing events.

Tables

Use tables for listing multiple items across multiple attributes. When a bulleted list isn't easy to scan, try using a table instead.

| Plan | Events | Team members | Price |
|------|--------|--------------|-------|
| Free | 1M | Unlimited | $0 |
| Paid | 2M+ | Unlimited | $0.00031/event |

Don't:

Our plans:

- Free: 1M events per month, unlimited team members, $0

- Paid: 2M+ events per month, unlimited team members, $0.00031 per event

Bold text

Use bold for structured information and visual formatting like callout labels, definition lists, and problem/solution patterns.

Do: "**Note:** Use feature flags to control rollouts."

Do: "**Problem:** Events aren't appearing in the dashboard."

Avoid using bold text for general emphasis in prose. If something is important and needs extra emphasis, consider using a callout box instead.

Don't: "This is a **really important** step in the process."

Don't: "Make sure you **always** configure this setting before deploying."

Bold UI elements

Use bold for UI elements like buttons, menu items, labels, and text fields. Don't use quotes.

Do: Click **New insight** in the **Insights** tab.

Don't: Click the "New insight" button.

For nested UI elements, use > to connect them hierarchically.

Do: In PostHog, navigate to **Settings** > **API keys** > **Personal API key**.

Don't: In PostHog, navigate to Settings, look under API keys, and then click Personal API key.

Avoid excessive formatting

Don't use:

Links

Link the first mention of a PostHog term, feature, or SDK on a page to its docs page.

Example: "To create an insight, first capture events. Then, select the data you want to see."

Link directly to PostHog in-app pages using https://app.posthog.com/. Users are redirected automatically to the correct US or EU subdomain.

Do: "Go to the Insights tab and click New insight."

Don't: "Go to the Insights tab and click New insight."

Don't: "Go to the Insights tab and click New insight."

Link text should describe the destination. Avoid generic text like "click here" or "this page."

Do: "See our installation guide for instructions."

Don't: "Click this link for installation instructions."

---

Code

Use backticks

Follow language conventions

Use the standard style conventions for each programming language:

PostHog event and property naming

Always use snake_case for PostHog event and property names:

posthog.capture('user_signed_up', {
    user_id: '123',
    username: 'Jane Doe',
})

Never use camelCase or PascalCase for event or property names.
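To make the contrast concrete, here's a minimal runnable sketch. A stub stands in for the real client, but the `capture(event, properties)` call shape matches posthog-js:

```javascript
// Stub client so the example is self-contained; the real posthog-js
// client exposes the same capture(event, properties) call shape.
const captured = []
const posthog = {
    capture: (event, properties) => captured.push({ event, properties }),
}

// Do: snake_case event and property names
posthog.capture('user_signed_up', { user_id: '123', username: 'Jane Doe' })

// Don't: camelCase names like 'userSignedUp' or userId

console.log(captured[0].event)
```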

Show real-world examples

Use realistic examples that demonstrate actual use cases.

Do:

```js

posthog.capture('purchase_completed', {

product_id: 'prod_12345',

revenue: 49.99,

currency: 'USD'

})

```

Don't:

```js

posthog.capture('event', {

property: 'value'

})

```

Comment sparingly

Only add comments when code isn't self-explanatory:

// Don't show the survey if user dismissed it in the last 30 days
if (lastDismissed > Date.now() - 30 * 24 * 60 * 60 * 1000) {
    return
}

---

Screenshots and media

It's extremely important to ensure screenshots or videos don't show any personal or sensitive user information like emails, phone numbers, or other identifying details.

Screenshot requirements

To maintain consistency and clarity:

When to use videos

Use videos for:

Use Screen Studio with these settings:

How to publish changelog

Docs and Wizard | Source: https://posthog.com/handbook/docs-and-wizard/how-to-publish-changelog

We have one of the coolest changelogs on the internet. It's also one of the busiest.

As a company that ships weirdly fast, it's important to share what we're working on with as many people as possible, as often as possible. The changelog is a great way to do that.

<ProductVideo videoLight="https://res.cloudinary.com/dmukukwp6/video/upload/changelog_handbook_1_8038f2d9d4.mp4" autoPlay={false} muted={false} loop={false} background={false} alt="The /changelog page on the website" classes="rounded" />

The /changelog page on the website

Changelog content and ownership

Technically speaking, the changelog is a stream of content that's published across multiple channels.

From start to finish, a changelog entry is:

  1. Posted in the #changelog Slack channel
  2. Published on the website by
  3. Produced into a video by
  4. And then sent in an email by

The engineer is responsible for making sure their feature appears in the #changelog Slack channel and writing the initial draft (details below).

This page mainly covers the first two steps.

The changelog code and data (stored in our Strapi CMS) are maintained by the . To learn more about how the features work, check out their roadmap and changelog handbook pages.

What gets included

New features! But changelog entries can also include beta launches, UX improvements, or performance improvements.

For engineers, here's the rule of thumb: if you think an update (small or big) is worth sharing with users, it's probably worth posting about in the changelog.

A published changelog entry

How the publishing process works

We have an end-to-end process for moving shipped features into the website changelog.

  1. An engineer merges a feat PR into the monorepo or rolls out a feature flag.
  2. Relay workflows are triggered, which classify and summarize the PR or flag.
  3. The feature is automatically posted in the #changelog Slack channel if classified as "impactful" by the workflow.
  4. The PR author or engineer is tagged in the Slack thread.
  5. The engineer writes the initial changelog draft (2-3 sentences and screenshots) and replies to the thread.
  6. At the end of every week, the team reviews the #changelog channel, compiles the entries, edits them, and then publishes to the /changelog page.

Anyone can manually post in the #changelog Slack channel if something is worth sharing but isn't captured by the automated workflow.

The #changelog Slack channel

How to publish changelog yourself

People are encouraged to self-serve and publish changelog entries. Here's how.

You must be logged into your posthog.com account. Only website moderators (a.k.a. PostHog employees) are permitted to publish changelog entries.

Option 1: The main changelog

Go to the /changelog page and click the + button in the top right corner.

<ProductVideo videoLight="https://res.cloudinary.com/dmukukwp6/video/upload/changelog_form_c7f3d3a351.mp4" alt="Changelog form" classes="rounded" />

Fill out the changelog form and click Create to publish.

The changelog entry will appear on the website on the next website build, which is usually when a PR is merged into the master branch.

| Field | Required | Recommended value |
|-------|----------|-------------------|
| Title | Yes | The title of the changelog entry. Keep it short and sweet. |
| Description | Yes | The description with native Markdown support. Add screenshots or gifs here. |
| Hero image | No | We leave this empty. We add images in the description field for more control. |
| Status | Yes | It must be set to Complete to appear in the changelog. |
| Date | Yes | The completed date of the changelog entry. |
| Team | Yes | The team that shipped the feature. |
| Author | No | We normally leave this blank because we pull in GitHub PR metadata, which includes author and reviewers. |
| Product or feature | Yes | The category or product area of the feature. Select Uncategorized if nothing fits. |
| Type | Yes | Set to New feature for most changelog entries. |
| GitHub URLs | Yes | It's technically optional, but the GitHub URL populates the changelog entry with the feature's PR metadata. |
| Category | Yes | The product category of the changelog entry. |
| Show on homepage | No | Always set the toggle to off or no. |

Option 2: The product changelogs

Each product has a dedicated changelog page in their docs that filters entries from the main changelog. You can also publish directly from these pages using the + Add changelog button.

Each product should have a changelog page in their docs

| Product | Changelog page |
|---------|----------------|
| PostHog AI | /docs/posthog-ai/changelog |
| Product Analytics | /docs/product-analytics/changelog |
| Session Replay | /docs/session-replay/changelog |
| Error Tracking | /docs/error-tracking/changelog |
| LLM Analytics | /docs/llm-analytics/changelog |
| Feature Flags | /docs/feature-flags/changelog |
| ... | ... |

Overview

Docs and Wizard | Source: https://posthog.com/handbook/docs-and-wizard

At PostHog, we want our docs to win over developers and give us a competitive edge.

The focuses on delivering a delightful developer experience, maintaining a well-organized knowledge base, and writing documentation that is a genuine pleasure to read for both humans and robots.

Our team's values

  1. Treat docs like a product
  2. Be practical, not just technical
  3. Great docs start with writing
  4. Teach the robots
  5. Help our customers win

1. Treat docs like a product

We treat our docs like a product because they are. They have users (readers and AI agents), use cases (implementation, education, troubleshooting, etc.), and success metrics (more on this later).

Documentation presents unique challenges and opportunities. But ultimately, great docs drive product activation by providing the right information, in the right way, at the right stage in a developer's journey.

This means helping developers set up their very first PostHog event and helping existing customers with complex configurations integrate their third or fourth PostHog product. It also means enabling our docs to be used as context for AI workflows. It's a wide spectrum, but the goal is the same: help developers self-serve and succeed with PostHog.

Docs are a core part of the product experience. So when you're working on them, take some time to ask:

2. Be practical, not just technical

Developers don't want abstract examples or out-of-context code snippets. They want to solve real problems and use cases.

We want to showcase code that's runnable, practical, and immediately useful.

As a rule of thumb, our docs should show code within application context whenever possible. The examples we provide should reflect how PostHog is actually used in production, in the wild.

Isolated example:

posthog.capture({
  distinctId: 'distinct_id_of_the_user',
  event: 'user_signed_up',
  properties: {
    login_type: 'email',
    is_free_trial: true,
  },
})

If a code snippet has missing application context or business logic, it can be improved.

In-context example:

// Importing PostHog into the app

// Initializing PostHog client
const posthog = new PostHog(
  '<ph_project_token>',
  { host: 'https://us.i.posthog.com' }
)

app.post('/api/signup', async (req, res) => {
  const { email, password } = req.body

  try {
    const user = await createUser({ email, password })
    // Calling PostHog inside business logic // HIGHLIGHT
    posthog.capture({ // HIGHLIGHT
      distinctId: user.id, // HIGHLIGHT
      event: 'user_signed_up', // HIGHLIGHT
      properties: { // HIGHLIGHT
        login_type: 'email', // HIGHLIGHT
        is_free_trial: true // HIGHLIGHT
      }, // HIGHLIGHT
    }) // HIGHLIGHT
    res.status(201).json({ message: 'Signup successful', userId: user.id})
  } catch (error) {
    res.status(500).json({ error: 'Signup failed' })
  }
})

The in-context example is more verbose, but much more useful. It shows how PostHog fits into applications, which helps developers understand when and where to use it.

3. Great docs start with writing

Writing is something we love to do here at PostHog. The principles of PostHog writing and marketing all still apply here.

But documentation has a few unique demands.

People come to our docs looking for answers, usually with limited time. We focus on precise, consistent writing because it contributes to a smoother, more efficient learning experience.

Docs need to be finely tuned. Even small oversights or tiny mistakes can create snags that confuse readers. So nitpicking isn’t just allowed, it’s encouraged!

Nitpick #1:

~~Just~~ Click Save ~~and the insight will be created~~ to create an insight.

Nitpick #2:

~~Events are captured automatically by PostHog.~~ PostHog captures events automatically.

Nits and semantics and formatting (oh my!) – they're all part of the fun of technical writing. Careful attention to detail is what turns good docs into great ones, so don't shy away from it.

This does not mean our docs have to be dry or academic. In fact, they should have a natural flow that makes them easy to read. Be open, direct, and opinionated. Don't be afraid to add humor and personality when there's opportunity.

PostHog's writing voice is one of the key things that sets us apart from a sea of generic SaaS platforms. It's important that this voice can come through in our docs.

The docs style guide is a key reference we'll continue to update with examples and best practices.

4. Teach the robots

Robots aren’t a future concern. They're already here, and they're changing how people discover, evaluate, and use PostHog.

AI workflows depend on accurate and up-to-date context. Our documentation, the knowledge base, is the largest maintained source of natural language context we have. LLMs read our docs. Developers paste them into prompts. Agents use them as skills.

In other words, our docs teach AI how to be useful.

The AI wizard is a direct outcome of this philosophy. An agent that automatically integrates multiple PostHog products across frameworks, the wizard is the fastest way to activate PostHog because it consumes our docs as structured, on-demand context. It closes the gap between curiosity and real usage.

Making this possible requires context engineering and shaping our documentation into a moving, living system. We code as much as we write.

5. Help our customers win

Our customers are smart, discerning, and ambitious. They're here to build. They want to 10x their own products.

Our docs exist to help them win.

This means we should include details beyond references and technical implementations. We should share examples, use cases, and the big picture reasons why they should use a product or feature.

How we prioritize

Here's how we loosely define high-priority docs work:

Measuring success

Our north star indicators that tell us if our docs are heading in the right direction:

MDX and components for docs

Docs and Wizard | Source: https://posthog.com/handbook/docs-and-wizard/mdx-and-components

The website's core technical architecture is built and maintained by the . See their handbook pages on the website and MDX components for more information.

Gatsby and MDX features

Snippets for content reuse

Create snippets in a _snippets/ directory for content you want to reuse across multiple pages.

When to create a snippet:

MDX snippets

Create an MDX snippet for static reusable content like tables, callouts, or text blocks.


The error event includes the following properties:

| Property | Type | Description |
|----------|------|-------------|
| `$exception_message` | string | The error message |
| `$exception_type` | string | The error type |

Use the snippet in an MDX page like this:
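The import below uses standard MDX syntax; the snippet file name `error-properties.mdx` is an illustrative assumption, not a file confirmed by this page:

```mdx
import ErrorProperties from './_snippets/error-properties.mdx'

<ErrorProperties />
```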

TSX snippets

Create a TSX snippet for dynamic content for lightweight components or React hooks.

// _snippets/installation-platforms.tsx

export default function InstallationPlatforms() {
  const platforms = usePlatformList('docs/[product]/installation', 'installation')
  return
}

Use the snippet in an MDX page like this:
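Standard MDX import syntax again, using the `installation-platforms.tsx` snippet defined above (whether the `.tsx` extension is required in the import path is an assumption):

```mdx
import InstallationPlatforms from './_snippets/installation-platforms.tsx'

<InstallationPlatforms />
```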

If the TSX snippet contains substantial logic, create a reusable component or hook in /src/components/ or /src/hooks/ instead.

Frontmatter

All .mdx pages support frontmatter, which Gatsby uses to configure page metadata.


---
title: Install PostHog for React
platformLogo: react
showStepsToc: true
---

This guide walks you through installing PostHog for React.

Here's a table of available frontmatter fields:

| Field | Purpose | Example |
|-------|---------|---------|
| title | Page title | React installation |
| platformLogo | Platform icon key for installation pages | react, python, nodejs |
| showStepsToc | Show steps in right sidebar TOC | true |
| hideRightSidebar | Hide right sidebar TOC on start-here and changelog pages | true |
| contentMaxWidthClass | Control and customize the width of main content column | max-w-5xl |
| tableOfContents | Override the auto-generated TOC with custom entries | [{ url: 'section-id', value: 'Section Name', depth: 1 }] |

Magic <placeholders>

You can use magic placeholder strings to replace the project token, project name, app host, region, and proxy path in code blocks with values from the user's project.

| Placeholder | Description | Default |
|-------------|-------------|---------|
| <ph_project_token> | Your PostHog project token | n/a |
| <ph_project_name> | Your PostHog project name | n/a |
| <ph_app_host> | Your PostHog instance URL | n/a |
| <ph_client_api_host> | Your PostHog client API host | https://us.i.posthog.com |
| <ph_region> | Your PostHog region (us/eu) | n/a |
| <ph_posthog_js_defaults> | Default values for posthog-js | 2026-01-30 |
| <ph_proxy_path> | Your proxy path | relay-XXXX (last 4 digits of project token) |

You can use these placeholders in the code block like this:

const client = new PostHog('<ph_project_token>', { host: '<ph_client_api_host>' })

Docs components

Screenshots and gifs

For UI screenshots and gifs with light and dark variants:

<ProductScreenshot
  imageLight="https://..."
  imageDark="https://..."
  alt="Descriptive alt text"
  classes="rounded"
/>

Videos

For videos like .mp4 or .mov files:

<ProductVideo
  videoLight="https://..."
  videoDark="https://..."
  alt="Descriptive alt text"
  classes="rounded"
  autoPlay={false}
  muted={true}
  loop={true}
  background={false}
/>

Multi-language code blocks

For code examples in multiple programming languages:

// JavaScript example

Python example

Callout boxes

You can add callout boxes to documentation to ensure skimmers don't miss essential information.

    Here is some information

Three styles are available:

They look like this:

Provide detail here. You can go on at length if necessary.

Provide detail here. You can go on at length if necessary.

Provide detail here. You can go on at length if necessary.

Valid icons are listed in PostHog's icon library.
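As a sketch of the markup, here is a CalloutBox (the component name appears elsewhere in this handbook; the `icon`, `title`, and `type` prop names are assumptions based on this section, not a confirmed API):

```mdx
<CalloutBox icon="IconInfo" title="Title of callout" type="fyi">

Provide detail here. You can go on at length if necessary.

</CalloutBox>
```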

Steps

Use the <Steps> component for content that walks the reader through a strict sequence of instructions. Think how-to guides or step-by-step tutorials.



Steps are automatically numbered.



Write the _content_ in **markdown**.



Add checkpoints to help readers verify their progress.

Our mdx parser does not play nice with certain whitespace. When using the component, make sure you:

Decision tree

Use decision trees to help users choose between 2-6 options:

<DecisionTree
    questions={[
        {
            id: 'platform',
            question: 'What platform are you using?',
            options: [
                { value: 'web', label: 'Web' },
                { value: 'mobile', label: 'Mobile' },
            ],
        },
    ]}
    getRecommendation={(answers) => {
        // return recommendation based on answers
    }}
/>
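A sketch of what the `getRecommendation` callback could return for the tree above. The return shape (`title`/`url`) is an illustrative assumption; the real component may expect something different:

```javascript
// Illustrative only: the real DecisionTree component may expect a
// different recommendation shape.
function getRecommendation(answers) {
    if (answers.platform === 'web') {
        return { title: 'posthog-js snippet', url: '/docs/libraries/js' }
    }
    return { title: 'Mobile SDKs', url: '/docs/libraries' }
}

console.log(getRecommendation({ platform: 'web' }).title)
```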

PostHog AI components

You can also link to PostHog AI, which used to be called Max AI.

Use <AskMax> to provide in-context help:

The <AskMax> component opens the PostHog AI chat window directly on the website. Use this for documentation pages where users might need help understanding concepts or troubleshooting. Unlike <MaxCTA> which links to the PostHog app, this keeps users in the docs context.

<AskMax
    quickQuestions={[
        'How do I mask sensitive data?',
        'Can I enable recordings only for certain users?',
        'How can I control costs?',
    ]}
/>

Use <AskAIInput> for troubleshooting sections:

## Have a question? Ask PostHog AI

Platform logos

All platform logos are centralized in src/constants/logos.ts. To add a new platform:

  1. Upload SVG to Cloudinary
  2. Add key to src/constants/logos.ts in camelCase
  3. Reference in MDX frontmatter: platformLogo: myPlatform

Use consistent naming: stripe, react, nodejs, etc.
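The steps above can be sketched as follows. The object shape and Cloudinary URLs are assumptions for illustration (the real src/constants/logos.ts is TypeScript and may differ):

```javascript
// Illustrative shape only: the real src/constants/logos.ts may differ.
const LOGOS = {
    react: 'https://res.cloudinary.com/example/react.svg',
    nodejs: 'https://res.cloudinary.com/example/nodejs.svg',
    // New platform key added in camelCase; referenced in MDX
    // frontmatter as `platformLogo: myPlatform`
    myPlatform: 'https://res.cloudinary.com/example/my-platform.svg',
}

console.log('myPlatform' in LOGOS)
```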

Debugging MDX issues

Common MDX parsing issues:

Run the formatter to catch issues:

pnpm format:docs

Onboarding docs

Docs and Wizard | Source: https://posthog.com/handbook/docs-and-wizard/onboarding-docs

Onboarding docs, or product installation docs, are special because these instructions are shared between the in-app onboarding flow and the getting started pages on the PostHog website.

These are some of the first pieces of docs a new user will see. They show users how to quickly set up and install a product, so they need to be up to date and accurate.

To help keep in-app and website onboarding docs in sync, there is a single source of truth for the onboarding docs in the posthog/posthog repository under the docs/onboarding directory. This means you only need to update the in-app onboarding docs in the PostHog monorepo, and the website docs will be updated automatically.

<ProductVideo videoLight="https://res.cloudinary.com/dmukukwp6/video/upload/posthog_onboarding_docs_13eddf168e.mp4" alt="How shared onboarding docs work" classes="rounded" autoPlay={false} muted={false} />

Video explainer of how onboarding docs and shared rendering work

Which products have shared onboarding docs

This is a relatively new feature, so we're still migrating old onboarding docs to the new system. As of February 2026:

| Product | Status |
|---------|--------|
| LLM analytics | ✅ Migrated |
| Product Analytics | ✅ Migrated |
| Web Analytics | ✅ Migrated |
| Session Replay | ✅ Migrated |
| Feature Flags | ✅ Migrated |
| Experiments | ✅ Migrated |
| Error Tracking | ✅ Migrated |
| Surveys | ✅ Migrated |
| Data Pipelines | ⏳ Not yet migrated |
| Data Warehouse | ⏳ Not yet migrated |
| Revenue Analytics | ⏳ Not yet migrated |
| PostHog AI | ⏳ Not yet migrated |
| Workflows | ✅ Migrated |
| Logs | ⏳ Not yet migrated |
| Endpoints | ⏳ Not yet migrated |

How it all works

Onboarding content is written once as React components in the posthog/posthog repo, then rendered in two places:

  1. PostHog monorepo: For in-app onboarding, the PostHog app imports these docs components directly and wraps them with OnboardingDocsContentWrapper, which provides UI components like Steps, CodeBlock, etc.
  2. PostHog.com repo: The website pulls the docs components from the monorepo via gatsby-source-git, a Gatsby plugin, and then renders them through MDX stub files that use a similar but different OnboardingContentWrapper to provide compatible UI components.

Both wrappers provide the same component names (Steps, CodeBlock, CalloutBox, etc.) so the shared content renders correctly in either place. When you merge changes to master in posthog/posthog, the website automatically pulls the updated content on its next build.

flowchart LR
    subgraph website["<strong>posthog.com repo</strong>"]
        gatsby["gatsby-source-git<br/>(pulls on build)"]
        mdx["MDX stub files"]
        wrapper2["OnboardingContentWrapper"]
        ui2["Docs pages"]
        gatsby --> mdx --> wrapper2 --> ui2
    end

    subgraph monorepo["<strong>posthog/posthog repo</strong>"]
        docs["docs/onboarding/*.tsx<br/>product installation docs"]
        wrapper1["OnboardingDocsContentWrapper"]
        ui1["In-app onboarding flow"]
        docs --> wrapper1 --> ui1
    end
    docs -.->|"auto-sync"| gatsby

If you need some help with structuring your files, this is the architecture for each repo:

posthog/posthog
├── docs/onboarding/
│   └── your-product/
│       ├── index.ts              # Barrel file re-exports all Installation components + snippets
│       ├── sdk-name.tsx          # getSteps + createInstallation
│       └── _snippets/
│           └── reusable-snippet.tsx
│
└── frontend/src/scenes/onboarding/
    └── sdks/your-product/
        └── YourProductSDKInstructions.tsx  # withOnboardingDocsWrapper

posthog/posthog.com
└── contents/docs/your-product/
    └── installation/
        ├── sdk-name.mdx          # MDX stub
        └── _snippets/
            ├── prefix-installation-wrapper.tsx  # Single file with ALL wrappers
            └── shared-helpers.tsx               # modifySteps helpers

For a complete working example, see the Session Replay implementation:

| Repo | File |
|------|------|
| posthog/posthog | docs/onboarding/session-replay/ |
| posthog.com | react.mdx |
| posthog.com | sr-installation-wrapper.tsx (single file with all wrappers) |

How to create/migrate new onboarding docs

Step 1: Create the shared component in posthog/posthog

  1. Navigate to the product directory in docs/onboarding/. If it doesn't exist, create it: docs/onboarding/your-product/
  2. Create a new .tsx file: docs/onboarding/your-product/filename.tsx
  3. Export a step function and Installation component. Use createInstallation to automatically handle the rendering:

   // Step function receives a single context object with all components
   export const getPythonSteps = (ctx: OnboardingComponentsContext): StepDefinition[] => {
       const { CodeBlock, Markdown, dedent, snippets } = ctx

       // Reuse installation steps from product-analytics
       const installationSteps = getPythonStepsPA(ctx)

       // Add feature-flag-specific steps
       const flagSteps: StepDefinition[] = [
           {
               title: 'Evaluate feature flags',
               badge: 'required',
               content: (
                   <>
                       Check if a feature flag is enabled:
                       {snippets?.BooleanFlagSnippet && <snippets.BooleanFlagSnippet />}
                   </>
               ),
           },
       ]

       return [...installationSteps, ...flagSteps]
   }

   // createInstallation wraps your step function into a ready-to-use component
   export const PythonInstallation = createInstallation(getPythonSteps)
  4. For reusable snippets, create them in docs/onboarding/<product>/_snippets/ and export a named component.
  5. Create the in-app wrapper in frontend/src/scenes/onboarding/sdks/your-product/. Use the withOnboardingDocsWrapper helper:

   const PYTHON_SNIPPETS = {
       PythonEventCapture,
       BooleanFlagSnippet,
       MultivariateFlagSnippet,
   }

   const FeatureFlagsPythonInstructionsWrapper = withOnboardingDocsWrapper({
       Installation: PythonInstallation,
       snippets: PYTHON_SNIPPETS,
   })

   export const FeatureFlagsSDKInstructions: SDKInstructionsMap = {
       [SDKKey.PYTHON]: FeatureFlagsPythonInstructionsWrapper,
       // ... other SDKs
   }
  6. Test in the app by running the monorepo locally and navigating to localhost:8010/onboarding. From this page, you can select your product and test.

Step 2: Create the website stub in posthog/posthog.com

  1. To test your changes locally, use the GATSBY_POSTHOG_BRANCH environment variable to point to your branch:
   GATSBY_POSTHOG_BRANCH=your-branch-name pnpm start

This tells gatsby-source-git to pull from your branch instead of master.

  2. Create a single TSX wrapper file at contents/docs/<product>/installation/_snippets/<prefix>-installation-wrapper.tsx that exports all SDK wrappers:

    import {
       JSWebInstallation,
       ReactInstallation,
       NextJSInstallation,
       // ... import all SDK installations
       SessionReplayFinalSteps,
   } from 'onboarding/session-replay'

   const SNIPPETS = {
       SessionReplayFinalSteps,
   }

   // Export a wrapper for each SDK
   export const SRJSWebInstallationWrapper = () => (
   )

   export const SRReactInstallationWrapper = () => (
   )

   export const SRNextJSInstallationWrapper = () => (
   )

   // ... repeat for all SDKs

The modifySteps prop lets you add website-specific steps (like "Next steps") that aren't needed in-app.

  3. Create an MDX stub file for each SDK at contents/docs/<product>/installation/<name>.mdx:
   ---
   title: React session replay installation
   platformLogo: react
   showStepsToc: true
   ---

   <!--
   This page imports shared onboarding content from the main PostHog repo.
   Source: https://github.com/PostHog/posthog/blob/master/docs/onboarding/session-replay/react.tsx
   -->

  4. Test locally: Run pnpm start and verify the page renders correctly at the expected URL.
  5. Commit and merge both the posthog/posthog and posthog/posthog.com PRs.

Exceptions to the standard pattern

The architecture described above works well for products that have their own SDK installation steps – but not every product fits this mold. Some products are exceptions, and that's fine. The shared onboarding pattern should only be used when it makes sense.

Workflows

Installing an SDK for Workflows is optional. Because of this, Workflows doesn't define its own shared doc components. There are no files in docs/onboarding/workflows/.

Instead, Workflows reuses the Product Analytics Installation components directly and transforms them with a modifySteps function at the in-app level:


// Filter out product-analytics-specific steps and add a workflows-specific final step
function workflowsModifySteps(steps: StepDefinition[]): StepDefinition[] {
    const installationSteps = steps.filter(
        (step) => !['Send events', 'Send an event', 'Send events via the API'].includes(step.title)
    )
    return [
        ...installationSteps,
        {
            title: 'Set up workflows',
            badge: 'recommended',
            content: ,
        },
    ]
}

const WorkflowsReactWrapper = withOnboardingDocsWrapper({
    Installation: ReactInstallation,
    modifySteps: workflowsModifySteps,
})

This pattern works because Workflows only needs a PostHog SDK installed (the same installation steps as Product Analytics), then swaps the final "send events" step for a "set up Workflows" step. Everything lives in a single WorkflowsSDKInstructions.tsx file – no shared docs directory, no website stubs.

If your product's onboarding is essentially "install the PostHog SDK + do one product-specific thing," consider reusing existing Installation components with modifySteps instead of creating a full set of shared doc files. This avoids unnecessary duplication.

SDK reference docs

Docs and Wizard | Source: https://posthog.com/handbook/docs-and-wizard/sdk-reference-docs

SDK references document class, method signatures, and type interfaces for each SDK. They complement examples in tutorials and guides by providing a comprehensive reference with all the details. They're important for deep understanding of PostHog SDKs and as context for LLM-based tools. Tutorials and guides reference them for parameter and return type details.

Which SDKs have reference docs

It's an ongoing effort to create SDK reference docs for all our SDKs, starting with popular SDKs. Here's the current status:

| SDK | Status |
|-----|--------|
| JavaScript Web SDK | ✅ Completed |
| Python SDK | ✅ Completed |
| Node.js SDK | ✅ Completed |
| React Native SDK | ✅ Completed |
| iOS SDK | 🚧 In progress |
| Flutter SDK | ⏳ Not started |
| Android SDK | ⏳ Not started |
| Go SDK | ⏳ Not started |
| Java SDK | ⏳ Not started |
| Rust SDK | ⏳ Not started |
| PHP SDK | ⏳ Not started |
| .NET SDK | ⏳ Not started |

How the SDK reference docs work

  1. SDKs are parsed for basic information like class names, method names, and type interfaces.
  2. Descriptions, parameters, return types, and examples are extracted from the SDKs or SDK doc comments.
  3. The information is rewritten into a standardized JSON format (HogRef). They're stored in each SDK's repository under a references directory. For example, the JavaScript Web SDK reference is stored here.
  4. When an SDK releases a new version, the reference docs are generated automatically. Here's an example workflow.
  5. The Strapi instance behind the website is configured to fetch the HogRef JSON files from the SDK's repository and display them on the website via a cron job.
  6. The website renders the HogRef JSON files as a table on the SDK reference page.

Each language works slightly differently, but the general process is the same.
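The pipeline above turns parsed SDK information into HogRef JSON entries. As a rough illustration only – the interface and field names below are invented for this sketch, not the actual HogRef schema – a single method entry might look like:

```typescript
// Hypothetical shape of one HogRef method entry. Field names are
// illustrative; consult the real schema in an SDK's references directory.
interface HogRefMethod {
    name: string
    description: string
    parameters: { name: string; type: string; description: string }[]
    returnType: string
    examples: string[]
}

const captureRef: HogRefMethod = {
    name: 'capture',
    description: 'Captures an event with optional properties.',
    parameters: [
        { name: 'event', type: 'string', description: 'Name of the event' },
        { name: 'properties', type: 'Record<string, any>', description: 'Optional event properties' },
    ],
    returnType: 'void',
    examples: ["posthog.capture('user_signed_up')"],
}
```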


How to create new SDK reference docs

To contribute a new SDK reference doc:

  1. Create a script to parse the SDK's documentation and extract the information into a HogRef JSON file.
  2. Create a workflow to generate the HogRef JSON file when a new version of the SDK is released. See an example workflow.
  3. Update the cron-tasks.ts file to fetch the HogRef JSON file from the SDK's repository and display it on the website.
  4. Once the HogRef is ingested into the Strapi instance via the cron job, a new page should be created automatically on the website. The website will render the HogRef JSON file as a table on the SDK reference page.
  5. Find existing links to the SDK's GitHub repository source code and point them to the new HogRef JSON file instead.
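The fetch step in cron-tasks.ts boils down to pulling a raw JSON file from the SDK's repository. A minimal sketch, assuming the file is fetched from GitHub's raw content host – the repository path and file name here are illustrative, not the real ones:

```typescript
// Build the raw GitHub URL for a HogRef file. Repo and path are illustrative.
function hogRefUrl(repo: string, path: string): string {
    return `https://raw.githubusercontent.com/${repo}/master/${path}`
}

// Fetch and parse the HogRef JSON (Node 18+ has a global fetch).
async function fetchHogRef(repo: string, path: string): Promise<unknown> {
    const response = await fetch(hogRefUrl(repo, path))
    if (!response.ok) {
        throw new Error(`HogRef fetch failed with status ${response.status}`)
    }
    return response.json()
}
```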

Vale and prose linting

Docs and Wizard | Source: https://posthog.com/handbook/docs-and-wizard/vale

Vale is a prose linter that enforces PostHog's writing style across the website: docs, blog posts, newsletters, and more.

It catches spelling mistakes and style inconsistencies based on rules we define – like the unforgivable use of em dashes.

Why use a prose linter?

Prose is infinitely diverse. Different authors, tones, and writing goals make inconsistencies easy to introduce and a nightmare to maintain.

A prose linter creates a baseline. It automatically enforces the core mechanical and stylistic rules we care about most as a brand, so our writing stays consistent.

"Never send an LLM to do a linter's job." – someone

LLMs can generate drafts and reviews, but they are not reliable linters. They're slow and expensive compared to deterministic tools.

Use Vale to detect issues, then use LLMs to help fix them.

<ProductScreenshot imageLight="https://res.cloudinary.com/dmukukwp6/image/upload/q_auto,f_auto/pasted_image_2026_02_16_T15_09_34_778_Z_61bcc3fa2b.png" alt="Vale linting" />

Prose linting with Vale

Getting started

Install Vale

brew install vale

Run linting

pnpm vale:staged     # Lint md/.mdx files you have staged in git

pnpm vale contents/blogs/           # Lint a specific directory
pnpm vale contents/blog/my-post.md  # Lint a specific file
pnpm vale .                         # Lint current directory

pnpm vale:test       # Lint the .vale/test/ directory

For real-time linting in your code editor, install the Vale VS Code extension.

Style rules

Styles are enforced by a collection of rules and checks written as YAML files. We can organize these rules into directories to create different style guides for different areas of our website.

Vale then applies these rules hierarchically and in combination with each other.

PostHogBase            Rules for all .md and .mdx files
├── AmericanEnglish
├── ProductNames
├── EnDash
├── OxfordComma
├── Spelling
├── Inclusivity
└── ...

PostHogDocs            Rules for /docs
├── DefinitionListDash
├── DirectAddress
├── Trivializers
└── UIBoldNotQuotes

PostHogEditorial       Rules for /blog, /newsletter, /tutorials
├── BulletSpacing
├── EnableNotAllow
└── Hedging
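In Vale, this hierarchy is expressed by scoping styles to path globs in the config file. A sketch of how that scoping might look in .vale.ini – the paths and globs here are illustrative, not the repo's actual config:

```ini
StylesPath = .vale/styles

# Base rules apply to every markdown file
[*.{md,mdx}]
BasedOnStyles = PostHogBase

# Docs get the base rules plus docs-specific ones
[contents/docs/**/*.{md,mdx}]
BasedOnStyles = PostHogBase, PostHogDocs

# Editorial content gets the base rules plus editorial ones
[contents/{blog,newsletter,tutorials}/**/*.{md,mdx}]
BasedOnStyles = PostHogBase, PostHogEditorial
```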

Adding a rule

  1. Pick the right directory. PostHogBase will apply the rule everywhere, PostHogDocs to the docs, and PostHogEditorial to the blog, newsletter, and tutorials.
  2. Create a .yml file in the respective styles/ subdirectory.

The two most common rule types are substitution and existence.

# Substitution – suggest a replacement
extends: substitution
message: "Use '%s' instead of '%s'."
level: warning
swap:
  colour: color

# Existence – flag terms that shouldn't appear
extends: existence
message: "Avoid using '%s'."
level: warning
tokens:
  - simply
  - obviously

Each rule can be assigned one of three severity levels:

  1. Errors
  2. Warnings
  3. Suggestions

We generally stick to warnings and suggestions.

The Vale docs have more information on rule types and configuration.

If you add a new rule, update the test/ directory with examples and run pnpm vale:test to see if it works as expected.

You can also test specific rules with the Vale CLI.

pnpm vale --filter='.Name=="PostHogBase.SentenceCase"' ./docs/error-tracking/pricing.mdx

Breaking the rules

Vocabularies and spelling exceptions

Not every violation is actually a mistake. We frequently use industry terms, brand names, and colloquialisms – like "faq", "devops", or "stonks" – that aren't in standard dictionaries.

You can add exceptions to the Vale rules as vocabulary or as a spelling exception.

Here's how to choose between them:

| Proper noun? | Examples | File |
|--------------|----------|------|
| Yes | HubSpot, JavaScript, ClickHouse, PostHog | config/vocabularies/BrandsAndTechnologies/accept.txt |
| No | webhook, cronjob, heatmaps, stonks | PostHogBase/spelling-exceptions.txt |

  1. Vocabularies are case-sensitive regex patterns that enforce exact capitalization. Use them for brand names, products, and technologies where casing is part of correctness. They are exempt from rules like SentenceCase.yml.
  2. Spelling exceptions are case-insensitive words the spell checker should accept. Use them for industry terminology or developer jargon that isn't in standard dictionaries. They are exempt from the Spelling.yml rule.
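A vocabulary file is just one term per line. Vale treats each entry as a case-sensitive pattern, so an accepted brand name also enforces its capitalization elsewhere (e.g. `PostHog` is accepted but `Posthog` is flagged). The entries below are illustrative:

```text
PostHog
ClickHouse
HubSpot
JavaScript
```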

The .vale.ini file

We've configured global ignores in .vale.ini based on certain scopes, tokens, and tags.

Vale globally ignores:

How to write product docs

Docs and Wizard | Source: https://posthog.com/handbook/docs-and-wizard/writing-product-docs

This guide explains how to write and structure your product's documentation.

Docs categories

We've created a standard, flexible structure for product docs. Each section contains specific types of pages that serve different purposes in the developer journey.

Every docs page should fit into one of the following categories:

  1. Overview – The landing page for your product docs. Think of it as a one-pager for your product.
  2. Getting started – Docs that focus on the minimal tasks and context necessary to get your product up and running.
  3. Concepts – Docs that explain the core abstractions and building blocks of your product.
  4. Guides – Tutorials on how to use your product's features.
  5. PostHog AI – Docs on how to use PostHog AI or AI workflows with your product.
  6. Resources – Standalone docs that don't fit into the other categories like pricing or changelog.

We recommend using Error Tracking docs as a reference. We've invested significant time in their docs and consider them to be the strongest example of well-structured documentation. It's a good template to use when writing docs for new products or improving existing ones.

<ProductScreenshot imageLight="https://res.cloudinary.com/dmukukwp6/image/upload/q_auto,f_auto/pasted_image_2026_02_02_T01_01_56_985_Z_3300a46cb9.png" alt="Error Tracking docs" padding={false} />

Error Tracking is a good template for product docs structure

Disclaimer: PostHog has a wide variety of products. For example, Data Pipelines is integration heavy while PostHog AI and Workflows are UI-oriented. They require different content structures for their docs, and that's okay. You can adapt this structure to your product's needs.

That said, stick to this structure first. It’s worked well for other products, in both docs-to-product conversion and user feedback.

The sidebar navigation mirrors the docs structure. The hierarchy drives how users discover and navigate docs.

Docs sidebar
│
├── Your product
│   └── Overview                # Landing page or home page
├── Getting started
│   ├── Start here              # "Syllabus" page
│   ├── Installation
│   │   ├── Framework 1         # Installation quickstart
│   │   └── Framework 2
│   └── Basic config            # Minimal setup quickstart
├── Concepts
│   ├── Concept 1               # In-depth product explainer
│   └── Concept 2
├── Guides
│   ├── Guide 1                 # Tutorial for feature
│   └── Guide 2
├── PostHog AI
│   ├── AI guide 1              # Tutorial for AI feature
│   └── AI guide 2
└── Resources
    ├── Pricing                 # Pricing and usage limits
    ├── Troubleshooting         # Common issues and solutions
    ├── Changelog               # Product updates
    └── References              # Links to SDK and API docs

Overview

The Overview page is the landing page for your product docs. Think of it like a book cover for your product. People will judge your product based on a quick glance.

The overview needs to work like an effective one-pager. Imagine a busy engineering manager who's evaluating multiple solutions. Someone sends them a link to your product docs. With a quick scan, they need to confirm basic criteria before deciding to learn more or bounce:

<ProductVideo videoLight="https://res.cloudinary.com/dmukukwp6/video/upload/error_docs_overview_compress_e8ddecebaf.mp4" alt="Error Tracking overview video" classes="rounded" />

Example - Error Tracking overview

Your Overview page should include:

Getting started

The Getting started section in your product docs exists to get new users up and running with your product as quickly as possible and with just enough context for them to understand what's going on. It needs to be streamlined for minimal setup.

Avoid including advanced or more complex features in the getting started section. Those should go into the guides section.

Your Getting started section should include:

Start here

The Start here page shows the product adoption journey at a high level. It gives readers an overview of the milestones necessary to be successful with your product, like the quest log of a video game.

It acts as a syllabus. Users are more willing to invest their effort and time when they can see what they’re signing up for. Otherwise, one setback – a missing link or an outdated config – can be enough for users to give up if they don't know how far along they are in the process.

These pages convert well from our paid ads, so they matter. Use the QuestLog component to create a visual roadmap that guides users through adoption milestones.

<ProductVideo videoLight="https://res.cloudinary.com/dmukukwp6/video/upload/error_start_here_compress_8251df6df1.mp4" alt="Error Tracking start here page" />

Example - Error Tracking start here page

Your Start here page should include:

Installation

The installation pages are quickstarts for your product. An installation page using the <Steps> component should be created for each platform, framework, or language your product supports.

Installation docs have a special architecture; they render the same content as the in-app onboarding flow from the monorepo. The single source of truth lives in the posthog/posthog monorepo, and the website pulls the content automatically. But it requires some boilerplate code to set up.

See the onboarding docs handbook for full details on how to create or migrate installation guides to the shared rendering architecture.

Image: installation platforms overview

Image: installation 1

Image: installation 2

Example - Error Tracking installation docs

Your Installation section should include:

The installation index page displays a grid of platform cards (frameworks and languages) with logos and icons, automatically generated from the sidebar navigation.

1. The index page imports a snippet that calls usePlatformList().

2. The hook reads all MDX files in the installation folder.

3. It sorts them based on the order defined in src/navs/index.js.

4. Each platform's logo comes from the platformLogo frontmatter field.
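The sorting step (3) is the only non-obvious part: pages are ordered to match the sidebar navigation rather than alphabetically. A minimal sketch of that logic – the `Platform` shape and function name here are illustrative, not the real usePlatformList() internals:

```typescript
// Order platform pages to match the nav order from src/navs/index.js.
// Shapes and names are illustrative.
type Platform = { slug: string; platformLogo: string }

function sortByNavOrder(platforms: Platform[], navOrder: string[]): Platform[] {
    const rank = (slug: string): number => {
        const i = navOrder.indexOf(slug)
        return i === -1 ? Number.MAX_SAFE_INTEGER : i // unknown pages sort last
    }
    return [...platforms].sort((a, b) => rank(a.slug) - rank(b.slug))
}
```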

Concepts

The Concepts section explains your product's core abstractions or building blocks in depth. Concept pages help readers understand how your product is designed to work and the underlying mental model.

The goal is to explain to readers why the product behaves the way it does, not just how to use it.

If your product uses any terminology that carries specific meaning or implies functionality, it probably deserves a concept page. Some concepts are shared across an industry, others are specific or adapted to your product. For example, in Error Tracking, we have concept pages for:

Use Mermaid diagrams for data flows and relationships, tables for definitions, and screenshots for in-app UI elements.

Image: Concept page 1

Image: Concept page 2

Image: Concept page 3

Image: Concept page 4

Examples - Fingerprints, Issues and exceptions, Stack traces, Releases

Your Concepts section should include:

Guides

The Guides section contains tutorials for your product's features. These pages are framed around accomplishing specific use cases, jobs-to-be-done, or goals with your product.

Why call this section "Guides" instead of "Features"? Because it's task-oriented and focuses on outcomes. We want to avoid listing out a bunch of branded feature names in the sidebar: they don't mean anything to the user. What your product feature is called is secondary to what it enables, which is to help users solve their problems.

In general, there should be one page for each major feature or workflow in your product. On each page, include a brief intro explaining what the guide helps you do, instructions on how to use the feature in practice, and screenshots of the UI.

Image: Guides page 1

Image: Guides page 2

Image: Guides page 3

Image: Guides page 4

Image: Guides page 5

Image: Guides page 6

Examples - Capture exceptions, Manage and resolve issues, Send alerts, Set up integrations

Your Guides section should include:

PostHog AI

The PostHog AI section showcases your product's AI workflows. This includes integrations with our official PostHog AI product, MCP-based workflows, or examples of useful prompts or skills.

We don't want to be too prescriptive here. The goal is to show off your product's AI capabilities, small or big.

Image: PostHog AI 1

Image: PostHog AI 2

Example - Error Tracking PostHog AI docs

Your PostHog AI section should include:

Resources

The Resources section is where the useful “lookup” stuff lives. These are important but standalone pages like pricing, changelog, troubleshooting, and API or SDK references.

If something doesn't fit neatly into the other categories, it belongs here.

Image: Resource 1

Image: Resource 2

Image: Resource 3

Image: Resource 4

Example - Error Tracking resources

Your Resources section should include:

Pricing

The Pricing page explains the product's pricing model, free tier limits, and how usage is calculated. Transparency is a differentiator for us, so we want to be clear and upfront about how much users will pay.

Just as importantly, we want to show users how to stay in control of costs. This page should include advice on how users can reduce their bill.

Example - Error Tracking pricing

Your Pricing page should include:

Troubleshooting

Common issues and solutions that unblock users who are stuck. Keep this updated based on support tickets and community questions.

Start with the <AskAIInput> component to enable AI chat support, then use searchable headings with numbered solutions. Each section should be scannable and actionable.

Example - Error Tracking troubleshooting

Changelog

Displays changelog entries for your product using the <ProductChangelog> component. It filters entries from the main /changelog page.

Example - Error Tracking changelog

API and SDK references

Links to SDK reference docs and API documentation filtered by product.

Example - Error Tracking references

AI platform

Engineering | Source: https://posthog.com/handbook/engineering/ai/ai-platform

What is the AI platform?

The PostHog AI platform is our infrastructure for building and delivering AI-powered features across all PostHog products. Instead of each team building isolated AI capabilities, we provide shared architecture, reusable components, and a consistent framework that lets everyone contribute toward our AI capabilities while maintaining quality and consistency.

Think of it like HogQL: rather than having every team write their own query engines, we built one shared system that everyone can use and extend. The AI platform follows the same philosophy — avoid reinventing AI infrastructure and prevent "death by random AI widgets."

Why we built it

Almost every team at PostHog either is building or needs to build AI features. Without a platform approach, we'd face:

The AI platform solves these problems by providing:

  1. Shared architecture: A single-loop agent system that any product can extend with domain-specific tools and expertise
  2. Reusable components: Common tools (search, data access, taxonomy reading) that work across all AI features
  3. Consistent UX: Standard patterns for AI interactions, loading states, error handling, and result presentation
  4. Platform-level improvements: When we improve the core agent (better reasoning, faster responses, cheaper inference), all products benefit automatically

Vision: Product autonomy

The overarching goal of PostHog's AI direction is product autonomy — a closed loop where PostHog data automatically drives product improvements with minimal human intervention.

Here's how the loop works:

  1. Signals: PostHog collects signals from all products and external sources — error patterns, frustration in session recordings, experiment results, survey responses, insight thresholds, support tickets, Slack threads, and more. These signals represent real problems or opportunities.
  2. Enrichment: PostHog processes and enriches these signals, deduplicating across data sources and adding context. A vague signal like "users seem frustrated during checkout" becomes a concrete, contextualized finding.
  3. Plans: The enriched signals are transformed into structured plans — similar to how Claude Code works, but driven by data rather than human prompts. Each plan describes what needs to happen, why, and what evidence supports it.
  4. Execution: A sandboxed coding agent takes these plans and acts on them. Today, we're focused on automatically creating pull requests. The agent also handles instrumentation automatically — adding tracking events, feature flags, and experiments as part of the code it ships. Better instrumentation produces better signals, making the entire loop smarter over time. In the future, other artifact types will be supported — decks, growth reviews, and more.
  5. Review: Product engineers review, iterate on, and merge (or decline) the proposed changes.
  6. Feedback: Once a change ships, a new signal is created so the system can evaluate what happened after the PR was merged. Did the metric improve? Did new errors appear? This feeds back into step 1.
  7. Loop: The cycle continues until the agent finds an exit condition — low actionability, non-important signals, noisy signals, de-prioritized work, etc.

graph LR
    Signals[Signals<br/>Internal: errors, recordings, experiments<br/>External: support tickets, Slack] --> Enrichment[Enrichment<br/>ML & agentic pipelines<br/>deduplicate, contextualize, prioritize]
    Enrichment --> Plans[Plans<br/>Structured, data-driven<br/>action items]
    Plans --> Execution[Execution<br/>Coding agent creates<br/>PRs and artifacts]
    Execution --> Review[Review<br/>Human oversight,<br/>iterate or merge]
    Review --> Feedback[Feedback<br/>New signal created<br/>from shipped changes]
    Feedback --> Signals

This vision connects all the individual AI products. PostHog products and external sources (support tickets, Slack) generate signals, ML and agentic pipelines enrich them into structured plans, background and local coding agents execute on those plans, and product engineers review and collaborate on the changes. The loop closes when shipped changes generate new signals that feed back into the cycle.

For how product teams can contribute to this vision, see Integration vectors for product teams.

Architecture at a glance

The AI platform has three main layers:

1. User-facing products

These are the AI features users interact with directly:

2. Core infrastructure

The shared components that power all products:

3. Integration points

How everything connects together:

For a detailed technical overview, see AI platform architecture.

Products overview

PostHog AI [General availability]

Your primary interface for working with PostHog. Instead of clicking through forms and menus, describe what you want in natural language. PostHog AI can create dashboards, write SQL queries, set up surveys, and answer questions about your data — all through conversation.

Best for: Quick answers, creating resources, learning PostHog, iterative exploration

Status: Beta | Pricing: Paid with free tier

Learn more →

Deep research [Beta]

When you need to investigate complex, open-ended problems, Deep research digs deep. It systematically explores your data — session recordings, analytics, error logs — and produces comprehensive research reports that would take a human analyst hours to create.

Best for: Understanding why metrics changed, investigating user behavior patterns, root cause analysis

Status: Under development | Pricing: Paid with free tier

Learn more →

Session summaries [Alpha]

Analyze hundreds of session recordings in minutes instead of hours. Session summaries finds patterns, clusters similar issues, and shows you what's actually happening across your user sessions — not just what you caught in the first few recordings you watched.

Best for: Understanding UX issues, debugging problems affecting multiple users, finding edge cases

Status: Alpha | Pricing: Paid with free tier

Learn more →

PostHog Code [Under development]

An agent development environment that solves the messy workflow problem of engineering with coding agents. Each task gets its own isolated workspace where an agent works — you can guide the agent, review changes, and switch between workspaces, with everything related to a task in one place instead of across your terminal, editor, and GitHub.

Best for: Product engineers who work on multiple tasks simultaneously and already use agents heavily

Status: Under development | Pricing: TBD

Learn more →

Wizard [General availability]

Get PostHog set up in minutes instead of hours. The Wizard detects your tech stack, generates integration code, verifies the installation, and gets you collecting data with minimal manual work.

Best for: New PostHog users, setting up new projects, quick integration

Status: General availability | Pricing: Free

Learn more →

MCP server [General availability]

Bring PostHog into your development environment. The MCP server makes PostHog AI's features available to Claude Code, VS Code, and other MCP-compatible tools, so you never have to leave your editor to check analytics or create insights.

Best for: Engineers who prefer editor-based workflows, combining PostHog with other data sources

Status: General availability | Pricing: Free

Learn more →

Key concepts

For definitions of key concepts, see the Glossary.

Getting started

For users

For engineers building AI features

For product managers

What's next?

The AI platform is actively evolving. Major initiatives include:

For details on upcoming work, see Future directions.

Documentation navigation

Contact

For questions about working with the AI platform:

AI platform architecture

Engineering | Source: https://posthog.com/handbook/engineering/ai/architecture

This page provides a technical deep dive into the PostHog AI platform architecture. For a high-level overview, see the AI platform overview.

AI platform architecture overview

The following diagram shows how all components of the AI platform work together:

graph TB
    subgraph "User Facing Products"
        PostHogAIUI[PostHog AI<br/>In-app Agent]
        DeepResearch[Deep Research]
        SessionSum[Session Summaries]
        PostHogCode[PostHog Code<br/>Desktop App]
        Wizard[Wizard<br/>CLI Tool]
        ClaudeCode[Claude Code /<br/>Other AI Tools]
    end

    subgraph "Core AI Infrastructure"
        subgraph "Single-Loop Agent"
            Agent[Agent with<br/>Full Context]
            CoreTools[Core Tools<br/>search, read_data,<br/>read_taxonomy,<br/>todo_write]
            Modes[Agent Modes<br/>SQL, Analytics, CDP,<br/>Custom Product Modes]
        end

        MCP[MCP Server<br/>CloudFlare<br/>Model Context Protocol]
        TaskGen[Task Generation<br/>Temporal Jobs]
    end

    Agent --> CoreTools
    Agent --> Modes

    Modes --> MCP

    MCP --> PostHogAIUI
    MCP --> DeepResearch
    MCP --> SessionSum
    MCP --> PostHogCode
    MCP --> Wizard
    MCP --> ClaudeCode

    SessionSum --> TaskGen
    DeepResearch --> TaskGen

    TaskGen --> PostHogCode

Key integration points

  1. The agent uses dynamic modes: The single-loop agent architecture uses dynamically loadable modes that expose PostHog capabilities.
  2. MCP provides universal access: The MCP server makes agent features accessible to any MCP-compatible client. PostHog AI, PostHog Code, Session Summaries, Wizard, and third-party tools like Claude Code all consume the same MCP server.
  3. Task generation feeds PostHog Code: Signals from PostHog data, PostHog AI conversations, and Deep Research investigations are processed into structured tasks that PostHog Code can execute.
  4. Shared features: Every surface consumes the same agent features through the MCP, ensuring consistency across the platform.

Single-loop agent architecture

Mode switching

PostHog AI is based on a single-loop agent architecture, heavily inspired by Claude Code, with some PostHog-specific flavor. The core insight is simple: instead of routing between multiple specialized agents that act as black boxes, we have one agent that maintains full conversation context and can dynamically load expertise as needed.

The single-loop agent has direct access to all tools, uses a todo-list pattern to track progress across long-running tasks (just like Claude Code), and provides complete visibility into every step it takes. When it needs specialized knowledge, it doesn't delegate to a sub-agent — it switches its own mode to become an expert in that domain.

How the single-loop agent works

sequenceDiagram
    participant User
    participant Agent as Single-Loop Agent<br/>(Full Context)
    participant Tools

    User->>Agent: "Create a funnel for signup flow"

    Agent->>Tools: Call read_taxonomy<br/>(check what events exist)
    Tools-->>Agent: Returns actual events:<br/>'user_signed_up', 'account_created'

    Agent->>Tools: Call enable_mode("Analytics")<br/>(load funnel creation tools)
    Tools-->>Agent: Analytics mode enabled<br/>with insight creation tools

    Agent->>Tools: Create funnel with correct events:<br/>'user_signed_up' → 'account_created'
    Tools-->>Agent: Funnel created successfully

    Agent-->>User: "I've created a funnel tracking your signup flow.<br/>It shows 45% conversion from 'user_signed_up'<br/>to 'account_created'"

The key differences from older architectures:

Core tools: Always available

No matter what mode the agent is in, it always has access to a core set of tools:

The search tool is unified search with a kind discriminator. You can search documentation (kind=docs), search existing insights (kind=insights), or search other resources as we add them. This replaced having separate search_docs and search_insights tools.

The read_data tool lets the agent read database schema and billing information. The read_taxonomy tool is how the agent explores your events, entities, actions, and properties. These are crucial for avoiding hallucination problems we had before — the agent can always check what data actually exists before making assumptions.

The enable_mode tool is how the agent switches between different areas of expertise, which we'll discuss in detail next.

Finally, todo_write is the tool that lets the agent manage long-running tasks. When you ask for something complex, the agent can write out a plan, track its progress, and make sure it doesn't lose context.
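The core tool surface described above can be summarized as a discriminated union. The tool names come from the text; the argument shapes are this sketch's assumptions, not the real internal API:

```typescript
// Sketch of the core tool calls. Tool names are from the handbook text;
// argument shapes are illustrative.
type CoreToolCall =
    | { tool: 'search'; kind: 'docs' | 'insights'; query: string }
    | { tool: 'read_data'; target: 'schema' | 'billing' }
    | { tool: 'read_taxonomy'; entity: 'events' | 'actions' | 'properties' }
    | { tool: 'enable_mode'; mode: string }
    | { tool: 'todo_write'; todos: { task: string; done: boolean }[] }

// The unified search tool's `kind` discriminator is what replaced the
// old separate search_docs and search_insights tools.
const call: CoreToolCall = { tool: 'search', kind: 'docs', query: 'funnels' }
```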

Agent modes: Dynamic expertise

Here's the key innovation: instead of having specialized sub-agents, we have a single agent that can "switch gears" by switching modes. Each mode gives the agent new tools, a new system prompt with domain expertise, and example workflows (which we call "trajectories") to follow.

It works in two stages. First, a small model router analyzes the user's request and enables some default modes. Then, during the conversation, the agent can call enable_mode("SQL") to switch into SQL expert mode, gaining SQL-specific tools and knowledge. The agent knows which tools it had before, which new ones it gained, and can switch back or switch to a completely different mode at any time.

Each mode is defined by three things:

A routing prompt that explains when to activate this mode and lists the available tools. This is what the small model router and the main agent use to decide when to switch modes.

A system prompt that contains expert instructions for this domain. When the agent switches to CDP mode, for example, it gets a system prompt explaining how CDP destinations work, what Hog functions are, and how transformations should be structured.

Workflow trajectories that give the agent examples of how to accomplish tasks. We inject example workflows into the todo_write tool description. For instance, the CDP mode might include a trajectory like: "Setting up CDP destination: 1. Write HogQL transformation code, 2. Define input variables, 3. Set event/property filters, 4. Test with sample data before activating."

This architecture allows product teams to create their own modes without touching the core agent. Modes can be composed and nested. Think of it as "thousands of agents" through mode combinations, rather than a fixed set of AI products.
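Putting the three parts together, a mode definition might be sketched like this – the interface and field names are illustrative, not the actual internal API:

```typescript
// Sketch of a mode definition with the three parts described above:
// routing prompt, system prompt, and workflow trajectories.
// Names and shapes are illustrative.
interface AgentMode {
    name: string
    routingPrompt: string   // tells the router/agent when to enable this mode
    systemPrompt: string    // domain expertise injected on activation
    trajectories: string[]  // example workflows injected into todo_write
    tools: string[]         // mode-specific tools gained on activation
}

const cdpMode: AgentMode = {
    name: 'CDP',
    routingPrompt: 'Enable for destination, transformation, or Hog function requests.',
    systemPrompt: 'You are an expert in CDP destinations, Hog functions, and transformations...',
    trajectories: [
        'Setting up CDP destination: 1. Write HogQL transformation code, 2. Define input variables, 3. Set event/property filters, 4. Test with sample data before activating.',
    ],
    tools: ['create_destination', 'test_transformation'], // hypothetical tool names
}
```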

When do black-box sub-agents still make sense?

There are exceptions. Some processes benefit from being hidden from the main agent — usually when the logic is completely detached from the conversation context, or when you want to use strategies or optimizations that would confuse the main agent if exposed. Our agentic RAG system for insight search is a good example: it iteratively searches through insights and cherry-picks the best ones using a complex scoring system. The main agent doesn't need to see all that — it just needs the final result.
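The insight-search example can be made concrete with a toy sketch. The scoring function and data below are invented for illustration; the real system uses a more complex scoring approach. The key property is the last line: only the cherry-picked results escape the black box.

```python
# Illustrative black-box sub-agent for insight search: it iterates
# and scores internally, but the main agent only sees the top picks.
def search_insights(query: str, corpus: list[dict], k: int = 3) -> list[str]:
    def score(insight: dict) -> float:
        # toy relevance score: keyword overlap, lightly weighted by recency
        overlap = len(set(query.lower().split())
                      & set(insight["name"].lower().split()))
        return overlap + 0.1 * insight["recency"]

    ranked = sorted(corpus, key=score, reverse=True)
    return [i["name"] for i in ranked[:k]]  # only the final result is exposed

corpus = [
    {"name": "signup funnel conversion", "recency": 5},
    {"name": "weekly active users", "recency": 9},
    {"name": "signup page errors", "recency": 2},
]
print(search_insights("signup conversion", corpus, k=2))
```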

Architecture diagram

graph TB
    User[User Message] --> Router[Small Model Router<br/>Analyze request]

    Router --> |Enable default modes| Agent[Single-Loop Agent<br/>Full conversation context]

    subgraph "Core Tools (Always Available)"
        Search[search<br/>docs, insights, etc.]
        ReadData[read_data<br/>schema, billing]
        ReadTax[read_taxonomy<br/>events, properties]
        TodoWrite[todo_write<br/>task tracking]
        EnableMode[enable_mode<br/>switch expertise]
    end

    Agent --> CoreTools[Core Tools]
    CoreTools --> Search
    CoreTools --> ReadData
    CoreTools --> ReadTax
    CoreTools --> TodoWrite
    CoreTools --> EnableMode

    EnableMode --> |Dynamic loading| Modes[Agent Modes]

    subgraph Modes
        SQLMode[SQL Mode<br/>+ SQL tools<br/>+ SQL system prompt<br/>+ SQL trajectories]
        AnalyticsMode[Analytics Mode<br/>+ insight tools<br/>+ analytics prompt<br/>+ analytics trajectories]
        CDPMode[CDP Mode<br/>+ destination tools<br/>+ CDP prompt<br/>+ CDP trajectories]
        CustomMode[Custom Product Mode<br/>+ product tools<br/>+ product prompt<br/>+ product trajectories]
    end

    Modes --> Agent
    Agent --> |Execute with full context| Actions[Actions<br/>Create insights, write SQL,<br/>set up destinations, etc.]

    Actions --> Response[Response to User]

How PostHog AI and MCP share the same features

The problem we needed to solve: PostHog AI and the MCP server were developed by different teams, didn't offer the same tools, and had completely different architectures. Users would find features in PostHog AI that didn't exist in the MCP, and vice versa.

The solution is an abstraction layer. Agent modes expose both high-level LLM tools (like "create a funnel with these parameters") and low-level API endpoint tools (like "call POST /api/projects/{id}/insights"). Both PostHog AI and the MCP have access to the same features, just through different interfaces.
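A minimal sketch of that abstraction layer, assuming a made-up tool registry: the same mode exposes a high-level semantic tool and a low-level endpoint tool, and PostHog AI or the MCP can pick whichever interface fits. Function names and the stubbed API call are illustrative.

```python
# Sketch: one capability, two interfaces. `create_funnel` is the
# high-level LLM tool; `call_api` is the low-level endpoint tool.
def make_tools(project_id: int) -> dict:
    def call_api(method: str, path: str, body: dict) -> dict:
        # low-level tool: raw endpoint access (stubbed for illustration)
        return {"method": method, "path": path, "body": body}

    def create_funnel(steps: list[str]) -> dict:
        # high-level tool: the agent passes semantic parameters,
        # and the layer translates them into the API call
        return call_api("POST", f"/api/projects/{project_id}/insights",
                        {"type": "funnel", "steps": steps})

    return {"create_funnel": create_funnel, "call_api": call_api}

tools = make_tools(42)
print(tools["create_funnel"](["pageview", "signup"]))
```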

How PostHog Code and Wizard fit in

Both PostHog Code and the Wizard currently consume the MCP. This integration gives them access to all the agent modes we're building. If Claude Code (which PostHog Code uses for code generation) ever becomes a bottleneck, we could swap in PostHog's own single-loop agent since they share the same mental model. We'd need to copy over Claude Code's terminal and file system tools (bash, grep, etc.) and add them as core tools.

We could also tag modes for specific interfaces. For example, a CodingMode(tags=["posthog-code"]) would only be exposed to the PostHog Code agent, not to PostHog AI, because it's specific to code generation workflows.

Glossary

Agent: An autonomous AI process that can reason about what to do, plan multiple steps, and take actions by calling tools. PostHog AI is an agent. Claude is an agent.

Single-loop architecture: An agent architecture that maintains full context throughout a conversation without delegating to black-box sub-agents. The agent can see all tools, all previous messages, and all decisions it's made.

Feature: Any agent capability we expose to the user. Creating insights, summarizing sessions, and performing deep research are all features.

Tool: A capability the agent can call to perform actions — search docs, create insights, write SQL queries, etc.

Agent mode: A specialized configuration of an agent that gives it domain-specific tools, expert knowledge (via system prompts), and workflow examples. When PostHog AI switches to "SQL mode," it becomes an expert in writing and debugging SQL queries.

Trajectory: An example workflow showing the sequence of steps to accomplish a specific task. We use trajectories instead of the heavier "jobs-to-be-done" framework to teach agents how to use tools together effectively.

MCP (Model Context Protocol): A standard protocol for connecting AI models to external tools and data sources in a structured, secure way. Think of it like an API, but specifically designed for AI agents.

MCP Server: The component that exposes tools and data sources following the MCP specification. PostHog's MCP server makes our analytics data available to any MCP-compatible client.

MCP Client: The component that connects to MCP servers to discover and use tools. Claude Code, VS Code with AI extensions, and other tools can act as MCP clients.

Implementing AI features

Engineering | Source: https://posthog.com/handbook/engineering/ai/implementation

This page provides implementation guidance for building AI features at PostHog. For a high-level overview, see the AI platform overview.

How PostHog AI works across surfaces

PostHog AI isn't a single product – it's a platform that works wherever customers work. Through a combination of MCP tools and skills, PostHog AI is available across any agent of the customer's choice: PostHog AI in the web, PostHog Code, Claude Code, Cursor, Codex, and others.

All of these surfaces share the same underlying capabilities. The MCP server exposes PostHog's API as atomic tools, and skills teach agents how to compose those tools into workflows. When a product team adds a new MCP tool or writes a new skill, every surface benefits automatically.

PostHog AI in the web

PostHog AI in the web is a sandboxed coding agent built on the Agents SDK (Claude Code's harness). It runs in a controlled environment with access to PostHog's full API surface and unlocks use cases that go beyond what a simple chat interface can offer:

PostHog Code

PostHog Code is a desktop agent that turns PostHog signals into shipped code. It watches PostHog for problems (errors, frustration patterns, user feedback) and automatically creates tasks, generates fixes, and opens pull requests with human oversight at key decision points.

Third-party agents

Engineers who prefer to work in Claude Code, Cursor, Codex, or any other MCP-compatible tool get access to the same PostHog capabilities.

Headless first, then UI

Product teams must think about AI features as headless (UI-less) workflows. Agents don't need UI – they compose tools and follow skills to accomplish goals. But customers do need UI, and for that we have MCP Apps.

The rule of thumb: first headless, then UI for a persona.

  1. Build the capability headless – expose your product's API as MCP tools and write skills that teach agents how to use them. This makes the capability available across all surfaces immediately.
  2. Then build UI where it matters – if a persona (product manager, engineer, analyst) needs a dedicated experience, build an MCP App that provides the right UI for that workflow.

This order matters because headless capabilities are reusable across every surface, while UI is specific to one. If you build UI first, you've created something that only works in one place. If you build headless first, you've created something that works everywhere, and you can always add UI later.

MCP tools vs skills

Understanding the distinction between tools and skills is essential for building effective AI features.

MCP tools are atomic capabilities – CRUD operations and simple actions. They answer "what can I do?" (list feature flags, execute SQL, create a survey, summarize a session recording). Tools should be basic primitives that agents compose into higher-level workflows.

Skills answer "how do I accomplish X?" They combine tools, domain knowledge, query patterns, and step-by-step workflows into a template that agents follow to solve a class of problems. A skill might reference multiple tools, include HogQL query examples, explain what data to verify before querying, and describe the desired outcome for the customer.

This separation matters because agents are good at composing simple tools but need guidance on _which_ tools to use, in _what order_, with _what constraints_.

For implementation details:

Implementation recommendations

For engineers adding AI features

  1. Expose your product's API as MCP tools. Every product should be accessible through the MCP server. Scaffold a YAML definition, enable the operations that make sense, and add a HogQL system table for data access. See Adding tools to the MCP server.
  2. Write skills for jobs to be done. If your product has jobs that require domain knowledge – specific tool ordering, constraints, query patterns, or reasoning about what data to check – write a skill that teaches agents how to accomplish that job well. See Writing skills.
  3. Build UI only when a specific persona needs it. Don't start with a UI-specific AI feature. Start headless, validate that agents can accomplish the workflow, then add UI if a persona needs a dedicated experience.

Serializer best practices

Descriptions flow through the entire pipeline:

Django serializer field → OpenAPI spec → Zod schema → MCP tool description

Product teams should type and describe their serializer fields. These descriptions are what agents read to understand tool parameters – vague or missing descriptions lead to worse agent behavior.
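A toy sketch of why this matters, with the pipeline collapsed into plain dicts (the real chain goes through Django serializers, an OpenAPI spec, and Zod schemas; the field names and helper below are invented). A described field reaches the agent intact; an undescribed one degrades tool use.

```python
# Field specs standing in for serializer fields. "breakdown" has no
# description: agents reading the generated tool schema will guess.
FIELDS = {
    "date_from": {"type": "string",
                  "description": "Start of the query range, e.g. '-7d' or an ISO date."},
    "breakdown": {"type": "string", "description": ""},
}

def to_tool_schema(fields: dict) -> dict:
    """Mimics serializer field -> OpenAPI -> Zod -> MCP tool description."""
    props, warnings = {}, []
    for name, spec in fields.items():
        props[name] = {"type": spec["type"], "description": spec["description"]}
        if not spec["description"]:
            warnings.append(f"{name}: missing description, agents will misuse it")
    return {"properties": props, "warnings": warnings}

schema = to_tool_schema(FIELDS)
print(schema["warnings"])
```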

Tips:

Pricing and product positioning

How we think about pricing

With our AI pricing, we want to follow the PostHog pricing principles. Concretely, this means:

  1. We offer a generous free tier
  2. We charge usage-based instead of a flat subscription

The unit that matches usage most closely is token consumption. This means fixing a SQL query with AI costs the user very little, while analyzing hundreds of session recordings costs more. Since token costs differ by token type and model, we pass our own costs on to users with a small markup, instead of charging a fixed price per token.

To keep our AI pricing simple, this pricing applies to all AI features once they reach general availability: per-product AI features as well as Session summaries and Deep research.

So that users can learn how to use PostHog without worrying about being charged, chats that refer to our documentation remain free, with no limit.
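A back-of-envelope sketch of cost-plus-markup pricing. The per-token rates, model name, and markup below are invented for illustration; they are not PostHog's actual prices.

```python
# Hypothetical provider rates, USD per 1M tokens.
RATES_PER_1M = {
    ("gpt-x", "input"): 2.50,
    ("gpt-x", "output"): 10.00,
}
MARKUP = 0.20  # hypothetical 20% markup on provider cost

def charge(model: str, input_tokens: int, output_tokens: int) -> float:
    """Pass provider cost through with a small markup."""
    cost = (input_tokens * RATES_PER_1M[(model, "input")]
            + output_tokens * RATES_PER_1M[(model, "output")]) / 1_000_000
    return round(cost * (1 + MARKUP), 6)

# A quick SQL fix uses few tokens; analyzing hundreds of recordings uses many.
print(charge("gpt-x", 2_000, 500))
print(charge("gpt-x", 900_000, 120_000))
```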

How users should think about our products

PostHog AI is the main PostHog product for AI interactions. You can use it in the web for the richest experience, through PostHog Code for code-generation workflows, or through any third-party agent via MCP. The web UX is best for sharing, navigation, and linking between AI results and PostHog artifacts. PostHog AI is also trained on PostHog-specific patterns and your actual usage data, so it provides higher quality, more contextual results than a general-purpose AI.

Deep research is a feature available within PostHog AI, but also accessible through its own dedicated UI if you want to jump straight into research. Use it for open-ended investigative work where you're trying to understand a complex problem.

Session summaries is callable from PostHog AI and Deep research, and also has its own UI. Use it when you need to analyze many session recordings and extract patterns or issues.

PostHog Code is a desktop product for single-engineer use. It's separate from PostHog AI because the workflow is different – you're not asking questions, you're letting an AI agent watch PostHog for problems and automatically fix them in your codebase. Think of it as an AI assistant that lives in your development environment.

MCP is for users who prefer to work in third-party tools like Claude Code, Cursor, or Codex. You get access to PostHog's data and can combine it with other MCP servers (like Hubspot or Zendesk). The trade-off is you don't get PostHog AI's polished UX or PostHog-specific optimizations.

How to develop and test

  1. Set up the MCP stack locally. Run hogli dev:setup and add the MCP stack to your local environment.
  2. Write YAML configs and skills. Use the monorepo Claude Code skills to scaffold tool definitions and write skills (TODO: dedicated skill for this).
  3. Build skills and dump them locally. Run hogli build:skills to render all skills, then unzip them into .agents/skills/ so Claude Code can pick them up during local testing: unzip -o products/posthog_ai/dist/skills.zip -d .agents/skills/.
  4. Test with headless agents, not UIs. Forget about UIs – that's for humans. Test your tools and skills by talking to Claude Code or another headless agent. If the agent can accomplish the job, the capability works.
  5. Test with PostHog Code. Sign in to a local environment in PostHog Code and verify the end-to-end workflow.
  6. Alternatively, add the local MCP server to Claude Code. Run claude mcp add --transport http posthog-local http://localhost:8787/mcp to point Claude Code at your local MCP server.

Future directions

Third-party context integration

We want to connect PostHog AI to third-party tools for additional context. Imagine PostHog AI analyzing data across PostHog, Slack messages, and Zendesk tickets to understand not just what users are doing, but what they're saying and reporting. This data could also generate signals for PostHog Code – if users are complaining about a bug in Slack and PostHog sees errors in the same area, that's a strong signal to investigate and potentially fix automatically.

Continuous instrumentation

The Wizard's future evolution involves continuous instrumentation – watching your codebase and suggesting event tracking for new features, filling gaps in existing tracking, and standardizing event patterns. This could integrate with PostHog Code to automatically handle PostHog instrumentation when generating code.

Research improvements

Deep research is being refined with better research strategies, improved denoising algorithms, and more sophisticated pattern recognition. The goal is to reduce rabbit holes and improve data interpretation accuracy.

Contact and resources

For questions about working with PostHog AI, ask in the #team-posthog-ai Slack channel.

Additional resources:

AI products

Engineering | Source: https://posthog.com/handbook/engineering/ai/products

This page provides detailed information about each user-facing product in the PostHog AI platform. For a high-level overview, see the AI platform overview.

PostHog AI [Beta]

PostHog AI is our primary in-app agent, accessible through a chat interface embedded directly into the product. Think of PostHog AI as a fundamentally different way to interact with PostHog — instead of clicking buttons and filling out forms, you ask questions and make requests in natural language.

The problem we're solving

PostHog has grown incredibly powerful, but that power comes with complexity. New users face a learning curve: Which insight type should I use? How do I filter for the data I need? What's the right SQL syntax for this query? Even experienced users spend time navigating through menus and forms to accomplish what they already know they want to do.

PostHog AI eliminates this friction. You don't need to know where a feature lives or how to configure it — you just describe what you want, and PostHog AI handles the details.

Who uses PostHog AI

Everyone. PostHog AI is designed to be useful whether you're:

How it works

PostHog AI is built on a single-loop agent architecture with dynamic mode switching. When you send a message, PostHog AI analyzes your request, determines which specialized "modes" it needs to activate, and dynamically loads the appropriate tools and expertise. For example, if you ask PostHog AI to "create a funnel tracking the signup flow," it might:

  1. Use the read_taxonomy tool to check which events actually exist
  2. Switch to Analytics mode to access insight creation tools
  3. Switch to SQL mode if you need custom transformations
  4. Switch to CDP mode if you want to set up a destination based on the funnel results

Throughout this entire process, PostHog AI maintains full context — it can see all previous messages, all decisions it's made, and all tools it's used. This is fundamentally different from older architectures we implemented where specialized sub-agents worked in isolation.

For a technical deep dive on how this works, see the Architecture page.

Key capabilities

PostHog AI can do most things you can do through the PostHog UI:

PostHog AI is powered by Inkeep for documentation search, which means it can pull from PostHog's entire doc library to answer questions about how to use the platform.

Pricing

PostHog AI is a paid platform with a generous free tier (see Pricing).

Current status & ownership

PostHog AI is currently in beta as we migrate to the new single-loop architecture. Early results show significant improvements in reliability and capability, but we're still ironing out edge cases before moving to general availability.

The PostHog AI team owns the architecture, performance, and UX/UI. Product teams are responsible for adding their product-specific tools and capabilities, with the PostHog AI team providing reviews and guidance (see Team Structure for details on collaboration).

Deep research [Under development]

Deep Research is PostHog AI's bigger sibling — where PostHog AI gives you quick answers, Deep Research digs deep to understand complex, open-ended problems.

The problem we're solving

Product analytics often requires real investigative work. You don't just want to know "what's my conversion rate?" — you want to understand why it's dropping, which user segments are affected, where in the flow they're getting stuck, and what patterns exist across multiple data sources. This kind of research is time-consuming. You might spend hours jumping between dashboards, filtering recordings, cross-referencing error logs, and synthesizing findings.

Deep Research automates this investigative work. It can spend minutes or hours (depending on complexity) systematically exploring your data, following leads, and producing a comprehensive research report that would take a human analyst half a day or more.

Who uses Deep research

Deep research is designed for anyone who needs to understand complex problems:

If you have a vague question that requires digging through multiple data sources to answer, Deep Research is the right tool.

How it works: Test-time diffusion

Deep research's architecture is based on Google's test-time diffusion researcher framework. Here's the high-level flow:

  1. Input: You either start with a templated research notebook (for common research patterns) or describe your question and Deep Research generates a custom notebook structure.
  2. Parallel initialization: Deep research simultaneously creates a draft report (outlining what it expects to find) and a research plan (what questions to investigate).
  3. Iterative research: The agent systematically investigates each part of the research plan. It might filter session recordings, run analytics queries, check error logs, compare cohorts, and more. Each investigation adds findings to the draft report.
  4. Denoising: As research progresses, Deep research "denoises" the draft report — removing speculative parts that turned out to be wrong, strengthening findings that are supported by data, and identifying new questions to investigate.
  5. Loop: Research continues until the draft report is fully denoised — meaning all sections are supported by actual findings rather than speculation.
  6. Final report: Once complete, you get a structured notebook with the findings, including embedded session recordings, charts, and data that support each conclusion.
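The loop above can be condensed into a toy sketch: a draft report starts out speculative, and each investigation round either supports a section with evidence or deletes it. The section names and "evidence" data are made up; the real system's denoising is far more sophisticated.

```python
# Minimal denoising loop: investigate one speculative section per
# round until every remaining section is backed by a finding.
def run_research(draft: dict, evidence: dict, max_rounds: int = 10) -> dict:
    for _ in range(max_rounds):
        speculative = [s for s, status in draft.items() if status == "speculative"]
        if not speculative:
            break                              # fully denoised: stop looping
        section = speculative[0]
        finding = evidence.get(section)        # "investigate" this question
        if finding:
            draft[section] = "supported"       # strengthen with data
        else:
            del draft[section]                 # speculation was wrong: remove
    return draft

draft = {"checkout drop-off": "speculative",
         "mobile-only issue": "speculative",
         "pricing-page confusion": "speculative"}
evidence = {"checkout drop-off": "40% drop at payment step",
            "pricing-page confusion": "rage clicks on plan toggle"}
print(run_research(draft, evidence))
```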

Architecture diagram

graph TB

    Input[User Question] --> DraftReport[Draft report<br/>with speculative findings]
    Input[User Question] --> ResearchPlan[Research Plan<br/>with questions to investigate]

    ResearchPlan --> Investigate[Iterative Investigation]

    subgraph Investigation
        Investigate --> Sessions[Session Summaries]
        Investigate --> Analytics[Run Analytics<br/>Queries]
        Investigate --> Errors[Check Error<br/>Logs]
        Investigate --> Cohorts[Compare<br/>Cohorts]
    end

    Sessions --> Findings[Add Findings to<br/>Draft Report]
    Analytics --> Findings
    Errors --> Findings
    Cohorts --> Findings

    Findings --> Denoise[Denoising Process]

    subgraph Denoising
        Denoise --> Remove[Remove speculative<br/>parts that are wrong]
        Denoise --> Strengthen[Strengthen findings<br/>with data support]
        Denoise --> NewQ[Identify new<br/>questions]
    end

    NewQ --> |Add to plan| ResearchPlan

    Remove --> Check{Report fully<br/>denoised?}
    Strengthen --> Check

    Check --> |No| Investigate
    Check --> |Yes| Final[Final Notebook Report<br/>with embedded recordings,<br/>charts, and data]

    DraftReport -.->|Continuously updated| Denoise

Why notebooks?

Notebooks are the perfect format for research because they combine narrative explanation with data visualization. You can see not just the conclusions ("conversion drops 40% at the payment step") but the evidence (charts showing the drop, session recordings showing users struggling, error logs showing timeouts).

We're building customizable notebook templates similar to what Granola does. You'll be able to pick a template or modify one ahead of time, so research results come back in exactly the format you need. This is especially useful for recurring research tasks where you want consistency.

Key differences from PostHog AI

While both PostHog AI and Deep research can answer questions about your data, they're optimized for different use cases:

Think of PostHog AI as your coworker who can quickly pull up data, and Deep Research as the analyst who will spend the afternoon really digging into a problem.

Access and pricing

Access Deep Research by toggling "Research" mode in PostHog AI, or via the dedicated Deep Research UI. It's a paid feature with a generous free tier (see Pricing).

Current status & ownership

Deep Research is under active development. The PostHog AI team owns Deep Research. The architecture is implemented but we're still refining the research strategies and denoising algorithms. Early results show it can find patterns and insights that human analysts miss, but it occasionally goes down rabbit holes or misinterprets data — we're working on improving these edge cases.

Session summaries [Alpha]

Session summaries solves a specific but painful problem: you have dozens or hundreds of session recordings, and you don't have time to watch them all. Instead of spending hours scanning through recordings one by one, Session Summaries analyzes them all at once and gives you a structured report of what it found.

The problem we're solving

Session recordings are incredibly valuable — they show you exactly what users are experiencing. But they're also time-consuming to review. If you have 100 recordings from users reporting checkout issues, do you really want to watch all 100? Most people watch a few, spot some patterns, and hope they caught the important stuff. This means you miss edge cases, low-frequency issues, and patterns that only emerge across many sessions.

Session summaries changes this calculus. You can analyze hundreds of recordings in minutes, with confidence that you're seeing all the significant patterns, not just the ones that happened to appear in the first few recordings you watched.

Who uses session summaries

Session summaries is designed for anyone who needs to understand patterns across multiple user sessions:

If you find yourself thinking "I need to watch a bunch of recordings to understand this," Session Summaries is the right tool.

How it works

You can trigger Session summaries in three ways:

  1. Ask PostHog AI directly: "Summarize the last 50 sessions from company X"
  2. Trigger Session summaries from the Session Replay UI or from other products
  3. Let Deep research invoke it as part of a larger investigation

Here's what happens under the hood:

  1. Collection: Session summaries retrieves all the recordings matching your criteria (time range, company, feature area, etc.)
  2. Analysis: An AI agent "watches" a session recording (right now by analyzing the stream of metadata, and soon by watching video clips), noting significant events: errors, timeouts, rage clicks, confusion indicators (rapid back-and-forth navigation), unexpected user paths, and other behavioral signals.
  3. Clustering: Instead of giving you 50 individual summaries, Session summaries clusters similar issues together. For example, if 15 users all experience timeout errors at checkout, these get grouped into a single issue: "Timeout errors during payment processing (affects 15/50 users)."
  4. Report generation: You get a notebook with:
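The clustering step can be sketched with a toy example: per-session issue labels are grouped so dozens of recordings collapse into a handful of findings. Exact-label grouping here is a naive stand-in for the real clustering algorithms, and the session data is invented.

```python
from collections import defaultdict

# Group identical issue labels across sessions and report how many
# of the analyzed sessions each cluster affects.
def cluster_issues(sessions: list[dict], total: int) -> list[str]:
    groups = defaultdict(set)
    for s in sessions:
        for issue in s["issues"]:
            groups[issue].add(s["session_id"])
    return [f"{issue} (affects {len(ids)}/{total} users)"
            for issue, ids in sorted(groups.items(), key=lambda kv: -len(kv[1]))]

sessions = [
    {"session_id": 1, "issues": ["timeout at checkout"]},
    {"session_id": 2, "issues": ["timeout at checkout", "rage clicks on save"]},
    {"session_id": 3, "issues": ["rage clicks on save"]},
]
print(cluster_issues(sessions, total=50))
```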

What Session summaries finds

Currently, Session Summaries is trained to identify:

Future capabilities

We're expanding Session summaries beyond just finding problems. Future capabilities include:

The underlying technology is the same — watch many recordings, find patterns, cluster similar behaviors — but the training and prompts can be tuned for different objectives.

Access and pricing

Access Session Summaries through PostHog AI, Deep Research, or its dedicated UI entry points. It's a paid feature with a generous free tier (see Pricing).

Current status & ownership

Session summaries is in alpha. The PostHog AI team owns Session summaries. It's working well for error and frustration detection, and early users report finding issues they would have missed. We're refining the clustering algorithms (sometimes it groups issues too broadly or too narrowly) and integrating video and GIF analysis to support findings with visual confirmation.

PostHog Code [Under development]

PostHog Code is our most ambitious bet: an agent development environment that turns PostHog data into shipped code. The vision is to free product engineers from distractions so they can focus on what they love — building great features — by automating all the chores that eat up their day.

The problem we're solving

Today, product engineers spend most of their day managing random inputs: Slack messages, GitHub notifications, tickets, emails, and alerts from various monitoring tools. This work is essential but time-consuming. Experienced AI-native engineers have already evolved a workaround — they practice "structured development," creating PRDs, breaking work into tasks, and shipping incrementally. Tools like Claude Code or Cursor only work well when given clean context and well-defined tasks.

PostHog Code aims to productize that discipline, turning chaos into structured, buildable work.

Who we're building for

PostHog Code is designed for experienced product engineers who already use AI coding tools regularly. We're explicitly not targeting non-technical "vibe coders" or hobbyist users. Our initial customer profile is early-stage startups with 2-10 engineers and hundreds to low thousands of users. We'll expand to larger startups later as internal workflows and scale requirements become more complex.

How it works: From Signals to shipped code

The core insight is that PostHog collects massive amounts of data across all our products — analytics, session recordings, error tracking, surveys, experiments. All of this data can be transformed into actionable "tasks" that describe real problems to fix or opportunities to pursue.

Here's the flow:

  1. Signal generation: Something happens in PostHog that indicates work needs to be done. This could be a recurring error pattern, frustration signals from session recordings, a survey response indicating a missing feature, or experiment results suggesting an optimization. The Signals team focuses on surfacing this data in useful ways.
  2. Task creation: An LLM-based system receives these signals, deduplicates them across data types, and translates them into concrete tasks with appropriate context. This uses a non-deterministic approach — we use a document store and LLMs to judge how to structure tasks. A vague signal like "users seem frustrated during checkout" becomes a specific task: "Investigate and fix timeout issues in payment processing, affecting 15% of transactions from company X."
  3. Task execution: Once a task is defined, it gets assigned to a workflow. Different tasks need different approaches — a well-defined bug fix might be a one-shot fix with human QA, while a vague feature request might need definition, breaking into chunks, gradual shipping behind a flag, and automated feedback collection.
  4. Coding: PostHog Code uses an agent running in a cloud sandbox (though we support local execution too). The agent clones your repo, reads your codebase for context, makes changes, writes tests, and opens a pull request. Changes are automatically wrapped in feature flags when appropriate.
  5. Human oversight: You're always in control. The desktop app shows you what PostHog Code is working on, lets you review and edit tasks, and requires your approval before shipping. This "human-in-the-loop" approach means you can trust PostHog Code to work in the background while you sleep, but nothing ships without your sign-off.
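The first two stages of the flow can be sketched as a toy pipeline: signals are deduplicated and turned into tasks, each routed to a workflow. The dedup key, routing rule, and signal data are invented; the real system uses LLMs for both steps.

```python
# Condensed signal-to-task sketch: dedupe raw signals, then emit a
# concrete task with a workflow matched to how well-defined it is.
def signals_to_tasks(signals: list[dict]) -> list[dict]:
    seen, tasks = set(), []
    for sig in signals:
        key = (sig["area"], sig["kind"])       # naive dedup across data types
        if key in seen:
            continue
        seen.add(key)
        # well-defined errors get a one-shot fix; vaguer signals need definition
        workflow = "one_shot_fix" if sig["kind"] == "error" else "define_then_ship"
        tasks.append({"title": f"Fix {sig['kind']} in {sig['area']}",
                      "context": sig["detail"],
                      "workflow": workflow})
    return tasks

signals = [
    {"area": "checkout", "kind": "error", "detail": "timeouts, 15% of transactions"},
    {"area": "checkout", "kind": "error", "detail": "same pattern from replays"},
    {"area": "onboarding", "kind": "frustration", "detail": "rage clicks on step 2"},
]
for task in signals_to_tasks(signals):
    print(task["title"], "->", task["workflow"])
```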

Why a desktop app?

This is a crucial design decision. We could have built PostHog Code directly into the PostHog web app, and it would work. But it wouldn't generate the adoption we need.

Desktop apps win because of bottom-up adoption. Individual engineers can choose tools that make them more productive in a permissionless, frictionless way. A desktop app feels like a personal tool — like VS Code, Cursor, or your terminal — rather than a team product that requires management buy-in. Engineers already make personal choices about vim vs VSCode, which terminal to use, which AI coding assistant to try. PostHog Code slots into that category.

The UX also matters more for tools you use all day, not just a few times a week. PostHog Code is designed to feel like something between Warp, Ghostty, and Cursor: super fast, keyboard-first with lots of shortcuts, easy to navigate with tabs and split windows. Think of it as having the directness of a CLI but with the richness of a UI when you need it.

The interface

PostHog Code is tab-based with the home tab being a task list. You navigate with arrow keys, click a task to open it in a new tab with a two-pane view: task details on the left (title, description, tags, origin, PR link) and a live log of activities on the right. When a task is in progress, it streams output to this log so you can watch the agent work. There's also a workflow builder view where you can see tasks moving through stages kanban-style.

Technical architecture

PostHog Code is built as an Electron app for speed, familiarity (React), and cross-platform ease. When a task kicks off, we have two execution options:

Cloud agent (preferred): Tasks execute in a cloud sandbox owned by the PostHog AI team. The agent runs in an isolated environment, clones the repo, does its work, and pushes to a branch. The downside is you need to grant GitHub app access. The upside is truly magical — PostHog Code can work on tasks while you sleep, and you wake up to PRs ready for review.

Local agent (more permissionless): We spin up Claude Code-like execution in the background on your local filesystem. This is the most permissionless version, closest to how developers use Claude Code today. We still give it access to the MCP and PostHog tools, and we likely need to proxy through our infrastructure to maintain control and provide a smooth experience.

We support both modes, but push for cloud execution as the optimal experience.

Architecture diagram

graph TB
    subgraph "PostHog Code Desktop App (Electron)"
        UI[Task List UI]
        Backend[Backend Service]
    end

    subgraph "Task Generation"
        Signals[PostHog Signals<br/>Errors, Frustration, etc.]
        DR[Deep Research]
        SS[SessionSummaries]
        TaskGen[Temporal Job<br/>Task Generation]
    end

    subgraph "Execution: Cloud Agent (Preferred)"
        CloudSDK[SDK Wrapping<br/>Coding Agent]
        Sandbox[Sandbox + API<br/>Micro VM]
    end

    subgraph "Execution: Local Agent"
        LocalRepo[Local Repo<br/>User Filesystem]
        LocalExec[Local Execution<br/>Claude Code-like]
    end

    Signals --> TaskGen
    DR --> TaskGen
    SS --> TaskGen

    TaskGen --> Backend
    Backend --> UI

    UI --> Backend
    Backend --> CloudSDK
    Backend --> LocalExec

    CloudSDK --> Sandbox
    LocalExec --> LocalRepo

What kinds of tasks?

PostHog Code isn't just for data-driven bug fixes. The system for shipping a fix is the same as the system for shipping any feature. A vague task needs definition, then breaking into chunks, then shipping with proper releases planned. A small, well-defined task just needs a one-shot fix and QA.

Even inspiration-driven features (not from user data) benefit from PostHog Code's workflow: add event tracking, ship behind a flag, automatically message users for feedback, set up an experiment to measure impact. PostHog Code productizes best practices for shipping features, not just fixing bugs.

Current status

Right now we're focused on dogfooding — building everything using PostHog Code itself. This lets us refine product quality and identify friction fast.

For engineers not using PostHog Code

When PostHog Code isn't the right fit (maybe you don't trust AI to ship code automatically, or your workflow is very particular), we offer "copy prompt" features throughout PostHog. In error tracking, for example, you can generate an AI prompt to fix an error and paste it into your own code editor. This bridges the gap for engineers who want AI assistance but prefer to maintain manual control.

Ownership

The dedicated PostHog Code team owns the product. The PostHog AI team owns the background sandboxed agents. See Team Structure for collaboration details.

Wizard: AI-powered onboarding [General availability]

The Wizard is PostHog's AI-powered installation assistant that gets you from zero to collecting data in minutes instead of hours. Instead of reading documentation, finding the right SDK, figuring out configuration, and manually integrating PostHog into your codebase, you run one command and the Wizard handles everything.

The problem we're solving

Setting up analytics is tedious. You need to pick the right SDK for your tech stack, install dependencies, configure authentication, add initialization code in the right place, set up your first events, and verify everything works. For a developer who just wants to start tracking user behavior, this feels like unnecessary friction before you even get value from the product.

Even experienced developers waste 15-30 minutes on setup. For new developers or teams trying PostHog for the first time, it can take much longer — and if anything goes wrong, they might give up entirely.

The Wizard eliminates this friction. You run a single command, answer a few questions, and the Wizard writes all the integration code for you.

Who uses Wizard

The Wizard is designed for:

Basically, anyone who would rather spend time using PostHog than setting it up.

How it works

The Wizard is a CLI tool that runs locally in your development environment. Here's the flow:

  1. Detection: The Wizard scans your codebase to detect your tech stack (React, Next.js, Python, etc.), framework version, and project structure.
  2. Configuration: It asks you a few questions — which PostHog project to connect to, whether you want autocapture enabled, any custom configuration. The questions are contextual based on what it detected.
  3. Code generation: The Wizard writes the integration code.
  4. Verification: The Wizard verifies the integration works by sending a test event to PostHog and confirming it arrives.
  5. Next steps: It suggests what to do next — track your first custom event, set up a dashboard, or explore session recordings.

The entire experience uses Clack.cc for a polished CLI interface with clear prompts, progress indicators, and helpful error messages.
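The detection step might work roughly like the following sketch: look for well-known manifest files and infer the stack from them. The function name, the file checks, and the returned labels are illustrative assumptions, not the Wizard's actual implementation.

```python
import json
from pathlib import Path

# Hypothetical sketch of stack detection from manifest files.
# The labels and checks here are illustrative, not the Wizard's real logic.
def detect_stack(project_dir: str) -> str:
    root = Path(project_dir)
    pkg = root / "package.json"
    if pkg.exists():
        deps = json.loads(pkg.read_text()).get("dependencies", {})
        if "next" in deps:
            return "nextjs"
        if "react" in deps:
            return "react"
        return "node"
    if (root / "requirements.txt").exists() or (root / "pyproject.toml").exists():
        return "python"
    return "unknown"
```

In practice the real Wizard also inspects framework versions and project structure, which a sketch like this glosses over.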

Current capabilities

Right now, the Wizard handles installation and basic setup across PostHog's supported SDKs. It's particularly good at:

Future direction

The Wizard's long-term vision is much broader than one-time setup. Imagine:

This would turn the Wizard from a one-time setup tool into an ongoing assistant that keeps your PostHog instrumentation clean and comprehensive.

Current status & ownership

The Wizard is in general availability and actively used during customer onboarding. It's currently owned by the .

MCP: PostHog for third-party tools [General availability]

The MCP (Model Context Protocol) server is PostHog's way of meeting engineers where they already are. Not everyone wants to switch to the PostHog UI to analyze data — many prefer to stay in their code editor, terminal, or favorite AI tool. The MCP server makes that possible.

The problem we're solving

Context switching is expensive. If you're deep in debugging code in VS Code and need to check PostHog analytics, opening a browser, navigating to PostHog, finding the right insight, and coming back to your editor breaks your flow. It's even worse when you're using an AI coding assistant — you want to ask "which error is affecting the most users?" or "create a funnel for the checkout flow" without leaving your development environment.

The MCP server solves this by bringing PostHog directly into the tools engineers already use. No context switching, no mental overhead.

Who uses MCP

MCP is designed for engineers who prefer working in their development environment:

How it works

The Model Context Protocol (MCP) is a standard for connecting AI assistants to external services. Here's what happens when you use PostHog via MCP:

  1. Connection: Your MCP client (like Claude Code) connects to https://mcp.posthog.com/mcp with your PostHog API key for authentication.
  2. Tool discovery: The client asks the MCP server what tools are available. The server returns a list of about 30 tools covering PostHog's API surface — everything from creating insights to filtering session recordings to managing feature flags.
  3. Dynamic filtering: You can control which tools load using query parameters: https://mcp.posthog.com/mcp?features=flags,insights,workspace. This keeps context windows small by only loading relevant tools.
  4. Execution: When you ask the AI assistant to do something with PostHog, it calls the appropriate MCP tools. These tools interface with PostHog's APIs (and eventually dedicated /ai endpoints, under development) to accomplish the task.
  5. Mode switching: The MCP server is being aligned with our mode switching framework. This means AI agents can dynamically enable and disable different modes during a conversation, loading only the expertise they need when they need it. This solves the context window problem — currently, loading all tools takes up about 14% of Claude Code's context window, which we're reducing through dynamic tool discovery.

Key architectural decisions

The MCP server is deployed independently on Cloudflare. This gives us fast iteration, proven reliability, and excellent developer UX with quick deployments. We dogfood PostHog's customer-facing API wherever possible, which gives us good incentive to take care of it.

The MCP server also supports session state (active project ID, org ID, distinct ID), so it can fingerprint sessions and maintain context across multiple requests.

PostHog AI vs. MCP: When to use each

Both PostHog AI and MCP give you access to the same features, but they serve different workflows:

Use PostHog AI when:

Use MCP when:

Our goal is to make PostHog AI so good that users want to "own" their workflow in PostHog, while still supporting MCP for engineers who prefer different tools or need to combine multiple data sources.

Current status & ownership

MCP is in general availability. The PostHog AI team owns the MCP server, with Josh Snyder as the primary support contact. We're actively working on dynamic tool discovery to reduce context window usage and aligning the server with our mode switching framework to share features with PostHog AI.

AI platform team structure and collaboration

Engineering | Source: https://posthog.com/handbook/engineering/ai/team-structure

This page explains how teams collaborate on AI features at PostHog. For a high-level overview, see the AI platform overview.

Who does what

The PostHog AI team

is responsible for the architecture, performance, and UX/UI of the AI platform. We build and maintain the core infrastructure – the MCP server, skills system, PostHog AI in the web, background sandboxed agents, and shared tooling (search, read_data, read_taxonomy, enable_mode). We're also proactive when we see big opportunities for PostHog or when new capabilities can be used across multiple products, like SQL generation or universal filtering.

The PostHog Code team

builds PostHog Code, an agent development environment for product engineers. Working with coding agents today is bottlenecked by messy workflows — switching between agents, branches, worktrees, and manually managing PRs across multiple applications. PostHog Code solves this by giving each task its own isolated workspace where an agent works, with everything related to a task in one place instead of scattered across your terminal, editor, and GitHub.

The PostHog Code team owns the desktop app and the task execution pipeline.

The Signals team

turns PostHog data into tasks that coding agents can work on — suggested improvements from session replays, fixes for errors from error tracking, new experiments based on product analytics data. Signals surfaces something useful, creates a task with context, and the cloud agent works on it.

Product teams

Product teams own their product's AI capabilities end-to-end. The AI platform is designed so that any team can ship MCP tools and skills independently, without needing the PostHog AI team to be involved. This means you can:

Once you ship a tool or skill, it's automatically available across every surface – PostHog AI in the web, PostHog Code, Claude Code, Cursor, and any other MCP-compatible agent.

How the teams connect

Together, these teams form the product autonomy loop:

Integration vectors for product teams

There are multiple ways product teams can contribute to PostHog's product autonomy vision. These are listed roughly in order of effort, from easiest to most ambitious.

MCP: Expose your APIs to agents

The most obvious and lowest-effort vector. Expose your product's APIs through the MCP server so agents can interact with your features.

Effort: Low

Consumers: PostHog AI, PostHog Code, coding agents (Claude Code, Codex, etc.), Wizard, vibecoding platforms (Lovable, Replit, etc.), ChatGPT & Claude Desktop, and more.

Skills: Teach agents how to do jobs

If you've already exposed your APIs, the next step is explaining how an agent should accomplish typical jobs-to-be-done — analyzing activity in PostHog, debugging why a feature flag was turned off, implementing enterprise features, etc. Skills combine tools, domain knowledge, and step-by-step workflows into templates agents can follow.

Effort: Medium, but the impact is very high.

Consumers: PostHog AI, PostHog Code, coding agents (Claude Code, Codex, etc.), ChatGPT & Claude Desktop, and more.

Signals: Feed the autonomy loop

If your product produces actionable or near-actionable signals — an insight threshold reached, a new error-tracking issue, a frustration pattern detected — use the signals API so agents can discover these hints and act on them later. Signals are what enable the product autonomy loop. PostHog Code acts on plans generated from these signals.

Effort: Low to medium.

Consumers: PostHog Code (local development) and PostHog AI (background agents).

PostHog Code: Features for the agentic development environment

PostHog Code is an agentic development environment where coding agents work on tasks in isolated workspaces. If your product area can make those agents smarter or the engineer's workflow faster, you can build features directly into it. Think PR reviews that check session recordings for regressions, QA steps that verify instrumentation coverage, or task prioritization that weighs your product's signals. This is the highest-effort vector but also the most deeply integrated.

Effort: High.

Consumers: PostHog Code.

Automations & background agents

Run PostHog AI based on triggers from PostHog Workflows, CRON, Temporal, etc., to automate complex workflows. Example use cases: analyze an incoming support ticket based on indexed documentation and respond to the customer, or spawn a new signal like "here is a bug, fix it."

Effort: Medium to high.

Consumers: Your persona using the web browser (UI), PostHog AI, PostHog Code, coding agents (Claude Code, Codex, etc.), Wizard, vibecoding platforms (Lovable, Replit, etc.), ChatGPT & Claude Desktop, and more.

How to get started

The AI platform is self-service by design. Follow the implementation guides to add tools and skills for your product area:

  1. Add MCP tools. Scaffold a YAML definition, enable the operations that make sense, and add a HogQL system table for data access.
  2. Write skills. If your product has jobs that require domain knowledge – specific tool ordering, constraints, query patterns, or reasoning about what data to check – write a skill that teaches agents how to accomplish that job well.
  3. Test with headless agents. Validate that agents can accomplish the workflow by talking to Claude Code or another MCP-compatible agent before building any UI.
  4. Tag the PostHog AI team in PRs. We review PRs that touch the AI platform to ensure they meet our quality bar and integrate well with the rest of the system.

For the full implementation workflow, see Implementing AI features.

When to reach out

You don't need the PostHog AI team to ship tools and skills, but we're always happy to help. Reach out to us in #team-posthog-ai if:

Don't hesitate to reach out early, even if it's just a vague idea. We'd rather help you think through the approach upfront than have you discover a dead end after building.

Best practices

Start headless, then UI

Build your product's AI capabilities as headless workflows first – expose the API as MCP tools, write skills for the key jobs. This makes the capability available across all surfaces immediately. Only add dedicated UI when a specific persona needs it. See Implementing AI features for more on this approach.

Start small

Begin with simple tools and iterate based on user feedback. It's better to ship something that works reliably for one workflow than to build something ambitious that works unreliably for ten workflows.

Describe your API fields

API field descriptions flow through the entire pipeline and become what agents read to understand tool parameters. Vague or missing descriptions lead to worse agent behavior. See Adding tools to the MCP server for details.

Contact

For questions about working with the AI platform:

Bug prioritization

Engineering | Source: https://posthog.com/handbook/engineering/bug-prioritization

User experience degradation

When bugs are reported it's critical to properly gauge the extent and impact to be able to prioritize and respond accordingly. These are the priorities we use across the entire engineering org, along with the relevant labels to quickly identify them in GitHub.

Please always remember to tag your issues with the relevant priority.

| GitHub Label | Description |
|---|---|
| P0 | Critical, breaking issue (page crash, missing functionality) |
| P1 | Urgent, non-breaking (no crash but low usability) |
| P2 | Semi-urgent, non-breaking; affects UX but functional |
| P3 | Icebox, address when possible |

Security issues

Security issues, due to their nature, have a different prioritization schema. This schema is also in line with our internal SOC 2 related policies (Vulnerability Management Policy). When filing security-related GitHub issues, remember to attach label security and the appropriate priority label. More details on filing can be found in the README of the product-internal repo.

Security issue information should not be made public until a fix is live and sufficiently (ideally completely) adopted.

PostHog security issues include a priority (severity) level. This level is based on our self-calculated CVSS score for each specific vulnerability. CVSS is an industry standard vulnerability metric. You can learn more about CVSS at FIRST.org and calculate it using the FIRST.org calculator.

| GitHub Label | Priority Level | CVSS v3 Score Range | Definition | Examples |
|---|---|---|---|---|
| security-P0 | Critical | 9.0 - 10.0 | Vulnerabilities that cause a privilege escalation on the platform from unprivileged to admin, allow remote code execution, financial theft, unauthorized access to/extraction of sensitive data, etc. | Vulnerabilities that result in remote code execution, such as vertical authentication bypass, SSRF, XXE, SQL injection, user authentication bypass |
| security-P1 | High | 7.0 - 8.9 | Vulnerabilities that affect the security of the platform, including the processes it supports. | Lateral authentication bypass, stored XSS, some CSRF depending on impact |
| security-P2 | Medium | 4.0 - 6.9 | Vulnerabilities that affect multiple users and require little or no user interaction to trigger. | Reflective XSS, direct object reference, URL redirect, some CSRF depending on impact |
| security-P3 | Low | 0.1 - 3.9 | Issues that affect singular users and require interaction or significant prerequisites (MitM) to trigger. | Common flaws, debug information, mixed content |
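The score-to-label mapping can be expressed as a small function. The thresholds come from the table above; the function itself is just an illustration.

```python
def security_priority(cvss_v3: float) -> str:
    """Map a CVSS v3 base score to PostHog's security priority label."""
    if not 0.0 <= cvss_v3 <= 10.0:
        raise ValueError("CVSS v3 base scores range from 0.0 to 10.0")
    if cvss_v3 >= 9.0:
        return "security-P0"  # Critical
    if cvss_v3 >= 7.0:
        return "security-P1"  # High
    if cvss_v3 >= 4.0:
        return "security-P2"  # Medium
    return "security-P3"      # Low
```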

ClickHouse Clusters

Engineering | Source: https://posthog.com/handbook/engineering/clickhouse/clusters

We have three different ClickHouse clusters here at PostHog:

  1. Prod-US: Our main production cluster for the US.
  2. Prod-EU: Our main production cluster for the EU.
  3. Dev: Our development cluster.

Common features

All clusters have these features in common:

ZooKeeper

We use ZooKeeper for ClickHouse replicated MergeTree tables. It is responsible for managing the replication of data between replicas and ensuring consistency.

Eventually we want to migrate to ClickHouseKeeper.

US
EU
Dev

Backup Policy

We back up all production tables every day. Once a week we take a full backup, then incremental backups for the rest of the week. We keep backups for 8 days.

Backups are managed through HouseWatch.

We can do point-in-time recovery between daily backups and replay from Kafka topics. Retention on the events-to-ClickHouse Kafka topic is set to 5 days.
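The schedule described above can be sketched as a tiny policy function. HouseWatch manages the real schedule; the weekday chosen for full backups here is an assumption, only the weekly-full/daily-incremental cadence and 8-day retention come from the text.

```python
from datetime import date

# Assumed: full backups on Mondays. The actual weekday is not specified
# in the handbook; only the cadence and retention are.
FULL_BACKUP_WEEKDAY = 0
RETENTION_DAYS = 8

def backup_kind(day: date) -> str:
    """One full backup per week, incrementals on the other days."""
    return "full" if day.weekday() == FULL_BACKUP_WEEKDAY else "incremental"

def is_expired(backup_day: date, today: date) -> bool:
    """Backups older than the 8-day retention window can be deleted."""
    return (today - backup_day).days > RETENTION_DAYS
```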

HouseWatch

https://github.com/PostHog/HouseWatch

HouseWatch is our internal tool (that is also open source!) that we use to manage ClickHouse maintenance tasks such as backups. We also use it to benchmark queries, access logs, and perform other tasks that help us operate the cluster.

You can find HouseWatch deployed on both Prod US and Prod EU Kubernetes clusters here:

CH Version

Currently we run 23.12.5.81.

We run the same version in:

We do this because ClickHouse is notorious for breaking changes between versions. We've seen issues with query syntax compatibility, data result consistency, and other unexpected issues between upgrades.

In order to upgrade ClickHouse, we bump the version in CI to test for regressions and compatibility issues, by adding the new version to the CI matrix of ClickHouse versions. This slows down CI because we then run every test on both the current and the desired version of ClickHouse, but it works nicely because it clearly surfaces the discrepancies between the versions.

Once we have resolved all issues on CI, we can then upgrade the ClickHouse version on:

Prod-US

Prod US is our main production cluster for the US.

It is made up of the following topology:

Online Cluster

2 out of the 3 replicas are what we call the 'Online cluster'. It serves all traffic coming in from us.posthog.com. We do this to guard against background tasks consuming resources and slowing down query times on the app. We've seen query time variability otherwise.

Offline Cluster

The third replica is what we call the 'Offline cluster'. It serves all background tasks and other non-essential traffic.

Traffic that it serves:

Load Balancing

We use AWS Network Load Balancers to route traffic to the correct replica. We have a separate load balancer for the Online and Offline clusters. We also have another Load Balancer that hits all nodes (Online and Offline) for tasks that don't need to be separated.

Each of these have a target group for each node targeting ports :8443 and :9440.

Data Retention Policy

We currently keep all event data for all time, with no TTL.

We have a TTL for Session Replay data, which is 30 days.

Instance types

The original nodes of the cluster are using i3en.12xlarge instances. We are currently in the process of migrating to im4gn.16xlarge instances.

Online cluster
Offline cluster

Old nodes are using r6i.16xlarge instances. These are being retired due to IO throughput constraints.

Tiered storage

One of the nice features of our data is that recent data (data < 30 days old and generally hitting the 2 most recent active partitions) is the hottest both for reads and writes. This is a perfect fit for tiered storage. We can basically read and write to local ephemeral NVMe (with solid backup strategies) and then move the data to cheaper EBS volumes as it ages out. We currently do this using tiered storage configured simply by setting the storage configs in ClickHouse, but we eventually will want to move to setting TTLs on tables and having ClickHouse manage the tiering for us consistently.

https://altinity.com/blog/2019-11-29-amplifying-clickhouse-capacity-with-multi-volume-storage-part-2
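The tiering rule above (hot data on local NVMe, older data on EBS) can be illustrated with a simple age check. The 30-day cutoff comes from the text; the volume names and function are illustrative, since the real placement is done by ClickHouse's storage policy configuration.

```python
from datetime import date

HOT_DAYS = 30  # data under ~30 days old is the hottest, per the text

def target_volume(partition_day: date, today: date) -> str:
    """Illustrative: pick a storage volume based on partition age."""
    age_days = (today - partition_day).days
    return "nvme" if age_days < HOT_DAYS else "ebs"
```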

Monitoring

https://grafana.prod-us.posthog.dev/d/vm-clickhouse-cluster-overview/clickhouse-cluster-overview?from=now-3h&to=now&timezone=utc&refresh=30s

Prod-EU

Prod EU is our main production cluster for the EU.

It is made up of the following setup:

We hit a problem with having smaller shards on EU that had a significant performance impact: we were running out of memory for larger queries, which was also hurting our page cache hit rate. This was mainly due to the query size restrictions we impose to protect other users of the cluster. We had two options. We could increase the size of the nodes, meaning doubling the size of each of the 16 instances, which is very expensive. Or we could set up a coordinator node. The coordinator node is a topology that allows us to effectively split the storage and compute tiers of the cluster into two pools of resources. We treat the current cluster of small nodes with many shards as the storage tier: they effectively act as the mappers of the cluster and quickly fetch the data we want for queries. We then send that relatively small result set back to the coordinator, which does the heavy lifting, including joins and aggregates.

For the EU the coordinator node is:

This is more than anything we can get with any combination of EBS volumes alone (within reason $$$). A nice bonus is that this does not impinge on the EBS throughput limits of the node.

Coordinator schema

The coordinator has distributed tables (events, session_replay) that point to the EU Prod cluster.

All non-sharded tables are replicated so that they are local to the coordinator.

Coordinator future

We should probably consider moving this to an m7g.metal if we hit any memory constraints, but so far we have not because of the dedicated nature of this node.

We will also want to create new coordinators for multi-tenant workloads in the future. This will allow us to scale up and down easily over time, and even potentially throughout the day as the workload rises and falls.

Monitoring

https://grafana.prod-eu.posthog.dev/d/vm-clickhouse-cluster-overview/clickhouse-cluster-overview

Dev

Dev is a relatively basic setup for development and testing.

We have a single shard with 2 replicas. This is to mimic the production setup as closely as possible. We have a single shard because we don't have the same volume of data as production. We have 2 replicas because we want to test failover scenarios.

Problems

The biggest pain point on our ClickHouse clusters is disk throughput. We still use mutations too frequently. Every mutation rewrites large portions of our data on disk, which requires reading and writing huge amounts of data and robs normal queries and inserts of resources. The best solution we've found to support the current high utilization of mutations is to move to nodes that have local NVMe storage. This, along with RAID 10 'far 2' configs, provides us with roughly 1000 MB/s writes and 4000 MB/s reads simultaneously on a node. This is more than anything we can get with any combination of EBS volumes alone (within reason $$$). A nice bonus is that this does not impinge on the EBS throughput limits of the node, meaning that on top of the baseline speed to NVMe disk we can tier out to EBS and have full instance EBS throughput available for that JBOD disk pack.

Currently US is entirely on NVMe backed nodes. EU will need to be migrated to this setup as well.

Data ingestion

Engineering | Source: https://posthog.com/handbook/engineering/clickhouse/data-ingestion

This document covers:

Using INSERTs for ingestion

Like any database system, ClickHouse allows using INSERTs to load data.

Each INSERT creates a new part in ClickHouse, which comes with a lot of overhead and, in a busy system, will lead to errors due to exceeding the parts_to_throw_insert MergeTree table setting (default 300).

ClickHouse provides a bunch of options to make INSERTs still work. For example:

These come with their own trade-offs, consistency problems, and require the ClickHouse cluster to always be accessible.
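One common client-side mitigation is batching: buffer rows and flush them as a single INSERT, so each flush creates one part instead of one part per row. A minimal illustrative buffer, where the `send` callback stands in for a real ClickHouse client call:

```python
# Illustrative client-side insert buffer. `send` is any callable that
# performs the actual INSERT (e.g. a ClickHouse driver call); it is a
# stand-in here, not a real client API.
class InsertBuffer:
    def __init__(self, send, max_rows: int = 10_000):
        self.send = send
        self.max_rows = max_rows
        self.rows = []

    def add(self, row) -> None:
        self.rows.append(row)
        if len(self.rows) >= self.max_rows:
            self.flush()

    def flush(self) -> None:
        if self.rows:
            self.send(self.rows)  # one INSERT -> one new part
            self.rows = []
```

Even with batching, the client must handle retries and consistency itself, which is part of why we ingest via Kafka tables instead.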

Why we ingest via Kafka tables

We instead rely on the Kafka table engine to handle ingestion into ClickHouse.

The benefits are:

It also has minimal overhead in terms of memory used and allows us to always temporarily stop ingestion by removing the tables in question.

How Kafka tables work

Kafka engine tables act as Kafka consumers in a given consumer group. Selecting from that table advances the consumer offsets.

A Kafka table on its own does nothing beyond allowing querying data from Kafka - it needs to be paired with other tables for ingestion to work.

Important note: Given Kafka engine tables operate like consumers, querying data from them moves the offsets for the consumer group forward. Doing this while ingesting data may cause data loss, and has been disallowed by default on the latest ClickHouse versions.

Example kafka engine table:

CREATE TABLE kafka_ingestion_warnings
(
    team_id Int64,
    source LowCardinality(VARCHAR),
    type VARCHAR,
    details VARCHAR CODEC(ZSTD(3)),
    timestamp DateTime64(6, 'UTC')
)
ENGINE = Kafka('kafka:9092', 'clickhouse_ingestion_warnings_test', 'group1', 'JSONEachRow')

It is important to send correctly formatted messages to the topic you're selecting from. When selecting from a Kafka table, ClickHouse assumes messages in the topic are formatted correctly. If not, this may stall the consumer depending on the value of kafka_skip_broken_messages, breaking ingestion.
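Since the Kafka table above uses the JSONEachRow format, each Kafka message must be a single JSON object whose keys match the table's columns. A sketch of building a well-formed message (the field values are invented for illustration, and the actual producer call is omitted):

```python
import json

# Keys mirror the columns of kafka_ingestion_warnings above; the values
# here are made up for illustration.
message = json.dumps({
    "team_id": 2,
    "source": "plugin-server",
    "type": "example_warning",
    "details": "{}",
    "timestamp": "2024-01-01 00:00:00.000000",
})
```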

Beyond just skipping broken messages, it's also possible to set up a dead letter queue system for these in ClickHouse. You can read more about doing so in this Altinity blog post.

Materialized views

Materialized views in ClickHouse can be thought of as triggers - they react to new blocks being INSERTed into source tables and allow transforming and piping that data to other tables.

Materialized views come with a lot of gotchas. A great resource for learning more about them is this presentation.

Example schema - reading and writing ingestion events

Consider the following sharded table schema together with kafka_ingestion_warnings:

CREATE TABLE sharded_ingestion_warnings
(
    team_id Int64,
    source LowCardinality(VARCHAR),
    type VARCHAR,
    details VARCHAR CODEC(ZSTD(3)),
    timestamp DateTime64(6, 'UTC'),
    _timestamp DateTime,
    _offset UInt64,
    _partition UInt64
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/posthog.sharded_ingestion_warnings', '{replica}')
PARTITION BY toYYYYMMDD(timestamp)
ORDER BY (team_id, toHour(timestamp), type, source, timestamp)

CREATE TABLE ingestion_warnings ON CLUSTER 'posthog'
(
    team_id Int64,
    source LowCardinality(VARCHAR),
    type VARCHAR,
    details VARCHAR CODEC(ZSTD(3)),
    timestamp DateTime64(6, 'UTC'),
    _timestamp DateTime,
    _offset UInt64,
    _partition UInt64
)
ENGINE = Distributed('posthog', 'posthog', 'sharded_ingestion_warnings', rand())

CREATE MATERIALIZED VIEW ingestion_warnings_mv
TO posthog.ingestion_warnings
AS SELECT
    team_id,
    source,
    type,
    details,
    timestamp,
    _timestamp,
    _offset,
    _partition
FROM posthog.kafka_ingestion_warnings

In this schema:

Example schema visualized

This is the same schema visualized in a ClickHouse cluster with 2 shards and 1 replica each:

flowchart LR
    classDef table fill:#f6e486,stroke:#ffc45d;

    subgraph CH ["ClickHouse Cluster<br/>"]
        subgraph Blank [ ]
            style Blank fill:none,stroke-dasharray: 0 1

            subgraph CH1["ClickHouse Shard 1, Replica 1"]
                kafka_ingestion_warnings1["kafka_ingestion_warnings table<br/>(Kafka table engine)"]:::table
                ingestion_warnings_mv1["ingestion_warnings_mv table<br/>(Materialized view)"]:::table
                ingestion_warnings1["ingestion_warnings table<br/>(Distributed table engine)"]:::table
                sharded_ingestion_warnings1["sharded_ingestion_warnings table<br/>(ReplicatedMergeTree table engine)"]:::table
            end

            subgraph CH2["ClickHouse Shard 2, Replica 1"]
                kafka_ingestion_warnings2["kafka_ingestion_warnings table<br/>(Kafka table engine)"]:::table
                ingestion_warnings_mv2["ingestion_warnings_mv table<br/>(Materialized view)"]:::table
                ingestion_warnings2["ingestion_warnings table<br/>(Distributed table engine)"]:::table
                sharded_ingestion_warnings2["sharded_ingestion_warnings table<br/>(ReplicatedMergeTree table engine)"]:::table
            end
        end
    end

    Kafka["clickhouse_ingestion_warnings_test topic in Kafka"]


    ingestion_warnings_mv1 --reads from--> kafka_ingestion_warnings1
    ingestion_warnings_mv2 --reads from--> kafka_ingestion_warnings2
    ingestion_warnings_mv1 --pushes data to--> ingestion_warnings1
    ingestion_warnings_mv2 --pushes data to--> ingestion_warnings2
    ingestion_warnings1 -.pushes data to.-> sharded_ingestion_warnings1
    ingestion_warnings1 -.pushes data to.-> sharded_ingestion_warnings2
    ingestion_warnings2 -.pushes data to.-> sharded_ingestion_warnings1
    ingestion_warnings2 -.pushes data to.-> sharded_ingestion_warnings2
    kafka_ingestion_warnings1 -..- Kafka
    kafka_ingestion_warnings2 -..- Kafka

Further reading

[Everything you should know about materialized views](https://den-crane.github.io/Everything_you_should_know_about_materialized_views_commented.pdf)

Next in the ClickHouse manual: Working with JSON

Data storage or what is a MergeTree

Engineering | Source: https://posthog.com/handbook/engineering/clickhouse/data-storage

This document covers the answers to the following questions:

Introduction to MergeTree

Why is ClickHouse so fast? states:

ClickHouse was initially built as a prototype to do just a single task well: to filter and aggregate data as fast as possible.

Rather than force all possible tasks to be solved by singular tools, ClickHouse provides specialized "engines" that each solve specific problems.

MergeTree engine family tables are intended for ingesting large amounts of data, storing that data efficiently, and running analytical queries on it.

How MergeTree stores data

Consider the following (simplified) table for storing sensor events:

CREATE TABLE sensor_values (
    timestamp DateTime,
    site_id UInt32,
    event VARCHAR,
    uuid UUID,
    metric_value Int32
)
ENGINE = MergeTree()
ORDER BY (site_id, toStartOfDay(timestamp), event, uuid)
SETTINGS index_granularity = 8192

Data for this table would be stored in parts, each part a separate directory on disk. Data for a given part is always sorted by the expression in the ORDER BY clause and compressed.

Parts can be Wide or Compact depending on their size. We'll mostly be dealing with Wide parts as part of day-to-day operations.

Wide parts are large and store each column in a separate binary data file, each sorted and compressed.

ClickHouse also stores a sparse index for the part. A collection of rows with size equal to the index_granularity setting is called a granule. For every granule, the primary index stores a mark containing the value of the ORDER BY expression as well as a pointer to where that granule is located in each data file.

💡 For better performance when running queries, it is not recommended to set index_granularity too low. The default value for engines in the MergeTree family is 8192. An implication of this is that accessing data by primary key (in this case the ORDER BY clause is equivalent to the primary key) will not read just one row, but rather up to index_granularity number of rows. This is acceptable given ClickHouse is meant to perform well with aggregations, rather than point lookups.

<details><summary>Diving deeper into data-on-disk for a Wide part</summary>

This assumes you're using a docker-based ClickHouse installation and have clickhouse-client running

Seeding data
INSERT INTO sensor_values
SELECT *
FROM generateRandom('timestamp DateTime, site_id UInt8, event VARCHAR, uuid UUID, metric_value Int32', NULL, 10)
LIMIT 200000000
Looking at part data

system.parts table contains a lot of metadata about every part.

To find out what type each part is, its size, and where on disk it's located, you can run the following query:

SELECT
    name,
    part_type,
    rows,
    marks,
    formatReadableSize(bytes_on_disk),
    formatReadableSize(data_compressed_bytes),
    formatReadableSize(data_uncompressed_bytes),
    formatReadableSize(marks_bytes),
    path
FROM system.parts
WHERE active and table = 'sensor_values'
FORMAT Vertical

The result might look something like this:

Row 1:
──────
name:                                        all_12_17_1
part_type:                                   Wide
rows:                                        6291270
marks:                                       769
formatReadableSize(bytes_on_disk):           476.07 MiB
formatReadableSize(data_compressed_bytes):   475.92 MiB
formatReadableSize(data_uncompressed_bytes): 474.00 MiB
formatReadableSize(marks_bytes):             90.12 KiB
path:                                        /var/lib/clickhouse/store/267/267cd730-33ca-4e43-8a84-e4f0786e364b/all_12_17_1/
Inspecting data on disk
⟩ docker exec -it posthog_clickhouse_1 ls -lhS /var/lib/clickhouse/store/267/267cd730-33ca-4e43-8a84-e4f0786e364b/all_12_17_1/
total 477M
-rw-r----- 1 clickhouse clickhouse 308M Nov  2 07:33 event.bin
-rw-r----- 1 clickhouse clickhouse  97M Nov  2 07:33 uuid.bin
-rw-r----- 1 clickhouse clickhouse  25M Nov  2 07:33 metric_value.bin
-rw-r----- 1 clickhouse clickhouse  25M Nov  2 07:33 timestamp.bin
-rw-r----- 1 clickhouse clickhouse  25M Nov  2 07:33 site_id.bin
-rw-r----- 1 clickhouse clickhouse  58K Nov  2 07:33 primary.idx
-rw-r----- 1 clickhouse clickhouse  19K Nov  2 07:33 event.mrk2
-rw-r----- 1 clickhouse clickhouse  19K Nov  2 07:33 metric_value.mrk2
-rw-r----- 1 clickhouse clickhouse  19K Nov  2 07:33 site_id.mrk2
-rw-r----- 1 clickhouse clickhouse  19K Nov  2 07:33 timestamp.mrk2
-rw-r----- 1 clickhouse clickhouse  19K Nov  2 07:33 uuid.mrk2
-rw-r----- 1 clickhouse clickhouse  494 Nov  2 07:33 checksums.txt
-rw-r----- 1 clickhouse clickhouse  123 Nov  2 07:33 columns.txt
-rw-r----- 1 clickhouse clickhouse   10 Nov  2 07:33 default_compression_codec.txt
-rw-r----- 1 clickhouse clickhouse    7 Nov  2 07:33 count.txt

What are these files?

You can read more on the exact structure of these files and how they're used in ClickHouse Index Design documentation.

What does the Merge stand for?

In every system, data must be ingested and kept up-to-date somehow. When data is inserted into MergeTree tables, each insert creates one or multiple parts for the data inserted.

As having a lot of small files would be disadvantageous for many reasons, from query performance to storage, ClickHouse regularly merges small parts together until they reach a maximum size.

The merge combines the two parts into a new one. This is similar to how merge sort works and atomically replaces the two source parts.

Merges can be monitored using the system.merges table.
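As a sketch of how that monitoring might look (the column selection here is illustrative), you can watch in-flight merges with a query like:

```sql
-- Inspect ongoing background merges
SELECT
    table,
    elapsed,
    progress,
    num_parts,
    result_part_name,
    formatReadableSize(total_size_bytes_compressed) AS size
FROM system.merges
ORDER BY elapsed DESC
```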

Query execution

Aggregation supported by ORDER BY

Our sensor_values table is set up in a way that queries similar to the following are really fast to execute.

SELECT
    toStartOfDay(timestamp),
    event,
    sum(metric_value) as total_metric_value
FROM sensor_values
WHERE site_id = 233 AND timestamp > '2010-01-01' and timestamp < '2023-01-01'
GROUP BY toStartOfDay(timestamp), event
ORDER BY total_metric_value DESC
LIMIT 20

Executing this reports:

20 rows in set. Elapsed: 0.042 sec. Processed 90.11 thousand rows, 3.54 MB (2.13 million rows/s., 83.60 MB/s.)

Why can it be fast? Because ClickHouse:

  1. leverages the table ORDER BY clause (ORDER BY (site_id, toStartOfDay(timestamp), event, uuid)) to skip reading a lot of data
  2. is fast and efficient about I/O and aggregation

Let's dig into how the primary index for this query is used by using EXPLAIN.

EXPLAIN indexes=1, header=1 SELECT
    toStartOfDay(timestamp),
    event,
    sum(metric_value) as total_metric_value
FROM sensor_values
WHERE site_id = 233 AND timestamp > '2010-01-01' and timestamp < '2023-01-01'
GROUP BY toStartOfDay(timestamp), event
ORDER BY total_metric_value DESC
LIMIT 20
FORMAT LineAsString

<details><summary>Show full EXPLAIN output</summary>

Expression (Projection)
Header: toStartOfDay(timestamp) DateTime
        event String
        total_metric_value Int64
  Limit (preliminary LIMIT (without OFFSET))
  Header: toStartOfDay(timestamp) DateTime
          event String
          sum(metric_value) Int64
    Sorting (Sorting for ORDER BY)
    Header: toStartOfDay(timestamp) DateTime
            event String
            sum(metric_value) Int64
      Expression (Before ORDER BY)
      Header: toStartOfDay(timestamp) DateTime
              event String
              sum(metric_value) Int64
        Aggregating
        Header: toStartOfDay(timestamp) DateTime
                event String
                sum(metric_value) Int64
          Expression (Before GROUP BY)
          Header: event String
                  metric_value Int32
                  toStartOfDay(timestamp) DateTime
            Filter (WHERE)
            Header: timestamp DateTime
                    event String
                    metric_value Int32
              SettingQuotaAndLimits (Set limits and quota after reading from storage)
              Header: and(greater(timestamp, '2010-01-01'), less(timestamp, '2023-01-01')) UInt8
                      timestamp DateTime
                      site_id UInt32
                      event String
                      metric_value Int32
                ReadFromMergeTree
                Header: and(greater(timestamp, '2010-01-01'), less(timestamp, '2023-01-01')) UInt8
                        timestamp DateTime
                        site_id UInt32
                        event String
                        metric_value Int32
                Indexes:
                  PrimaryKey
                    Keys:
                      site_id
                      toStartOfDay(timestamp)
                    Condition: and(and((toStartOfDay(timestamp) in (-Inf, 1672531200]), (toStartOfDay(timestamp) in [1262304000, +Inf))), and((site_id in [233, 233]), and((toStartOfDay(timestamp) in (-Inf, 1672531200]), (toStartOfDay(timestamp) in [1262304000, +Inf)))))
                    Parts: 2/2
                    Granules: 11/24415

The full output of EXPLAIN is obtuse, but the most important part is also the most deeply nested one:

ReadFromMergeTree
Header: and(greater(timestamp, '2010-01-01'), less(timestamp, '2023-01-01')) UInt8
        timestamp DateTime
        site_id UInt32
        event String
        metric_value Int32
Indexes:
    PrimaryKey
    Keys:
        site_id
        toStartOfDay(timestamp)
    Condition: and(and((toStartOfDay(timestamp) in (-Inf, 1672531200]), (toStartOfDay(timestamp) in [1262304000, +Inf))), and((site_id in [233, 233]), and((toStartOfDay(timestamp) in (-Inf, 1672531200]), (toStartOfDay(timestamp) in [1262304000, +Inf)))))
    Parts: 2/2
    Granules: 11/24415

At the start of the query, ClickHouse loaded the primary index of each part into memory. From this output, we know that the query first used the primary key to filter based on site_id and timestamp values stored in the index. This allowed it to know that only 11 out of 24415 granules (0.05%) contained any relevant data.

From there it read those 11 granules (11 * 8192 rows) worth of data from the timestamp, site_id, event and metric_value columns and did the rest of the filtering and aggregation on that data alone.

See this documentation for a guide on how to choose ORDER BY.

"Point queries" not supported by ORDER BY

Consider this query:

SELECT * FROM sensor_values WHERE uuid = '69028f26-768f-afef-1816-521b22d281ca'

Executing this query reports:

1 row in set. Elapsed: 0.703 sec. Processed 200.00 million rows, 3.20 GB (304.43 million rows/s., 4.87 GB/s.)

While the overall execution time of this query is not bad thanks to fast I/O, it needed to read 2200x the amount of data from disk. As the dataset size or column sizes increase, this performance would get dramatically worse.

Why is this query slower? Because our ORDER BY does not support fast filtering by uuid and ClickHouse needs to read the whole table to find a single record _and_ read all columns.

ClickHouse provides some ways to make this faster (e.g. Projections) but in general these require extra disk space or have other trade-offs.

Thus, it's important to make sure the ClickHouse schema is aligned with queries that are being executed.
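As a hypothetical sketch of the Projections approach mentioned above: a projection stores a second copy of the part's data in a different order, which the query optimizer can pick for uuid lookups. The projection name uuid_lookup below is invented, and materializing it rewrites existing parts and roughly doubles the table's disk usage:

```sql
-- Add a projection ordered by uuid so point lookups can skip granules
ALTER TABLE sensor_values ADD PROJECTION uuid_lookup
(
    SELECT * ORDER BY uuid
);

-- Build the projection for parts that already exist (expensive rewrite)
ALTER TABLE sensor_values MATERIALIZE PROJECTION uuid_lookup;
```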

PARTITION BY

Another tool to make queries faster is PARTITION BY. Consider the updated table definition:

CREATE TABLE sensor_values (
    timestamp DateTime,
    site_id UInt32,
    event VARCHAR,
    uuid UUID,
    metric_value Int32
)
ENGINE = MergeTree()
PARTITION BY intDiv(toYear(timestamp), 10)
ORDER BY (site_id, toStartOfDay(timestamp), event, uuid)
SETTINGS index_granularity = 8192

Here, ClickHouse would generate one partition per 10 years of data, allowing it to skip reading even the primary index in some cases.

In the underlying data, each part would belong to a single partition and only parts within a partition would get merged.

One additional benefit of partitioning by a derivative of timestamp is that if most queries touch recent data, you can also set up rules to automatically move older parts and partitions to cheaper storage or drop them entirely.
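For instance, with the PARTITION BY above (one partition per decade), removing old data is a cheap metadata operation rather than a rewrite. The partition id 200 below is illustrative, corresponding to the 2000s decade under intDiv(toYear(timestamp), 10):

```sql
-- Drop a whole decade of data without rewriting any parts
ALTER TABLE sensor_values DROP PARTITION 200;
```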

Query analysis

Let's use an identical query as before to explain with the new dataset:

SELECT
    toStartOfDay(timestamp),
    event,
    sum(metric_value) as total_metric_value
FROM sensor_values
WHERE site_id = 233 AND timestamp > '2010-01-01' and timestamp < '2023-01-01'
GROUP BY toStartOfDay(timestamp), event
ORDER BY total_metric_value DESC
LIMIT 20

<details><summary>Show full EXPLAIN output</summary>

Expression (Projection)
Header: toStartOfDay(timestamp) DateTime
        event String
        total_metric_value Int64
  Limit (preliminary LIMIT (without OFFSET))
  Header: toStartOfDay(timestamp) DateTime
          event String
          sum(metric_value) Int64
    Sorting (Sorting for ORDER BY)
    Header: toStartOfDay(timestamp) DateTime
            event String
            sum(metric_value) Int64
      Expression (Before ORDER BY)
      Header: toStartOfDay(timestamp) DateTime
              event String
              sum(metric_value) Int64
        Aggregating
        Header: toStartOfDay(timestamp) DateTime
                event String
                sum(metric_value) Int64
          Expression (Before GROUP BY)
          Header: event String
                  metric_value Int32
                  toStartOfDay(timestamp) DateTime
            Filter (WHERE)
            Header: timestamp DateTime
                    event String
                    metric_value Int32
              SettingQuotaAndLimits (Set limits and quota after reading from storage)
              Header: and(greater(timestamp, '2010-01-01'), less(timestamp, '2023-01-01')) UInt8
                      timestamp DateTime
                      site_id UInt32
                      event String
                      metric_value Int32
                ReadFromMergeTree
                Header: and(greater(timestamp, '2010-01-01'), less(timestamp, '2023-01-01')) UInt8
                        timestamp DateTime
                        site_id UInt32
                        event String
                        metric_value Int32
                Indexes:
                  MinMax
                    Keys:
                      timestamp
                    Condition: and(and((timestamp in (-Inf, 1672531199]), (timestamp in [1262304001, +Inf))), and((timestamp in (-Inf, 1672531199]), (timestamp in [1262304001, +Inf))))
                    Parts: 2/14
                    Granules: 3589/24421
                  Partition
                    Keys:
                      intDiv(toYear(timestamp), 10)
                    Condition: and(and((intDiv(toYear(timestamp), 10) in (-Inf, 202]), (intDiv(toYear(timestamp), 10) in [201, +Inf))), and((intDiv(toYear(timestamp), 10) in (-Inf, 202]), (intDiv(toYear(timestamp), 10) in [201, +Inf))))
                    Parts: 2/2
                    Granules: 3589/3589
                  PrimaryKey
                    Keys:
                      site_id
                      toStartOfDay(timestamp)
                    Condition: and(and((toStartOfDay(timestamp) in (-Inf, 1672531200]), (toStartOfDay(timestamp) in [1262304000, +Inf))), and((site_id in [233, 233]), and((toStartOfDay(timestamp) in (-Inf, 1672531200]), (toStartOfDay(timestamp) in [1262304000, +Inf)))))
                    Parts: 2/2
                    Granules: 12/3589

The relevant part of EXPLAIN is again nested deep within:

ReadFromMergeTree
Header: and(greater(timestamp, '2010-01-01'), less(timestamp, '2023-01-01')) UInt8
        timestamp DateTime
        site_id UInt32
        event String
        metric_value Int32
Indexes:
  MinMax
    Keys:
      timestamp
    Condition: and(and((timestamp in (-Inf, 1672531199]), (timestamp in [1262304001, +Inf))), and((timestamp in (-Inf, 1672531199]), (timestamp in [1262304001, +Inf))))
    Parts: 2/14
    Granules: 3589/24421
  Partition
    Keys:
      intDiv(toYear(timestamp), 10)
    Condition: and(and((intDiv(toYear(timestamp), 10) in (-Inf, 202]), (intDiv(toYear(timestamp), 10) in [201, +Inf))), and((intDiv(toYear(timestamp), 10) in (-Inf, 202]), (intDiv(toYear(timestamp), 10) in [201, +Inf))))
    Parts: 2/2
    Granules: 3589/3589
  PrimaryKey
    Keys:
      site_id
      toStartOfDay(timestamp)
    Condition: and(and((toStartOfDay(timestamp) in (-Inf, 1672531200]), (toStartOfDay(timestamp) in [1262304000, +Inf))), and((site_id in [233, 233]), and((toStartOfDay(timestamp) in (-Inf, 1672531200]), (toStartOfDay(timestamp) in [1262304000, +Inf)))))
    Parts: 2/2
    Granules: 12/3589

What this tells us is that ClickHouse:

  1. First leverages an internal MinMax index on timestamp to whittle down the number of parts to 2/14 and granules to 3589/24421
  2. Then it tries to filter via the partition key but this doesn't narrow things down further
  3. Then, it loads and leverages the Primary key as before to narrow data down to 12 granules.
  4. Lastly reads, filters and aggregates data in those 12 granules

The benefit here is that it could skip reading the primary key index for most of the parts that did not contain relevant data. If and how much this speeds up the query however depends on the size of the dataset.

Choosing a good PARTITION BY

Use partitions wisely - each INSERT should ideally only touch 1-2 partitions, and having too many partitions will cause issues around replication or prove useless for filtering.

Loading the primary index/marks file might not be the bottleneck you expect, so be sure to benchmark different schemas against each other.

See the following Altinity documentation for more guidance:

Other notes on MergeTree

Data is expensive to update

Updating data in ClickHouse is expensive and analogous to a schema migration.

For example, to update an event's properties, ClickHouse frequently needs to:

This makes things operationally hard. We mitigate this by:

No query planner

ClickHouse doesn't have a query planner in the sense PostgreSQL or other databases do.

On the one hand, you often end up fighting the query planner in other databases. If we know how ClickHouse works internally and can develop that into intuition for how SQL is executed, we're well-equipped to deal with performance issues as they arise.

On the other, this means that we'll need to be careful writing SQL as small changes can have huge performance implications.

Examples:

One notable exception to "no query planner" is that ClickHouse often pushes predicates from WHERE into PREWHERE. Filters in PREWHERE are executed first, and ClickHouse moves columns it thinks are "cheaper" or "more selective" into it. However, putting the wrong column (e.g. a fat column containing JSON) in PREWHERE can cause performance to tank.

Read more on PREWHERE in the ClickHouse docs.
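You can also place a predicate in PREWHERE explicitly to override ClickHouse's choice. As an illustrative sketch against the earlier sensor_values table (the 'heartbeat' event value is invented): ClickHouse reads only the PREWHERE columns first, then fetches the remaining columns just for granules whose rows survive that filter:

```sql
SELECT count()
FROM sensor_values
PREWHERE site_id = 233      -- small, selective column: read first
WHERE event = 'heartbeat'   -- evaluated only on rows passing PREWHERE
```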

Data compression

Columnar storage means that if subsequent values of a given column are often similar or identical, the data compresses really well. At PostHog we frequently see uncompressed / compressed ratios of 20x-40x for JSON columns and 300x-2000x for sparse small columns.

Compression ratios have direct impact on query performance: I/O is often the bottleneck, meaning that highly compressed data can be read faster from disk at the cost of more CPU work for decompression.

By default columns are compressed by the LZ4 algorithm. We've found good success using ZSTD(3) for storing JSON columns - see benchmarks for more information.

Another tip is to use ClickHouse's LowCardinality data type modifier on schemas where a given column will store values with low cardinality, i.e. the total number of distinct values is low. An example of this would be "country name".
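Both tips can be combined in a table definition. A minimal sketch, with a hypothetical table and column names:

```sql
CREATE TABLE page_events (
    uuid UUID,
    -- few distinct values: stored dictionary-encoded
    country_name LowCardinality(String),
    -- fat JSON column: trade CPU for a better compression ratio
    properties String CODEC(ZSTD(3))
)
ENGINE = MergeTree()
ORDER BY uuid
```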

Weak JOIN support

ClickHouse excels at aggregating data from a single table at a time. If you have a query with JOINs or subqueries, however, the right-hand side of the JOIN is loaded into memory first. Thus, you should always put the bigger table on the left-hand side of the JOIN!

This means that at scale JOINs can kill performance. Read more on the effect of removing JOINs from our events database here:

Suggested reading

Next in the ClickHouse manual: Data replication

ClickHouse Dictionaries

Engineering | Source: https://posthog.com/handbook/engineering/clickhouse/dictionaries

We don't use ClickHouse dictionaries very often, and there are a few aspects to them that have caused headaches in production.

Using a ClickHouse table as a source

If you don't provide a PASSWORD when creating a dictionary, you will likely get errors like this when calling dictGetOrNull:

Code: 516. DB::Exception: default: Authentication failed: password is incorrect, or there is no user with such name. If
you have installed ClickHouse and forgot password you can reset it in the configuration file. The password for default
user is typically located at /etc/clickhouse-server/users.d/default-password.xml and deleting this file will reset the
password. See also /etc/clickhouse-server/users.xml on the server where ClickHouse is installed. : while executing '...'

You could provide one in the DDL, but the problem with this is that if the password is ever rotated, you will need to re-run the migration, or you'll start getting auth errors again.

Here's an example of providing this section in the DDL:

CREATE DICTIONARY posthog.my_dict
(
    `a` String,
    `b` String,
    `c` Nullable(String),
    `d` Nullable(String),
    `e` Nullable(String)
)
PRIMARY KEY a, b
SOURCE(CLICKHOUSE(TABLE 'my_table' PASSWORD 'hunter42'))
LIFETIME(MIN 3000 MAX 3600)
LAYOUT(COMPLEX_KEY_HASHED())
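Once created, the dictionary can be queried with the dictGet family of functions. Since my_dict uses a complex key (a, b), the key is passed as a tuple. A sketch:

```sql
-- Look up attribute c for each (a, b) pair; NULL when the key is absent
SELECT
    a,
    b,
    dictGetOrNull('posthog.my_dict', 'c', (a, b)) AS c
FROM my_table
```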

ClickHouse Manual

Engineering | Source: https://posthog.com/handbook/engineering/clickhouse

Welcome to PostHog's ClickHouse manual.

About this manual

PostHog uses ClickHouse to power our data analytics tooling and we've learned a lot about it over the years. The goal of this manual is to share that knowledge externally and raise the average level of ClickHouse understanding for people starting work with ClickHouse.

If you have extensive ClickHouse experience, and want to contribute thoughts or tips of your own, please do by opening a PR or issue on GitHub!

Consider this manual a companion to other great resources out there:

Why ClickHouse

In 2020, we had launched PostHog for the first time, were getting great early traction, but were struggling with scaling.

To solve this problem we looked at a wide range of OLAP solutions, including Pinot, Presto, Druid, TimescaleDB, CitusDB, and ClickHouse. Some of our team had used these tools before at other companies, such as Uber where Pinot and Presto are both used extensively.

While assessing each tool, we looked at three main factors:

ClickHouse was a good fit for all of these factors, so we started doing a more thorough investigation. We read up on benchmarks and researched the experience of companies such as Cloudflare, which uses ClickHouse to process 6m requests per second. Eventually, we set up a test cluster to run our own benchmarks.

ClickHouse repeatedly performed an order of magnitude better than other tools we considered. We also discovered other perks, such as the fact that it is column-oriented and written in C++. We found these to be the key benefits of ClickHouse:

Eventually, we decided we knew enough to proceed and so we spun our test cluster out into an actual production cluster. It’s just part of how we like to bias for speed.

Now, ClickHouse powers all of our analytics features and we're happy with the path taken.

However, knowledge of how to build on and maintain it is more important than ever, which brings us to this manual.

Manual sections

Operations

Engineering | Source: https://posthog.com/handbook/engineering/clickhouse/operations

This document gives an overview of the kitchen side of ClickHouse: how various operations work, what tricky migrations we have experience with, as well as various settings and tips.

System tables

ClickHouse exposes a lot of information about its internals in system tables.

Some stand-out tables:

contain metadata about tables and columns

contain information about ongoing operations

system.replication_queue contains information about data replication

contain information about errors and crashes respectively

For examples of usage and tips, check out this ClickHouse blog article

Settings

ClickHouse provides daunting amounts of configuration on all levels. This section provides information on the different kinds of settings and how to configure them.

Query settings

Query settings allow you to manipulate the behavior of queries, for example by setting limits on query execution time and resource usage or toggling specific behaviors on and off.

Documentation:

Using query settings is done:

Server settings

Server settings allow tuning things like global thread or pool sizes, networking and other clickhouse-server-level configuration.

Documentation:

You can change server settings via config.xml file. Note: some settings may require a server restart.

MergeTree table settings

MergeTree settings allow configuring things from primary index granularity to merge behavior to limits of usage of this table.

Documentation:

MergeTree table settings are set either:

Profiles and users

ClickHouse allows creating different profiles and users with their own set of settings. This can be useful to grant read-only access to some users or otherwise limit resource use.

Read more in documentation:

Querying ClickHouse from the app

While not currently the case, all PostHog products and features that query ClickHouse _should_ use a specific ClickHouse user for the use case. We have a bunch of product-specific ClickHouse users that are specified when running a query with the ClickHouse client. For example, see this.

When developing a new product or feature, using a dedicated ClickHouse user is important for multiple reasons:

Mutations

ALTER TABLE ... UPDATE and ALTER TABLE ... DELETE operations which mutate data require ClickHouse to rewrite whole data via special merge operations. These are frequently expensive operations and require monitoring.

You can monitor progress of mutations via the following system tables:

When creating mutations, it's often wise to adjust the value of the mutations_sync setting.

Running mutations can be stopped by issuing a KILL MUTATION WHERE mutation_id = '...' statement.

Note that this may not stop any currently running merges. To do so, check out section on SYSTEM STOP MERGES
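A sketch of that monitoring and cancellation workflow (the mutation_id value below is illustrative):

```sql
-- Watch progress of in-flight mutations
SELECT
    mutation_id,
    command,
    parts_to_do,
    is_done,
    latest_fail_reason
FROM system.mutations
WHERE NOT is_done;

-- Cancel a misbehaving mutation by its id
KILL MUTATION WHERE mutation_id = 'mutation_1234.txt';
```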

GDPR

When necessary to delete user data due to GDPR or otherwise, it's wise to do so in batches and asynchronously.

At PostHog, when deleting user data, we schedule for all deletions to occur once per week to minimize the cost of rewriting data.

In the future, lightweight deletes might simplify this process.
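A sketch of both deletion styles, with hypothetical table and column names:

```sql
-- Mutation-based delete: rewrites every part containing matching rows,
-- which is why we batch pending deletion requests into one weekly run
ALTER TABLE events DELETE
WHERE person_id IN (
    '00000000-0000-0000-0000-000000000001',
    '00000000-0000-0000-0000-000000000002'
);

-- Lightweight delete (newer ClickHouse versions): rows are hidden
-- immediately and physically removed during later merges
DELETE FROM events
WHERE person_id = '00000000-0000-0000-0000-000000000003';
```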

Merges

As explained previously, merges are the lifeblood of ClickHouse, responsible for optimizing how data is laid out on disk as well as for deduplicating data.

Merges can be monitored via the following tables:

OPTIMIZE TABLE

The OPTIMIZE TABLE statement schedules merges for a table, optimizing the on-disk layout, speeding up queries, or forcing some schema changes into effect.

Note: not all parts are guaranteed to be merged if the size of parts exceeds maximum limits or if data is already in a single part. In this case adding a FINAL modifier forces the merge regardless.
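For example (the table name and partition id here are illustrative):

```sql
-- Schedule merges for a single partition; FINAL forces the merge
-- even when the data already sits in one part
OPTIMIZE TABLE sensor_values PARTITION 200 FINAL;
```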

SYSTEM STOP MERGES

SYSTEM STOP MERGES statement can stop background merges from occurring temporarily for a table or the whole database. This can be useful during trickier schema migrations when copying data.

Note that unless ingestion is paused during this time, this can easily lead to "too many parts" errors.

Merges can be resumed via SYSTEM START MERGES statement.
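A minimal sketch of bracketing a migration with these statements (table name illustrative):

```sql
-- Pause background merges for one table
SYSTEM STOP MERGES sensor_values;

-- ... perform the tricky schema change / data copy here ...

-- Resume background merges afterwards
SYSTEM START MERGES sensor_values;
```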

Important settings

Merges have many relevant settings to be cognizant of:

when parts count gets high.

server settings control how many merges are executed in parallel

Simple schema changes

As in any other database, schema changes are done via ALTER TABLE statements.

One area where ClickHouse differs from other databases is that schema changes are generally lazy and apply to only new data or merged parts. This applies to:

You can generally force these changes onto old data by forcing data to be merged via OPTIMIZE TABLE FINAL statement, but this can be expensive.

TTLs

ClickHouse TTLs allow dropping old rows or columns after expiry.

It's suggested to set up your table to partition by timestamp as well, so old files can be dropped completely instead of needing to be rewritten as a result of TTL.
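A minimal sketch of a row TTL combined with timestamp-based partitioning, with a hypothetical table name:

```sql
CREATE TABLE sensor_values_short_lived (
    timestamp DateTime,
    site_id UInt32,
    metric_value Int32
)
ENGINE = MergeTree()
-- monthly partitions: expired months can be dropped whole
PARTITION BY toYYYYMM(timestamp)
ORDER BY (site_id, timestamp)
-- rows older than a year are removed automatically
TTL timestamp + INTERVAL 1 YEAR DELETE
```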

Tricky schema changes

Some schema changes are deceptively hard and frequently require rewriting the whole table or re-creating the tables.

Make sure to never re-use Zookeeper paths when re-creating a replicated table!

The difference often comes down to how data is stored on disk and its implications.

Async migrations

At PostHog, we've developed Async Migrations for executing these long-running operations in the background without affecting availability.

You can learn more about Async Migrations in our blog, handbook, and runbook.

Pausing ingestion

This is frequently a prerequisite of any large-scale schema change as new data may get lost when you are copying data from one place to another.

If you're using Kafka engine tables for ingestion, you can pause ingestion by dropping materialized view(s) attached to Kafka engine tables.

To restart ingestion, recreate the dropped table(s).

Note that you can also detach the materialized views instead of dropping them (DETACH TABLE my_mv). Be aware, however, that detached views have some weird behaviors: they are re-attached on node restarts, they "exist in a limbo" (they do not show up in system.tables and cannot be dropped, but SHOW CREATE TABLE my_mv will return results), and they can cause naming clashes.
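Using the ingestion-warnings tables from earlier in this manual as an example, pausing and resuming might look like this (the view definition below is a simplified sketch, not the production DDL):

```sql
-- Pause: drop the materialized view that moves data out of the Kafka table
DROP TABLE ingestion_warnings_mv;

-- Resume: recreate the view with its original definition
CREATE MATERIALIZED VIEW ingestion_warnings_mv
TO ingestion_warnings
AS SELECT * FROM kafka_ingestion_warnings;
```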

Changing table engines

When changing table engines, you can leverage ATTACH PARTITION commands to move data between tables.

Note: ATTACH PARTITION commands only work if the two tables have identical structure: same columns and ORDER BY/PARTITION BY. It works by creating hard links between partitions, so the operation does not require any extra disk space until merges happen.

Thus it's important to stop ingesting new data and merges during this operation.

PostHog needed to implement this kind of operation to move to a sharded schema: 0004_replicated_schema.py.
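A sketch of the mechanics, reusing the sensor_values table from earlier (the new engine and partition id are illustrative):

```sql
-- 1. Create the target table with the new engine but an identical
--    structure (AS clones the column list from the source table)
CREATE TABLE sensor_values_new AS sensor_values
ENGINE = ReplacingMergeTree()
ORDER BY (site_id, toStartOfDay(timestamp), event, uuid);

-- 2. Hard-link one partition's parts into the new table, repeated per
--    partition; no data is copied and no extra disk space is needed
ALTER TABLE sensor_values_new ATTACH PARTITION 200 FROM sensor_values;
```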

Changing ORDER BY or PARTITION BY

Changing ORDER BY and PARTITION BY affects how data is stored on disk and requires rewriting this data.

In the case of ORDER BY, you can modify it with ALTER TABLE my_table MODIFY ORDER BY, but only to add a new column expression. Other changes require using the approaches below.

Suggested procedure if using ReplacingMergeTree:

  1. Create a new table with correct ORDER BY
  2. Create a new materialized view table, writing new data to new table.
  3. Copy data over from old table via INSERT INTO SELECT
  4. Deduplicate via OPTIMIZE TABLE FINAL if feasible.

Note that INSERT-ing data this way may be slow or time out. Consider:

Note that this operation temporarily doubles the amount of disk space you need.
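The procedure above can be sketched as follows. All names, the version column, and the new ORDER BY are hypothetical:

```sql
-- 1. New table with the desired ORDER BY
CREATE TABLE sensor_values_v2 (
    timestamp DateTime,
    site_id UInt32,
    event VARCHAR,
    uuid UUID,
    metric_value Int32,
    version UInt64
)
ENGINE = ReplacingMergeTree(version)
ORDER BY (site_id, uuid);

-- 2. Point new writes at sensor_values_v2, e.g. via a materialized view.

-- 3. Backfill historical data (consider batching by partition)
INSERT INTO sensor_values_v2
SELECT *, 0 AS version
FROM sensor_values;

-- 4. Collapse duplicates if feasible
OPTIMIZE TABLE sensor_values_v2 FINAL;
```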

An example (from PostHog) of an async migration: 0005_person_replacing_by_version.py

Resharding

At PostHog, we haven't had to reshard data (yet), but the process would look similar to changing ORDER BY or PARTITION BY, requiring us either to pause ingestion or to deduplicate at the end.

Storing/restoring parts of data from backups might also simplify this process.

Denormalizing columns via dictionaries

A powerful tool in the arsenal of performance is de-normalization of data.

At PostHog, we eliminated some JOINs for person data by storing information on person identities and properties directly on events.

Backfilling this data was implemented via ALTER TABLE UPDATE populating new columns. The column data was pulled in using dictionaries, which allowed querying and storing data from other tables in memory during the update.

An alternative approach might have been to create a new table and populate it similar to changing ORDER BY, but this would have required expensive deduplication, a lot of extra space and even more memory usage.
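A sketch of such a dictionary-backed backfill, with hypothetical table, column, and dictionary names (not PostHog's actual migration):

```sql
-- person_dict is assumed to be a dictionary keyed by uuid that serves
-- person properties from another table, held in memory during the update
ALTER TABLE events
UPDATE person_properties = dictGet('person_dict', 'properties', uuid)
WHERE person_properties = '';
```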

Learn more on this:

Useful information for cluster admins

Detached materialized views

If you ever DETACH a materialized view, it's important to keep in mind that the view now exists in a "limbo" state that can be confusing and cause issues.

Detached views don't show up in system.tables, but you can verify that a view exists by running SHOW CREATE TABLE <detached_mv>.

In addition, detached views (except if DETACH was executed with PERMANENTLY) will be reattached on server restarts!

As an example of how this has been problematic for us in the past, we once detached views to handle ingestion problems, and then on rebooting the nodes we got confused as to why ingestion hadn't stopped!
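
To make a detach survive restarts intentionally, say so explicitly:

```sql
-- Without PERMANENTLY, the view silently comes back on server restart
DETACH TABLE my_events_mv PERMANENTLY;

-- Bring it back later
ATTACH TABLE my_events_mv;
```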

Orphan Zookeeper records

Prior to ClickHouse 22.3, bugs in ClickHouse meant that Zookeeper would reasonably often end up with "orphan records". These are references to things, such as parts, that no longer exist in ClickHouse but remain referenced. While orphan records were common prior to 22.3, it's still possible for such records to come to exist on newer ClickHouse versions as well, as an expected consequence of distributed systems.

Orphan records pose a problem because they may cause ClickHouse to use resources and try to perform operations on e.g. non-existent parts. For instance, we've seen mutations hang for months due to ClickHouse expecting it still needs to modify a part but the part no longer existing.

As a result, it's important to clean these up.

Orphan parts

Orphaned parts are perhaps the most common type of orphan record, so much so that Altinity has written a guide to help identify and delete them, and recommended that everyone do so when upgrading past 22.3.

To do this cleanup properly, you should:

  1. Check if you have any orphan parts (this should be run per node in your cluster, or you could modify the query to use clusterAllReplicas):
select zoo.p_path as part_zoo, zoo.ctime, zoo.mtime, disk.p_path as part_disk
from
(
  select concat(path,'/',name) as p_path, ctime, mtime
  from system.zookeeper where path in (select concat(replica_path,'/parts') from system.replicas)
) zoo
left join
(
  select concat(replica_path,'/parts/',name) as p_path
  from system.parts inner join system.replicas using (database, table)
) disk on zoo.p_path = disk.p_path
where part_disk=''
order by part_zoo
  2. Generate delete statements for each record that needs to be removed from Zookeeper:
clickhouse-client --password <password> --query "select 'delete '||part_zoo
from (
select zoo.p_path as part_zoo, zoo.ctime, zoo.mtime, disk.p_path as part_disk
from
(
 select concat(path,'/',name) as p_path, ctime, mtime
 from system.zookeeper where path in (select concat(replica_path,'/parts') from system.replicas)
) zoo
left join
(
 select concat(replica_path,'/parts/',name) as p_path
 from system.parts inner join system.replicas using (database, table)
) disk on zoo.p_path = disk.p_path
where part_disk='' and zoo.mtime <= now() - interval 30 day
order by part_zoo) format TSVRaw" > tmp_zk_orphans
  3. SSH into _one_ of your Zookeeper nodes
  4. Start up the ZK CLI (zkCli.sh) and paste the delete statements
  5. Check that the query from step 1 no longer returns anything
Orphan replication queue records

A more confusing issue can also arise when the replication queue contains operations that reference nonexistent parts.

This is harder to notice proactively but may manifest itself in a migration that hangs indefinitely because it still has parts it needs to operate on but those parts don't exist.

If you spot a migration that doesn't seem to be progressing after a long time, it's worth checking whether the parts_to_do column of the system.mutations table contains any parts that don't exist.

You can also spot this by looking at the replication queue for long-running operations. You could run the following query, for example:

select * from clusterAllReplicas('<cluster_name>', system.replication_queue) order by create_time

And check if any operations were created a long time ago, particularly simple ones like GET_PART.

Finally, another symptom you can look out for is recurrent logs that look like the following:

Checking part 137_0_27780_19674
Checking if anyone has a part 137_0_27780_19674 or covering part.

If the server has been looking for a part for days and hasn't found it anywhere, there's probably something wrong.

Having established this problem, the way to fix it is as follows:

  1. Get the node_name of the hanging queue record
  2. SSH into a Zookeeper node and using ZK CLI, delete the record. Note that for this you will need the full Zookeeper path of the record. You can use ls within the Zookeeper CLI to understand the storage structure if necessary. The path should look something like this: /clickhouse/tables/<shard_number>/<database_name>.<table_name>/replicas/<replica_name>/queue/<node_name> but will also vary for replicated and non-replicated tables.
  3. Having deleted the record, you should run SYSTEM RESTART REPLICA <table_name> on the ClickHouse node with the orphan queue item. This command will fetch the updated metadata from Zookeeper. It's also worth running it across your cluster for good measure.

Learn more

More information for ClickHouse operations can be found in:

Next in the ClickHouse manual: Schema case studies

Query performance

Engineering | Source: https://posthog.com/handbook/engineering/clickhouse/performance

This document goes over:

Tooling

clickhouse-client

clickhouse-client is a command-line application for running queries against ClickHouse.

When executing queries, it details progress, execution time, how many rows and gigabytes of data were processed, and how much CPU was used.

Image: clickhouse-client progress reporting

You can get additional logging from ClickHouse by setting SET send_logs_level = 'trace' before running a query.

system.query_log

ClickHouse saves all queries it runs into system.query_log table.

It includes information on:

Some tips for querying the query_log:

At PostHog, we also add metadata to each query via the log_comment setting to make results easier to analyze. This includes information on the source of the query and how it was constructed. See this runbook for more details.

An example query to get recent slow queries:

SELECT
    query_duration_ms,
    query,
    event_time,
    read_rows,
    formatReadableSize(read_bytes) as read_size,
    result_rows,
    formatReadableSize(result_bytes) as result_size,
    formatReadableSize(memory_usage) as memory,
    columns,
    query_id
FROM system.query_log
WHERE
    query NOT LIKE '%query_log%'
    AND type = 'QueryFinish'
    AND event_time > now() - toIntervalDay(3)
    AND query_duration_ms > 30000
    /* If using log_comment, consider including something like:
    AND JSONExtract(log_comment, 'kind') = 'request'
    */
ORDER BY query_duration_ms desc
LIMIT 10

Note that this table is not distributed; in a cluster setting you might need to run the query against each node separately or do ad-hoc distributed queries.
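
One ad-hoc way to run such a query against every node at once is the clusterAllReplicas table function (the cluster name here is illustrative):

```sql
SELECT hostName() AS host, count() AS slow_queries
FROM clusterAllReplicas('my_cluster', system.query_log)
WHERE type = 'QueryFinish'
  AND event_time > now() - toIntervalDay(3)
  AND query_duration_ms > 30000
GROUP BY host;
```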

EXPLAIN

Previous pages in this manual showed various examples of using the ClickHouse EXPLAIN statement to your advantage.

Various forms of explain can detail:

Read more about EXPLAIN in ClickHouse's EXPLAIN Statement docs.

Flame graphs

For CPU-bound calculations, flamegraphs can help visualize what ClickHouse worked on during query execution.

Image: flamegraph of a query execution

We've built flamegraph support into PostHog. You can find tools to generate flamegraphs for queries under PostHog instance settings.

Importance of the page cache

When running queries, you might encounter an odd artifact: the first time you run a query, it's really slow but it speeds up significantly when run again.

This behavior is due to the Linux page cache. In broad terms, the operating system caches recently read files into memory, speeding up subsequent reads of the same data.

Because most ClickHouse queries depend on fast I/O to execute quickly, this can have a significant effect on query performance. It is one reason why at PostHog our ClickHouse nodes have a lot of memory available.

Effect on benchmarking

This behavior can be a problem for profiling: users constructing new queries might not hit the page cache and receive a worse experience than benchmarking may show.

This means it's often important to wipe the page cache on ClickHouse when benchmarking queries. This can be achieved with the following command on a ClickHouse node:

sudo sh -c "/usr/bin/echo 3 > /proc/sys/vm/drop_caches"

Note that the above will only drop the cache on the given node; distributed queries might still be affected by the page cache on nodes in other shards.

For completely clean benchmarking, you might also want to drop ClickHouse's internal mark cache.
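
The mark cache can be dropped with a SYSTEM statement from clickhouse-client:

```sql
SYSTEM DROP MARK CACHE;
-- Depending on your setup, you may also want:
SYSTEM DROP UNCOMPRESSED CACHE;
```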

You can also use the min_bytes_to_use_direct_io setting to bypass the page cache at the query level.

When set to a value greater than 0, ClickHouse will use O_DIRECT for disk reads whenever the total data to be read exceeds the threshold (in bytes).

SETTINGS min_bytes_to_use_direct_io = 1;

Join algorithms

JOINs are expensive in ClickHouse, so any opportunities to speed them up are welcome.

One of the quickest possible wins on that front is by benchmarking different join algorithms.

Newer ClickHouse versions have added more algorithms, and it's worth keeping an eye on the ones that come out and check if they help improve query performance.

In PostHog's case, we have moved away from the default algorithm (alias for direct,hash) in favor of direct,parallel_hash. parallel_hash is effectively the same as hash, but it does the computation in multiple buckets. It aims to be faster by consuming a bit more resources.

In our extensive benchmarking (including in our production environment), we've found that across the board using parallel_hash over hash provided us with the following speed improvements:

This came at a cost of up to 1.5x more memory usage, as well as a bit more CPU usage, which were acceptable tradeoffs in our case.
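
Different algorithms can be benchmarked per query by overriding the join_algorithm setting (the query itself is illustrative):

```sql
SELECT count()
FROM events AS e
INNER JOIN person_distinct_id2 AS pdi
    ON e.distinct_id = pdi.distinct_id
SETTINGS join_algorithm = 'direct,parallel_hash';
```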

Tips for achieving well-performing queries

Previous pages in the ClickHouse manual have highlighted the importance of setting up the correct schema and detailing how queries work in a distributed setting.

This section highlights some general rules of thumb that can help speed up queries:

Also always do your benchmarking in a realistic setting: on large datasets on powerful machines.

Next in the ClickHouse manual: Operations

Query attribution

Engineering | Source: https://posthog.com/handbook/engineering/clickhouse/query-attribution

A guideline for attributing ClickHouse queries correctly.

Current state

We don't fully understand why our ClickHouse clusters sometimes get overloaded. We extensively use the log_comment query setting to attach a JSON payload with a bunch of metadata to each query.

A bit more background

We process thousands of queries per second. Historically this was mostly traffic from our application (us.posthog.com / eu.posthog.com) using only the single default ClickHouse user. Recently, it has been a mix of different query issuers (Temporal, Celery, and services cut out from Django), with most queries still using the default user.

We've managed to separate batch_export, app and api traffic to use separate ClickHouse users and tune the settings to not fully starve any of those use cases of capacity.

Most ClickHouse queries made as a result of an HTTP request to the Django app contain the proper http_request_id, route_id and id.

This allows us to do basic analysis.

Where we want to be

We want to know:

This will allow us to better manage ClickHouse load and understand which products and features require the most compute resources and how they are correlated, especially how one API request may fan out into multiple ClickHouse queries.

Tags

In Python, there is a tag_queries helper function one may use. Be aware that it tags all queries issued from within a Python thread (it uses thread-local memory).

Alternatively, you may consider tags_context for localized tags.
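
These helpers are internal to PostHog, but the underlying mechanism is thread-local state. A minimal self-contained sketch of how such helpers might work (not our actual implementation; names mirror the helpers above for illustration only):

```python
import json
import threading
from contextlib import contextmanager

_local = threading.local()

def tag_queries(**tags):
    """Tag all subsequent queries issued from this thread."""
    current = getattr(_local, "tags", {})
    _local.tags = {**current, **tags}

@contextmanager
def tags_context(**tags):
    """Apply tags only within the block, restoring the previous tags after."""
    previous = getattr(_local, "tags", {})
    _local.tags = {**previous, **tags}
    try:
        yield
    finally:
        _local.tags = previous

def log_comment() -> str:
    """JSON string to pass as the log_comment setting on a query."""
    return json.dumps(getattr(_local, "tags", {}))

tag_queries(kind="request", team_id=2)
with tags_context(feature="trends"):
    inner = json.loads(log_comment())  # includes the localized 'feature' tag
outer = json.loads(log_comment())      # 'feature' tag is gone again
```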

Each query sent to ClickHouse must have the following tags:

Types were reverse engineered from our ClickHouse system.query_log.log_comment column.

Current state of log_comment

We use at least 42 unique tags:

| Tag Key | Occurrences |
|---------|-------------|
| query_settings | 1,826,072 |
| workload | 1,826,072 |
| kind | 1,788,761 |
| id | 1,788,745 |
| team_id | 1,667,247 |
| user_id | 1,479,256 |
| http_request_id | 1,441,577 |
| container_hostname | 1,441,577 |
| route_id | 1,441,577 |
| http_user_agent | 1,433,807 |
| http_referer | 1,127,494 |
| query_type | 809,331 |
| has_joins | 694,281 |
| has_json_operations | 694,281 |
| person_on_events_mode | 456,285 |
| modifiers | 439,930 |
| timings | 439,930 |
| cache_key | 394,951 |
| sentry_trace | 394,951 |
| query | 374,767 |
| client_query_id | 324,355 |
| access_method | 322,253 |
| feature | 297,173 |
| insight_id | 157,439 |
| entity_math | 133,956 |
| filter | 133,956 |
| filter_by_type | 133,956 |
| number_of_entities | 133,956 |
| query_time_range_days | 133,956 |
| dashboard_id | 102,664 |
| session_id | 62,901 |
| user_email | 45,918 |
| trigger | 33,422 |
| chargeable | 26,807 |
| $process_person_profile | 1,856 |
| experiment_name | 1,697 |
| experiment_id | 1,697 |
| experiment_feature_flag_key | 1,697 |
| experiment_is_data_warehouse_query | 770 |
| clickhouse_exception_type | 556 |
| usage_report | 25 |
| batch_export_id | 16 |

Queries to dive into log_comment

Get tag frequency

SELECT
    arrayJoin(JSONExtractKeys(log_comment)) AS tag_key,
    count() AS occurrences
FROM clusterAllReplicas(posthog, system.query_log)
WHERE
    event_date = '2025-06-06'
    AND is_initial_query
    AND type = 'QueryStart'
    AND log_comment != ''
GROUP BY tag_key
ORDER BY occurrences DESC;

Get tag type, number of occurrences and values

SELECT
    {{tag_name}} AS tag_name,
    JSONType(log_comment, {{tag_name}}) AS value_type,
    count() AS occurrences,
    count(distinct JSONExtractRaw(log_comment, {{tag_name}})) AS distinct_values,
    groupUniqArray(20)(JSONExtractRaw(log_comment, {{tag_name}})) AS example_value -- Get an example
FROM clusterAllReplicas(posthog, system.query_log)
WHERE
    event_date >= '2025-06-06'
    AND is_initial_query
    AND type = 'QueryStart'
    AND JSONHas(log_comment, {{tag_name}})
GROUP BY value_type
ORDER BY occurrences DESC;
Example value of the query tag
{
  "kind": "DataVisualizationNode",
  "source": {
    "kind": "HogQLQuery",
    "query": "SELECT\n round(avg($is_bounce), 1) AS bounce_rate\nFROM\n sessions\n INNER JOIN events ON sessions.id = events.properties.$session_id\nWHERE\n event = '$pageview'\n AND properties.$current_url LIKE '%forum.%'\n",
    "variables": {}
  },
  "display": "BoldNumber",
  "chartSettings": {
    "yAxis": [
      {
        "column": "bounce_rate",
        "settings": {
          "display": {
            "color": "#1d4aff",
            "label": "",
            "trendLine": false,
            "displayType": "auto",
            "yAxisPosition": "left"
          },
          "formatting": {
            "style": "percent",
            "prefix": "",
            "suffix": ""
          }
        }
      }
    ],
    "seriesBreakdownColumn": null
  },
  "tableSettings": {
    "columns": [
      {
        "column": "bounce_rate",
        "settings": {
          "display": {
            "color": "#1d4aff",
            "label": "",
            "trendLine": false,
            "displayType": "auto",
            "yAxisPosition": "left"
          },
          "formatting": {
            "style": "percent",
            "prefix": "",
            "suffix": ""
          }
        }
      }
    ],
    "conditionalFormatting": []
  }
}

app_metrics

Engineering | Source: https://posthog.com/handbook/engineering/clickhouse/schema/app-metrics

Problem and constraints

PostHog provides Apps for data imports, exports and transformation purposes.

App metrics helps users of apps know whether their apps are reliable, and gives them tooling to debug errors.

When designing the schema, we needed ingestion of these stats to be as 'cheap' as possible. On the flip side, queries against the data did not need to support much beyond time-range filtering.

Schema

CREATE TABLE sharded_app_metrics
(
    team_id Int64,
    timestamp DateTime64(6, 'UTC'),
    plugin_config_id Int64,
    category LowCardinality(String),
    job_id String,
    successes SimpleAggregateFunction(sum, Int64),
    successes_on_retry SimpleAggregateFunction(sum, Int64),
    failures SimpleAggregateFunction(sum, Int64),
    error_uuid UUID,
    error_type String,
    error_details String CODEC(ZSTD(3)),
    _timestamp DateTime,
    _offset UInt64,
    _partition UInt64
)
ENGINE = ReplicatedAggregatingMergeTree('/clickhouse/tables/{shard}/posthog.sharded_app_metrics', '{replica}')
PARTITION BY toYYYYMM(timestamp)
ORDER BY (team_id, plugin_config_id, job_id, category, toStartOfHour(timestamp), error_type, error_uuid)

<details><summary>Click here to see supporting tables</summary>

CREATE TABLE app_metrics
(
    team_id Int64,
    timestamp DateTime64(6, 'UTC'),
    plugin_config_id Int64,
    category LowCardinality(String),
    job_id String,
    successes SimpleAggregateFunction(sum, Int64),
    successes_on_retry SimpleAggregateFunction(sum, Int64),
    failures SimpleAggregateFunction(sum, Int64),
    error_uuid UUID,
    error_type String,
    error_details String CODEC(ZSTD(3)),
    _timestamp DateTime,
    _offset UInt64,
    _partition UInt64

)
ENGINE=Distributed('posthog', 'posthog', 'sharded_app_metrics', rand())

CREATE MATERIALIZED VIEW app_metrics_mv
TO posthog.sharded_app_metrics
AS SELECT
    team_id,
    timestamp,
    plugin_config_id,
    category,
    job_id,
    successes,
    successes_on_retry,
    failures,
    error_uuid,
    error_type,
    error_details
FROM posthog.kafka_app_metrics


CREATE TABLE kafka_app_metrics
(
    team_id Int64,
    timestamp DateTime64(6, 'UTC'),
    plugin_config_id Int64,
    category LowCardinality(String),
    job_id String,
    successes Int64,
    successes_on_retry Int64,
    failures Int64,
    error_uuid UUID,
    error_type String,
    error_details String CODEC(ZSTD(3))
)
ENGINE=Kafka('kafka:9092', 'clickhouse_app_metrics_test', 'group1', 'JSONEachRow')

Decision: Store errors in the same table as metrics

Error tracking is fundamentally different from metrics, but we wanted to avoid the "failure counts" and the error data we store going out of sync.

For this reason, the two are stored in the same table, with the error_details column containing JSON-encoded metadata about the error, including the relevant event payload and stack trace.

This runs the risk of data storage increasing significantly if a lot of large errors occur.

For this reason error_details column uses ZSTD(3) codec.

Sorting by error_type is also significant: error_details of a given error type should be similar and compress well.

In the future, we might introduce TTLs for the error columns if storage becomes a problem or periodically wipe error data in other ways.

Decision: pre-aggregate metrics in app in memory

Apps act on events as they're processed and users might have dozens of apps installed at the same time.

For this reason, emitting a Kafka message per app per event ingested ends up being too expensive. We instead aggregate metrics (and errors) in memory and only periodically flush data to Kafka.
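
The aggregation pattern can be sketched as follows (a simplified Python illustration, not the actual plugin-server code; names are hypothetical):

```python
from collections import defaultdict

class AppMetricsAggregator:
    """Aggregate per-app metrics in memory, flushing to Kafka periodically."""

    def __init__(self, flush_fn):
        self.flush_fn = flush_fn  # e.g. a Kafka producer's send method
        self.counts = defaultdict(lambda: {"successes": 0, "failures": 0})

    def record(self, team_id, plugin_config_id, category, success):
        key = (team_id, plugin_config_id, category)
        field = "successes" if success else "failures"
        self.counts[key][field] += 1

    def flush(self):
        # One message per (team, app, category) instead of one per event
        for (team_id, plugin_config_id, category), metrics in self.counts.items():
            self.flush_fn({
                "team_id": team_id,
                "plugin_config_id": plugin_config_id,
                "category": category,
                **metrics,
            })
        self.counts.clear()

sent = []
agg = AppMetricsAggregator(sent.append)
for _ in range(1000):
    agg.record(2, 7, "processEvent", success=True)
agg.record(2, 7, "processEvent", success=False)
agg.flush()  # 1001 events collapse into a single flushed record
```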

This has the trade-off that counts can be subtly off after deploys or restarts. If this becomes a significant user concern, we may reduce the precision of the numbers shown in the UI.

Decision: using AggregatingMergeTree

To make ingesting and storing this data cheaper, AggregatingMergeTree is used.

Each time two parts are merged, rows with identical ORDER BY values are collapsed into a single row in the new part.

In this setup, this means that:

Even with all of this we still need to sum values in queries as merges may never occur.
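
Because collapsing only happens at merge time, queries still aggregate explicitly, for example:

```sql
SELECT
    toStartOfHour(timestamp) AS hour,
    sum(successes) AS successes,
    sum(failures) AS failures
FROM app_metrics
WHERE team_id = 2
  AND plugin_config_id = 7
  AND timestamp >= now() - INTERVAL 7 DAY
GROUP BY hour
ORDER BY hour;
```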

Decision: sharding

To make data cheaper to store, this table is sharded.

Results

On US Cloud, the disk size of this table was 6 MB after aggregating nearly 2 billion metrics. For comparison, storing a similar number of events can require hundreds of gigabytes.

Queries against this schema are also usually measured in milliseconds.

The reason we were able to leverage pre-aggregation to this extent was since we only needed to answer a few questions:

These queries all lend themselves well to pre-aggregation, meaning an expert schema could store this data very cheaply at the cost of some flexibility.

Next schema in the ClickHouse manual: person_distinct_id

Overview

Engineering | Source: https://posthog.com/handbook/engineering/clickhouse/schema

When designing a schema for ClickHouse, there are dozens of large and small decisions engineers need to make to design a well-performing solution fit for the problem being solved.

The following documents outline various schemas we have at PostHog, examining why they were designed this way, what their good parts are, and what mistakes were made.

Schemas

person_distinct_id

Engineering | Source: https://posthog.com/handbook/engineering/clickhouse/schema/person-distinct-id

The person_distinct_id table makes for an interesting case study of how initial schema design flaws were exposed over time and how they were fixed.

Problem being solved

PostHog needs to know which users are associated with each event.

In frontend libraries like posthog-js, when persons land on a site they're initially anonymous with a random distinct ID. As persons log in or sign up, posthog.identify should be called to signal that the anonymous person is actually some logged in person and their prior events should be grouped together.

The semantics of this have changed significantly with person-on-events project.

Schema

CREATE TABLE person_distinct_id
(
    distinct_id VARCHAR,
    person_id UUID,
    team_id Int64,
    _sign Int8 DEFAULT 1,
    is_deleted Int8 ALIAS if(_sign==-1, 1, 0),
    _timestamp DateTime,
    _offset UInt64
) ENGINE = ReplicatedCollapsingMergeTree('/clickhouse/tables/noshard/posthog.person_distinct_id', '{replica}-{shard}', _sign)
ORDER BY (team_id, distinct_id, person_id)

The is_deleted column is not actually written to; it is dynamically calculated based on the _sign column.

This table was often queried joined with the events table, along the following lines:

SELECT avg(count())
FROM events
INNER JOIN (
    SELECT distinct_id, argMax(person_id, _timestamp) as person_id
    FROM (
        SELECT distinct_id, person_id, max(_timestamp) as _timestamp
        FROM person_distinct_id
        WHERE team_id = 2
        GROUP BY person_id, distinct_id, team_id
        HAVING max(is_deleted) = 0
    )
    GROUP BY distinct_id
) AS pdi ON (pdi.distinct_id = events.distinct_id)
WHERE team_id = 2
GROUP BY pdi.person_id

Design decision: no sharding

Since this table was almost always joined against the events table, this table was not sharded.

Sharding it would mean that each shard needs to send all the events and person_distinct_id sub-query result rows back to the coordinator node to execute queries, which would be expensive and slow.

Design decision: CollapsingMergeTree

The given distinct_id belonging to a person can change over time as posthog.identify or posthog.alias are called.

For this reason the data needs to be constantly updated, yet updating data in ClickHouse requires rewriting large chunks of data.

Rather than rewriting data, we opted to use CollapsingMergeTree. CollapsingMergeTree adds special behavior to ClickHouse merge operation: if on a merge rows with identical ORDER BY values are seen, they are collapsed according to _sign column:

This was used to update-via-insert:

Due to this logic, both person_id and distinct_id needed to be in the ORDER BY key.
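
For instance, moving a distinct ID from one person to another would be written as a single insert (values are illustrative):

```sql
INSERT INTO person_distinct_id (team_id, distinct_id, person_id, _sign) VALUES
    -- cancels the existing row on merge
    (2, 'abc', '00000000-0000-0000-0000-000000000001', -1),
    -- the new mapping
    (2, 'abc', '00000000-0000-0000-0000-000000000002', 1);
```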

Problem: CollapsingMergeTree for updates

CollapsingMergeTree is not ideal for frequently updating a single row, as merges occur in a non-deterministic order, which causes trouble if rows signifying deletes get discarded before being merged with an "insert" row.

For updating columns, ReplacingMergeTree engine tables with an explicit version column have proven to be reliable.

Problem: Expensive queries

In December 2021, PostHog started seeing significant performance problems and out-of-memory errors due to this schema for its largest users.

The problem was two-fold:

Improved schema

To fix both problems, a new table was created:

CREATE TABLE person_distinct_id2
(
    team_id Int64,
    distinct_id VARCHAR,
    person_id UUID,
    is_deleted Int8,
    version Int64 DEFAULT 1,
    _timestamp DateTime,
    _offset UInt64,
    _partition UInt64
)
ENGINE = ReplicatedReplacingMergeTree('/clickhouse/tables/noshard/posthog.person_distinct_id2', '{replica}-{shard}', version)
ORDER BY (team_id, distinct_id)
SETTINGS index_granularity = 512

JOINs with this table look something like this:

SELECT avg(count())
FROM events
INNER JOIN (
    SELECT distinct_id,
            argMax(person_id, version) as person_id
    FROM person_distinct_id2
    WHERE team_id = 2
    GROUP BY distinct_id
    HAVING argMax(is_deleted, version) = 0
) AS pdi ON events.distinct_id = pdi.distinct_id
WHERE team_id = 2
GROUP BY pdi.person_id

This schema:

Closing notes

Even with these improvements, JOINs are still expensive, and after the person-on-events project we were able to store the person_id column on the events table to great effect.

sharded_events

Engineering | Source: https://posthog.com/handbook/engineering/clickhouse/schema/sharded-events

sharded_events table powers our analytics and is the biggest table we have by orders of magnitude.

In this document, we'll be dissecting the state of the table at the time of writing, some potential problems and improvements to it.

Schema

CREATE TABLE sharded_events
(
    uuid UUID,
    event VARCHAR,
    properties VARCHAR CODEC(ZSTD(3)),
    timestamp DateTime64(6, 'UTC'),
    team_id Int64,
    distinct_id VARCHAR,
    elements_chain VARCHAR,
    created_at DateTime64(6, 'UTC'),
    person_id UUID,
    person_created_at DateTime64,
    person_properties VARCHAR Codec(ZSTD(3)),
    group0_properties VARCHAR Codec(ZSTD(3)),
    group1_properties VARCHAR Codec(ZSTD(3)),
    group2_properties VARCHAR Codec(ZSTD(3)),
    group3_properties VARCHAR Codec(ZSTD(3)),
    group4_properties VARCHAR Codec(ZSTD(3)),
    group0_created_at DateTime64,
    group1_created_at DateTime64,
    group2_created_at DateTime64,
    group3_created_at DateTime64,
    group4_created_at DateTime64,
    $group_0 VARCHAR MATERIALIZED replaceRegexpAll(JSONExtractRaw(properties, '$group_0'), '^"|"$', '') COMMENT 'column_materializer::$group_0',
    $group_1 VARCHAR MATERIALIZED replaceRegexpAll(JSONExtractRaw(properties, '$group_1'), '^"|"$', '') COMMENT 'column_materializer::$group_1',
    $group_2 VARCHAR MATERIALIZED replaceRegexpAll(JSONExtractRaw(properties, '$group_2'), '^"|"$', '') COMMENT 'column_materializer::$group_2',
    $group_3 VARCHAR MATERIALIZED replaceRegexpAll(JSONExtractRaw(properties, '$group_3'), '^"|"$', '') COMMENT 'column_materializer::$group_3',
    $group_4 VARCHAR MATERIALIZED replaceRegexpAll(JSONExtractRaw(properties, '$group_4'), '^"|"$', '') COMMENT 'column_materializer::$group_4',
    $window_id VARCHAR MATERIALIZED replaceRegexpAll(JSONExtractRaw(properties, '$window_id'), '^"|"$', '') COMMENT 'column_materializer::$window_id',
    $session_id VARCHAR MATERIALIZED replaceRegexpAll(JSONExtractRaw(properties, '$session_id'), '^"|"$', '') COMMENT 'column_materializer::$session_id',
    _timestamp DateTime,
    _offset UInt64
) ENGINE = ReplicatedReplacingMergeTree('/clickhouse/tables/77f1df52-4b43-11e9-910f-b8ca3a9b9f3e_{shard}/posthog.events', '{replica}', _timestamp)
PARTITION BY toYYYYMM(timestamp)
ORDER BY (team_id, toDate(timestamp), event, cityHash64(distinct_id), cityHash64(uuid))
SAMPLE BY cityHash64(distinct_id)

The table is sharded by sipHash64(distinct_id).

ORDER BY

The ORDER BY clause for this table is:

ORDER BY (team_id, toDate(timestamp), event, cityHash64(distinct_id), cityHash64(uuid))

Most insight queries have filters along the lines of:

WHERE team_id = 2
  AND event = '$pageview'
  AND timestamp > '2022-11-03 00:00:00'
  AND timestamp < '2022-11-10 00:00:00'

Which is well-served by the first 3 parts of this ORDER BY.

Note that:

Also, instead of distinct_id, we would now want to include person_id in the ORDER BY to make counting unique persons faster.

Evaluation: 🤷 It's reasonably good, but there are some potential improvements.

Sharding by cityHash64(distinct_id)

Sharding by distinct_id means that many queries such as unique person counts or funnels cannot be evaluated on individual replicas and data must be sent to coordinator node.

Luckily, this isn't the worst bottleneck in our production environments due to fast networking.

Evaluation: 👎 this needs fixing, however resharding data is hard.

PARTITION BY toYYYYMM(timestamp)

All analytics queries filter by timestamp and recent data is much more frequently accessed.

Partitioning this way allows us to skip reading a lot of parts and to move older data to cheaper (but older) storage.

Evaluation: 👍 Critical to PostHog functioning well.

person columns

Prior to having person_id, person_properties, person_created_at columns, when calculating funnels, unique users or filtering by persons or cohorts, queries always needed to JOIN one or two tables.

JOINs in ClickHouse are expensive and this frequently caused memory errors for our largest users, so a lot of effort was put into being able to denormalize that data and for it to be stored in events table.

Evaluation: 👍 Removes a fundamental bottleneck for scaling queries and allows for more efficient sharding in the future.

properties column and materialized columns

A lot of queries touch JSON properties or person_properties columns, meaning performance on them is critical. On the other hand, JSON properties columns are the biggest ones we have and filtering on them is frequently slow due to I/O and parsing overheads.

Some developments have helped speed this expensive operation up significantly:

  1. Adding support for materialized columns
  2. Compressing these columns via ZSTD(3) codec

The biggest unrealized win here is being able to skip reading rows entirely via indexing or ORDER BY, but it's unclear how that might be achieved.

Read more on working with JSON data and materialized columns in the ClickHouse JSON guide.

Evaluation: 🤷 A lot better than could be, but also a lot of unrealized potential.

SAMPLE BY cityHash64(distinct_id)

Allowing data to be sampled helps speed up queries at the expense of having less accurate results. This can be helpful for allowing a fast data exploration experience.

We should now be sampling by person_id column as analytics-by-person is _likely_ the most important thing.

At the time of writing (November 2022), sampling is not yet supported by the PostHog app.

ReplacingMergeTree with uuid as sort key

ReplacingMergeTree allows "replacing" old data with new given identical ORDER BY values. In our case, since we have a uuid column in ORDER BY, in theory users should be able to "fix" past data by re-emitting events with same event, date, and uuid but improved data.

However this does not work as ReplacingMergeTree only does work at merge-time and:

  1. merges are not guaranteed to occur
  2. we're not accounting for duplicate-uuid data in queries and it would be prohibitively expensive to do so

The only way to use this is to regularly use OPTIMIZE TABLE sharded_events FINAL, but that could make operations harder and require a lot of I/O due to needing to rewrite data.

Sending data with custom uuids is also undocumented and prone to bugs on our end.

Evaluation: 🚫 This design decision is a mistake.

elements_chain column

PostHog's JavaScript library has an autocapture feature, where we store actions users do on pages and DOM elements these actions are done against.

The elements_chain column contains the DOM hierarchy that autocaptured events were performed against. In queries we match against this using regular expressions.

Evaluation: 🤷 Potentially suspect, but hasn't become an obvious performance bottleneck so far.

No pre-aggregation / precalculation

Every time an insight query is made that doesn't hit the cache, PostHog needs to recalculate the result from the whole dataset.

This is likely inevitable during exploration, but when working with dashboards that are refreshed every few hours or with common queries this is needlessly inefficient.

There are several unproven ideas on how we could optimize this:

1. Combine with previous results

Due to person columns now being stored on sharded_events table, historical data in the table can be considered immutable.

This means PostHog could store each query's result and, on subsequent queries, only query the data ingested since the previous run, combining it with the stored result.

In theory this works well for line graphs, but it is harder to do for e.g. funnels and requires extensive in-app logic to build out.

2. Projections

Similarly, due to immutable data PostHog could calculate some frequent insights ahead of time.

The projections feature could feasibly help do this at a per-part level for consistency without special application logic.

Next schema in the ClickHouse manual: app_metrics

Working with JSON

Engineering | Source: https://posthog.com/handbook/engineering/clickhouse/working-with-json

At PostHog, we store arbitrary payloads users send us for further analysis as JSON. As such, it's critical we do a good job at storing and analyzing this data.

This document covers:

JSON Strings

At PostHog, we store JSON data as VARCHAR (or String) columns.

Relevant properties are then parsed out from the String columns at query-time using JSONExtract functions.

This has the following problems:

  1. These columns end up really large even after compression, meaning slow I/O
  2. It requires CPU to parse properties
  3. Data is not stored optimally. As an example, JSON keys are frequently repeated and numbers are stored as strings.
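As a sketch of what that query-time extraction looks like when issued through clickhouse-client (the table, column, and property names here are illustrative assumptions, not our exact schema):

```shell
# Illustrative query-time parse of a JSON String column.
# Table/column/property names are assumptions.
EXTRACT_QUERY="
SELECT JSONExtractString(properties, '\$browser') AS browser,
       count() AS events
FROM sharded_events
GROUP BY browser
ORDER BY events DESC
"
# Only run it if a clickhouse-client binary is actually available
if command -v clickhouse-client >/dev/null 2>&1; then
  clickhouse-client --query "$EXTRACT_QUERY"
fi
```

Every row scanned pays the JSON parsing cost at query time, which is exactly what the problems listed above describe.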

Compressing JSON

Luckily, JSON compresses really well, speeding up reading this data from disk.

By default our JSON columns are compressed with the LZ4 algorithm. See benchmarks for more information.

Materialized columns

ClickHouse has support for materialized columns, which are columns calculated dynamically based on other columns.

We leverage them to create new columns for frequently queried JSON keys. Each materialized column is stored the same way as a normal column and requires fewer resources to read and parse, which speeds up queries.

Read more in our blog and in this guide for PostHog-specific details.

Operational notes

After adding a materialized column, it is only populated for new data and on merges. When querying old data, this can introduce performance regressions, so forcing the column to be written to disk, even for historical data, is recommended.
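ClickHouse can backfill a materialized column for existing parts with ALTER TABLE ... MATERIALIZE COLUMN. A hedged sketch of what that looks like (the table and column names are illustrative assumptions):

```shell
# Sketch: force a materialized column to be written to disk for historical
# parts as well. Table/column names are illustrative assumptions.
BACKFILL_SQL="ALTER TABLE sharded_events MATERIALIZE COLUMN mat_browser"
# Only run it if a clickhouse-client binary is actually available
if command -v clickhouse-client >/dev/null 2>&1; then
  clickhouse-client --query "$BACKFILL_SQL"
fi
```

This is a mutation that rewrites data, so expect it to be I/O-heavy on large tables.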

Materialized columns may cause issues during operations - e.g. they can make copying data between tables painful. It's sometimes worth considering dropping them before large operations.

Alternative solutions

Arrays

Uber published an article on their logging, popularizing the idea to store JSON data as arrays: one for keys, one for values.

However, internal benchmarking showed that in our use case the improvement wasn't big enough to be worth the investment (yet).

Semi-structured data / JSON data type

In 2022, ClickHouse released support for semi-structured data.

However after testing we encountered several fundamental problems which make this feature unusable in our case until they are resolved: 1, 2, 3, 4, and 5

Next in the ClickHouse manual: Query performance

Working with cloud providers

Engineering | Source: https://posthog.com/handbook/engineering/cloud-providers

AWS

How do I get access?

Create a PR to the posthog-cloud-infra repository to add your details in the identity center Terraform configuration with groups = local.default_groups.

To give someone access

  1. Add the new user to the cloud infra repo (see link above)
  2. Use their email address as their username
  3. Add them to the "Developers" and "DevelopersRO" groups (just use groups = local.default_groups)
  4. Add team infra as reviewer.
  5. Once this is merged, tell them to use http://go/aws to log in

Elevating permissions via #aws-access

To access the dev AWS environment, use the /awsaccess slash command in the #aws-access Slack channel and fill out the form that appears. Make sure to set up your AWS config file as described in our docs.

A dedicated secrets-editor role is available for managing secrets. Use this role across all AWS environments.

EKS access via #aws-access is currently in development. In the near future, expect all AWS access to be managed through the #aws-access channel.

Permissions errors using AWS CLI

If you see something like:

<my-user> is not authorized to perform: <action> on resource: <resource> with an explicit deny

Note the "with an explicit deny" at the end, which is likely because we enforce Multi-Factor Authentication (MFA). Follow this guide to use a session token.

TLDR:

  1. Look up your security credential MFA device name in the AWS console at https://console.aws.amazon.com/iam/home#/users/<user-name>?section=security_credentials
  2. Run aws sts get-session-token --serial-number <arn-of-the-mfa-device> --token-code <code-from-token> --duration-seconds 129600, where code-from-token is the same code you'd use to log in to the AWS console (e.g. from the Authy app).
  3. Run the following code, replacing the placeholder values with the appropriate ones:
export AWS_ACCESS_KEY_ID=example-access-key-as-in-previous-output
export AWS_SECRET_ACCESS_KEY=example-secret-access-key-as-in-previous-output
export AWS_SESSION_TOKEN=example-session-token-as-in-previous-output
  4. Unset them when you're done (or after they expire, before running get-session-token again):
unset AWS_ACCESS_KEY_ID && unset AWS_SECRET_ACCESS_KEY && unset AWS_SESSION_TOKEN
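The steps above can be folded into a single helper. This is a sketch, not an official tool: aws_mfa_session is a hypothetical function name, and the --query expression assumes the standard get-session-token output shape.

```shell
# Hypothetical helper wrapping the get-session-token flow above.
aws_mfa_session() {
  mfa_arn=$1
  code=$2
  # --query + --output text prints the three credential fields tab-separated
  creds=$(aws sts get-session-token \
    --serial-number "$mfa_arn" \
    --token-code "$code" \
    --duration-seconds 129600 \
    --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' \
    --output text) || return 1
  # Deliberate word splitting into the three values
  set -- $creds
  export AWS_ACCESS_KEY_ID=$1 AWS_SECRET_ACCESS_KEY=$2 AWS_SESSION_TOKEN=$3
}
```

Usage: aws_mfa_session <arn-of-the-mfa-device> <code-from-token>, then unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN when done.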

Deploying PostHog

See the AWS self-host deployment guide.

GCP

How do I get access?

Ask in the #team-infrastructure Slack channel for someone to add you.

To give someone access: Navigate to PostHog project IAM and use the +Add button at the top to add their PostHog email address and toggle Basic -> Editor role.

Deploying PostHog

See the GCP self-host deployment guide.

DigitalOcean

How do I get access?

Ask in the #team-infrastructure Slack channel for someone to add you.

To give someone access: navigate to PostHog team settings page and use the Invite Members button to add their PostHog email address.

Edit 1-Click app info

This can be done in the vendor portal, click on PostHog with Approved status to edit the listing.

The code and setup files are in digitalocean/marketplace-kubernetes repository.

Deploying PostHog

See the DigitalOcean self-host deployment guide.

Consistent scripts to rule them all

Engineering | Source: https://posthog.com/handbook/engineering/conventions/scripts

PostHog has 235 (at the time of writing) public repositories. Each of these repositories has a unique way to get the project up and running locally. To limit this friction we adopt GitHub's "scripts to rule them all" approach. As they say:

Being able to get from git clone to an up-and-running project in a development environment is imperative for fast, reliable contributions.

Not every repository will need every script. Some repositories will need scripts custom to their environment (for example, Makefiles). That's all fine. The goal is to have a baseline set of scripts that we can use to get a development environment up, and a known location to look for those scripts.

Standard scripts at PostHog

When starting a new project, create a bin directory and include the following scripts (when relevant):

Warning: Some environments add bin to the .gitignore file by default because that's where they compile binaries to.
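As a sketch of the pattern: each entry point is a thin, uniform wrapper around whatever the project actually uses. The function names and delegated commands below are assumptions for illustration; in a real repo each body would live in its own executable file under bin/.

```shell
# Sketch of uniform entry points in the scripts-to-rule-them-all style.
# Function names and delegated commands are illustrative assumptions.
repo_bootstrap() {   # bin/bootstrap: everything needed after a fresh clone
  echo "==> Installing dependencies"
  "${PKG_MANAGER:-pnpm}" install
}

repo_test() {        # bin/test: the one canonical way to run the suite
  echo "==> Running tests"
  "${TEST_RUNNER:-pnpm}" test "$@"
}
```

The point is not the commands themselves but the uniform location: a contributor can run bin/test in any repo without knowing the stack underneath.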

Example scripts are available in the PostHog/scripts repository.

Customer comms as an engineer

Engineering | Source: https://posthog.com/handbook/engineering/customer-comms

Got a service change you need to email customers about — an API deprecation, a new quota limit, a breaking SDK change, a migration deadline? Loop in Joe. He owns customer comms and will handle the copy and the send via Customer.io.

All you need to bring:

Prior art to mirror: the feature-flags quota-limit rollout in product-internal#720.

For the underlying email infrastructure (Customer.io tags, categories, unsubscribe behavior), see the email comms handbook page. For incidents specifically, see engineering incidents — Marketing handles those comms too.

Deployments support

Engineering | Source: https://posthog.com/handbook/engineering/deployments-support

If you're the week's support hero or you are providing support for a customer and they have questions about their self-hosted deployment, follow this guide to provide initial support before looping in someone from the Infrastructure team.

Gather basic information

Here's a sample message that should help gather the relevant information up-front (appropriate for #community-support, but if working in a private channel with a paid customer, remove some of the obvious questions).

👋 Are you self-hosting or on PostHog Cloud? (if self-hosting please answer below)

1. What have you tried to solve the issue so far? How did that go?

2. Which cloud provider are you using? How many nodes are you running?

3. Are you using our Helm chart to deploy PostHog? Have you made any customisations? Can you please share your values.yaml file?

4. If you have any pod(s) erroring/restarting, can you please share the logs?

5. Do you have any kind of monitoring configured? (if not, can you please enable at least Grafana and Prometheus in the values.yaml of the Helm chart?)

6. How many events are in ClickHouse, and how many were ingested last month?

Kickstart the debugging process

  1. What's the output of kubectl get pods -n posthog?

This should look something like this:

    NAME                                                READY   STATUS    RESTARTS   AGE
    chi-posthog-posthog-0-0-0                           1/1     Running   0          11d
    clickhouse-operator-6b5438eh5fb-bt5fk               2/2     Running   0          11d
    posthog-beat-7782927b778-wxvhl                      1/1     Running   0          11d
    posthog-cert-manager-69fahs7b57-c48dn               1/1     Running   0          11d
    posthog-cert-manager-cainjector-6d95d93mn8-6tz6k    1/1     Running   0          11d
    posthog-cert-manager-webhook-6469930mdfc-6l958      1/1     Running   0          11d
    posthog-events-55283995cc-rpjdm                     1/1     Running   0          11d
    posthog-ingress-nginx-controller-648bdn892f-w7qhp   1/1     Running   0          11d
    posthog-pgbouncer-77fb4djs85c-2d24t                 1/1     Running   0          11d
    posthog-plugins-54fjd8292649-66gsm                  1/1     Running   0          18m
    posthog-posthog-kafka-0                             1/1     Running   0          11d
    posthog-posthog-postgresql-0                        1/1     Running   0          11d
    posthog-posthog-redis-master-0                      1/1     Running   0          11d
    posthog-posthog-zookeeper-0                         1/1     Running   0          11d
    posthog-posthog-zookeeper-1                         1/1     Running   0          11d
    posthog-posthog-zookeeper-2                         1/1     Running   0          11d
    posthog-web-78dns2f5d7-6zdkc                        1/1     Running   0          11d
    posthog-worker-7857nd8268-j8c4f                     1/1     Running   0          11d
  2. When they send you the output from the command in step 1, if any of the pods has a status other than Running, ask them for the output of kubectl logs <pod-name> -n posthog
  3. The output from the previous step may or may not be familiar to you. Sometimes the logs will be something you've seen before. If that's the case, try to reproduce the issue locally and come up with a fix. If things are cryptic to you, loop in someone from the Infrastructure team.
  4. If a pod is listed as Failed, suggest that they try an upgrade with helm upgrade posthog posthog/posthog -f values.yaml -n posthog
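To spot problem pods at a glance, a small helper can filter the output down to anything not Running. This is a hedged convenience sketch, not an official PostHog tool, and it assumes the default five-column kubectl layout:

```shell
# List only pods whose STATUS column is not "Running".
# Assumes the default layout: NAME READY STATUS RESTARTS AGE.
unhealthy_pods() {
  kubectl get pods -n posthog --no-headers | awk '$3 != "Running" { print $1, $3 }'
}
```

An empty result means every pod reports Running; anything printed is a candidate for the kubectl logs step above.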

Common issues

Some common issues you might encounter are:

PostHog is stuck migrating/the migrate job has an issue

Tell them to run the following:

kubectl -n posthog delete job $(kubectl -n posthog get jobs --no-headers -o custom-columns=NAME:.metadata.name)
helm upgrade posthog posthog/posthog -f values.yaml -n posthog

The app/plugin server has an issue

The first thing that you can safely try here is to tell them to restart the apps pod:

# terminate the running plugins pod
kubectl scale deployment posthog-plugins --replicas=0 -n posthog

# start a new plugins pod
kubectl scale deployment posthog-plugins --replicas=1 -n posthog

How can I connect to Postgres?

Send them a link to this doc.

Kafka crashed / out of storage

Send them a link to this doc.

Connection is not secure / DNS problems

Before looping in someone, ask them to check that DNS records are correctly set and have propagated with this command:

nslookup <your-hostname> 1.1.1.1
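A slightly broader variant queries several public resolvers, which helps distinguish "not propagated yet" from "wrong record". The helper name is hypothetical:

```shell
# Query a few well-known public resolvers for the same hostname.
# check_dns_propagation is a hypothetical helper name.
check_dns_propagation() {
  host=$1
  for resolver in 1.1.1.1 8.8.8.8 9.9.9.9; do
    echo "== resolver $resolver =="
    nslookup "$host" "$resolver" || echo "lookup via $resolver failed"
  done
}
```

If the answers disagree between resolvers, the record is likely still propagating; if they all agree on the wrong value, the record itself is misconfigured.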

Other issues

Check out our Troubleshooting page for other common issues and how you might be able to provide "first aid" before looping in someone.

All is lost

The idea of this doc is to cover some basic support that you can provide in order to either help the customer solve their issue or gather basic info before someone from the Infrastructure team shows up.

However, never hesitate to call us! We're more than happy to help.

Also, if things seem very serious and/or relate to a paying customer, do reach out to us right away.

Developer Experience

Engineering | Source: https://posthog.com/handbook/engineering/developer-experience

The DevEx team owns the shared developer tooling and workflows that cut across all product teams: local dev, CI, builds, framework upgrades, codebase structure, type systems, migration safety, and more. If it affects how fast and safely engineers can work on code and ship it, it's probably this team's thing.

Scope

| Area | What's owned |
|------|--------------|
| Local dev | Local stack, hogli CLI, startup time, worktrees, Docker Compose, cloud envs |
| CI | Pipeline speed, cost, reliability, flaky test triage, PR previews |
| Build & tooling | Frontend/backend build pipelines, formatters, linters |
| Type system | Backend/frontend type sync, OpenAPI generation, schema integrity |
| Upgrades | Framework/language upgrades (Django, React, TS), dependency & security updates |
| Architecture | Product folder structure, isolation model, legacy migration |
| Migrations | Safe migration tooling, migration checkers, squashing |

Things you can use

Local dev

CI

Code quality

How to work with this team

Report what's slowing you down — flaky tests, slow builds, local dev friction, tooling that doesn't work right. A lot of it is known but there might be stuff that's been missed.

Loop the team into conversations early — if your team is making decisions that touch shared tooling, CI, code architecture, or conventions, bring DevEx in. Better to be in the discussion than clean up after it. Think: new products, services, big refactors, dependency changes, CI workflow tweaks.

Shipping & releasing

Engineering | Source: https://posthog.com/handbook/engineering/development-process

Any process is a balance between speed and control. If we have a long process that requires extensive QA and 10 approvals, we will never make mistakes because we will never release anything.

However, if we have no checks in place, we will release quickly but everything will be broken. For this reason we have some best practices for releasing things, and guidelines on how to ship.

How to decide what to build

Set milestones

To start, Product and Engineering should align on major milestones (e.g. Collaboration) which we have strong conviction will drive our success metrics for a feature.

There are two types of goals.

Goals should be time-bound. Since we primarily use goals for our two-weekly sprint planning, they should generally be scoped to two weeks.

Use the following principles:

Assign an owner

A single engineer should be accountable for a milestone, partnering closely with other functions to ensure it’s successful.

Think about other teams

Most things won't cause issues for other teams. However, if it will, don't "align teams" or write a huge RFC (unless that'd help you). Do a quick 1:1 with the most relevant person on another team.

Consider:

If this is a big feature which will need an announcement, content, or other marketing support, then it's _never_ too early for the owner to let the Marketing team know. Drop a post in their Slack channel or tag them on an issue.

Break up goals

The owner turns the ambiguous milestone into a roadmap of ambitious, meaningful, sprint-sized goals, thinking 2 - 3 sprints ahead to give other functions time. Goal principles still apply.

Iterate through the work

We used to have company-wide sprint planning sessions but as we've grown there were so many teams that it started being plan-reading and not planning.

PostHog works in two-week iterations. Each team plans their work together and adds their sprint plan to a pinned issue in GitHub. If the issue for the next iteration doesn't exist when you come to comment on it, create it.

When planning your work, you should also hold a retrospective for the previous iteration. Like most things at PostHog, this can be a very low-ceremony retro; ideally, checking that the team is working on the right things in the right way is a frequent activity, not a once-a-fortnight one.

Work in the iteration should:

As one of our values is Why not now?, during the iteration you might come across something that should be much higher priority than what was already planned. It's up to you to then decide to work on that as opposed to what was agreed in the planning session.

Evaluate success

Review impact of each major milestone and feedback into the planning process.

When we review the status of goals we classify them as follows:

What about the small stuff?

Not everything directly contributes to a company level goal. It’s important that the small stuff also gets done for us to succeed. Use the following principles:

Sizing tasks and reducing WIP

Efficient engineering organizations actively reduce Work In Progress (WIP) to avoid negative feedback loops that drive down productivity.

Hence, a PR should be optimized for two things:

  1. Quality of implementation
  2. The speed with which we can merge it in

PRs should ideally be sized to be doable in one day, including code review and QA. If it's not, you should break it down into smaller chunks until it fits into a day. Tasks of this size are easy to test, easy to deploy, easy to review and less likely to cause merge conflicts.

Sometimes, tasks need a few review cycles to get resolved, and PRs remain open for days. This is not ideal, but it happens. What else can you do to make sure your code gets merged quickly?

Writing code

We're big fans of Test Driven Development (TDD). We've tried to create test infrastructure that helps you rather than annoys you. If that isn't the case, please raise an issue! Keeping tests on point is a high priority to keep developer productivity high.

Other than that, you know what to do.

Creating PRs

When you have a piece of code ready to be reviewed, create a PR. Link the PR to the issue it solves, and add a clear description of what the PR does and how to test it. Follow PR templates if they exist for the area you're working on.

All PRs should be attributable to a human author as far as possible, even if they were assisted by an agent.

Fully automatically generated PRs might come from an agent like PostHog Code or from systems like Dependabot. These PRs are fine, but they should be clearly labelled as such and include a clear description of the changes being made and any relevant context about the generation process. These PRs should in turn never be attributed to a human author, as the changes were not directly or indirectly made by a human.

For external contributors, our AI contributions policy covers expectations around AI-assisted PRs.

To make sure our issues are linked correctly to the PRs, you can tag the issue in your commit.

git commit -m "Closes #289 add posthog logo to website"

Testing code

See: How we review.

Storybook Visual Regression Tests

In our CI pipeline, we use Playwright to load our Storybook stories and take snapshots. If any changes are detected, the updated snapshots are automatically committed to the PR. This helps you quickly verify whether you've introduced unexpected changes or if the UI has been altered in the intended way.

Check the test-runner.ts file to see how this is configured. We use the @storybook/test-runner package; you can find more details in the official Storybook documentation.

Running Tests Locally
  1. Start Storybook in one terminal:
    pnpm storybook
    # or
    pnpm --filter=@posthog/storybook
  2. Install Playwright and run the visual tests in debug mode in another terminal:
    pnpm exec playwright install
    pnpm --filter=@posthog/storybook test:visual:debug

This setup will help catch unintended UI regressions and ensure consistent visual quality.

If you wish to locally run test-runner.ts and output all snapshots:

pnpm --filter=@posthog/storybook test:visual:ci:update

Or if you wish to run one particular story:

pnpm --filter=@posthog/storybook test:visual:ci:update <path_to_story>

# example: pnpm --filter=@posthog/storybook test:visual:ci:update frontend/src/scenes/settings/stories/SettingsProject.stories.tsx

Merge conflicts with visual regression snapshots

It often happens that your PR shows conflicts with our snapshots, as our CI pipeline runs test-runner.ts on every push, generating and pushing any significant visual changes to your PR.

GitHub's web interface can't resolve conflicts in binary files such as snapshots, so you must do it manually.

Run the following steps on the branch in question.

  1. Bring your branch up to date with master:
    git fetch origin
  2. Rebase master into your branch:
    git rebase master
  3. Rebase your upstream into your local branch:
    git pull --rebase origin <your-branch>

In your terminal, it should show you the conflicts, mimicking what you see in your GitHub PR.

warning: Cannot merge binary files: frontend/__snapshots__/<conflicted_file_1>.png (HEAD vs. xxx (Update UI snapshots for `chromium` (1)))
Auto-merging frontend/__snapshots__/<conflicted_file_1>.png
CONFLICT (content): Merge conflict in frontend/__snapshots__/<conflicted_file_1>.png
error: could not apply xxx... Update UI snapshots for `chromium` (1)
hint: Resolve all conflicts manually, mark them as resolved with
hint: "git add/rm <conflicted_files>", then run "git rebase --continue".
hint: You can instead skip this commit: run "git rebase --skip".
hint: To abort and get back to the state before "git rebase", run "git rebase --abort".

If all your conflicts are snapshots only, you can simply skip the commit.

git rebase --skip

If all conflicts go away, then

git push origin --force <your branch>

Why does this work? As we mentioned earlier, our CI runs test-runner.ts on every push, so we don't really care if these images are conflicted as they are regenerated after you push to your branch.

Deployed Preview

You can spin up a real deployed PostHog instance to test your branch by adding the hobby-preview label to your PR. This uses the hobby (Docker Compose) self-hosted setup under the hood.

How it works:

  1. Add the hobby-preview label to your PR
  2. CI creates a DigitalOcean droplet and deploys PostHog with your branch
  3. A comment is posted to the PR with the preview URL (e.g., https://hobby-pr-12345.posthog.dev)
  4. The droplet persists across commits so you can iterate
  5. Remove the label or close the PR to clean up the droplet

When to use it:

The workflow also runs a smoke test (health check) automatically on PRs that touch deployment-related files.

Reviewing code

PRs can be written by humans or by agents (like PostHog Code). Either way, every PR needs a review before merging, and a human always merges.

Who should review depends on who wrote the code (see Creating PRs):

We encourage the use of AI review agents (Codex, Copilot, Greptile, etc.) on PRs. Their comments and suggestions don't count as an approval, but they catch things humans miss and speed up the review process.

When reviewing a PR, we look at the following things:

Things we do not care about during review:

Merging

Merge anytime. Friday afternoon? Merge.

Our testing, reviewing and building process should be good enough that we're comfortable merging any time.

Always request a review on your pull request (or leave unassigned for anyone to pick up when available). We avoid merging without any review unless it's an emergency fix and no one else is available (especially for posthog.com).

Once you merge a pull request, it will automatically deploy to all environments. The deployment process is documented in our charts repository. Check out the #platform-bots Slack channel to see how your deploy is progressing.

We're managing deployments with ArgoCD where you can also see individual resources and their status.

Deploy notification bot

After your PR is deployed to an environment, a bot automatically comments on the merged PR with the deployment status. The dev deployment triggers the initial comment. As prod-us and prod-eu finish deploying, the bot updates the same comment in-place rather than posting new ones.

If you don't see a comment on your PR after a deploy, give it a few minutes -- the notification runs after ArgoCD finishes syncing. If it still hasn't appeared, check the deploy workflow in PostHog/charts for failures.

Verifying your deployment

After merging, your code should deploy automatically. If you need to verify your changes are live (or troubleshoot why they're not), here's how:

1. Check state.yaml (what should be deployed)

The charts repository state.yaml is the source of truth for what ArgoCD is trying to deploy. Find your service (e.g., ingestion, posthog) and check the commit SHA listed.

2. Check running pods (what's actually deployed)

If you have cluster access, verify what's running:

# Find your service's pods
kubectl -n posthog get pods | grep <service-name>

# Get the image/commit running on a pod
kubectl -n posthog get pod <pod-name> -o jsonpath='{.spec.containers[0].image}'

3. Verify your commit is included

Use git merge-base to check if your commit is an ancestor of the deployed commit:

git fetch origin
git merge-base --is-ancestor <your-commit-sha> <deployed-commit-sha> && echo "Deployed" || echo "Not deployed"
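To see both outcomes of that check without touching a real deployment, here is a self-contained demonstration in a throwaway repository:

```shell
# Toy repo with two commits: the first is an ancestor of the second.
demo=$(mktemp -d)
git -C "$demo" init -q
git -C "$demo" -c user.name=demo -c user.email=demo@example.com \
  commit -q --allow-empty -m "first"
first=$(git -C "$demo" rev-parse HEAD)
git -C "$demo" -c user.name=demo -c user.email=demo@example.com \
  commit -q --allow-empty -m "second"
second=$(git -C "$demo" rev-parse HEAD)

# Ancestor check succeeds: "first" is contained in "second"
git -C "$demo" merge-base --is-ancestor "$first" "$second" \
  && echo "Deployed" || echo "Not deployed"      # prints "Deployed"
# The reverse check fails: "second" is not contained in "first"
git -C "$demo" merge-base --is-ancestor "$second" "$first" \
  && echo "Deployed" || echo "Not deployed"      # prints "Not deployed"
```

In the real workflow, "first" plays the role of your commit and "second" the deployed commit from state.yaml.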

4. Troubleshooting with ArgoCD

If state.yaml shows a newer commit than what's running on pods, check ArgoCD:

  1. Find the specific app - Don't just look at the parent grouping (e.g., ingestion). Drill down to the specific environment app like ingestion-events-prod-us or posthog-prod-eu.
  2. Check sync status - Is it "Synced" or "OutOfSync"? When was the last successful sync?
  3. Check if auto-sync is enabled - Some apps may have auto-sync disabled and require manual syncing.
  4. Look at the diff - Click "DIFF" to see what's different between desired and live state.

Common deployment issues

| Symptom | Likely cause | Solution |
|---------|--------------|----------|
| App shows "OutOfSync" | Auto-sync disabled or sync error | Check if auto-sync is enabled; try manual sync |
| state.yaml updated but pods unchanged | ArgoCD hasn't synced yet | Check ArgoCD app status; may need manual intervention |
| Pods running old commit | Rollout stuck or image not built | Check deployment rollout status; verify CI built the image |
| Can't find your service in ArgoCD | Looking at wrong app grouping | Search for your specific service + environment (e.g., ingestion-events-prod-us) |

If a deployment appears stuck, reach out in #team-infrastructure.

Documenting

If you build it, document it. You're in the best position to do this, and it forces you to think things through from a user perspective.

It's not the responsibility of other teams to document features.

See our docs style guide for tips on how to write great docs.

Releasing

There are a few different ways to release code here:

Best practices for full releases

Opt-in betas can have rough edges, but public betas and full releases should be more polished and user friendly.

Engineers should apply the following best practices for _all_ new releases:

When to A/B test

There are two broad categories of things we A/B test:

The former is an optimization scheme; the latter makes sure we don't break things. Just like we create tests in our codebase to make sure new code doesn't disrupt existing features, we also need to do behavioral testing to make sure our new features aren't disrupting existing user behaviors.

A/B tests make sense when:

If you're not sure something should be A/B tested, run one anyway. Feature flags (which experiments run on top of) are a great kill-switch for rolling back features in case something goes sideways. And it's always nice to know how your changes might move the numbers!

It's easy to just think "this makes more sense, let's just roll it out." Sometimes that's okay, sometimes it has unintended consequences. We obviously can't and shouldn't test everything, but running A/B tests frequently gets you comfortable with being wrong, which is a _very_ handy skill to have.

A/B test metrics

Experiment design is a bit of an art. There are different types of metrics you can use in PostHog experiments. Another benefit of running experiments is forcing yourself to think through what other things your change might impact, which oftentimes doesn't happen in the regular release cycle!

Generally, a good pattern is to set up 1-2 primary metrics that you anticipate might be impacted by the A/B test, as well as 3+ secondary metrics that might also be good to keep an eye on, just in case. If you aren't sure what metrics to be testing, just ask! Lots of people are excited to help think this through (especially #team-growth and Raquel!).

Releasing a new product

We can release alphas and betas both as publicly available, or opt-in. See: Releasing new products and features.

It's always worth letting Marketing know about new betas so they can help raise awareness. The owner should tag them on an issue, or drop a message in the Marketing Slack channel.

Betas are usually announced as milestones on the public roadmap and included in the changelog by Marketing.

Product announcements

Announcements, whether for beta or final updates, are a Marketing responsibility. See: Product announcements.

In order to ensure a smooth launch the owner should tell Marketing about upcoming updates as soon as possible, or include them in an All-Hands update.

It's _never_ too early to give Marketing a heads-up about something by tagging them in an issue or via the Marketing Slack channel.

Self-hosted and hobby versions

We have sunset support for our Kubernetes and Helm chart managed self-hosted offering. This means we no longer offer support for pinning to specific versions of PostHog. A Docker image is pushed for each commit to master. Each of those versions is immediately deployed to PostHog Cloud.

The deploy-hobby script allows you to set a POSTHOG_APP_TAG environment variable and pin your docker-compose deployed version of PostHog. Or you can edit your docker-compose file to replace each instance of image: posthog/posthog:$POSTHOG_APP_TAG with a specific tag, e.g. image: posthog/posthog:9c68581779c78489cfe737cfa965b73f7fc5503c
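For example, pinning a hobby deployment to the commit SHA mentioned above might look like this (the ./deploy-hobby path assumes you run it from the directory containing the script):

```shell
# Pin the docker-compose deployment to one specific build.
# The tag is the example commit SHA from the text; substitute your own.
export POSTHOG_APP_TAG=9c68581779c78489cfe737cfa965b73f7fc5503c
# Run the deploy script only if it is present in the current directory
if [ -x ./deploy-hobby ]; then
  ./deploy-hobby
fi
```

Because nothing re-pins automatically, a pinned deployment will stay on that build until you change or unset POSTHOG_APP_TAG.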

Feature ownership

Engineering | Source: https://posthog.com/handbook/engineering/feature-ownership

Each feature at PostHog has an Engineering owner. This owner is responsible for maintaining the feature (keep the lights on), championing any efforts to improve it (e.g. by bringing up improvements in sprint planning), planning launches for new parts of it, and making sure it is well documented.

For shared developer tooling and infrastructure that cuts across product teams (CI, local dev, builds, migrations, etc.), see the Developer Experience page.

When a bug or feature request comes in, we tag it with the relevant label (see labels below). The owner is responsible for then prioritizing any bug/request that comes in for each feature. This does not mean working on every bug/request; an owner can make the deliberate decision that something is not the best thing to work on, but every request should be looked at.

Who can contribute to owned features?

Feature ownership does not mean that the owner is the only person/team who can contribute to the feature. If another team requires something from an existing feature that isn't supported, that non-owning team should build it. The owner team is responsible for reviewing PRs to make sure the code patterns and UX make sense for the feature overall. After the change is merged, the owner team then owns it (assuming no major bugs from the initial implementation).

For example, web analytics wanted a heatmap insight type to see what times of day people were active. Javier Bahamondes from web analytics opened up the necessary PRs to build this feature. It was reviewed by the team that owns all insight types, who then took responsibility for it after it was merged.

This process does four things:

Feature list

You can also view the list directly in GitHub and filter issues there.

Don't just copy other products

Some of the features we are building may exist in other products already. It is fine for us to be inspired by them - there's no need to reinvent the wheel when there is already a standard way our users expect things to work. However, it is not ok for us to say 'let's copy how X does it', or to ship something with the exact same look and feel as another product. This is bad for two reasons:

Pricing principles

Engineering | Source: https://posthog.com/handbook/engineering/feature-pricing

In an ideal world, PostHog's pricing enables users and organizations to:

  1. Use PostHog for free if they are hobbyists or pre-PMF.
  2. Experience the product before paying for it.
  3. Start paying when they are ready, on their own, with few hurdles.
  4. Transparently pay for the value they receive.
  5. Make it a no-brainer to pick PostHog over other competitors.

Our goals with these principles are to:

It's important we evaluate all new features, and shifts in our pricing plans, to ensure they align with our pricing values.

In the real world

Sometimes these principles still leave room for questions – what, if anything, should be available in the free tier? What about enterprise customers?

For these types of questions, we've defined a runbook for deciding which plans features should be assigned to, and at what limits.

We should roughly match the cheapest competitor

In general, we should roughly match the pricing of the cheapest big competitor for that product, so long as the unit economics make sense, to make it a no-brainer to use PostHog. To qualify for this, a competitor must be _making actual revenue_ at significant scale - we won't match the pricing of random startups or of new products at existing competitors, since those products and GTMs aren't mature yet.

We can do this because we can upsell customers on multiple of our other products. The total ACV is higher even if the per-product ACV is lower.

It's better for customers because they get all these well-integrated tools at the cheapest possible price.

While we don't have loss leaders, we accept that we might not fully understand our cost base and make money on every product on day one. We welcome this pressure to do things more efficiently and get the costs down over time.

Every product should be priced separately

Whenever we build a product, like feature flags or product experimentation, we should have a specific price for that product by itself. Being consistent here is less confusing than arbitrarily bundling products, even though it will sometimes mean more items to explain to a customer.

It means that customers who want just one product can compare each of our products to our competitors', seeing that we are cheaper everywhere, improving our self-serve top-of-funnel.

This also makes the value of each product more tangible. Usage and value are not the same thing - willingness to pay is the best indicator of the value our customers are getting from each product.

However, when one of our products has a fundamental dependency on another of our products, we should aim to bundle the cost of the dependencies in with the product's pricing so customers only pay once for using a given product.

For example, when someone calls a feature flag, we send a $feature_flag_called event so we can have stats. In this case, we don't charge for those events, as the events are solely related to feature flags.

Features that increase our stickiness should be free (with a reasonable limit)

A good question to ask yourself here is, "If I were to switch away from PostHog, would I feel like I am losing anything by switching?"

For example, if someone were to consider moving from PostHog to some other provider, cohorts would need to be manually recreated in the other provider, which would be tedious. However, something like Web Performance just happens and doesn't require any user involvement, so isn't sticky.

Products should work independently but shine together

Each product should be usable on its own. For example, session replay can be enabled independently of other products. But to get the most value out of it, it's best to use it together with our other products. This enables users to have rich filters using the data from the other parts of PostHog. Similarly, you can use error tracking on its own, but it's a lot more powerful if you also use session replay, enabling you to easily click through to the recording of a session where the error occurred.

Other guidelines

Deciding on a free volume, and making changes to it

How-to access PostHog Cloud infra

Engineering | Source: https://posthog.com/handbook/engineering/how-to-access-posthog-cloud-infra

We've all been there. Something was just merged and now there is a bug that you are having a real hard time pinning down. You hate to do it... but you need to get on a pod or instance to troubleshoot the issue further. _SHAME_

Image: Shame bell

Prerequisite

Make sure you've followed this guide to get AWS access. !!! Please follow the whole document !!!

Connect to a Kubernetes pod

Once you have access to the EKS cluster and our internal network:
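A minimal sketch of the flow, assuming the standard EKS tooling is installed and configured per the AWS access guide — the cluster name, region, namespace, and pod name below are illustrative placeholders, not the real values:

```shell
# Point kubectl at the EKS cluster (name and region are placeholders)
aws eks update-kubeconfig --region us-east-1 --name example-posthog-cluster

# List pods, then open a shell in the one you want to inspect
kubectl get pods --namespace example-namespace
kubectl exec -it example-pod --namespace example-namespace -- /bin/bash
```

From inside the pod you can run one-off commands or, as described below, start a Django shell.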

Note: if you need a Django shell, just run the following after connecting:

python manage.py shell_plus

Connect to an EC2 instance

Please follow this guide to connect via AWS Systems Manager Agent (SSM).

How we review PRs

Engineering | Source: https://posthog.com/handbook/engineering/how-we-review

Almost all PRs made to PostHog repositories will need a review from another engineer. We do this because, almost every time we review a PR, we find a bug, a performance issue, unnecessary code or UX that could have been confusing. Here's how we do it:

Before requesting a review

The best way to get a fast, useful review is to make your PR easy to review.

Have a flick through the code changes

What to look for:
What _not_ to look for:

Run the code yourself

What to look for:
What _not_ to look for:

The emphasis should be on getting something out quickly. Endless review cycles sap energy and enthusiasm.

Turnaround

Aim to respond to review requests within one working day. You don't have to finish the review — even a quick "I'll look at this properly tomorrow" or "this needs someone from [@team-name] to review" unblocks the author and sets expectations. Leaving a PR in limbo for days is worse than a fast "I can't review this."

Requesting a review outside your team

Not every team has someone available to review your PR right away. Posting in #dev-stamp-exchange is a way to ask for a quick-turnaround review from someone outside your team. This is fine — but quick turnaround doesn't mean lower standards.

When this is appropriate:
What's still expected from the reviewer:
When to push back instead of approving:

Review comment conventions

Use prefixes on your review comments so the author knows what actually needs to change before merging:

If a comment doesn't have a prefix, assume it's a suggestion. This avoids the "is this a blocker or just a thought?" ambiguity that slows reviews down.

Handling an incident

Engineering | Source: https://posthog.com/handbook/engineering/operations/incidents

The TL;DR

Raising an incident

Image: alert-example

Incidents are going to happen. If you'd rather watch a Loom, check out an incident drill Loom recording.

Anyone can declare an incident and, when in doubt, you should always raise one. We'd much rather declare an incident that turns out not to be one. Many incidents take too long to get called, or are missed completely, because someone didn't ring the alarm when they suspected something was wrong. It's _always_ better to sound an alarm than not.

To declare an incident, type /incident anywhere in Slack. This creates a new dedicated channel for the incident and adds a few stakeholders. It will trigger an alert in the #incidents channel so everyone else can be aware. Declaring an incident doesn't trigger any external notifications.

Once an incident is raised, an automatic workflow begins that will help you summarize the issue and escalate it appropriately.

Some things that should definitely be an incident

Things that _shouldn’t_ be an incident

Planning some maintenance? Check the announcements section instead.

Security-specific guidance

Security incidents can have far-reaching consequences and should always be treated with urgency. Some examples of security-related issues that warrant raising an incident include:

When in doubt, err on the side of caution and raise the incident and escalate early! Better to be safe than sorry.

Need to make a security advisory? We have a page for that with more detail on the process for security vulnerabilities.

Incident severity

Please refer to the following guidance when choosing the severity for your incident. If you are unsure, it's usually better to over-estimate than under-estimate!

Minor

A minor-severity incident does not usually require paging people, and can be addressed within normal working hours. However, it is higher priority than any bugs, and should come before sprint work.

Examples

If not dealt with, minor incidents can often become major incidents. Minor incidents are usually OK to have open for a few days, whereas anything more severe we would be trying to resolve ASAP.

Major

A major incident usually requires paging people, and should be dealt with _immediately_. They are usually opened when key or critical functionality is not working as expected.

Major incidents often become critical incidents if not resolved in a timely manner.

Examples

Critical

An incident with very high impact on customers, and with the potential to existentially affect the company or reduce revenue.

Examples

What happens during an incident?

When an incident is declared, the person who raised the incident is the incident lead. It’s their responsibility to:

The incident lead is not responsible for fixing the incident; they're responsible for managing it. Sometimes that will be the same person. But if it is too much work for one person, hand over the incident lead role to someone not actively working on the fix.

Sometimes, customer communication is required. In this case, the incident lead can ask for a comms lead to support the responding team. The best way to do this is to ask for support in the incident channel and use the @all-marketers group tag. Don't be shy.

You can find further production runbooks + specific strategies for debugging outages here (internal)

The PostHog status page

Our status page is the central hub for all incident communication. You can update it easily using the /incident statuspage (/inc sp) Slack command.

When updating the status page, make sure to mark the affected component appropriately (for example during an ingestion delay, setting US Cloud 🇺🇸 / Event and Data Ingestion to Degraded Performance). This allows PostHog's UI to gently surface incidents with a "System status" warning on the right. Only users in the affected region will see the warning:

Getting help from a comms lead

Significant incidents such as the app being partially or fully non-operational, as well as ingestion delays of 30 minutes or longer should be clearly communicated to our customers. They should get to know what is going on and what we are doing to resolve it. If the incident is minor this can usually be done by updating the status page, but it may be desirable to do additional customer communications, such as sending an email to impacted customers. When this is required, you should involve a Comms Lead and ensure the Sales team are aware.

The best way to ask for support from a Comms Lead is to post in the incident channel and use the @all-marketers group tag. This will alert all relevant marketing teams.

When handling a security incident, please align with the incident responder team in the incident Slack channel about public communication of security issues. For example, it may not make sense to immediately communicate an attack publicly, as this could make the attacker aware that we are already investigating. That could make it harder for us to stop the attack for good.

When a customer is causing an incident

In the case that we need to update a _specific_ customer, such as when an individual org is causing an incident, we should let them know as soon as possible. Use the following guidelines to ensure smooth communication:

In the case that we need to temporarily limit a _specific_ customer's access to any functionality (e.g. temporarily prevent them from using an endpoint) as a result of certain usage resulting in an incident, we need to make sure we put an alert on their Zendesk tickets. This will make sure that anyone working on a ticket from the org will know what's happening with the org before replying (even if we've already reached out to the org, some folks at the org may not be aware, and so may open a support ticket.)

You'll just need to set the name of the org in an existing trigger in Zendesk, then reverse that change when the org's full access has been restored:

  1. After logging into Zendesk, go to the admin center
  2. In the left column, expand Objects and rules and click on Triggers (under "Business rules")
  3. On the Triggers page, expand Housekeeping and click on Add alert for org with special-handling
  4. Under Conditions, the last condition is: Organization > Organization Is PostHog. Change PostHog to the name of organization who has had their access limited as a result of the incident. (Click on "PostHog" and then start typing to filter and find the org name, then click on it)
  5. Scroll to the bottom of the page and click the Save button

Once the org has had their full access restored, repeat the steps above, but this time put PostHog back in the last condition, and remember to Save the change.

When does an incident end?

When we've identified the root cause, implemented a fix, and confirmed all customer-facing services have returned to normal. End the incident by typing /inc close in the incident channel. Make sure to also mark the incident as resolved on the status page.

What happens after an incident?

Once the incident is resolved, the incident lead should step away. Take a walk, go to the gym, have some tea, take a shower. The longer the incident took to resolve, and the more directly customer impacting it was, the more important this is. Bring another team member up to speed, hand off outstanding customer communications, and get your head clear for the post-mortem and followup actions. Anyone else heavily involved in the response should do the same.

In almost all cases, a valid incident will have a post-mortem - check out Post-mortems for more details.

On-call and escalation

Engineering | Source: https://posthog.com/handbook/engineering/operations/on-call-rotation

At PostHog, every engineer is responsible for maintaining their team's services and systems. That includes:

In addition, every engineer, regardless of team, is part of the global follow-the-sun on-call rotation.

Escalation schedules

Team schedules

Every team has 2 schedules in incident.io:

Global on-call schedule

Schedule in incident.io

💡 You can use @on-call-global in Slack to reach out to whoever is on call! This syncs automatically with the incident.io schedule. This group is also automatically added to all incidents.

PostHog Cloud doesn't shut down at night (_whose_ night anyway?) nor on Sunday. As a 24/7 service, our goal is to be 100% operational 100% of the time. The global on-call is the last line of defense and is escalated to:

This schedule has 3 weekday layers:

And 2 weekend layers:

Why is the on-call rotation spread across all engineers?

If you're in a product team, it's tempting to think that service alerts don't apply to you, or that when you're on call you can just hand everything off to the infrastructure team. That's not the case: it's important that every engineer has a basic understanding of how our software is deployed, where the weak points in our systems are, and what the failure modes look like. This understanding should be all that's needed to follow the runbooks, and if you follow up on the causes of alerts, you'll ultimately be less likely to ship code that takes PostHog down.

Besides knowledge, being on call requires availability – including weekends. If teams had their own separate rotations, there would be more people on call in total, and each would have to stand by 24/7 as our teams aren't big enough to follow the sun. This would be more stressful because of availability constraints, while being less productive because of the rare alerts being spread across multiple people.

Before going on call

Mindset

Read: Jos Visser: Ten things not to worry about regarding oncall (Worth the read, even if you're an on-call veteran.)

Be prepared

Because the stability of production systems is critical, on-call involves weekends too (unlike Support Hero). More likely than not, nothing will happen over the weekend – but you never know, so the important thing is to keep your laptop at hand.

Before going on call, make sure you have the Incident.io mobile app Android / iOS installed and configured. This way it'll be harder to miss an alert.

TRICKY: We use Slack auth for incident.io, and Slack really doesn't like you using the mobile web version. Make sure to choose Sign in with Slack and then use your email to log in to Slack, not Google auth, as that seems to cause redirect issues for some people.

Still having redirect issues signing up with Slack? Create a Slack password instead of using Google SSO, then log in with that password.

To get a calendar with all your on-call shifts from incident.io, go to the schedules section, select Sync calendar at the top right, and copy the link for the webcal feed. In Google Calendar, add a new calendar from URL and paste the link in there.

Make sure your availability is up-to-date

If you are unavailable for any of your schedules you need to act!

  1. For your On call: {team} schedule simply click on your name in your layer, click create an override and then remove yourself from the list so it shows No one
  2. For your Support: {team} schedule or On call: {global} schedules click Request cover at the top right. This will notify selected team members automatically to find someone to cover you (you should probably do a shout out in #ask-posthog-anything as well). You can trade whole weeks, but also just specific days. Remember not to alter the rotation's core order, as that's an easy way to accidentally shift the schedule for everyone.

Make sure you have all the access you might need

To be ready, make sure you have access to:

More advanced access

If you are part of a team that looks after more critical infrastructure – such as infra, ingestion, workflows, or error tracking – then you are expected to dive deeper than the usual on-call engineer.

As well as the above access you should ensure you have access and feel comfortable working with:

Responding to alerts when on-call

Image: alert-example

Critical alerts will trigger per-team escalation policies which go like this:

  1. If available, a member of the team associated with the alert is paged first
  2. If nobody is available or nobody responds within the configured time then the On call: global schedule is paged

If at any point you get paged - always respond! Even if you are unavailable you should respond as such (either via the app or the personal Slack notification). That way the escalation can continue to the next available person.

By default, if you are being paged, especially as the global on-call, the alert is considered critical, meaning it almost certainly requires attention.

Every alert should have associated Grafana and Runbook links allowing you to quickly get more visual details of what is going on and how to respond.

When an alert fires, check whether there's a runbook for it. A runbook tells you what to look at and what fixes exist. In any case, your first priority is to understand what's going on, and the right starting point will almost always be Grafana.

Sometimes alerts are deliberately over-sensitive and might already be fixing themselves by the time you see them. Use your best judgement here. If the linked graph has a spike that is clearly coming down, watch it closely and give the alert time to auto-resolve.

If the alert is starting to have any noticeable impact on users or you are not sure whether to raise an incident - go raise an incident. It's that simple.

If you're stumped and no resource is of help, get someone from the relevant team to shadow you while you sort the problem out. The idea is that they can help you understand the issue and how to debug it. The idea is _not_ for them to take over at this point, as otherwise you won't be able to learn from this incident.

Post-mortems

Engineering | Source: https://posthog.com/handbook/engineering/operations/post-mortems

At PostHog, we believe that incidents are learning opportunities. Every incident, whether major or minor, provides valuable insights that help us improve our systems, processes, and response capabilities. Post-mortems are our way of capturing these lessons and ensuring we continuously improve.

Why post-mortems matter

Post-mortems serve several critical purposes:

Post-mortem process

Incidents can be stressful and time-consuming, but it's equally important that we take the time to learn from them and improve our systems and processes. The longer you wait, the more details you'll forget and the less valuable the post-mortem becomes.

We use incident.io's post-mortem template which helps guide you through the process. They also have hints on what kind of content you should be focusing on in each section.

For major incidents

Major incidents require a team call to discuss and learn together:

  1. Write the post-mortem – Fill out the template in the incident page (you will be prompted to do this when the incident is resolved).
  2. Fill out the Summary and DERP sections in detail – These are the most valuable
  3. Check the timeline is accurate
  4. Schedule the call – Invite engineering@posthog.com and any key stakeholders related to the incident
  5. Review as a group – There may be details and other ideas people come up with in the call – you should be updating the post-mortem as you go to capture these.
  6. Share outcomes – Post the final summary in #incidents (this should happen automatically when you mark the post-mortem as complete)

For minor incidents

Minor incidents can be handled more simply:

  1. Write the post-mortem – Fill out the template
  2. Focus on the summary and DERP – The timeline here is less important.
  3. Share the summary – Post in #incidents channel for visibility (this should happen automatically when you mark the post-mortem as complete)

For false-positive incidents

Sometimes incidents are raised but turn out to be false-positives. In this case you can usually just close the incident and opt-out of the post-mortem process.

But wait! Before you do this, consider what could have been done to prevent the incident from being raised in the first place. Clearly there was some false or unclear alerting that triggered it. It might be worth a quick post-mortem just to check that we have follow-ups in place.

💡 Remember: The goal is not to prevent all incidents, but to learn from them and improve our systems and processes. Every incident is an opportunity to make PostHog more reliable and our team more effective.

Public post-mortems

Some incidents require a public post-mortem. We publish these on our public post-mortems page because we believe transparency builds trust, and the wider engineering community benefits from shared lessons. A public post-mortem is needed when an incident:

Process

Public post-mortems go through an internal review before being published. This isn't to hide anything – we're committed to being transparent about what happened and why. The review exists to make sure we don't accidentally expose sensitive information (such as customer data, internal credentials, or infrastructure details that could be exploited) and to ensure the post-mortem is clear and useful to readers.

  1. Write the internal post-mortem first – Complete the normal post-mortem process above. The internal version is where you can freely discuss all details without worrying about what's safe to share publicly.
  2. Draft the public version – Open a PR against requests-for-comments-internal using the public post-mortem template. This gives reviewers a private space to flag anything that shouldn't be public before it lands on the website.
  3. Get review – Have the draft reviewed by relevant stakeholders. Focus on making sure the root cause, impact, and remediation are explained clearly enough to be useful, while removing anything that could compromise security or expose customer information.
  4. Publish – Once approved, open a PR against posthog.com adding the post-mortem to contents/handbook/company/post-mortems/ and update the list on the public post-mortems page.

Support hero

Engineering | Source: https://posthog.com/handbook/engineering/operations/support-hero

Every week, one person in each engineering team is designated the Support Hero. If this is you this week, congratulations!

As Support Hero, your job is to investigate and resolve issues reported by customers. A single case of suspicious data or a show-stopping bug can really undermine one's confidence in a data product, so it's important that we get to the bottom of all issues.

One of the many awesome things about PostHog is that support is handled by engineers, who ship fixes and improvements in real time when you contact them. It is impossible to overstate how valuable it is for customers when they ask a question and get a shipped feature within a day.

You'll see some teams using a term of endearment for Support Hero, examples being "Infra Hero" or… "Luigi". Don't ask – we don't know.

Our Support Engineers triage tickets for a number of teams, due to the high volume of tickets those teams get. They will resolve tickets if possible, and escalate to the responsible engineering team if they need further help.

When is my turn?

Most engineering teams run an incident.io schedule – check out the escalation schedules.

The schedules consist of contiguous blocks, but that definitely doesn't mean working 24/7 – you should just work your normal hours.

What if I'm scheduled for a week when I won't be available?

Swap with a teammate in advance! Find a volunteer by asking in Slack, then use incident.io schedule overrides. You can trade whole weeks, but also just specific days. Remember not to alter the rotation's core order, as that's an easy way to accidentally shift the schedule for everyone.

I can't assign tickets or make public replies

Everyone has access to view tickets in Zendesk. However, if you do not reply to tickets often, you may find you currently have Light agent permissions. The HogHero app in the right sidebar should allow you to upgrade your user for your support week by clicking Full ⬆️.
Image: image

What do I do as Support Hero?

Each engineering team has its own list of tickets in Zendesk:

Your job is simple: ship features and fixes, resolve ticket after ticket from your team's list, and respond to open-source PRs assigned to your team.

There are four sources of tickets:

  1. In-app bug reports/feedback/support tickets sent from the Support panel (The Help tab in the righthand sidebar.) They include a bunch of useful links, e.g. to the admin panel and to the relevant session recording.
  2. Community questions asked on PostHog.com.
  3. Slack threads that have been marked with the 🎫 reaction in customer support channels.
  4. Reports in the #papercuts Slack channel that relate to your team's area.

Shipping features

Some tickets ask for new features. If the feature is useful for users matching our ICP, consider just building it. Otherwise, create a feature request issue in GitHub or +1 an existing one – you can then send the link to the user, giving them a way of tracking progress. Also make sure to let the relevant team know, since they track feature requests for paying customers.

Sometimes a feature already exists, but a user doesn't know about it or how to use it. In this case, you should either send them a link to the relevant docs page or update the docs to make it clearer.

Fixing bugs

Other tickets report bugs or suspected bugs. Get to the bottom of each one - you never know what you'll find. If the issue decidedly affects only that one user under one-in-a-million circumstances, it might not be worth fixing. But if it's far-reaching, a proper fix is in order. And then there are "bugs" which turn out to be pure cases of confusing UX. Try to improve these too.

If not much is happening, feel free to do feature work – but in the case of a backlog in Zendesk, drop other things and roll up your sleeves. When you're Support Hero, supporting users comes first.

It might be an intense week, but you're also going to solve so many real problems, and that feels great.

Papercuts

Check the #papercuts Slack channel during your rotation and pick up any reports that relate to your team's area. For each one, pick one of the following:

Papercuts are also routed to the Signals inbox, so before you start work, check whether an auto-generated PR is already waiting – it may save you most of the effort.

Responding to external PRs

When capacity allows, the support hero serves as the first point of contact for external (open-source) PRs that affect your team's product. While we want to be good open-source citizens, customer support always takes priority — if you're dealing with a heavy support load, it's acceptable for PR reviews to be delayed or handled more briefly.

How external PRs are assigned

External PRs typically reach your team through one of two methods:

Best practices for handling external PRs

These are guidelines to aim for when you have bandwidth after handling customer support. Adapt them based on your workload:

Initial response (when possible)
Review approach
Communication tips
Common blockers to address upfront (when doing a full review)
When to escalate or defer
Consider rewarding with merch
Managing expectations

The reality is that support hero weeks vary significantly in intensity across teams and time periods. Some weeks you might have capacity to thoroughly review several PRs; other weeks, you might barely have time to acknowledge them. That's okay. The goal is to engage with external contributions in good faith within your available bandwidth, not to maintain a perfect response rate at the expense of customer support or your well-being.

If you find yourself overwhelmed, remember:

The key principle: We want to be responsive to our open-source community when we can, but not at the cost of our primary support responsibilities or team sustainability.

What about SDK support?

The SDK Support Hero rotation is owned by the . See the dedicated SDK support rotation page for details on how the rotation works, including how to prioritize time and handle mobile SDK issues.

Don't ask users to do work that you can do!

If folks are asking us for help, then we know the product already didn't meet their needs. Asking them to do leg-work that we could do is adding insult to injury.

For example, don't ask them what version of posthog-js they're using or what their posthog config is when you can find out for yourself. Similarly, visit their website and check the console yourself instead of asking them whether they saw any errors.

If you do have to ask them to do something, make sure you explain why you need it and what you're going to do with it.

How do I communicate?

There are two valid modes (which overlap!)

  1. excited, like a labrador puppy, to discover a new way to improve the product
  2. clinical and clear

Excited like a labrador puppy

The first is great for when you're talking to someone with feedback or who doesn't seem frustrated. It's important because every single support interaction is an opportunity to ship a fix or an improvement. And the excitement is how we show enough interest to properly hear the feedback.

example: "You can't do that right now, but it sounds super useful. Out of interest what does it unlock for you?"

Clinical and clear

The second is great for when the issue is tricky or the customer seems frustrated. Sometimes this goes as far as communicating in bullet points instead of paragraphs. When something isn't working the person might (quite rightly) have low tolerance for a support interaction.

example: "Ah, I see what you mean, that's not ideal! Sorry. I'll dig in to that now and let you know what I find by the end of tomorrow."

General tone

As an engineer, when answering a question, your first instinct is to give them an answer as quickly as possible. That means we often forget pleasantries, or will ignore a question until we've found the answer. So, the following guidelines:

If you have any questions about how or when to communicate with users, you can always ask the for help.

How do I prioritize?

As a business, we need to focus support on our paying customers, so this is the prioritization order you should apply as Support Hero. At the end of your rotation, you need to ensure that any items in 1-5 are resolved or passed to the next Support Hero _as a minimum_.

  1. Any requests where you are tagged by the Customer Success team in a dedicated Slack channel, as there will be some urgency needed.
  2. Open, escalated Zendesk tickets for your team that have Sales/CS Top 20\* priority.
  3. Open, escalated Zendesk tickets for your team that have High priority.
  4. Open, escalated Zendesk tickets for your team that have Normal priority.
  5. New and Open\\ (non-escalated) Zendesk tickets for your team that are nearing breach or have breached SLAs.
  6. Open Zendesk tickets for your team that have Low priority.

\* Try to be especially responsive to any customers marked as Sales/CS Top 20. This set of customers is regularly reviewed by the sales team, and this priority is applied to customers we want to give an especially fantastic support experience.

\\ Due to the way we're using Pylon, "new" tickets from high prio customer Slack channels only appear as New in Zendesk for a few seconds, then a webhook updates the ticket and quickly changes it to Open.
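The prioritization order above can be sketched as a simple sort. This is illustrative only – the tier names and ticket fields below are made up for the example and are not actual Zendesk fields:

```python
# Tiers in the order a Support Hero should work through them
# (names are hypothetical labels, not Zendesk values).
PRIORITY_ORDER = [
    "cs_slack_tagged",    # 1. tagged by Customer Success in Slack
    "escalated_top20",    # 2. escalated, Sales/CS Top 20
    "escalated_high",     # 3. escalated, High priority
    "escalated_normal",   # 4. escalated, Normal priority
    "near_sla_breach",    # 5. new/open, nearing or past SLA breach
    "low",                # 6. low priority
]

def triage_order(tickets):
    """Sort tickets by the Support Hero prioritization tiers."""
    rank = {tier: i for i, tier in enumerate(PRIORITY_ORDER)}
    return sorted(tickets, key=lambda t: rank[t["tier"]])

queue = [
    {"id": 3, "tier": "low"},
    {"id": 1, "tier": "escalated_high"},
    {"id": 2, "tier": "cs_slack_tagged"},
]
print([t["id"] for t in triage_order(queue)])  # -> [2, 1, 3]
```

Because `sorted` is stable, tickets within the same tier keep their original order, so you can pre-sort each tier by time-to-breach before applying the tier ranking.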

What if I need to confirm priority by checking a customer's MRR?

You've got a couple of options. By order of quickness:

  1. Use the VIP Lookup Bot:

In any Slack channel, type @VIP Lookup Bot [Customer] (without the brackets). 'Customer' can be the organization name (case-sensitive) or their organization ID. It does work, but results can take up to 30 seconds to load.

  2. In Zendesk:

Click the org name near the upper-left of the ticket. The left sidebar opens. There you'll see which plan they're on. If they've already paid some bills, you'll also see MRR there.

How will I know if a ticket is nearing a breach of our SLA targets?

Alerts are posted to Slack for every team that has a "group" in Zendesk. The alerts are posted to the support- channel for the team (or the team- channel if the team has no support- channel).

Alerts are posted for a ticket 3 hours before it breaches the next SLA. If the ticket remains untouched an hour later, another alert will be posted at 2 hours before it breaches an SLA, and again 1 hour before it breaches an SLA. The maximum number of alerts that will be posted for a single ticket is 3. (You can remove the sla-warning tags from a ticket if you want the alerts to be sent again for that ticket.)
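The cadence above can be sketched as follows; alert_times is a hypothetical helper for illustration, not part of the actual Zendesk automation:

```python
from datetime import datetime, timedelta

def alert_times(breach_at: datetime, max_alerts: int = 3):
    """Alerts fire 3, 2, and 1 hours before the SLA breach time,
    capped at max_alerts (3) per ticket."""
    return [breach_at - timedelta(hours=h) for h in (3, 2, 1)][:max_alerts]

breach = datetime(2026, 5, 4, 12, 0)
for t in alert_times(breach):
    print(t.strftime("%H:%M"))  # 09:00, 10:00, 11:00
```

In practice each later alert only fires if the ticket is still untouched; removing the sla-warning tags resets this schedule.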

How should I handle self-hosted setups?

It's fine to politely direct users to the docs for self-serve open-source support and ask them to file a GitHub issue if they believe something is broken in the docs or deployment setup. We do not otherwise provide support for self-hosted PostHog.

How should I handle organization ownership transfers?

If a user requests that organization permissions be altered (e.g. the only member with owner membership left the company), follow these steps:

  1. Ideally, assign the ticket to Platform features.
  2. Ask the user to get the current owner to log in and update ownership.
  3. If the owner has left and they can get access to the current owner's email, ask them to do a password reset, then log in as the owner and perform the action themselves.
  4. If not, we should email the account owner's address to see if we get a bounce back. Also check how long it has been since they last logged in.
  5. If accessing the current owner's email is not an option, we should have the person requesting access verify their domain ownership by providing a TXT record example for posthog verification.
  6. Once verified, membership can be updated for the request.
  7. Note: if they're on a paid plan, we might also need to switch the contact on Stripe via a separate request to billing@posthog.com
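The token check in step 5 can be sketched like this. The posthog-verification= prefix, the record format, and both helper functions are assumptions for illustration, not the exact values or tooling used in the real process:

```python
import secrets

def make_verification_record(domain: str):
    """Generate a one-time token and the TXT record value the
    requester should add to their domain's DNS (hypothetical format)."""
    token = secrets.token_hex(16)
    return token, f"posthog-verification={token}"

def is_verified(expected_token: str, txt_values: list[str]) -> bool:
    # After the user adds the record, fetch the domain's TXT values
    # (e.g. with `dig +short TXT example.com`) and compare.
    return f"posthog-verification={expected_token}" in txt_values

token, record = make_verification_record("example.com")
print(is_verified(token, [record]))        # True once the record exists
print(is_verified(token, ["v=spf1 -all"])) # False otherwise
```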

How should I handle 2FA removal?

  1. Send the following email to the account owner:
Subject: Confirmation Required: Removal of 2FA on your PostHog Account

Hi [name],

According to ticket #XXXX, you mentioned wanting to remove the current 2FA method. Could you please confirm this by replying to this email?

If you haven't requested this change, please let me know immediately.

Best,
[your name]
  2. After the user has responded and confirmed the change, delete their TOTP device (EU link).

How do I use Zendesk?

We use Zendesk Support as our internal platform to manage support tickets. This ensures that we don't miss anyone, especially when their request is passed from one person to another at PostHog, or if they message us over the weekend.

Zendesk allows us to manage all our customer conversations in one place and reply through Slack or email.

Zendesk is populated with new tickets when issues are sent via the in-app Support panel (the Help tab in the righthand sidebar), from people outside the PostHog GitHub organization adding issues to the posthog and posthog.com repos, and new community questions. High priority customers also have Slack channels they can post support questions in. We can create Zendesk tickets from Slack questions via Pylon.

The Zendesk tickets will include links to the GitHub issue, Slack thread, or the community question so we can answer in the appropriate platform. After replying to a community question, make an internal note on the Zendesk ticket confirming that you've replied outside of Zendesk, and set the ticket status accordingly when submitting the internal note.

Accessing Zendesk

You can access the app via posthoghelp.zendesk.com.

The first time you sign into Zendesk, please make sure you include your name and profile picture so our users know who they are chatting with!

Using Zendesk

You’ll spend most of your time in the Views panel, where you’ll find all tickets divided into different lists depending on who they are assigned to, and whether they have been solved or not.

Tips:

Image: Opening side conversations

Creating tickets on behalf of users or from existing tickets

Sometimes users will contact us over Twitter or email with support questions. Sometimes they will respond to old, solved ticket threads with new problems, or tickets will spiral into multiple issues. In both situations it's best to create a new ticket for the user so we can apply the correct SLAs and keep issues distinct for easy assignment.

You can ask a user to create a new ticket themselves, but it's best if we do it for them. The easiest way to do this correctly is to log in to PostHog as the user, and then create a fresh ticket on their behalf using the information you have. This will ensure the correct tags, SLAs, and so on are automatically applied.

If the user raised the issue in a public forum, such as Twitter, it can be a good idea to tell them you've opened a ticket on their behalf. If the user was replying to an old, already solved ticket, you should set the old ticket to Closed.

Avoiding duplication of effort in Zendesk

Each team handles Zendesk queues (views) in slightly different ways. Check in with your team about whether or not to assign tickets to yourself, or keep them assigned to the team/group level. Support team folks, who work on tickets from multiple queues, often assign tickets to themselves (and, when escalating, assign the ticket back to the team/group).

For unassigned tickets, keep an eye out for whether someone else is already viewing a ticket – their name, avatar, and an "also viewing" label appear in the upper-left of the ticket. Use those clues to avoid working on a ticket that someone is already handling, and communicate with each other when in doubt. Err on the side of making sure the ticket gets responded to within SLA/response target times.

Also, avoid cherry-picking tickets. Pick the ticket that is closest to breaching our response targets.

Ticket Status

When responding to a ticket you should also choose an appropriate status according to the following:

Temp orgs for free email users

To reduce some unintended consequences of Zendesk's unavoidable use of email address domain names to associate users with organizations, we have Zendesk orgs for common free email providers.

An example of these orgs: Gmail user - please assign to correct org

When we get a ticket from a user with an @gmail.com address who has not already been manually assigned to an existing Zendesk org, that user will be assigned to the Gmail user - ... org (unless their PostHog org doesn't exist in Zendesk yet, in which case the correct org will be created in Zendesk.)

When you see a user assigned to a free email org on a ticket, and it is not a 'community question' ticket, please assign the user to their correct org, which is found on the Admin info line in the body of the ticket:

  1. Click on the user's name, to the right of the org name
  2. Click in the Org. field to change the org name
  3. Click anywhere outside the field to save the change

Tickets which have been set to Pending will auto-solve after 7 days. Customers can also respond within 20 days to a Solved ticket to re-open it. After 20 days, responses will create a follow-up ticket with a link to the original ticket.

Tickets that have been Solved will receive a CSAT survey the next day.

Content Warnings

We have a clear definition of who we do business with, which means that customers who track adult or other potentially offensive content aren't automatically excluded. To avoid team members inadvertently seeing anything offensive when impersonating a customer, we automatically tag tickets from organizations known to have this type of content with a content_warning tag.

This looks at the Content Warning field on the Zendesk Organization, and adds the tag if there is text in that field. If you see this tag on a ticket and want to understand more then click on the Organization name in the top left corner of the Zendesk UI and scroll down the list of fields on the left.

If you do discover any potentially offensive content in a customer account then please update this field on the Zendesk Organization so that other team members are aware of the content.

Pylon to create Zendesk tickets from Slack posts

We use Pylon to create Zendesk tickets from Slack posts. To do so, add the :ticket: (🎫) emoji reaction to the post that you want to create a Zendesk ticket from.

Adding the :ticket: emoji reaction will cause Pylon to add a couple of replies in a thread under the post. The last of those replies includes options for the Zendesk ticket you're creating: Use the Group menu to send the ticket to the appropriate team, and the Severity menu to set the severity flag on the Zendesk ticket, then hit the Submit button.

Zendesk tickets created this way will normally be marked as high priority tickets. You can respond to them either in Zendesk or Slack, as there is a two-way sync.

Adding new teams to Zendesk

When we've added a new team, or 🪓 split an existing team into two or more, we'll need to get them set up in Zendesk. Here's an overview of the steps:

(Note: The built-in tool for testing webhooks in ZD has been flaky while the UI has been changing lately. Failed tests don't always mean the hook won't work. 🫤)

Community questions

At the end of every page in the docs and handbook is a form where visitors can ask questions about the content of that page. (These questions also appear in the relevant category in the PostHog community.)

Community questions appear in Zendesk and tickets are closed automatically if an answer is picked as a solution on the website. Ideally, the original poster is the one who marks a response as the solution. If they don't, feel free to close the ticket in Zendesk once you've replied.

How do I answer community questions?

When a question is posted, it'll appear in Zendesk with a direct link to the question. A notification is also sent to the #community-questions channel in Slack. (You can also receive notifications about specific topics in your own small team's Slack channel. Ask the Website & Docs team for help in setting this up if you like.)

You can answer a question directly on the page where it was asked. When a reply is posted, the person who asked the question will receive an email notification. (Important: Don't reply to community questions directly from Zendesk.)

The first time you answer a question, you'll need to create a community account. (You'll be prompted to do this after answering a question, as posting/responding requires authentication.)

Ask in #team-website-and-docs to be upgraded to a moderator. This will also give you access to moderator controls available for each question.

_Note: The PostHog.com community currently uses a separate authentication system from PostHog Cloud. There are plans to support other types of authentication so a visitor doesn't have to create a separate account for asking questions._

How do I handle a bug report or feature request?

For feature requests from low priority users, give them this link and suggest they open a feature request.

For bug reports from normal and high priority users (assuming you've confirmed it's a bug, and that there's not already an open bug report):

  1. Open a bug report on our GitHub repo
  2. Be sure to include a link to the insight (or other relevant item), below the repro steps
  3. Include "From: https://URL_for_Zendesk_ticket" in the additional info section of the bug comment (where the URL is for the Zendesk ticket where the customer reported the bug)
  4. Reply to the user to thank\* them for alerting us to the bug. Let them know you've opened a bug report and provide a link to it.
  5. Let them know they can follow the bug report on GitHub for updates.
  6. When sending the reply, change the ticket from Open to Pending
  7. In Slack, go to the team channel for the team that handles the feature that the bug report applies to (e.g. #team-product-analytics) and alert them with a post like "New bug report from a high priority customer: https://github.com/PostHog/posthog/issues/nnnnnn"

* consider sparking additional joy with a credit for merch

Steps for feature requests from normal and high priority users are pretty much the same, but use this form instead. If you find that there's already a matching feature request open, reply with a link to the feature request, and let them know they can upvote it by adding a "+1" comment.

How do I handle user requests to delete groups/organizations?

_WARNING:_ Do NOT click the DELETE button! That will delete the entire project!

Just use the Save button after clicking the delete checkbox for the group.

  1. Visit the Django Admin page for the project at https://us.posthog.com/admin/posthog/team/:project_id/change/ (Make sure you use the project ID for the project where the group/org is found)
  2. In the lower part of the page, find GROUP TYPE MAPPINGS and click on SHOW
  3. In the righthand column, check the box for the group(s) to be deleted
  4. Click the SAVE button. (Do NOT use the DELETE button!)
  5. Reply to the user to confirm

Adding a team member

Engineering | Source: https://posthog.com/handbook/engineering/posthog-com/add-team-member

The oversees the process of completing their website profile so it's ready to go when they start. (The one exception is that the team lead or the Ops team will add the team member to the small team's page which will automatically add the person to the team page the next time the website is rebuilt.)

When a new team member is created in Deel with their new company email, a community profile is automatically generated for them. It should populate with info like their name, role, and start date.

One thing it doesn't automatically add is their profile photo. Here's the typical process for completing their profile, which is handled by the .

Add the team member's photo

  1. Watch for alerts that the new team member has been created (in the #alerts-deel private Slack channel). It's always worth keeping an eye on new hire announcements in #general in case the webhook doesn't fire.
  2. Verify their preferred name in their onboarding issue in posthog/company-internal, as their legal name automatically gets set by default based on the information added to Deel.
  3. Grab their photo from their LinkedIn or another easily found public source. If they've already started, they may have also uploaded a photo to Slack.
  4. Copy photo to clipboard, visit remove.bg, paste the image, and download the resulting photo.
  5. Crop to square, size similarly to existing images, and make sure the arm on the left side of the photo isn't cropped.
  6. Optimize the image
  7. Add to team member's profile:
     1. Set a complementary background color that isn't overly used by other members in the small team
     2. Set their location field to a friendly name or major metropolis if in the US (like "San Francisco, CA") or a major international city (like "Barcelona, Spain") when possible.

Also add the team member's _original_ photo to the Team portraits Figma file where our contract illustrator will pick it up to draw the illustrated version.

Once notified in #portraits that the illustration is ready...

After illustration is drawn...

  1. Ensure proper sizing and positioning in Figma
  2. Export at @2x to PNG
  3. Optimize the image
  4. Add to team member's profile via their profile page
  5. Move the team member's photo to the live page in the Figma file

Notes

Editing the API docs

Engineering | Source: https://posthog.com/handbook/engineering/posthog-com/api-docs

The PostHog API docs are generated using drf-spectacular. It looks at the Django models and djangorestframework serializers.

Note: We don't automatically add new API endpoints to the sidebar. You need to add those to src/navs/index.js

You can add a help_text="Field that does x" attribute to any Model or Serializer field to help users understand what a specific field is used for:

class Insight(models.Model):
    last_refresh: models.DateTimeField = models.DateTimeField(blank=True, null=True, help_text="When the cache for the result of this insight was last refreshed.")

class InsightSerializer(TaggedItemSerializerMixin, InsightBasicSerializer):
    filters_hash = serializers.CharField(
        read_only=True,
        help_text="A hash of the filters that generate this insight.",
    )

To add a description to the top of a page, add a comment to the viewset class:

class InsightViewSet(TaggedItemViewSetMixin, StructuredViewSetMixin, viewsets.ModelViewSet):
    """
    Stores saved insights along with their entire configuration options. Saved insights can be stored as standalone
    reports or part of a dashboard.
    """

To check what any changes will roughly look like locally, you can go to http://127.0.0.1:8000/api/schema/redoc/.

To add a description to a specific endpoint, add an MDX file (named after the endpoint ID) to the folder its page belongs to. The content in the MDX file will then appear only under that endpoint. This works like our usual MDX setup, except the file name determines which endpoint the MDX contents appear on.

For example, to add a description to the "list annotations" endpoint, you'd create a new file: contents/docs/api/annotations/annotations_list.mdx

Whatever you add to that file will appear under that endpoint only.
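Sketched with Python's pathlib – a temporary directory stands in for your posthog.com checkout, and the description text is a placeholder:

```python
import tempfile
from pathlib import Path

# Stand-in for the root of a posthog.com checkout.
repo = Path(tempfile.mkdtemp())

# MDX file named after the endpoint ID, in the folder its page belongs to.
mdx = repo / "contents/docs/api/annotations/annotations_list.mdx"
mdx.parent.mkdir(parents=True)
mdx.write_text("Annotations let you mark important events on your graphs.\n")
```

When the site builds, the contents of this file appear only under the "list annotations" endpoint.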

Image: API endpoint description

Insights serializer

The serializer for insights lives here. Each time an insight is created, we check it against these serializers and send an error to Sentry (but not to the user) if it doesn't match, to ensure the API docs stay up to date.

Documenting custom endpoints

If you have an @action endpoint or a custom endpoint (that doesn't use DRF), you can still document it by providing a serializer for the request and response.

from typing import Any

from drf_spectacular.utils import OpenApiResponse
from rest_framework import request
from rest_framework.decorators import action
from rest_framework.response import Response

from posthog.api.documentation import extend_schema

@extend_schema(
    request=FunnelSerializer,
    responses=OpenApiResponse(
        response=FunnelStepsResultsSerializer,
        description="Note, if funnel_viz_type is set the response will be different.",
    ),
    methods=["POST"],
    tags=["funnel"],
    operation_id="Funnels",
)
@action(methods=["GET", "POST"], detail=False)
def funnel(self, request: request.Request, *args: Any, **kwargs: Any) -> Response:
    ...

Testing API docs locally

To test or develop the API docs locally, you need to create a personal API key (see top of this page) and then export it before running gatsby, in the same terminal window:

export POSTHOG_PERSONAL_API_KEY=yourkey

Uploading assets with Cloudinary

Engineering | Source: https://posthog.com/handbook/engineering/posthog-com/assets

We use Cloudinary for asset management (image and video uploads), mainly to reduce website build times (as each image hosted within the repo has to get processed on each build). Offloading assets to Cloudinary saves time and resources.

  1. Sign in to your PostHog.com account via the profile icon in the top right corner
  2. Click the account menu, then under Moderator tools choose Upload media

Image: Profile

  3. Open a folder, select, drag, or paste media to upload. This supports images, gifs, and videos. Cloudinary provides optimized links for images, but you'll want to optimize other formats before uploading.

Image: Upload

  4. Copy the file URL and insert it wherever you need it

Uploading assets using the Cloudinary website (don't use this option)

_You shouldn't need to log in to Cloudinary directly. Use the website uploader instead._

  1. Go to the Cloudinary dashboard and log in. (Find the login in 1Password.)

_Pro tip: Double-click a folder to drill in, despite the cursor pointer indicating it's a link you only need to click once._

  2. Click 'Upload' in the top right, then drag and drop. The upload button will not appear unless you follow the exact link above into the image folder.
  3. Add the filename (and any folders) to the base asset URL:
    https://res.cloudinary.com/dmukukwp6/image/upload/posthog.com/contents/
  4. Use the full URL in your docs or handbook update

Changelog entries

Engineering | Source: https://posthog.com/handbook/engineering/posthog-com/changelog

The changelog publishing process is run by the .

Changelog entries can be created in two ways:

  1. Change the status of an existing roadmap or WIP entry to "Complete"
  2. Create a new changelog entry on the changelog page.

Both options are available when you're signed into posthog.com as a moderator.

For more details on the publishing process, check out the How to publish changelog handbook page by the .

Fill out the proper paperwork

Make sure all fields are filled out correctly before creating an entry.

Date

Set the date to the feature's release date. If updating from a previous roadmap or WIP entry, change the date to the actual release date (rather than the date when the team started working on the feature).

Team and author

Select the team that is responsible for the change, and choose the author of the feature. If there's no individual lead on the feature or update, set the author to the team lead.

Categorization

Select the product or product area that the change is relevant to, and set the type of update. These can be used on the front end to filter down to a subset of changes.

Title and description

Be succinct with the title. Check the format of existing entries for inspiration.

Options

The Show on homepage option adds the entry to the _"We ship weirdly fast"_ calendar on the homepage. Only select this option if the milestone is impressive enough to be remembered years from now. The point of the calendar is to show the frequency of shipping big features, not to highlight every single update.

Social sharing

The has created an image generator that takes the information from the changelog entry and creates a square image for sharing on social media. Read the blog post to learn how it works.

The customization options are designed to allow you to format the copy and image so it looks as good as it can. If you need suggestions or aren't sure if your changelog image meets our quality standards, don't hesitate to post in #team-website-and-vibes for a second opinion.

Managing cool tech jobs

Engineering | Source: https://posthog.com/handbook/engineering/posthog-com/cool-tech-jobs

Applying to get your jobs listed? The Cool Tech Jobs board exists to help people find jobs at companies with similar perks and culture to PostHog, and a strong engineer-led environment. Applications are approved only at our discretion and moderation can take up to 48 hours. If you have a question about an application, please contact our support team.

Create a company/jobs:

Non-moderator flow

Moderator flow

When a company is created, its jobs are automatically scraped based on the job board URL/slug provided. If no jobs are found, the company is still created (appears semi-transparent for moderators), but a warning message appears that suggests verifying the job board URL.

Edit a company

Jobs will be re-scraped when a company is edited.

Delete a company

All jobs associated with the deleted company will be deleted along with the original company record.

Company fields

Every company field is required, except the job board URL/slug, which is only conditionally required.

Scraping

Jobs are scraped hourly based on the provided job board URL/slug. Jobs are individually checked for freshness hourly. If a job URL 404s, it is deleted.

Developing the website

Engineering | Source: https://posthog.com/handbook/engineering/posthog-com/developing-the-website

You can contribute to the PostHog documentation, handbook, and blog in two ways:

  1. Create a pull request in GitHub for any page that has an Edit this page link on it. In this situation you must edit the page using the GitHub web editor interface. This method is suitable for text-only edits and basic file manipulation, such as renaming.
  2. Run the posthog.com website in development and make changes there by creating a branch of the master codebase, committing changes to that branch and raising a pull request to merge those changes. This is the recommended method as it allows you to quickly preview your changes, as well as perform more complex changes easily.

Below, we explain two ways to do option two: running in GitHub Codespaces, or editing posthog.com locally.

Option 1: Running with Codespaces

Creating/running the Codespace

  1. Open the posthog.com repository in GitHub.
  2. Click the Code button, then the Codespaces tab, then under the ... menu, choose New with options...

Image: New with options...

  3. Under Machine type, choose 4-core.

Image: Configure machine type

  4. When the repo opens in Codespaces, it will install some things automatically.

Image: Codespaces installing dependencies

When completed, press any key.

  5. In terminal, type pnpm install && pnpm start and hit [Enter].

Image: pnpm start

  6. Once you see "success Writing page-data.json files...", you can click the green Open in browser button, which will open the site at http://localhost:8001.

You can also click the Ports tab to access the URL where you can preview the site. Cmd + click the URL seen here.

Image: port

Committing and pushing changes

Use the built-in Git tab in VS Code to commit and push your changes.

  1. From the Git source control ... menu, choose Checkout to... to create a new branch.

Image: Checkout to...

  2. Type a new branch name and press enter.

Image: Branch name

  3. Now you can commit changes to your new branch. Type a commit message and use Cmd + Enter (or push the big green button).

Image: Commit message

  4. If you see the dialog below, choose Always to always stage all files you've changed. (Otherwise, you'll need to hit the + button next to each file you want to commit.)

Image: Stage all

  5. Now that your changes are committed, it's time to publish them to GitHub.

Image: Publish changes

Note: After finishing changes on your branch, be sure to switch back to master so you don't inadvertently make future changes to your current branch.

Image: Checkout to master > Image: Switch to master

Stopping the server

  1. Place your cursor in the terminal and press Ctrl + C to stop the server.
  2. In the bottom left corner of the window, click Codespaces: [your codespace name], then Stop current codespace.

Notes

If you plan on using this codespace frequently, disable Auto-delete codespace in the ... menu under the Code > Codespaces dropdown in the repo.

Option 2: Editing posthog.com locally

Before you begin

In order to run the PostHog website locally, you need the following installed:

If you are unfamiliar with using Git from the command line (or just prefer graphical interfaces), use the GitHub Desktop app.

You may also want to familiarize yourself with these resources:

Cloning the posthog.com repository

The posthog.com codebase is on GitHub at https://github.com/PostHog/posthog.com. To work on it locally, first you need to clone it to your disk:

You can clone the codebase from the command line using the following command:

    git clone git@github.com:PostHog/posthog.com.git

You can also clone the repository using GitHub Desktop. With GitHub Desktop installed, go to the posthog.com repository page, click the Code button, and select Open with GitHub Desktop from the dropdown that appears.

Image: Open in GitHub Desktop

You will then be prompted by the browser to confirm if you want to open the GitHub Desktop application. Select the affirmative action that has text such as Open GitHub Desktop.

Once GitHub Desktop has opened you will be prompted to confirm the repository that is being cloned and the location on disk where you wish the code to be stored.

Image: GitHub Desktop clone to dialog

Click Clone to clone the posthog.com repository to your local disk.

Image: GitHub Desktop cloning to disk

Once the clone has completed the GitHub Desktop interface will change to the following:

Image: GitHub Desktop cloned successfully

To view the code for the website click Open in Visual Studio Code. Dialogs may appear around permissions and trust as you open Visual Studio Code.

Once you have Visual Studio Code open, select the Terminal menu option. Within the dropdown select New Terminal. This will open a new terminal window within Visual Studio Code:

Image: Visual Studio Code terminal

Don't worry! We only need to run a few commands in the command line.

Running posthog.com locally

If you're using an Apple Silicon Mac (M1+) then you'll need to run the following commands before using pnpm:

rm -rf ./node_modules
brew install vips

Type the following into the command line and press return:

pnpm install

This installs the dependency packages used by posthog.com. This may take a few minutes.

After initial setup, use the following command to start the development server:

pnpm install && pnpm start

This runs the local clone of the website, which you can use to preview changes you make before pushing them live. It takes a bit of time for some file processing and compilation to take place, but once it's completed you can access the locally running version of posthog.com by visiting http://localhost:8001 in your web browser.

Any time you want to preview changes you are making to the local version of the website, run pnpm start again, wait for the command to finish running, and then open http://localhost:8001 in your web browser.

Troubleshooting

If the server fails to start, the first troubleshooting step is to clear cache. You can do this (and start the server again) by running:

pnpm clean && mkdir .cache && pnpm install && pnpm start

Minimal mode

For faster builds, you can run in minimal mode:

pnpm build:minimal

Minimal mode only builds:

Everything else (apps, CDP, templates, jobs, API docs, SDK references, pagination/category/tag pages) won't exist - they'll 404. Next/previous navigation links and GitHub data for roadmaps/jobs will also be absent. Sourcemap generation is disabled.

PR preview deployments (Cloudflare Pages)

Pull request previews on Cloudflare Pages use the same minimal build as above: the workflow sets GATSBY_MINIMAL=true (see .github/workflows/deploy-preview.yml). That keeps preview builds fast.

Implications for content authors:

Environment variables

Our website uses various APIs to pull in data from sites like GitHub (for contributors) and Ashby (our applicant tracking system). Without setting these environment variables, you may see various errors when building the site. Most of these errors are dismissible, and you can continue to edit the website.

If you need a specific environment variable for development, ask in #posthogdotcom.

Finding the content to edit

Once you have cloned the repo, the contents/ directory contains a few key areas:

Inside each of these are a series of markdown files for you to edit.

Posts and blog filtering

There are two ways to filter posts by tag:

  1. Query param — Add a post_tags query param to the URL, e.g., /posts?post_tags=Comparisons. This works on the main posts listing and allows saving/sharing filtered URLs.
  2. Static tag pages — For SEO purposes, we generate static pages at /{category}/{tag}, e.g., /blog/session-replay. These are generated at build time in gatsby/createPages.ts.

Hidden from index

Some categories and tags are intentionally hidden from the main posts index view. They still appear when you filter directly to that category or tag.

Categories hidden from index: customers, spotlight, changelog, comparisons, notes, repost

Tags hidden from index: Comparisons

Posts can also set hideFromIndex: true in their frontmatter to be excluded.

These exclusions are defined in src/components/Edition/Posts.tsx and src/templates/BlogPost.tsx.

Making edits

Creating a new Git branch

When editing locally, changes should be made on a new Git branch. Branches should be given an "at a glance" informative name. For example, posthog-website-contribution.

You can create a new Git branch from the command line by running:

    git checkout -b [new-branch-name]

For example:

    git checkout -b posthog-website-contribution

You can also create a new branch in GitHub Desktop by selecting the dropdown next to the Current Branch name and clicking New Branch.

Image: GitHub Desktop - new branch dropdown

Then, in the dialog that follows, entering the new branch name.

Image: GitHub Desktop - new branch dialog

Once you have a new branch, you can make changes.
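If you want to get comfortable with branch creation before touching the real repo, you can rehearse it in a throwaway repository. Everything below runs in a temp directory, so it's safe to delete afterwards; the branch name is just the example from above.

```shell
# Rehearse branch creation in a disposable repo -- safe to delete afterwards
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git checkout -q -b posthog-website-contribution
git branch --show-current   # prints the new branch name
```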

Markdown details

Frontmatter

Most PostHog pages utilize frontmatter as a way of providing additional data to the page. Available frontmatter varies based on the template the page uses. Templates are determined based on the folder the file resides in:

Blog

Markdown files located in /contents/blog

---
date: 2021-11-16
title: The state of plugins on PostHog
rootPage: /blog
author: ["yakko-majuri"]
featuredVideo: https://www.youtube-nocookie.com/embed/TCyCryTiTbQ
featuredImage: https://res.cloudinary.com/dmukukwp6/image/upload/posthog.com/contents/images/blog/running-content.png
featuredImageType: full
category: Guides
tags: ["Using PostHog", "Privacy"]
seo: {
    metaTitle: Overview of PostHog Plugins
    metaDescription: Learn about the current state of plugins on PostHog and get valuable insights into their functionality and performance.
}
---


Tutorials

Markdown files located in /contents/tutorials

---
date: 2022-02-14
title: How to filter out internal users
author: ['joe-martin']
featuredTutorial: false
featuredVideo: https://www.youtube-nocookie.com/embed/2bptTniYPGc
tags: ['filters', 'settings']
---


Docs & Handbook

Markdown files located in /contents/docs and /contents/handbook

---
title: "Contribute to the website: documentation, handbook, and blog"
---

Comparison pages

Create a table on a "PostHog vs..." page with the following components. (You can see examples of how this is used in this pull request.)

Import the components at the top of the post content (after frontmatter):

Create a table like:

In ComparisonRow:

Customers

Markdown files located in /contents/customers

---
title: How Hasura improved conversion rates by 10-20% with PostHog
customer: Hasura
logo: https://res.cloudinary.com/dmukukwp6/image/upload/posthog.com/contents/images/customers/hasura/logo.svg
featuredImage: https://res.cloudinary.com/dmukukwp6/image/upload/posthog.com/contents/images/customers/hasura/featured.jpg
industries:
    - Developer tool
users:
    - Engineering
    - UI
    - UX
    - Marketing teams
toolsUsed:
    - Funnel Analysis
    - Session Recording
    - Self-Hosting
---

Plain

If the file doesn't reside in one of the above folders, it uses the plain template.

---
title: Example Components
showTitle: false
width: lg
noindex: true
---

You can often refer to the source of existing pages for more examples, but if in doubt, you can always ask for help.

Adding rich media

Add images or videos to your post by uploading them to Cloudinary and including the URL in your Markdown file. Be sure to follow our best practices when adding media.

If you've created a new markdown file (for use in docs or handbook), you should link to it from the sidebar where appropriate.

The sidebar is generated from src/navs/index.js.

Redirects

Redirects are managed in vercel.json which is located in the root folder.

To declare a new redirect, open vercel.json and add an entry to the redirects list:

{ "source": "/docs/contributing/stack", "destination": "/docs/contribute/stack" }

The default HTTP status code is 308 (permanent), but if the redirect should be temporary (307), it can be updated like this:

{ "source": "/docs/contributing/stack", "destination": "/docs/contribute/stack", "permanent": false }
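Since a stray comma in vercel.json will break every redirect, it's worth validating the file parses as JSON before opening a pull request. This sketch writes a hypothetical two-entry redirects fragment to a temp path and checks it with Python's JSON parser (the second entry's source path is made up for illustration; the real vercel.json at the repo root has many more entries):

```shell
# Write a minimal, hypothetical redirects fragment and validate the syntax
cat > /tmp/vercel-example.json <<'EOF'
{
  "redirects": [
    { "source": "/docs/contributing/stack", "destination": "/docs/contribute/stack" },
    { "source": "/old-example-page", "destination": "/new-example-page", "permanent": false }
  ]
}
EOF
# Fails loudly if the JSON is malformed
python3 -m json.tool /tmp/vercel-example.json > /dev/null && echo "valid JSON"
```

You can run the same `python3 -m json.tool vercel.json` check against the real file in your working copy.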

Committing changes

It's best to create commits that are focused on one specific area. For example, create one commit for textual changes and another for functional ones. Another example is creating a commit for changes to a section of the handbook and a different commit for updates to the documentation. This helps the pull request review process and also means specific commits can be cherry picked.

First, stage your changes:

    git add [path-to-file]

For example:

    git add contents/docs/contribute/updating-documentation.md

Once all the files that have been changed are staged, you can perform the commit:

    git commit -m '[short commit message]'

For example:

    git commit -m 'Adding details on how to commit'
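The stage-and-commit sequence can be rehearsed end to end in a scratch repository. The file name and the inline identity settings below are illustrative; in your real clone, git uses your configured name and email.

```shell
# Stage and commit a file in a disposable repo
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
echo "docs update" > updating-documentation.md
git add updating-documentation.md
# Inline -c settings avoid touching your global git config
git -c user.name="Max" -c user.email="max@example.com" \
    commit -q -m 'Adding details on how to commit'
git log --oneline -1   # shows the new commit and its message
```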

Files that have been changed can be viewed within GitHub Desktop along with a diff of the specific change.

Image: Viewing changes in GitHub Desktop

Select the files that you want to be part of the commit by ensuring the checkbox to the left of the file is checked within GitHub Desktop. Then, write a short descriptive commit message and click the Commit to... button.

Image: Making a commit in GitHub Desktop

Push changes to GitHub

To request that your changes are merged into the main website branch, you must first push them to GitHub.

    git push origin [branch-name]

For example:

    git push origin posthog-website-contribution

When this is done, the command line will show output similar to the following:

    posthog-website-contribution $ git push origin posthog-website-contribution
    Total 0 (delta 0), reused 0 (delta 0), pack-reused 0
    remote:
    remote: Create a pull request for 'posthog-website-contribution' on GitHub by visiting:
    remote:      https://github.com/PostHog/posthog.com/pull/new/posthog-website-contribution
    remote:
    To github.com:PostHog/posthog.com.git
    * [new branch]        posthog-website-contribution -> posthog-website-contribution

This output tells you that you can create a pull request by visiting a link. In the case above, the link is https://github.com/PostHog/posthog.com/pull/new/posthog-website-contribution. Follow the link to complete your pull request.
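You can see the same push mechanics locally by using a bare repository as a stand-in for GitHub. Everything below is a sandbox; no real remote is touched, and the identity settings are illustrative.

```shell
# Simulate pushing a branch to "origin" using a local bare repo
tmp=$(mktemp -d)
cd "$tmp"
git init -q --bare origin.git       # plays the role of GitHub
git init -q work
cd work
git remote add origin ../origin.git
git checkout -q -b posthog-website-contribution
echo "change" > file.md
git add file.md
git -c user.name="Max" -c user.email="max@example.com" commit -q -m 'website edit'
git push -q origin posthog-website-contribution
git ls-remote --heads origin        # lists the branch now on the remote
```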

Once you have committed the changes you want to push to GitHub, click the Push origin button.

Image: Push to origin from GitHub Desktop

Create a pull request

Create a pull request to request that your changes be merged into the main branch of the repository.

Navigate to the link shown when you push your branch to GitHub. For example, https://github.com/PostHog/posthog.com/pull/new/posthog-website-contribution shown below:

    posthog-website-contribution $ git push origin posthog-website-contribution
    Total 0 (delta 0), reused 0 (delta 0), pack-reused 0
    remote:
    remote: Create a pull request for 'posthog-website-contribution' on GitHub by visiting:
    remote:      https://github.com/PostHog/posthog.com/pull/new/posthog-website-contribution
    remote:
    To github.com:PostHog/posthog.com.git
    * [new branch]        posthog-website-contribution -> posthog-website-contribution

With the branch published, click the Create pull request button.

Image: pull request from GitHub Desktop

This will open up a page on github.com in your default web browser.

If you are pushing to an existing branch, navigate to the posthog.com repo and switch to the new branch using the dropdown:

Image: GitHub branch switcher

Then, open the Contribute dropdown and click the Open pull request button.

Give the pull request a descriptive title and complete the details requested in the body.

If you know who you would like to review the pull request, select them in the Reviewers dropdown.

Preview branch

After a series of checks are run (to ensure nothing in your pull request breaks the website), Vercel will generate a preview link available in the Vercel bot comment. This includes all of your changes, so you can preview before your pull request is merged.

An initial build can take up to 50 minutes to run. After the initial build, subsequent builds should complete in under 15 minutes. We're limited to two concurrent builds, so if there's a backlog, this process can take longer.

Because Vercel charges per seat, we don't automatically invite all team members to our Vercel account. If your build fails, you can run pnpm build locally to see what's erroring out. If nothing is erroring locally, it's likely the build timed out in Vercel. The Website & Docs team monitors for failed builds, so they'll re-run it for you. If the build is urgent, give a shout in #team-website-and-docs and someone with Vercel access can trigger a rebuild for you.

Image: Preview branch

Note: Checks are run automatically for PostHog org members and previous contributors. First time contributors will require authorization for checks to be run by a PostHog org member.

Deployment

To get changes into production, the website deploys automatically from master. The build takes up to an hour, but can be delayed if other preview builds are in the queue.

Product interest tracking for onboarding

We track which products users have shown interest in by visiting product landing pages or docs. This data is stored using PostHog's cookie_persisted_properties feature, making it available across all posthog.com subdomains (including app.posthog.com) for onboarding personalization.

How it works

When a user visits a product-specific page (like /product-analytics or /docs/session-replay), we record that product's slug using posthog.register() with the property prod_interest. This property is configured in cookie_persisted_properties in gatsby/onPreBootstrap.ts, which means it gets stored in a cross-subdomain cookie automatically.

To read the interests, we use posthog.get_property('prod_interest') which returns an array of product slugs like ["product-analytics", "session-replay"].

We always store the most recent interests last in the array.

Code structure

The tracking is implemented in:

Reading interests on app.posthog.com

Since this uses PostHog's built-in cookie persistence, you can read the interests on any subdomain where PostHog is initialized:

const interests = posthog.get_property('prod_interest') || []
// interests = ["product-analytics", "session-replay", ...]

Expanding usage

Interest tracking is usually handled automatically because our website is well-structured. If you want to start tracking interest for a new product, you'll need to add a new entry to PRODUCT_SLUGS in src/lib/productInterest.ts.

Acknowledgements

This website is based on Gatsby and is hosted with Vercel.

How PostHog.com works

Engineering | Source: https://posthog.com/handbook/engineering/posthog-com/how-posthog-website-works

PostHog.com is built and maintained in-house by the Website & Docs team. You've probably never seen a Gatsby.js site like this before. Eli Kinsey is the mastermind behind how the site is structured.

For more context, read why we designed our website to look like an operating system.

The "operating system"

  1. At the top level, gatsby-browser.tsx loads ``
  2. `` renders the chrome of the "operating system"
  3. `` – the MacOS-style menu bar
  4. `` – the desktop app icons and desktop background
  5. `` – the chrome for each app and where the content renders
  6. ``
  7. `` loads `` and ``.

The apps

Each "app" is simply a page like a normal Gatsby site. There are a handful of apps:

  1. `` – used for all long-form content like the docs, handbook, blog
  2. `` – a WYSIWYG page editor
  3. `` – an OS-style file explorer
  4. `` – an email-like app
  5. `` – a slide deck

Each app can reference shared components like `` which contains the necessary navigational elements (like the back button, search, and filters).

Let's look at a product page to see how it uses the `` template.

Example: posthog.com/session-replay

This page (/src/pages/session-replay/index.tsx) includes two critical pieces:

  1. `` – the views where the content will display
  2. Defines the PRODUCT_HANDLE
  3. Specifies which slides should appear in this presentation using createSlideConfig

The template loads up all the various slide templates needed and sources the content using the `useProduct` hook.

useProduct hook

Each product's data is defined in a JSON file like:

/src/hooks/productData/session_replay.tsx

When the session_replay handle is passed into useProduct, it looks up the product's data like:

Note: PostHog maintains a billing API that contains pricing tiers and entitlements. This is how pricing data and usage tiers stay in sync between the website and product. The plan is to eventually move the product data into the billing API so there's a single source of truth for every product.

---

Services we use

| Service | Purpose |
| ------------- | ------------------------------------------------------ |
| Vercel | Hosting |
| Gatsby | Static site framework |
| GitHub | Source code repository |
| Ashby (API) | Applicant tracking system |
| Algolia (API) | Site search |
| Strapi | Headless CMS for community profiles and changelog data |
| PostHog | Analytics, feature flags |
| Inkeep | AI-powered community answers |

Image: Diagram of PostHog.com

Website content is stored in two places:

  1. Markdown/MDX files (in the GitHub repo) - _most website content_
  2. Strapi - _user-generated content_

Posting a new job listing

Engineering | Source: https://posthog.com/handbook/engineering/posthog-com/jobs

Creating a new job

You will now be on the settings page for the newly created job.

Custom fields

We use custom fields to connect various data to each job posting. Below is the description and purpose of each.

Teams

Teams is the only required custom field. The value(s) selected determines pineapple preference, objectives, mission, team lead, and which team members appear in the sidebar. If multiple teams are selected, all selected teams will appear as accordions in the sidebar, and the mission and objectives will be hidden.

Timezone(s)

Determines the preferred timezone for the position. If a value exists, it appears under the title.

Repo

Determines which repo to pull GitHub issues from if using the Issues custom field.

Issues

A comma-separated list of GitHub issue numbers relevant to the position. Queried at build-time and shown in the automatically created Typical tasks section.

Salary

Determines which role to use in the salary calculator. If no value is present, the job title is used. The calculator is not rendered if there is no matching role in the compensation calculator. If the role has been newly added to the compensation calculator, you'll need to add the role as an option to the custom field in Ashby global admin settings.

Mission & objectives

Determines whether the Mission & objectives section is shown on job listings

Creating a new job posting

Pages are only created for listed job postings. While viewing a job in Ashby:

From here, you can create a job description and add automations.

Job descriptions

Each job posting can have a different description. When creating a new description, separate sections by H2 if you would like them to be collapsible. When the site is built, sections that start with an H2 are transformed into collapsible elements and added to the table of contents.

Below is a list of the automatically created sections and how they work.

Salary

This section appears if the job title or Salary custom field matches a job in the SF benchmark file.

Benefits

This section appears on all job postings. The data in this section can be updated in the Careers benefits component.

Typical tasks

This section appears if any GitHub issues are added to the Issues custom field in Ashby. The custom field accepts a comma-separated list of GitHub issue numbers.

Objectives

This section appears if the team has a mission.mdx file in their team folder. Example

Interview process

This section appears on all job postings. The data in this section can be updated in the Job interview process component. To add a custom interview process for a specific job, add a new key to the roleInterviewProcess variable and assign an array of IInterviewProcess objects.

Example:

const roleInterviewProcess: Record<string, IInterviewProcess[]> = {
    'Site Reliability Engineer - Kubernetes': [
        defaultInterviewProcess[0],
        defaultInterviewProcess[1],
        {
            title: 'Technical interview',
            description: `You'll meet with an Engineer who will evaluate skills needed to be successful in your role.`,
            badge: '1 hour',
        },
        defaultInterviewProcess[3],
        defaultInterviewProcess[4],
    ],
}

Custom description for the /careers page

By default, we look for section headers (<h2>) in the job description to show in the summary that appears at the top of the careers page. We're sniffing for these subheaders (in this order):

If none of these are found, the job description will be blank.

If your job description has more creative titles, you can add a short custom description that _only_ appears on this section of the website. (This will take priority over the subheaders listed above.) Add this in the role's settings in Ashby under the _Website description_ field. It requires HTML, but here's a template you can use:

<p><strong>Things you definitely won't be doing:</strong></p>
<ul class="list-none p-0">
<li>❌ backlog grooming (it always sounded gross anyway)</li>
<li>❌ deciding what we build</li>
<li>❌ shielding developers from users</li>
<li>❌ project management/writing gazillions of tickets, RFCs, or PRDs</li>
</ul>

<p><strong>From you:</strong></p>
<ul class="list-none p-0">
<li>✅ SQL (any technical experience beyond this is a plus) - you must be able to be a self-serve PM, not relying on engineers to do analysis</li>
<li>✅ very proactive/organized</li>
<li>✅ collaborative</li>
<li>✅ several years of experience as a PM talking to users/interviewing</li>
</ul>

Apply

This section appears on all job postings. The input fields here directly reflect the Application Form assigned to the job. The Application Form can be found on the job's settings page.

Getting the job to appear on the site

If the job posting is published, the job will appear automatically the next time the site is rebuilt.

MDX components

Engineering | Source: https://posthog.com/handbook/engineering/posthog-com/markdown

There are some nifty MDX components available for use in Markdown content. These components are included globally, so you don't need to do anything special to use them (like renaming .md to .mdx or manually importing them at the top of the file).

Images

Product screenshots

The `ProductScreenshot` component encapsulates an image with a border and background. It's useful since the app's background matches the website background, and without using this component, it can be hard to differentiate between the screenshot and normal page content. It also optionally supports dark mode screenshots.

You use it by passing image URLs to the imageLight and imageDark props like this:

<ProductScreenshot
imageLight="https://res.cloudinary.com/dmukukwp6/image/upload/posthog.com/contents/handbook/images/tutorials/limit-session-recordings/sampling-config-light.png"
imageDark="https://res.cloudinary.com/dmukukwp6/image/upload/posthog.com/contents/handbook/images/tutorials/limit-session-recordings/sampling-config-dark.png"
alt="Sampling config shown set to 100% i.e. no sampling"
classes="rounded"
/>

Optionally pass zoom={false} if you don't want the image to be zoomable, otherwise it will be zoomable by default.

_Note: If you don't have a dark image, just leave out the imageDark prop and the light screenshot will be used for both color modes._

Image slider

You can create a slider or carousel of images by wrapping them in the <ImageSlider> component like this:

![posthog](https://res.cloudinary.com/dmukukwp6/image/upload/v1710055416/posthog.com/contents/images/screenshots/hogflix-dashboard.png)
![posthog](https://res.cloudinary.com/dmukukwp6/image/upload/v1710055416/posthog.com/contents/images/screenshots/hogflix-dashboard.png)

See an example in our open-source analytics tools post.

Videos

The `ProductVideo` component works the same as product screenshots (above) for videos uploaded to Cloudinary, and supports light and dark videos.

  1. Import the video(s) at the top of the post (directly following the MDX file's frontmatter and dashes):

<!-- prettier-ignore -->

---
export const NewFunnelLight = "https://res.cloudinary.com/dmukukwp6/video/upload/posthog.com/contents/handbook/images/docs/user-guides/funnels/new-funnel.mp4"
export const NewFunnelDark = "https://res.cloudinary.com/dmukukwp6/video/upload/posthog.com/contents/handbook/images/docs/user-guides/funnels/new-funnel-dark.mp4"
  2. Use the component wherever you want the video(s) to appear.

<!-- prettier-ignore -->

<ProductVideo
videoLight={NewFunnelLight}
videoDark={NewFunnelDark}
classes="rounded"
/>

_Note: If you don't have a dark video, just leave out the videoDark prop and the light video will be used for both color modes._

Embedding Wistia videos

This can be used in articles like tutorials or blog posts for longer-form videos (where the asset exceeds 20 MB and can't be uploaded to Cloudinary).

Embedding YouTube videos

While not an MDX component, a reminder that when embedding a YouTube video, you should do two things:

  1. Use the -nocookie variant of the YouTube URL, e.g.:
https://www.youtube-nocookie.com/embed/{VIDEO_ID}
  2. Add the allowfullscreen attribute to the iframe so users have the option to watch the video in fullscreen (useful for reading code snippets).

Example:

<iframe
    src="https://www.youtube-nocookie.com/embed/2jQco8hEvTI?start=375"
    className="rounded shadow-xl"
/>

Code blocks

The PostHog website has a custom code block component that comes with a number of useful features built-in:

Basic codeblock

Codeblocks in PostHog are created by enclosing your snippet in three backticks (```) or three tildes (~~~), as shown below:

{ "name": "Max, Hedgehog in Residence", "age": 2 }

This will produce the following codeblock:

{
    "name": "Max, Hedgehog in Residence",
    "age": 2
}

Adding syntax highlighting

Syntax highlighting can be added by specifying a language for the codeblock, which is done by appending the name of the language directly after the opening backticks or tildes as shown below.

{ "name": "Max, Hedgehog in Residence", "age": 2 }

This will produce the following output:

{
    "name": "Max, Hedgehog in Residence",
    "age": 2
}

Using tabs

You can use the `Tab.Group` component to create tabs in your code blocks. This is useful for showing multiple code snippets or examples in a single code block.

<Tab.Group tabs={[ 'Preview', 'Markdown']}> Preview Markdown

console.log('Hello, world!')

console.log('Hello, world!')

Supported languages

Here is a list of all the languages that are supported in codeblocks:

Frontend

| | |
| ----------------- | -------------- |
| HTML | html |
| CSS / SCSS / LESS | css / less |
| JavaScript | js |
| JSX | jsx |
| TypeScript | ts |
| TSX | tsx |
| Swift | swift |
| Dart | dart |
| Objective-C | objectivec |

Backend

| | |
| ------- | ----------- |
| Node.js | node |
| Elixir | elixir |
| Golang | go |
| Java | java |
| PHP | php |
| Ruby | ruby |
| Python | python |
| C / C++ | c / cpp |

Misc.

| | |
| -------- | ----------------- |
| Terminal | bash or shell |
| JSON | json |
| XML | xml |
| SQL | sql |
| GraphQL | graphql |
| Markdown | markdown |
| MDX | mdx |
| YAML | yaml |
| Git | git |

Note: If you want syntax highlighting for a snippet in another language, feel free to add your language to the imports in languages.tsx and open a PR.

Multi-language code blocks

You can use the <MultiLanguage> component to show code blocks in multiple languages.


console.log('Hello, world!')
print('Hello, world!')


Multiple code snippets in one block

With PostHog's MultiLanguage component, it's possible to group multiple code snippets together into a single block.

console.log('Hello world!')

<div>Hello world!</div>

Note: Make sure to include empty lines between all your code snippets, as well as above and below the MultiLanguage tag.

This will render the following codeblock:

console.log('Hello world!')
<div>Hello world!</div>

Specifying which file a snippet is from

You can specify a filename that a code snippet belongs to using the file parameter, which will be displayed in the top bar of the block.


Note: Make sure not to surround your filename in quotes. Each parameter-value pair is delimited by spaces.

This produces the following codeblock:

cloud: 'aws'
ingress:
    hostname: <your-hostname>
nginx:
    enabled: true
cert-manager:
    enabled: true

Code highlighting

Especially in long tutorials, you can highlight the important differences between steps using highlighting comments. It's much easier to read visual diffs than reading through the code block line by line.

| Comment | Effect | Usage |
| -------------- | ---------------- | ---------------------------------------- |
| // + | Green highlight | Represents additions in diffs |
| // - | Red highlight | Represents removals in diffs |
| // HIGHLIGHT | Yellow highlight | General emphasis without special meaning |


const a = 1
const b = 2
const c = a + b // +

console.log(a + b) // -
console.log(c) // +

console.log('end') // HIGHLIGHT


Collapsed code blocks

In some cases, such as large nested config files, you need readers to focus on a specific part of the code block while maintaining the context. You can do this by adding focusOnLines= to the code block. This collapses the code block and only shows the lines of code you specify.


{
    "projects": {
        "my-app": {
            "architect": {
                "build": {
                    "builder": "@angular-devkit/build-angular:application",
                    "options": {
                        "sourceMap": {
                            "scripts": true, // +
                            "styles": true, // +
                            "hidden": true, // +
                            "vendor": true // +
                        }
                    }
                }
            }
        }
    }
}


Mermaid diagrams

Code blocks can also be used to show mermaid UML diagrams. When using these diagrams, make sure to include a text description of the diagram afterwards for accessibility and LLMs.


sequenceDiagram
    Alice->John: Hello John, how are you?
    John-->Alice: Great!
    Alice->John: See you later!


Product list

Use the `<ProductList />` component to render a list of products sourced from the `useProduct` hook. By default, it links each product to `/{slug}` using the product's icon, color, and name.

Auto-source from a product data field (e.g. every product where wizardSupport is set):

<!-- prettier-ignore -->

<ProductList
    sourceField="wizardSupport"
    sourceValues={[true, { value: "In development", color: "red" }, { value: "Coming soon", color: "yellow" }]}
/>


Products are grouped in sourceValues order. Plain values (true, "some string") filter without an indicator. Object values ({ value, color }) also render a colored dot with tooltip text.
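As a rough sketch of that matching behavior (illustrative only, not the actual component code), the filtering and grouping could look like this:

```typescript
// Sketch of sourceValues matching for <ProductList /> (hypothetical).
type SourceValue = boolean | string | { value: boolean | string; color: string }

interface Product {
    name: string
    [field: string]: unknown
}

function groupProducts(
    products: Product[],
    sourceField: string,
    sourceValues: SourceValue[]
): { value: boolean | string; color: string | undefined; products: Product[] }[] {
    // Groups appear in sourceValues order; products whose field value
    // matches no entry are dropped entirely.
    return sourceValues.map((entry) => {
        const value = typeof entry === 'object' ? entry.value : entry
        const color = typeof entry === 'object' ? entry.color : undefined
        return { value, color, products: products.filter((p) => p[sourceField] === value) }
    })
}
```

Plain entries produce a group with no indicator color; object entries carry a color along for the dot and tooltip.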

Manual list of products:

<!-- prettier-ignore -->

<ProductList className="grid gap-4 grid-cols-2 not-prose" products={["product_analytics", "web_analytics", "session_replay"]} />

Manual list with field-based filtering and indicators:

<!-- prettier-ignore -->

<ProductList
    products={["product_analytics", "web_analytics", "feature_flags", "llm_analytics"]}
    sourceField="wizardSupport"
    sourceValues={[true, { value: "Coming soon", color: "yellow" }]}
/>


Only the products whose wizardSupport value matches a sourceValues entry will render. Other props: urlPrefix (default /), className, itemClassName, iconSize.

Wizard command

Use `` to render a copyable install button for the PostHog wizard CLI. Clicking the button copies the command to the clipboard and shows a toast notification.

The command automatically includes --region eu or --region us based on the user's feature flags.
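A minimal sketch of how the rendered command could be assembled. The `@posthog/wizard` package name and the boolean region flag are stand-ins here; the real component reads the region from feature flags:

```typescript
// Hypothetical sketch of the wizard command string (not the real component).
function wizardCommand(isEuRegion: boolean, latest: boolean = true): string {
    const pkg = `@posthog/wizard${latest ? '@latest' : ''}` // `latest` prop appends @latest
    const region = isEuRegion ? 'eu' : 'us' // derived from the user's feature flags
    return `npx ${pkg} --region ${region}`
}
```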

Props:

| Prop | Type | Default | Description |
| --- | --- | --- | --- |
| `latest` | boolean | `true` | Appends `@latest` to the package name |
| `slim` | boolean | `false` | Hides the "Learn more" link below the button |
| `className` | string | `''` | Additional classes for the button element |

Slim mode (button only, no "Learn more" link):

<!-- prettier-ignore -->

Without @latest (used on the homepage and wizard page):

<!-- prettier-ignore -->

Call to action

Adding `` to any article will add this simple CTA:

Don't overuse it, but it's useful for high intent pages, like comparisons.

Feature comparison tables

When comparing features between two or more products, use the `<ProductComparisonTable />` component, which sources data from the `src/hooks/competitorData/` directory and lets you compare specific features across multiple competitors.

<!-- prettier-ignore -->

<ProductComparisonTable
competitors={['posthog', 'amplitude']}
rows={['product_analytics']}
/>

Read more in the product & feature comparisons handbook page.

Captions

You can add captions below images using the following code:

Add your caption copy here

Here's an example of what it looks like:

Image: PostHog webshare pricing experiment

Adding the 'Buy Now' call to action and adjusting the text enabled Webshare to boost conversion by 26%

Customer quotes

Add a styled quote component using the following code:

Product-specific quote

<!-- prettier-ignore -->

<OSQuote
customer="significa"
author="tomas_gouveia"
product="web_analytics"
/>

Generic quote

<!-- prettier-ignore -->

<OSQuote
customer="lovable"
author="viktor_eriksson"
quote={0}
/>

We mainly use them in customer stories and some product pages.

Quotes are sourced from the useCustomers hook and can reference product-specific quotes or general quotes by someone at a company. Be sure to add the customer's information to the useCustomers hook in src/hooks/useCustomers.tsx.

Example

<!-- prettier-ignore -->

quotes: {
viktor_eriksson: {
name: 'Viktor Eriksson',
role: 'Software Engineer',
image: {
thumb: 'https://res.cloudinary.com/dmukukwp6/image/upload/q_auto,f_auto/viktor_00c779a706.jpg',
},
quotes: [
"PostHog is super cool because it is such a broad platform. If you're building a new product or at a startup, it's a no-brainer to use PostHog. It's the only all-in-one platform like it for developers.",
],
},
},

Collapsible sections

The combination of <details> and <summary> components enables you to add a collapsible section to your page. Useful for FAQs or details not relevant to the main content.

<details>
<summary>Can I specify some events to be identified and others to be anonymous for the same users?</summary>

Not if you already identified them. Once a user is identified, all _future_ events for that user are associated with
their person profile and are captured as identified events.
</details>

Tabs

Tabs enable you to display different content in a single section. We often use them to show different code examples for different languages, like in installation pages.

To use them:

  1. Import the Tab component.
  2. Set up Tab.Group, Tab.List, and Tab.Panel for each tab you want to display. The tabs prop in Tab.Group should be an array of strings, one for each tab. This enables you to link to each tab by its name.
  3. Add the content for each tab in the Tab.Panel components. You should use snippets for readability, maintainability, and to avoid duplication, but you can use multiple snippets in a single tab.

For example, here's how we set up the tabs for the error tracking installation page:


Error tracking enables you to track, investigate, and resolve exceptions your customers face. Getting this working requires installing PostHog:

<!-- prettier-ignore -->
Web
Next.js
Python

You can default to a specific tab by passing the tab name in the query string like:

/docs/product-analytics/installation?tab=web
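A sketch of how a default tab could be resolved from that query string (illustrative, not the site's actual logic; matching here is simply case-insensitive):

```typescript
// Hypothetical: pick the default tab index from a ?tab= query param,
// falling back to the first tab when absent or unrecognized.
function defaultTabIndex(tabs: string[], search: string): number {
    const requested = new URLSearchParams(search).get('tab')
    if (!requested) return 0
    const i = tabs.findIndex((t) => t.toLowerCase() === requested.toLowerCase())
    return i === -1 ? 0 : i
}
```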

Linking internally

Use Markdown's standard syntax for linking internally.

[Link text](/absolute-path/to/url)

Be sure to use _relative links_ (exclude https://posthog.com) with _absolute paths_ (reference the root of the domain with a preceding /).

| | |
| -------------------- | ------------------------------------------ |
| Correct syntax | `/absolute-path/to/url` |
| Incorrect syntax | `https://posthog.com/absolute-path/to/url` |
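A hedged sketch of a lint-style check for this convention (a hypothetical helper, not part of the site's tooling):

```typescript
// Hypothetical check: internal links must be relative (no scheme/host)
// and use an absolute path (leading slash).
function isValidInternalLink(href: string): boolean {
    if (/^https?:\/\//.test(href)) return false // absolute URLs are disallowed
    return href.startsWith('/') // must reference the domain root
}
```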

Open a new PostHog window

To open a link in a new window within the PostHog.com OS interface, use state={{ newWindow: true }} like:

<!-- prettier-ignore -->

Link text

Linking externally

The `<Link />` component is used throughout the site, and is accessible within Markdown. (When used _internally_, it takes advantage of <Link to="https://www.gatsbyjs.com/docs/reference/built-in-components/gatsby-link/" external>Gatsby's `Link`</Link> features like prefetching and client-side navigation between routes.)

While that doesn't apply here, using it comes with some handy parameters that you can see in action via the link above:

Example:

click here

Sometimes we link to confidential information in our handbook. Since the handbook is public, it's useful to indicate when a link is private so visitors aren't confused about why they can't access a URL (like a Slack link or private GitHub repo). Use the `` component for this. See an example on our share options page.

click here

Private links will always open in a new browser tab.

Mention a team member

Use this component to mention a team member in a post. It will link to their community profile and appears like this: Cory Watilo

There's also a photo parameter which will inline their photo next to their name like this: Cory Watilo

Mention a small team

Use this component to mention a small team in a post. It will link to their team page and appears like this:

The default version shows the team's mini crest and name in a bordered "chip" style. There's also a noMiniCrest parameter to omit the mini crest and border for inline usage like this:

Both versions will show the full team crest on hover. Clicking the tooltip will open the team page in a new window.

Embedded posts

You can embed what looks like ~~a Tweet~~ an X post using the <Tweet> component. It's used on the terms and privacy policy pages, but was componentized for use in blog posts to break up bullet points at the top of the post.

_Note: This does not actually embed an X post; it's just styled to look like one._

Here's what a post looks like. It's designed to have a familiar look that makes it easy to scan.

If you show multiple posts in a row, they'll be connected by a vertical line to make it look like a thread.

Usage

Be sure to change the alert message which appears if you click one of the action buttons (reply, repost, like).

<!-- prettier-ignore -->

<Tweet
className="mx-auto"
alertMessage="Gen Z? Don't get distracted. You're here to read our exciting embedded post component."
>
If you show multiple posts in a row, they'll be connected by a vertical line to make it look like a thread.

You can optionally center the post with the mx-auto class (shown in the example code, but _not_ used in the preview above).

MDX setup

Engineering | Source: https://posthog.com/handbook/engineering/posthog-com/mdx-setup

What better way to document MDX than with MDX?

Rationale

There were a few moving parts involved with setting up MDX so it might make sense to have them written down.

What's MDX?

Not in scope here - but it's essentially React in Markdown.

How do we make it work?

Page creation

Website pages are automatically created for all MD and MDX files using Gatsby's createPages API. Slugs are automatically generated based on the file title, and the design template for each page is determined based on the folder the file resides in.
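As an illustrative sketch only (the folder-to-template mapping below is made up, not the repo's real list), the idea looks like this:

```typescript
// Hypothetical sketch: slug from the file title, template from the folder.
function slugify(title: string): string {
    return title
        .toLowerCase()
        .replace(/[^a-z0-9]+/g, '-') // collapse non-alphanumeric runs into hyphens
        .replace(/^-|-$/g, '') // trim leading/trailing hyphens
}

function templateFor(filePath: string): string {
    // Illustrative mapping only; the real createPages logic differs.
    if (filePath.startsWith('contents/handbook/')) return 'handbook'
    if (filePath.startsWith('contents/blog/')) return 'blog'
    return 'default'
}
```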

Design templates

There are currently 5 templates:

Each template is passed a unique automatically generated ID that is used to query the data contained inside of the post.

The GraphQL query inside of each template will return everything we need, from content to frontmatter, and we use the component MDXRenderer to render the body, and MDXProvider to pass some context that is available to all MDX pages.

In this case, we pass references to components that can then be used without imports directly on MDX pages, like this hedgehog:

Because of the components passed to MDXProvider, I can include this hedgehog by just adding `` in my MDX file - no import needed.

However, if I want to include something from a module, I can also do so. Here's how one would insert a Transition component from Headless UI:


## Some Markdown

{/* ... */}

Currently, almost every component on the site is available automatically. This will eventually change because it causes some performance issues. For now, if you need a reference for which components you should be using in your MDX, check out our MDX components handbook page.

mdxImportGen

The mdxImportGen.js script handles global MDX imports automatically. This is currently a quick implementation that could be made more robust in the pre-commit process. Essentially, it generates a file based on all the components in our src/components directory, which is then used to pass the components to MDXProvider, making them available everywhere.

Doing globally available imports this way was important for 3 main reasons:

Merch store development

Engineering | Source: https://posthog.com/handbook/engineering/posthog-com/merch-store

Read this primer on how our merch store works.

Adding new products

Products need to be first created in Brilliant, then added to Shopify.

Create the product in Brilliant

Brilliant handles adding products in inventory. Once the product appears in inventory, it needs to be linked to the product in Shopify.

After the product is created, you'll need to find the variant_id. If the product has no variants (i.e. a sticker), you'll only need to enter one variant_id in Shopify. If the product has variants (i.e. a t-shirt with sizes like S, M, L, etc.), you'll need to enter one variant_id per variant.

To find the variant_id, click Download CSV from the inventory page.

Create the product in Shopify

  1. Give the product a name
  2. Description appears when the product sidebar is opened
  3. Add photos
  4. Set the product category
  5. Set the product status to Active
  6. For sales channels, make sure it's available in Shop, Headless PostHog Merch Store, and Shopify GraphQL API.
  7. Set the price
  8. Uncheck Track quantity as this is handled through the Brilliant API.
  9. Under Metafields, add a Product subtitle. This appears in the index view for the product.
  10. Save the product
  11. Reference the CSV downloaded from Brilliant and look for the variant_id column.
  12. Add the product to the Home page collection.
  13. Save the product
  14. Note: the website needs to be rebuilt for the product to appear. Run the /rebuild-website command in Slack. The site is typically rebuilt within 20 minutes.

Running the merch store locally

You'll need to set environment variables to source products from Shopify and build the merch store.

SHOPIFY_APP_PASSWORD=
GATSBY_SHOPIFY_STOREFRONT_TOKEN=

We don't include these by default as sourcing the products from Shopify takes an absurd amount of time. Ask the if you need these values.

About PostHog.com

Engineering | Source: https://posthog.com/handbook/engineering/posthog-com/overview

The is responsible for the PostHog.com website, but it takes a village to keep it running smoothly.

If you're new here, you might be interested in reading why our website looks like a desktop operating system.

| What | Who |
| ---------------------- | ------------ |
| Design & copy | Cory Watilo |
| Technical architecture | Eli Kinsey |
| Graphic design | Lottie Coxon |
| Docs | |
| Pricing data | |
| Job listings | |

Our website was featured on Dive Club shortly after it launched in September 2025. You can watch the interview with Cory Watilo below:

<iframe src="https://www.youtube-nocookie.com/embed/9GLzf6VCfuA?si=ZdZ9oDg-gH5cTJdk" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" className="rounded" allowfullscreen></iframe>

Custom presentations

Engineering | Source: https://posthog.com/handbook/engineering/posthog-com/presentations

Custom presentations are PostHog's version of landing pages.

They're designed to look like product slideshows, but can be more easily customized to include slides with custom content that span multiple products.

They can be used to tailor content for:

This is an MVP and ultimately needs to be refactored a bit. Check with Cory Watilo if you'd like to use this feature.

The example below shows a presentation that is personalized for a particular company and includes the person within PostHog assigned to that account.

Image: Custom presentation example

Features

How it works

Presentations are accessed via the URL pattern:

/for/{persona}
/for/{company-domain}/{role-or-id}

Examples:

/for/product-engineers
/for/engineering-managers
/for/product-managers
/for/engineering-directors
/for/example.com/product-engineers
/for/example.com/123456 ## see "Overriding default content" below
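The two URL patterns above could be parsed with a sketch like this (hypothetical, not the site's actual routing code):

```typescript
// Hypothetical parser for /for/{persona} and /for/{company-domain}/{role-or-id}.
type PresentationTarget =
    | { kind: 'persona'; persona: string }
    | { kind: 'company'; domain: string; roleOrId: string }

function parsePresentationPath(path: string): PresentationTarget | null {
    const parts = path.replace(/^\/for\//, '').split('/').filter(Boolean)
    if (parts.length === 1) return { kind: 'persona', persona: parts[0] }
    if (parts.length === 2) return { kind: 'company', domain: parts[0], roleOrId: parts[1] }
    return null
}
```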

The system will:

  1. Look up company data from Clearbit (using the company's URL)
  2. Load the appropriate presentation JSON
  3. Render slides with company-specific data
  4. Display sales rep information if available
The relevant files are organized like this:

src/
├── components/Presentation/
│   ├── Templates/          # Slide templates
│   │   ├── StackedTemplate.tsx
│   │   ├── ColumnsTemplate.tsx
│   │   ├── ProductTemplate.tsx
│   │   ├── PricingTemplate.tsx
│   │   └── BookingTemplate.tsx
│   └── Utilities/          # Shared utilities
└── presentations/          # JSON configuration files
    ├── default.json
    ├── product-managers.json
    ├── product-engineers.json
    ├── product-directors.json
    ├── engineering-managers.json
    └── dream-customers/
        └── example.com.json

Presentation structure

Here's a general structure for a presentation using the various templates. You can add multiple product slides by adding additional entries. See a full example on GitHub in product-engineers.json.

{
    "name": "Product Engineers",
    "config": {
        "thumbnails": false,
        "notes": false,
        "form": true,
        "teamSlug": "sales-cs"
    },
    "slides": {
        "overview": {
            "template": "stacked",
            "name": "Overview",
            "title": "Title goes here",
            "description": "<p>You can add the {companyName} and it will get inserted when enriched with Clearbit.</p>",
            "descriptionWidth": "@2xl:w-3/5"
        },
        "error_tracking": {
            "template": "product",
            "name": "Error Tracking",
            "handle": "error_tracking",
            "screenshot": "home",
            "title": "Title goes here",
            "description": "<p>Description</p>"
        },
        "all_in_one": {
            "template": "columns",
            "name": "All-in-one",
            "title": "Multiple products on one slide",
            "description": "Supports multiple columns of content",
            "content": [
                {
                    "handle": "product_analytics",
                    "title": "Product Analytics",
                    "description": "<p>Description goes here.</p>",
                    "screenshot": "funnelVertical"
                },
                {
                    "handle": "session_replay",
                    "title": "Session Replay",
                    "description": "<p>Description goes here.</p>",
                    "screenshot": "home"
                },
                {
                    "handle": "error_tracking",
                    "title": "Error Tracking",
                    "description": "<p>Description goes here.</p>",
                    "screenshot": "errorsCropped"
                }
            ]
        },
        "pricing": {
            "template": "pricing",
            "name": "Pricing",
            "title": "Pricing",
            "description": "PostHog offers usage-based pricing. This means you only pay for what you use, and you can set billing limits so you never get an unexpected bill.",
            "image": "/images/products/product-analytics/screenshot-billing.png"
        },
        "cta": {
            "template": "booking",
            "name": "Get a demo",
            "title": "Get a demo",
            "description": "<p><strong>No demos required</strong> – you can try PostHog without ever talking to us. But if you'd like a personalized demo, book a time.</p>"
        }
    }
}

Templates

Different templates support different features.

stacked

Content is stacked top to bottom and optionally supports an image which replaces the default Hogzilla background image.

Image: Stacked slide template

The above example does not use the image prop, thus Hogzilla is included.

"overview": {
    "template": "stacked",
    "name": "Overview",
    "title": "Title goes here",
    "description": "<p>You can add the {companyName} and it will get inserted when enriched with Clearbit.</p>",
    "descriptionWidth": "@2xl:w-3/5"
},

product

This imports the useProduct.ts hook to fetch product data based on the handle passed in, giving the template access to things like the product name, icon, color, and array of screenshots.

Image: Product template screenshot

In the above example, it uses the "home" screenshot from the llm_analytics product:

"llm_analytics": {
    "template": "product",
    "name": "LLM Analytics",
    "handle": "llm_analytics",
    "screenshot": "home",
    "title": "LLM Analytics",
    "description": "Understand how your users consume AI in your product, and monitor performance and cost when using different models."
},

columns

This is a multi-column layout that supports multiple products or features side-by-side.

There is currently no logic to wrap items, so it works best for 2-4 columns for now.

Image: Columns template example

"all_in_one": {
    "template": "columns",
    "name": "All-in-one",
    "title": "PostHog apps have great synergy",
    "description": "Identify a trend, see what happened, assign issues – all in one place.",
    "content": [
        {
            "handle": "product_analytics",
            "title": "Product Analytics",
            "description": "<p>Easily uncover user friction by following the drop-offs in a funnel.</p>",
            "screenshot": "funnelVertical"
        },

        {
            "handle": "session_replay",
            "title": "Session Replay",
            "description": "<p>Watch session recordings to understand friction in the user experience.</p>",
            "screenshot": "home"
        },
        {
            "handle": "error_tracking",
            "title": "Error Tracking",
            "description": "<p>Assign issues to engineers to get user problems solved quickly.</p>",
            "screenshot": "errorsCropped"
        }
    ]
},

pricing

Image: Pricing slide screenshot

"pricing": {
    "template": "pricing",
    "name": "Pricing",
    "title": "Pricing"
},

booking

Image: Booking slide screenshot

"cta": {
    "template": "booking",
    "name": "Get a demo",
    "title": "Get a demo",
    "description": "<p><strong>No demos required</strong> – you can try PostHog without ever talking to us. But if you'd like a personalized demo, book a time.</p>"
}

Customization

Overriding default content

To create an entirely personalized presentation, use the dream-customers folder. Set the JSON filename to the company's domain name and, inside the file, set an arbitrary ID that will be used in the URL, like:

/for/hasura.io/123456

Reference content from any persona file with inherit, or override the content by adding your own.

  "overview": {
      "template": "stacked",
      "title": "Hey John!",
      "description": "We've prepared this custom presentation specifically for the Hasura team to show how PostHog can accelerate your product development."
  },
  "error_tracking": {
      "inherit": "product-engineers",
      "slideKey": "error_tracking"
  },
  "feature_management": {
      "inherit": "engineering-managers",
      "slideKey": "feature_flags"
  },
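A sketch of how inherit/slideKey resolution could work, assuming local fields override inherited ones (as the example above implies; this is not the real implementation):

```typescript
// Hypothetical: resolve a slide that inherits from another persona file.
type Slide = Record<string, unknown> & { inherit?: string; slideKey?: string }
type PersonaFiles = Record<string, Record<string, Slide>>

function resolveSlide(slide: Slide, personas: PersonaFiles): Slide {
    const { inherit, slideKey, ...overrides } = slide
    if (!inherit) return slide
    // Pull the referenced slide from the persona file, then let any
    // locally defined fields win over the inherited ones.
    const base = personas[inherit]?.[slideKey ?? ''] ?? {}
    return { ...base, ...overrides }
}
```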

Display options

Use the config object in the JSON file that supplies the content for the presentation to customize how the presentation renders. This can be done for a persona, a specific company, or an individual.

All properties
"config": {
    "thumbnails": false, // hides slide thumbnails column
    "notes": false, // hides presenter notes drawer
    "form": true,  // shows the lead form
    "teamSlug": "sales-cs" // specifies which Small Team appears in the form
}

The thumbnails, notes, and form values can be overridden in the query string (independently), like:

/for/product-engineers?thumbnails=false&notes=false&form=true
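A sketch of that override behavior (illustrative only): each query param independently replaces the JSON config value when present.

```typescript
// Hypothetical merge of JSON config with query-string overrides.
interface DisplayConfig {
    thumbnails: boolean
    notes: boolean
    form: boolean
}

function applyQueryOverrides(config: DisplayConfig, search: string): DisplayConfig {
    const params = new URLSearchParams(search)
    const bool = (key: keyof DisplayConfig): boolean => {
        const v = params.get(key)
        return v === null ? config[key] : v === 'true' // absent param keeps the JSON value
    }
    return { thumbnails: bool('thumbnails'), notes: bool('notes'), form: bool('form') }
}
```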

These configuration options are remembered when using the _Share your windows_ link generator in the _Active windows_ pane. This is useful for sending a link to someone that will open multiple windows _and also_ remember the display options of a presentation.

Product presentations

The above properties also work for product presentations, like:

/llm-analytics?thumbnails=false&notes=false&form=true

Lead form

The lead form is hidden by default but can be enabled in a persona's JSON file, or displayed manually using the query param &form=true.

"config": {
    "form": true
}

Non-personalized (industry-specific) landing pages show avatars of the .

Image: Default small team

Company-specific landing pages show the by default.

Image: No assignment in Salesforce

This is because different URL patterns are intended for different purposes.

| Path | Purpose | Team |
| -------------------------- | -------- | ----------------------- |
| /for/{company}/{persona} | Outbound | New Business Sales Team |
| /for/{persona} | Inbound | Product-Led Sales Team |

Small team

The small team in the config object can be overridden for any persona, company, persona within a specific company, or completely custom landing page.

"config": {
    "form": true,
    "teamSlug": "sales-cs"
}

A specific small team can also be shown using &t={id}, based on the mappings in the TEAM_QUERY_MAP in src/components/Presentation/index.tsx. This works for product presentations, use case landing pages, personalized landing pages, and custom presentations.

| ID | Small Team |
| --- | ------------------- |
| 1 | sales-cs |
| 2 | sales-product-led |

On landing pages personalized to a specific company, we check if the account is assigned in Salesforce. This takes priority over any small team assignment in JSON and the t query param.

Image: Account assigned in Salesforce

Product & feature comparisons

Engineering | Source: https://posthog.com/handbook/engineering/posthog-com/product-comparisons

Keeping product comparison charts up-to-date across a large website with multiple products is tricky, so we've built a way to source data from a single place. That way, if a competitor adds a new feature (or updates an existing one), we can update the data in one place and have it automatically reflected across the entire website in existing product comparison tables, blog posts, and other documentation.

To do this, we need a source of record for:

By standardizing all features across all products and competitors, we can generate a comparison table without any hard-coded data.

Example

This is not an ordinary Markdown table. (In fact, it's not Markdown at all!)

<!-- prettier-ignore -->

<ProductComparisonTable
    competitors={['posthog', 'amplitude']}
    rows={[
        { label: 'Some product summaries' },
        'product_analytics',
        'experiments',
        { label: 'Cherry-picked rows about Product Analytics' },
        {
            path: 'product_analytics.pricing.free_tier',
            label: 'Free usage',
            description: 'Custom description for the pricing row',
        },
        'product_analytics.features.autocapture',
        'product_analytics.insights.sql_editor',
        'product_analytics.cohorts',
    ]}
/>

See more examples in the PostHog vs Amplitude blog post. All tables are dynamically rendered from data sourced from JSON arrays.

Product & platform features

Feature definitions for PostHog products are stored in:

/src/hooks/featureDefinitions/{productName}.tsx // individual products
/src/hooks/featureDefinitions/platform.tsx // overall platform

Session Replay example:

/src/hooks/featureDefinitions/session_replay.tsx

Features can live in the features node, or nested inside in a logical grouping. (This is a truncated example.)

export const sessionReplayFeatures = {
    summary: {
        name: 'Session Replay',
        description: 'Watch real user sessions to understand behavior and fix issues',
        url: '/session-replay',
        docsUrl: '/docs/session-replay',
    },
    features: {
        canvas_recording: {
            name: 'Canvas recording',
            description: 'Capture canvas elements in your app',
        },
        chat_with_recordings: {
            name: 'Chat with your recordings',
            description: 'Discover useful recordings using AI-powered chat',
        },
    },
    platform_support: {
        description: 'Record on web and mobile across major frameworks',
        features: {
            web_app_recordings: {
                name: 'Web app recordings',
                description: 'Capture recordings from single-page apps and websites',
            },
            mobile_app_recordings: {
                name: 'Mobile app recordings',
                description: 'Capture recordings in iOS and Android apps',
            },
            ios_recordings: {
                name: 'iOS recordings',
                description: 'Record sessions from iOS mobile apps',
            },
        },
    },
}

Competitor (& PostHog) data

Competitor data is stored in:

/src/hooks/competitorData/{competitorName}.tsx
/src/hooks/competitorData/posthog.tsx

Amplitude example:

/src/hooks/competitorData/amplitude.tsx

Feature-level data for competitors is stored in the same format, except that products are namespaced under the products node in a single file, instead of being spread across one file per product.

There's also a platform node below the products node.

export const amplitude = {
    name: 'Amplitude',
    key: 'amplitude',
    assets: {
        icon: '/images/competitors/amplitude.svg',
        comparisonArticle: '/blog/posthog-vs-amplitude',
    },
    products: {
        session_replay: {
            available: true,
            pricing: {
                free_tier: '1,000 recordings',
            },
            features: {
                canvas_recording: false,
                chat_with_recordings: false,
                clickmaps: false,
                conditional_recording: false,
            },
        },
    },

    platform: {
        deployment: {
            eu_hosting: true,
            open_source: false,
            self_host: false,
        },
        pricing: {
            free_tier: true,
            transparent_pricing: false,
            usage_based_pricing: true,
        },
    },
}
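Row references like product_analytics.pricing.free_tier resolve dot paths through nested nodes like these. A hypothetical sketch of that lookup (the real component's path handling may differ, e.g. in how it prefixes the products node):

```typescript
// Hypothetical: walk a dot path through nested competitor data.
function lookup(data: Record<string, unknown>, path: string): unknown {
    return path.split('.').reduce<unknown>((node, key) => {
        if (node && typeof node === 'object') return (node as Record<string, unknown>)[key]
        return undefined // missing intermediate node -> undefined
    }, data)
}
```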

Referencing data

There are several ways to assemble competitor tables, all using the `<ProductComparisonTable />` component.

Compare products between competitors

This will list out the top-level product names and descriptions.

<!-- prettier-ignore -->

<ProductComparisonTable
    competitors={['posthog', 'amplitude']}
    rows={[
        { product: 'product_analytics' },
        { product: 'web_analytics' },
        { product: 'session_replay' },
    ]}
/>

Render all items within a node

Reference a node like product_analytics.features to render all items inside that node.

This is helpful for comparing all features within a product without having to reference them individually.

<!-- prettier-ignore -->

<ProductComparisonTable
    competitors={['posthog', 'amplitude']}
    rows={[
        'product_analytics.features', // includes "Features" section header
        { label: 'Optional custom section header' },
        'dashboards' // only renders true/false for 'available' and sources text from dashboards.tsx using the 'summary' node
    ]}
/>

Compare specific features between competitors

If you want to cherry-pick specific features, just reference the key directly. (This is useful for blog posts that compare specific features between competitors in a manually set order.)

<!-- prettier-ignore -->

<ProductComparisonTable
    competitors={['posthog', 'amplitude']}
    rows={[
        {
            path: 'product_analytics.pricing.free_tier',
            label: 'Free usage',
            description: 'Monthly free tier',
        },
        { label: 'Core features' },
        'product_analytics.features.autocapture',
        'product_analytics.insights.sql_editor',
        'dashboards',
    ]}
/>

Override label/description but source values from competitor files

This is useful when you're referencing a global feature but want to tailor the label or description to the specific product or feature.

_Example: If there's a global data retention value of 7 years, then in reference to heatmaps you might want to say "Heatmap data retained for 7 years."_

{
    label: 'Custom label',
    description: 'Custom description about heatmaps',
    path: 'heatmaps.features.heatmaps',
},

Add custom line items with arbitrary values

If you need to add a custom row that doesn't exist in the competitor data, you can use the values property to specify a value for each competitor.

<ProductComparisonTable
    competitors={['posthog', 'fullstory']}
    rows={[
        {
            label: 'In-app prompts and messages',
            description: 'Send messages to users in your app',
            values: [true, false],
        },
        {
            label: 'Custom pricing tier',
            description: 'Special pricing available',
            values: ['Enterprise only', 'All plans'],
        },
    ]}
/>

The values array should have the same length as the competitors array, with each value corresponding to a competitor in order.

Section headers

Add section headers to organize comparison tables into logical groups. Headers only require a label property:

<ProductComparisonTable
    competitors={['posthog', 'amplitude']}
    rows={[
        { label: 'Core Features' }, // Section header - no description needed
        'product_analytics.features.autocapture',
        'product_analytics.features.cohorts',
        { label: 'Advanced Features' }, // Another section header
        'product_analytics.insights.sql_editor',
        'product_analytics.group_analytics',
    ]}
/>

Headers automatically span across all columns and are styled with a border to visually separate sections.

Product page overrides

Excluding sections

Product pages list out all sections within a product's feature set by default, but in some cases it doesn't make sense to do so.

For example, showing the platform.integrations section might make sense for the Product Analytics comparison, but not for LLM Analytics comparison where that product doesn't really integrate with the tools that are otherwise integrated across the PostHog platform.

If you want to exclude a section from rendering, you can use the excludedSections property.

<ProductComparisonTable
    competitors={['posthog', 'amplitude']}
    rows={['product_analytics.features']}
    excludedSections={['platform']}
/>

For product pages, this is handled by the excluded_sections property in the product's feature definition file.

/src/hooks/productData/llm_analytics.tsx:

export const llmAnalytics = {
    ...
    comparison: {
        companies: [
            {
                name: 'Langfuse',
                key: 'langfuse',
            },
            {
                name: 'Langsmith',
                key: 'langsmith',
            },
            {
                name: 'Helicone',
                key: 'helicone',
            },
            {
                name: 'Braintrust',
                key: 'braintrust',
            },
            {
                name: 'PostHog',
                key: 'posthog',
            },
        ],
        rows: ['llm_analytics'],
        excluded_sections: ['platform'], // or an individual node like 'platform.integrations'
    },
}

Excluding rows with missing data

By default, the component will show rows where a competitor's cell doesn't have a value. This can be overridden on a per-product basis by setting require_complete_data: true in the product's feature definition file.

/src/hooks/productData/product_analytics.tsx:

export const productAnalytics = {
    ...
    comparison: {
        rows: ['product_analytics'],
        require_complete_data: true,
    },
}

Managing the company roadmap

Engineering | Source: https://posthog.com/handbook/engineering/posthog-com/roadmap

Creating a new roadmap item

Image: plus button at top

Roadmap fields

Title

The title of the roadmap item. Self-explanatory.

Description

A brief description of what the roadmap item intends to accomplish.

Projected completion date / Date completed

The projected completion/completion date of the roadmap item. If the “Complete” checkbox is checked, this field label will change from “Projected completion date” to “Date completed”.

This field also controls where the roadmap item appears on the roadmap page.

Category

Used to group roadmap items together. We only use this field to categorize the milestones on the homepage.

Add GitHub URL

If a GitHub issue is relevant to the roadmap item, you can paste the entire URL here. There is no limit on how many issues you can attach.

This field is used to display GitHub issue titles and links on roadmap items under the “Under consideration” section. It also determines the progress of the roadmap items under the “In progress” section.

Image

Used to show album art for roadmap items under the “In progress” section. Images should be square, at least 200 x 200 pixels, and not contain any borders or shadows. Images are optional. If you need a new image, request it through the normal process.

Complete

Used to determine if the roadmap item is complete. If checked, the date field will change from “Projected completion date” to “Date completed”.

Milestone

If checked, the roadmap item will appear on the roadmap section of the homepage.

Beta available

If checked, buttons under the “In progress” section change from “Subscribe for updates” to “Get early access”.

Notify subscribers

Only appears when editing an existing roadmap item. See the notifying roadmap subscribers section for more info on how to use this feature.

Notifying roadmap subscribers

After checking the “Notify subscribers” box, you’ll see a couple of new fields.

Subject

The subject of the notification email.

Content

The body of the notification email. Supports Markdown.

This field will auto-populate with any changes you’ve made to the roadmap item. For instance, if you check the “Beta available” field before clicking “Next”, the content will be auto-populated with “Beta is now available”.

View subscribers

Click this button to see all subscribers who will receive the email.

Once your email body and subject are ready, click “Update & notify subscribers” to add the emails to the Customer.io email queue. Any emails sent from Squeak are automatically placed in “Draft”.

To send the emails, you must log in to Customer.io > Click “Broadcasts” > Click “API Triggered Broadcasts” > Click “Squeak! Roadmap item” > Click “Drafts”. From here, you can select and manually send all roadmap notification emails.

Note: This will change in the future. Once we determine this works as intended, we can automatically send emails directly from Squeak and cut out the Customer.io steps.

Managing small teams

Engineering | Source: https://posthog.com/handbook/engineering/posthog-com/small-teams

Small team pages are managed in a few places:

  1. MDX files in the repo under /contents/teams/{team-name}
  2. Team records in our CMS, with all fields editable directly on the small team page
  3. Small team FAQs

Team page content

Any MDX files in the repo will display below the team members and _recently shipped_ sections.

Quarterly goals

We're moving toward having quarterly goals in their own MDX files, like 2024-Q1.mdx. This will allow us to show the current team goals, while displaying previous goals in an accordion.

Until then, when adding quarterly goals for a new quarter, add them to the index.mdx file and move the previous goals into a section below them (rather than deleting):

## Previous goals

<details>
<summary>Q3 2024 goals</summary>

### Goal 1

- Details
- More details

### Goal 2

- Details
- More details

</details>

See an existing team's page as an example of how it will look.

---

Team management

Creating a new small team

Website steps
  1. Make sure you're logged in to your community account
  2. Navigate to /teams
  3. Click the _New team_ button
  4. Fill in all fields
  5. Click _Save & publish_
Repo steps
  1. Create a directory for the team in /contents/teams/{team-name} and duplicate index.mdx and objectives.mdx from another team as a starting point

The new team will be added to the _Teams_ page on the next build.

Editing a small team

  1. Make sure you're logged in to your community account
  2. Navigate to the small team page you wish to edit
  3. Click the _Edit_ button (top right corner of the app window)
  4. Edit the desired fields
  5. Click _Save_
Add content to the small team's page

Visit the new small team page to:

  1. Add team members (log in, then click the _Edit_ button)
  2. Assign the team lead (click the crown icon)
  3. Update the team photo and caption (click the team photo to upload)
  4. Update the team's mission
  5. Update the team crest (click _Edit crest_ to customize your team's crest)
Other tasks
  1. Request a custom team crest from Lottie. Describe your team with a few adjectives, maybe physical tools that can be used in an illustration, and a sentence or two about what you do. She'll create two versions: a large one (for your small team's page) and a mini crest used in other places (like the careers page).
  2. Create a new team on GitHub and remove the new members from their previous team. If moving a previous team lead, remove their team lead status from the previous team first.
  3. Give that newly-created team Direct Access with Write permission to the posthog and posthog.com repositories, as well as any other repos they will be contributing to frequently. This allows team members to request review from their team instead of having to tag members individually.
  4. Create the new feature/team-{team-name} labels on GitHub.
  5. Add the team's feature ownership to the feature list.
  6. On Slack, create a new channel called #team-{team-name}. Add a new People > User group with the handle @team-{team-name}-folks. Add / remove people from other groups as necessary.
  7. If there are existing forum topics or roadmap items, re-assign them to the new team.
  8. Update small team names in Ashby. These are used to categorize jobs by team on the careers page.

Managing a small team

To manage content on the small team page, see the Add content to the small team's page section above.

Renaming a small team

This requires coordination with the team, as updating team names involves changing slugs, which will break builds if not done in the correct order. Ask the #posthogdotcom team for help.

Removing a small team

Ask the #posthogdotcom team for help.

PostHog.com site architecture

Engineering | Source: https://posthog.com/handbook/engineering/posthog-com/technical-architecture

PostHog.com doesn’t behave like a normal website. Instead, it runs inside a desktop-style environment where every page is a draggable window. This guide explains how that system works under the hood.

Core architecture

PostHog.com runs on Gatsby with a custom windowing system built using React context providers. The entire site operates inside a desktop-like environment where traditional page navigation is replaced by window management.

At a high level, every page is wrapped in the App Provider, which manages global state and window logic. The Wrapper renders the desktop interface, and each page is displayed inside an AppWindow component on the Desktop.

Key components

How pages become windows

Every page in the site is wrapped using Gatsby's wrapPageElement API in gatsby-browser.tsx:

export const wrapPageElement = ({ element, props: { location } }) => {
    return (
        // The page element is handed to the App Provider, which renders
        // it as a window (the provider component name here is illustrative)
        <AppProvider location={location}>{element}</AppProvider>
    )
}

When Gatsby loads a page, it passes the page element and the location object. These get passed to the App Provider, which converts them into windows.

The App Provider system

Located at src/context/App.tsx, the App Provider is the core of our windowing system.

Window state management

The App Provider maintains an array of active windows in state:

const [windows, setWindows] = useState<AppWindow[]>([])

Each window object contains:

Core functions

Key window management functions include:

Window routing behavior

The App Provider decides whether to create, focus, or replace a window based on navigation state:

  1. New window – If newWindow: true is passed in location state, or no existing window matches the path
  2. Focus existing – If a window with the same path already exists, bring it to the front instead of creating a duplicate
  3. Replace – For standard navigation without newWindow: true, replace the content of the focused window

This prevents duplicate windows for the same route while still allowing intentional multi-window behavior.
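The routing rules above overlap slightly; here is a minimal sketch of one plausible resolution, where standard navigation reuses the focused window (names and shapes are illustrative, not the actual src/context/App.tsx code):

```typescript
// Illustrative types; the real AppWindow object holds more state
type AppWindow = { path: string; key: number }
type Action = 'new' | 'focus' | 'replace'

function routeWindow(
    windows: AppWindow[],
    path: string,
    state?: { newWindow?: boolean }
): Action {
    // 1. Explicit request for a new window
    if (state?.newWindow) return 'new'
    // 2. A window for this path already exists: bring it to the front
    if (windows.some((w) => w.path === path)) return 'focus'
    // 3. Standard navigation: replace the focused window's content
    return 'replace'
}
```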

App settings configuration

Window behavior is controlled by the appSettings object in src/context/App.tsx. Each route can have custom settings:

const appSettings: AppSettings = {
    '/': {
        size: {
            min: { width: 700, height: 500 },
            max: { width: 800, height: 1000 },
            fixed: false,
        },
        position: {
            center: true,
            getPositionDefaults: (size, windows, getDesktopCenterPosition) => {
                // Custom positioning logic
                // Currently only offsets the homepage window
                // so the default background is always fully visible
            },
        },
    },
    // More route configurations...
}

Configuration options

The Wrapper component

src/components/Wrapper/index.tsx handles the actual desktop rendering:

export default function Wrapper() {
    const { windows, constraintsRef } = useApp()

    return (
        <div className="fixed inset-0 size-full flex flex-col">
            <div ref={constraintsRef} className="flex-grow relative">
                    {windows.map((item) => (
                        // Render each active window (props here are illustrative)
                        <AppWindow key={item.key} item={item} />
                    ))}
            </div>
        </div>
    )
}

It renders:

It also provides drag constraints for window movement via constraintsRef.

Window implementation

Individual windows are implemented in src/components/AppWindow/index.tsx using Framer Motion for animations and drag interactions. Each window is wrapped in a Window Provider so that child components can access the current window object via the useWindow hook.

Key features

Window lifecycle

  1. Creation – New AppWindow object added to state
  2. Mounting – Component mounts with entrance animation
  3. Interaction – User can drag, resize, minimize, close
  4. Unmounting – Exit animation before removal from state

Experience modes

The site supports two experience modes controlled by siteSettings.experience:

During development you can manually force boring mode by setting siteSettings.experience = 'boring'. This is useful for debugging.

Keyboard shortcuts

Global keyboard shortcuts are handled in the App Provider:

Navigation and search

Appearance

Window control

SEO compatibility

Despite the desktop interface, the site maintains full SEO compatibility:

Development workflow

When working on the windowing system:

  1. Test window creation – Ensure new pages create windows properly
  2. Check positioning – Verify windows open in expected locations
  3. Test interactions – Drag, resize, minimize, close functionality
  4. Verify animations – Smooth entrance and exit animations
  5. Mobile compatibility – Ensure fallback to boring mode works

Common debugging

This architecture allows PostHog.com to feel like a desktop operating system while maintaining the benefits of a static website for performance and SEO.

Product design process

Engineering | Source: https://posthog.com/handbook/engineering/product-design-process

No product design within small teams

We encourage engineers to act like feature owners, carrying a project from ideation to completion. We maintain a design system in Storybook, so engineers can build high-quality features independently, as much as possible.

Because engineers choose their sprint tasks near the beginning of a sprint (and product doesn't plan tasks _for_ engineers in advance), our process doesn't leave room for a product manager and a designer to work closely together before a task gets selected by an engineer.

In our process of short, 2-week sprints with no pre-planning, design would become a blocker to an engineer quickly iterating on a feature. Thus, engineers don't get support from product designers. Product designers should deliver high quality components. The product teams should have people in them that can ship good-enough quality interfaces using those components. If that's not true, we should hire or move people around.

Learn more about how we decide this in our guide to working with product designers, for engineers.

Requesting artwork and brand materials

Need some custom artwork? Read the art and branding request guidelines.

Portfolio

Product Design, for Engineers

Engineering | Source: https://posthog.com/handbook/engineering/product-design

We believe that everyone is a designer. Because we hire generalists, there is no expectation that every project should start by running through design _first_. It is up to you when to involve our product designers in your work.

You should start by identifying the stage and goals of your project.

v0.1 or v2?

As the feature owner, you should decide whether you're building a very basic first iteration of something, or improving an existing experience.

There are two paths for creating the first version of a product: a v0.1, or an even earlier MVP.

v0.1

If we're attempting to reach parity on a product or feature with other competitors in the space – and there's a clear path toward how a product should work or look – there's no need to loop in a designer. You should make your best judgement, while leveraging our design system to build your feature.

MVP

If you're shipping an entirely new feature (e.g. SQL for PostHog), then you should figure out if any users even care (!), which usually means creating an MVP and releasing it behind a feature flag to some friendly users. (Pro tip: make friends by being support hero.)

During both of the above approaches, designers are happy to provide light recommendations that will improve the user experience without becoming a blocker to shipping.

v2

If you're improving an _existing_ feature that is popular, you are probably creating v2. Typically when we decide to ["Nail [a specific feature]"](/blog/product-360#4-we-created-two-very-basic-frameworks), it's worth working closely with design to figure out how we can _10x_ our product vs. competitors.

However you're building, please _communicate_ to product design what your expectations are!

Feature Complexity

The more complex a feature is to implement, the more likely it is that involving product design will make you faster.

Your design skill

We generally hire full stack engineers, but some people think more like designers than others. This is fine - you should play to your strengths.

The less strong you are at design, the more we'd encourage you to involve a product designer.

If you're unsure about your skill level, ask a product designer for direct feedback. This is a book we'd recommend if you want to learn the mindset.

Scenarios for looping in product design

If you built something and just need some polish...

Feel free to share a link (or screenshot) of what you've built. We can provide UX or design feedback for your consideration.

If you built something and realize it needs some UX love...

Share a link (or screenshot) of what you've built. Depending on the state of the project, we can either go back to the wireframe stage to rethink some things, or figure out a phased approach to incremental improvement.

If you designed your own wireframes or mocks...

Sometimes if you have domain knowledge or have been thinking about a project for a while, it might make more sense for you to start the design process. Feel free to share with us for a second opinion, or if you think certain UIs or flows are suboptimal.

Need help brainstorming a flow? Pair with a product designer

If you'd like the help of a product designer on an MVP/v0.1-type project, a 30-60 min Zoom working session is a great way to brainstorm and sketch out ideas. Since our design team is small, we try to avoid too much "homework".

Usually during quick syncs like this, it's enough to help an engineer work through complex UX issues. Reach out to Cory if you're interested in a synchronous session like this.

Product design capacity

Sometimes product design may push back if they simply don't have capacity. It's subjective when this may happen, and it'll usually be in cases where they feel they won't be as helpful based on the above.

Read more about how product design works at PostHog - _it's very unique!_

How to do product, as an engineer

Engineering | Source: https://posthog.com/handbook/engineering/product-engineering

Good product engineers, bad product engineers

Good product engineers:

Bad product engineers

How to

Validate ideas

Despite what the industry tells you, it's debatable how well you can validate ideas up front (see: the number of startups that think they'll succeed based on user interviews, then find they can't get any users). Just shipping is often the best way to validate an idea. When we built PostHog, Tim and James had to pivot 5 times – despite getting positive feedback on new ideas almost _every time_. Talking to users upfront can probably help remove totally stupid ideas fast, but for the majority of "this could work" ideas, it offers only limited benefit in our experience.

This gives you the best evidence (do people _actually_ use it, and what do they think), but _potentially_ at the highest cost as you have to build it! The challenge with this approach is making sure you de-scope the first version of the product or feature enough that users will at least try to use it, so you get enough signal that they care, without damaging our brand because the experience is so poor.

So, when you ship something:

Just shipping makes sense when it's very obviously in line with our company strategy (which is generally proven), and you can descope it successfully. This is almost _everything_ that you may ever build here. The key is to manage the rollout carefully.

Products at PostHog generally go through three phases, and considering your phase is important when you ship new features:

1. Pre PMF at PostHog
2. Figuring out PMF at PostHog
3. Post PMF at PostHog

There are plenty of other techniques, that you can do in parallel to get a signal on a new idea:

Ship things iteratively and follow up

Iterate with users

A note on attitude first - any kind of feedback, bug report, complaint or usage is a gift from users. It's easy to get dismissive or frustrated when people don't "do what we want"! Worst case scenario is that we get ignored.

Handling users well is really important. If we do a good job responding to feedback:

  1. The product improves because we do a better job at building what users want.
  2. We get marketing benefits because the user will be impressed and will tell their friends.
  3. We get more feedback because it teaches people that we listen and that we care.

Tone matters a lot. Whenever you are messaging a user, please consider:

So, how do you make yourself compelling to engage with?

The tone is your starting point. Send something informal and human. You are explicitly trying to avoid sounding like a mega corporation that treats users like numbers. You are a human, your users are human. Be friendly, lighthearted, and fun. Make it clear the message isn't automated if you can.

If you _must_ automate messages for whatever reason, make them quirky and informal and human. "Yo I'm Manoel, my job at PostHog is making sure mobile users are happy. It looks like this includes you! I build X, Y, Z here – is there any chance we could talk about X new thing? Here's my calendar or respond and we'll find time!" sort of vibe. Don't make messages long if you want people to do something – one or two sentences.

The medium matters. The easier something is to spam, the harder it is to get hold of people. For example, email gets ignored far more than Slack or X.

Response times are very important. If you can catch someone _whilst_ you're top of mind, you are likely to get 20x the response rate. That means within a minute or two of receiving a message. There is a huge drop off if you don't respond for 30+ minutes. Obviously this isn't always possible, but take opportunities if you happen to be online at the same time as someone you need feedback from. I once ran a call center – if we phoned someone who made an enquiry within 5 minutes, it was 9x more likely we'd get hold of them.

Closing the loop is the final point. If a user gives feedback or asks for something, you should ultimately respond with:

Closing the loop with the above shows people we've listened and considered their points carefully, and that we respect their opinion. This means they will continue to give us feedback.

Talk to users

If you're talking to a user, there are some basic principles if you want to be a good product engineer.

Use a lot of open ended questions. Ask things like:

Look for evidence that users have _actually done anything_ about the problems they say they have.

People want to be likable, so they'll often say they want what you're working on, even if they don't. Lots of features or products are nice-to-have versus must have. When something is a nice-to-have, people will act interested but won't get around to actually using it.

Ask things like:

Write down every interview. This helps when you, or the rest of the team, come back to consider other products or features in the future.

Revenue and forecasting

Engineering | Source: https://posthog.com/handbook/engineering/revenue-and-forecasting

The team maintains the revenue dashboards and queries that are used to understand:

  1. What our historical revenue record looks like
  2. What our revenue is expected to be this month
  3. What our churn, growth, expansion, and contraction look like
  4. Which customers have done the above activities
  5. etc

Currently, all revenue dashboards can be found in Metabase (though we hope to have them all in PostHog's own data warehouse soon 👀).

Important dashboards

(these require internal access)

FAQ

How is revenue attributed to a certain month?

Revenue is attributed to a given month based on the end-date of the invoice period. For example, an invoice that has a period of 2023-01-12 to 2023-02-12 will be counted in the revenue for 2023-02.

Some invoices cover multiple months. In this case, in the invoice_with_annual table (which is what all our dashboards use) we take the total amount of the invoice and divide it by the number of months the invoice covers, which gives us the MRR. We then generate a row for each month with that MRR so we can count that revenue into our monthly ARR/churn/expansion/etc calculations.
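As a sketch, the proration described above might look like this (the function and row names are assumptions, not the actual invoice_with_annual schema):

```typescript
// One MRR row per month covered by a multi-month invoice
type MonthlyRow = { month: string; mrr: number }

function prorateInvoice(total: number, months: string[]): MonthlyRow[] {
    // Total invoice amount divided by the number of months covered gives MRR
    const mrr = total / months.length
    // Generate a row for each month so it counts toward monthly calculations
    return months.map((month) => ({ month, mrr }))
}
```

For example, a $3,600 invoice covering January through March would produce three rows of $1,200 MRR each.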

When is it a "forecast" vs a real, complete number?

As soon as any given month starts, we start closing invoices. As the month goes on and customers' invoice periods end, we close more invoices. This means that as the month goes on, we get more and more confident about what our revenue will be for that month. The month's revenue can still change after the month is over, however, due to delayed payments. This is generally not a hugely significant number, but it is something to be aware of.

How do we forecast a customer's revenue?

Our revenue is based on usage, so we do some basic math to make an educated guess about how much usage a customer will have in the current period.
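A hedged sketch of that educated guess: extrapolate the usage observed so far across the full billing period (the real model likely accounts for more than this):

```typescript
// Naive linear extrapolation of usage over the current billing period
function forecastUsage(usageSoFar: number, daysElapsed: number, daysInPeriod: number): number {
    if (daysElapsed === 0) return 0 // nothing observed yet
    // Average daily usage so far, projected over the whole period
    return (usageSoFar / daysElapsed) * daysInPeriod
}
```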

How are cancelled bills handled? Are those forecasted?

As soon as someone cancels their account, their invoice is immediately closed. The revenue from that invoice immediately goes into the "completed" pile.

When are invoices updated?

A task is run nightly to sync the last 2 months of completed invoices, as well as all upcoming invoices for all customers. After the task is complete, the invoice_with_annual view is updated with the fresh data.

SDK guidelines

Engineering | Source: https://posthog.com/handbook/engineering/sdks/guidelines

These are living guidelines, and they're meant to help us make better tradeoffs, not to be a gatekeeper. If a guideline doesn't fit your SDK, don't treat it as a blocker. Talk about it, write down the decision, and move on with context for the next person.

The big idea: PostHog SDKs run inside customer applications. That means customers lend us trust every time they install one. Our job is to be useful, boring in production, and safe to run in places we don't control.

Make the default experience excellent

Most users never read every option. They install the package, copy the quickstart, and hope it works.

Good defaults matter more than lots of configuration. Capture the right context by default, batch sensibly, retry carefully, and avoid making users learn PostHog internals before they see value. Configuration is still important, but it should feel like customization rather than a requirement to get a safe baseline.

A clean install should get developers to first value quickly: install the package, initialize PostHog, and capture a test event or evaluate a feature flag in a few minutes. Quickstarts should work from a new project without hidden setup.

In general, features should be enabled by default unless there's a good reason not to, such as privacy risk, compatibility risk, performance risk, or platform limitations. Users should be able to opt in and out of features, and ideally high-level feature controls should also exist in PostHog project settings through remote config. When the reason is privacy-sensitive data, see Treat privacy as a product feature.

Don't break the host application

The SDK should never be the reason a customer's app crashes, slows down dramatically, fails to build, or starts behaving strangely.

Prefer graceful degradation over cleverness. If feature flag polling, replay capture, networking, storage, or background work fails, the app should keep running. In most cases, failing silently with useful debug logging is better than surprising the customer at runtime. The logger is our friend here: use it to explain what happened without making the host app pay for it.

Silent failure is not always the right default. Initialization errors, invalid configuration, unsupported hosts, project token problems, and explicit customer-called APIs should surface clear, idiomatic errors when the customer can act on them.

Retries and timeouts should be bounded, predictable, and documented. Avoid retry behavior that surprises customers, creates duplicate work, or hides persistent failures.
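As an illustration of bounded, predictable retry behavior (names and numbers here are assumptions, not any actual SDK's implementation):

```typescript
// Exponential backoff schedule: bounded attempts, capped delay, no surprises
function backoffDelays(maxAttempts: number, baseMs = 500, capMs = 30_000): number[] {
    // e.g. 500ms, 1s, 2s, 4s, ... capped at capMs
    return Array.from({ length: maxAttempts }, (_, i) => Math.min(baseMs * Math.pow(2, i), capMs))
}

// Bounded retry: gives up after maxAttempts and surfaces the last error
// instead of retrying forever and hiding persistent failures
function withRetries<T>(fn: () => T, maxAttempts: number): T {
    let lastError: unknown
    for (let attempt = 0; attempt < maxAttempts; attempt++) {
        try {
            return fn()
        } catch (error) {
            lastError = error
        }
    }
    throw lastError
}
```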

If an SDK can't support a platform, framework version, or runtime, make that clear at install time or startup. Don't make customers discover it through confusing production errors.

Keep dependencies boring

Every dependency adds size, security surface area, licensing questions, maintenance work, and compatibility risk. It can also introduce malware or compromised packages, naming clashes, dependency resolution surprises, runtime breakage from a transitive version change, and noticeably larger binaries or bundles. Use dependencies when they clearly improve the SDK, but be skeptical of adding them to the core path.

A good rule of thumb: basic event capture should work with as little extra machinery as the platform reasonably enables. Optional integrations can have optional dependencies, but the base SDK should stay lean.

If a specific feature needs a specific dependency, consider making that feature a separate module or package. For example, Session Replay may need image or video encoding dependencies, and iOS Error Tracking may need crash reporting dependencies. Users should have a clear way to opt out of that feature and dependency when they don't need it, can't ship it, or need a smaller binary.

That said, dependencies are sometimes the right choice. Some platforms don't provide safe basic primitives, such as an HTTP layer, storage, or concurrency tools. In those cases, use a boring, well-maintained dependency. If dependency risk is high but the code is small and stable, vendoring can also be a reasonable option. Document the tradeoff either way.

Respect the platform

Each SDK should feel natural in its language and ecosystem. Follow platform naming conventions, package manager expectations, async patterns, error handling style, logging conventions, and test tooling.

Consistency across PostHog SDKs is useful, but not at the cost of making a Ruby SDK feel like JavaScript, or a Swift SDK feel like Python. Prefer a small shared vocabulary – capture, identify, alias, flush, shutdown – and let the platform shape the details.

At the same time, don't make SDKs different for the sake of it. Users move between SDKs, and LLMs often help port examples from one language to another. Keep names, concepts, method behavior, and configuration shapes as close as the platform reasonably enables.

Be careful with resources

SDKs often run in hot paths, mobile apps, serverless functions, CLIs, browsers, background workers, and long-lived servers. Resource usage needs to be boring too.

Watch for memory growth, unbounded queues, aggressive timers, excessive network calls, large payloads, lock contention, startup cost, and battery usage. Add backpressure where possible. If the SDK holds data in memory, try to provide a clear maximum size for that data or queue, and make it configurable when customers may need to tune it.
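A minimal sketch of that idea, assuming a drop-oldest policy (real SDKs may prefer drop-newest or flush-on-full, and `maxSize` would be a documented, configurable option):

```typescript
// Bounded in-memory queue with drop-oldest backpressure: memory use has a
// clear ceiling, and drops are counted so they can surface in debug logs.
class BoundedQueue<T> {
  private items: T[] = []
  public droppedCount = 0

  constructor(private maxSize: number) {}

  enqueue(item: T): void {
    if (this.items.length >= this.maxSize) {
      this.items.shift() // drop the oldest rather than grow without bound
      this.droppedCount++
    }
    this.items.push(item)
  }

  drain(): T[] {
    const out = this.items
    this.items = []
    return out
  }
}
```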

If a customer reports one of these problems, consider adding a stress test or regression test with a safe threshold. The goal isn't to make performance tests flaky. It's to catch future PRs that clearly bring back the same class of problem.

Keep identity and state boring

Identity is one of the easiest places to confuse customers and corrupt data. distinct_id, anonymous IDs, identify, alias, reset/logout, group state, feature flag state, and persisted properties should behave predictably and, where possible, consistently across SDKs.

Be explicit about whether an SDK is stateful or stateless. Browser and mobile SDKs usually own local state because they persist anonymous IDs, queued data, flags, and replay/session context. Many server-side SDKs should be more stateless by default because one process can handle many users, tenants, requests, or jobs at the same time.

Don't accidentally make a stateless SDK stateful by storing per-user data globally. If state is needed, make the boundary obvious: request-scoped client, explicit context object, local storage, cookie, in-memory queue, or whatever is idiomatic for the platform.
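One way to keep that boundary obvious in a server SDK, sketched with hypothetical names: the shared client holds only user-agnostic state, and identity arrives either as an argument or via an explicit, request-scoped context object:

```typescript
// Sketch: a stateless server client. Identity is never stored globally.
class ServerClient {
  // Shared, user-agnostic state only: token, transport, config.
  constructor(public readonly token: string) {}

  capture(distinctId: string, event: string): { distinctId: string; event: string } {
    // Identity arrives as an argument on every call; nothing sticks.
    return { distinctId, event }
  }
}

// If a context object helps ergonomics, make the boundary explicit:
function withRequestContext(client: ServerClient, distinctId: string) {
  return { capture: (event: string) => client.capture(distinctId, event) }
}
```

Two concurrent requests then get independent contexts from the same client, with no shared per-user state to corrupt.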

Make SDKs thread-safe

Assume public SDK methods can be called from multiple threads, async tasks, workers, callbacks, request handlers, or lifecycle hooks. Queues, identity state, remote config, feature flag caches, loggers, and shutdown paths should be safe under concurrent access.

If a platform has a single-threaded runtime, still think about re-entrancy and async ordering. If something is not thread-safe, document it loudly and provide a safe path for normal usage.

Treat privacy as a product feature

PostHog helps customers understand users, but our SDKs should not collect sensitive data casually.

Be explicit about anything that can include personal data, request/response bodies, headers, screen contents, console logs, or exception context. Prefer opt-in for high-risk data, make masking and redaction easy, and document what leaves the device or server.

Think about security beyond privacy

Privacy is not the whole security story. SDKs should avoid exposing secrets, storing sensitive data unnecessarily, weakening TLS defaults, trusting unvalidated remote input, or making supply-chain risk worse.

Use platform-sandboxed storage where possible, such as app-scoped storage, Keychain/Keystore-style APIs for sensitive values, browser storage with the right assumptions, or restricted file permissions on servers. If data is only needed temporarily, prefer memory over durable storage.

Releases are part of the security model too. SDK publishing should be automated through CI and protected by an approval process, as described in the SDK release process. Avoid local machine publishing for official releases when CI can do it, because CI gives us clearer provenance, fewer long-lived credentials, and a better audit trail.

Design APIs for forward compatibility

SDK APIs live for a long time. Once a pattern is copied into thousands of apps, changing it gets expensive.

Keep the public API small, boring, and hard to misuse. Use options objects for things likely to grow. Avoid exposing internal concepts unless customers need them. Public APIs and configuration options should be unique: don't offer two or more ways to do the same thing unless there's a strong compatibility reason. Duplicate paths confuse humans, documentation, support, and LLMs.
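A sketch of the options-object pattern, with hypothetical fields and defaults. Because configuration arrives as one object, adding a field later is additive and old call sites keep compiling:

```typescript
// An options object keeps the constructor signature stable as config grows,
// and each behavior has exactly one switch. Fields are illustrative.
interface ClientOptions {
  host?: string
  flushIntervalMs?: number
  maxQueueSize?: number
}

const DEFAULTS: Required<ClientOptions> = {
  host: 'https://us.i.posthog.com',
  flushIntervalMs: 10000,
  maxQueueSize: 1000,
}

function resolveOptions(opts: ClientOptions = {}): Required<ClientOptions> {
  // Adding a new optional field later is an additive, non-breaking change.
  return { ...DEFAULTS, ...opts }
}
```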

Agents can help spec and drive SDK changes, especially repetitive cross-SDK work. Public APIs, configuration, defaults, and behavior that affects customers still need human review for ergonomics, platform fit, and long-term support cost.

Be careful not to expand the public API by accident. Exported helpers, leaked internal types, undocumented options, and test-only hooks can become APIs customers depend on. Keep internals private where the platform enables it. If something is experimental, say so clearly and consider keeping it behind an internal API until we're confident it should be public.

Prefer additive API changes over breaking ones. It's much easier to add a new method, option, or type than to remove one later. When you need a breaking change, respect the SDK's versioning scheme, make the migration obvious, document it clearly, and release it intentionally.

For larger migrations, write a migration doc and, where useful, an agent skill that can help apply the change across customer codebases. Try to batch breaking changes into a single major version instead of shipping a new breaking change every week.

Deprecate before removing

Use semver, or the ecosystem's closest equivalent, for public API changes. Removing a public method, field, configuration option, package, or behavior should usually wait for the next major version.

Before removing something, deprecate it first. Keep the deprecated method or option working until the major release, route it to the new implementation where possible, and log a clear runtime warning when it is used. The warning should say what changed, what to use instead, and where to find the migration guide.

Deprecation warnings should be useful, not noisy. Avoid logging the same warning thousands of times in a hot path if you can log it once per process, session, or call site.
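A warn-once helper is usually enough to get that behavior. Everything below is a sketch with made-up names: the deprecated method keeps working, forwards to the new implementation, and logs a single actionable warning per process:

```typescript
// Warn-once deprecation helper: one warning per key per process.
const warned = new Set<string>()

function deprecationWarning(key: string, message: string, log: (m: string) => void): void {
  if (warned.has(key)) return
  warned.add(key)
  log(message)
}

// The new implementation (hypothetical):
function captureEvent(name: string): string {
  return `captured:${name}`
}

// The deprecated alias keeps working and routes to the new path:
function track(name: string, log: (m: string) => void): string {
  deprecationWarning(
    'track',
    'track() is deprecated and will be removed in the next major version; ' +
      'use captureEvent() instead. See the migration guide.',
    log
  )
  return captureEvent(name)
}
```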

Write less SDK code when the server can do it better

SDKs should collect useful context and send high-quality data. They should avoid owning complex business logic that can live safely on the server.

Server-side logic is easier to change, observe, roll back, and fix globally. SDK-side logic ships into customer apps and can take weeks, months, or years to update. Put logic in the SDK only when it needs local state, local performance, platform APIs, or offline behavior.

Make debugging humane

When something goes wrong, customers and support engineers need a path to answers.

Provide debug logging that can be enabled without rebuilding the world. Include enough information to understand initialization, dropped events, retries, network failures, feature flag decisions, and queue state. Avoid logging secrets. Project tokens are public identifiers, but other credentials are not.

Remember that SDKs often run on customer devices or infrastructure where we don't have access to logs. When it helps support and debugging, include minimal, high-value SDK state in captured data, recordings, or diagnostics. Session Replay is a good example: a small amount of SDK health context can make production issues much easier to investigate. Keep this data minimal, documented, and privacy-aware.

Test the boring paths and the weird paths

The happy path matters, but SDK bugs often hide in shutdown, retries, offline mode, old runtimes, ad blockers, proxies, clock skew, app backgrounding, forked processes, serverless cold starts, and partial initialization.

Prefer tests that match how customers use the SDK. Add small example apps where they help. For mobile and browser SDKs, remember that customers can't always roll out fixes quickly, so a little extra caution before release is worth it.

Treat docs and examples as part of the SDK

An SDK without good docs is only half shipped. Keep the quickstart current, show idiomatic examples, and explain common production setup: flushing on shutdown, identifying users, using custom hosts, handling feature flags, and enabling debug logs.

Each SDK should have a troubleshooting page for common install, build, configuration, network, and runtime errors.

Examples should be boring, copy-pasteable, and close to how customers write real production code in that ecosystem.

Public methods, configuration options, and types should have documentation comments in the platform's standard style, such as JSDoc, docstrings, KDoc, or Swift documentation comments. Write them for humans, but remember that LLMs and IDEs parse them too. A good comment explains what the method or option does, when to use it, defaults, side effects, and any privacy or performance caveats.
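As an illustration of that style, here is a hypothetical method documented with JSDoc. The body is stubbed; the comment, with its default, side effects, and caveat, is the point:

```typescript
/**
 * Flushes all queued events to PostHog.
 *
 * Call this before process exit; otherwise queued events may be lost.
 *
 * @param timeoutMs - Maximum time to wait, in milliseconds. Defaults to 3000.
 *   On timeout, remaining events stay queued; nothing is dropped.
 * @returns The number of events flushed.
 */
function flush(timeoutMs: number = 3000): number {
  // Body stubbed for illustration; this sketch has nothing queued.
  return 0
}
```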

The public API reference should be complete and current. It should cover public methods, types, configuration options, defaults, side effects, return values, errors, and examples where they help.

Release like people depend on it

Because they do. Use semver or the ecosystem's closest equivalent, keep changelogs readable, call out breaking changes loudly, and follow the SDK release process. Official releases should be automated through CI and sit behind an approval process to reduce supply-chain risk.

Release cadence is a balance. Giant releases are hard to review, hard to debug, and hard to roll back, but releasing every tiny change can also create noise and upgrade fatigue. Prefer coherent releases: small enough to understand, grouped enough to be useful, and clearly documented so customers know whether they should care.

The right cadence also depends on the platform and ecosystem. Web and server SDK users can often upgrade quickly through a package manager, but mobile, desktop, game engine, and enterprise customers may deal with app store review, slow adoption, long release trains, or internal approval processes.

Document decisions and sharp edges

If an SDK supports only certain platform versions, has unusual threading behavior, drops events under pressure, stores data locally, or handles privacy-sensitive data, write it down.

Most of the time, a code comment near the decision is enough. For bigger decisions, write an RFC or add the guidance here if it applies across SDKs. This isn't bureaucracy. It's how we avoid the next contributor rediscovering the same tradeoff six months later.

SDKs

Engineering | Source: https://posthog.com/handbook/engineering/sdks

There is now a small, dedicated SDK team at PostHog (@PostHog/team-client-libraries) that helps drive direction and coordination. However, SDK development and maintenance remains a collaborative effort across the engineering organization. The @PostHog/client-libraries-approvers team exists in GitHub to coordinate that collaboration among those with a deeper-than-usual interest in these SDKs.

How SDK work gets done

SDKs are maintained by engineers across different teams who either:

This distributed model means SDKs get attention from engineers with diverse expertise, and ownership is shared across the company.

Want to get more deeply involved?

If you're interested in contributing to our SDKs — whether that's fixing bugs, adding features, or improving documentation — drop a message in #team-client-libraries. We're always happy to have more people helping out, and it's a great way to learn about different parts of the PostHog ecosystem.

You can also sign up for the SDK support rotation to get hands-on experience with SDK issues.

Slack channels

What are our SDKs?

There are too many to have a static list here. Check the libraries docs to learn more about them.

SDK releases

Engineering | Source: https://posthog.com/handbook/engineering/sdks/releases

This guide documents our semi-automated release process for PostHog SDKs. Each SDK repository uses a GitHub App with restricted permissions to handle releases securely, requiring team approval before any release is published. SDK repositories also require all commits to be signed by their author.

Almost all SDKs have been migrated to this process already, but a few haven't caught up yet.

If you're creating a new SDK or repository that will be published, you MUST implement this approach.

How it works

Our SDK release process uses a dedicated GitHub App per repository that can push directly to the main branch (bypassing branch protections) while still requiring human approval through GitHub Environments. This gives us:

Setting up releases for a new SDK

When creating a new SDK, or migrating an existing one to the new workflow, follow these steps to set up the release infrastructure.

Most of these steps require super administrator privileges on GitHub. Make sure you have the appropriate permissions to work on this.

1. Create a GitHub App

Create a new GitHub App:

  1. Name: Releaser (<sdk_name>) (e.g., Releaser (posthog-go))
  2. Description: "Used to release new versions of <sdk_name>." (e.g., "Used to release new versions of posthog-go.")
  3. Homepage URL: Point to the SDK's docs page on posthog.com (e.g., https://posthog.com/docs/libraries/go)
  4. Webhook: Disable (uncheck "Active")
  5. Permissions: Under "Repository permissions", set only:

Note: If your app needs to open PRs in other repositories and assign teams or members as reviewers (e.g., the posthog-js upgrader opens PRs from posthog-js to posthog and assigns the client-libraries and client-libraries-approvers teams), you also need to add under "Organization permissions":

- Members: Read-only

  6. Where can this GitHub App be installed? Keep it restricted to "Only on this account"
  7. Click Create GitHub App

After creating the app:

  1. Download this image and upload it as the app icon

Image: SDK Releaser bot icon

  2. Set the background color to #D97148
  3. Click the big "Generate a private key" button to generate a private key and save it locally — you'll need it later
  4. Also save the "App ID" number - you'll need it later
  5. Go to Install App in the sidebar
  6. Install the app in the PostHog organization, restricting it to only the SDK repository

2. Expose proper access to client libraries teams

In your SDK repository settings:

  1. Verify that both @PostHog/client-libraries-approvers and @PostHog/team-client-libraries teams have at least read access to the repository. This is required for them to be able to approve release workflows.
  2. Access "Collaborators and teams"
  3. Make sure both teams are added as collaborators with at least write access

3. Create a release environment

In your SDK repository settings:

  1. Go to Environments and create a new environment named Release
  2. Configure protection rules:

Image: Protection rules

  3. Remember to click "Save protection rules" to enforce them
  4. Add environment secrets:

Replace <SDK_NAME> with your SDK name in uppercase with underscores (e.g., GH_APP_POSTHOG_GO_RELEASER_APP_ID, GH_APP_POSTHOG_GO_RELEASER_PRIVATE_KEY)

Image: Environment secrets
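The naming convention can be spelled out as a tiny helper. This is purely illustrative (the secrets themselves are created by hand in the GitHub UI; there is no such helper in our tooling):

```typescript
// Derive the environment secret names from the SDK repository name,
// following the GH_APP_<SDK_NAME>_RELEASER_* convention described above.
function releaserSecretNames(sdkRepo: string): { appId: string; privateKey: string } {
  const prefix = `GH_APP_${sdkRepo.toUpperCase().replace(/-/g, '_')}_RELEASER`
  return { appId: `${prefix}_APP_ID`, privateKey: `${prefix}_PRIVATE_KEY` }
}
```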

4. Add app to bypass lists

The GitHub App needs to bypass certain protections to push release commits directly.

CodeQL bypass
  1. Access the CodeQL ruleset
  2. Under Bypass list, click Add bypass
  3. Select your newly created GitHub App (Releaser (<sdk_name>))
  4. Click the three-dot menu and choose Exempt
  5. Save the ruleset

Image: CodeQL bypass exemption

Repository PR bypass
  1. Go back to your SDK repository settings
  2. Navigate to RulesRulesets
  3. Open the ruleset that requires PRs (may have various names)
  4. If this ruleset doesn't exist, create one requiring PRs and reviews from codeowners which should be @PostHog/client-libraries-approvers for all files
  5. Under Bypass list, click Add bypass
  6. Select your GitHub App (Releaser (<sdk_name>))
  7. Click the three-dot menu and choose Exempt
  8. Save the ruleset

Image: Repository PR bypass

5. Grant access to organization secrets

The release workflow needs access to shared organization secrets. Grant your SDK repository access to the below organization secrets in the organization settings:

Secrets:

Variables:

6. Add the release workflow

Important: Our release workflows use GitHub Actions OIDC tokens for secure authentication with package registries. Make sure your workflow uses a version that supports OIDC for your registry:

- npm: Node.js v22+

Copy the release workflow from an existing SDK (e.g., posthog-rs) and adapt it:

  1. Update the environment variable prefix to match your SDK name
  2. Modify the changelog generation logic if needed for your language's conventions
  3. Update the version bumping logic for your package manager (npm, pip, etc.)
  4. Update the publishing steps for your package registry

npm packages: set up trusted publishing before enabling the workflow

This applies only to npm publishing (not other package registries).

If your SDK publishes to npm using OIDC trusted publishing and the package has never been published before, run this initial setup once before allowing your GitHub Actions workflow to publish:

npx setup-npm-trusted-publish @posthog/<package-name>

If the package has already been published, you can configure trusted publishing directly in npm package settings instead.

This bootstraps npm trusted publishing for the package so future automated releases can publish successfully.

7. Update the README

Add a section to your SDK's README explaining that releases are semi-automatic and link to the #approvals-client-libraries Slack channel where approval requests are posted.

8. Create required labels

Make sure the repository includes the release label; it's used to trigger new releases.

If you're not using something like changesets or sampo that automatically generates version bump labels, create the following labels as well to indicate the type of release:

9. Open a PR

Create a PR with the new release.yml workflow and request a review from @PostHog/client-libraries-approvers.

Triggering a release

Once set up, releases are triggered by adding a release label to the PR alongside a changeset (or a matching bump-* label). Once the PR is merged, the release workflow will kick off, and someone from the @PostHog/client-libraries-approvers team will have to approve it in #approval-support-libraries.

We're slowly migrating all SDKs to use sampo. This is a language-agnostic version of the famous changesets library.

If you're feeling inspired, I highly recommend you build an adapter for Sampo for the language you're working on. We'll all thank you for that.

Troubleshooting

Access token expired or revoked when running npm publish

If you see the error "Access token expired or revoked. Please try logging in again" when publishing with npm publish — even though your credentials and tokens are correctly configured — the issue may be with npm's token handling itself.

Solution: Migrate your project to use a pnpm workspace and publish with pnpm publish instead. pnpm handles authentication differently and isn't affected by this issue.

SDK support rotation

Engineering | Source: https://posthog.com/handbook/engineering/sdks/support-rotation

The SDK Support Hero rotation is managed by the . Each week, one member of the team is designated the SDK Support Hero. The schedule is managed in incident.io.

Your primary responsibility is to make sure SDK questions get some love — across all SDKs, including mobile. During the rotation, please keep an eye on:

How should I prioritize my time?

Firstly, try to stay on top of new escalated Zendesk tickets and GitHub issues, and make sure that issues related to a specific team are routed to them. If there is a relevant team (e.g. the issue is related to session replay in posthog-js), you can assign the Zendesk ticket to that team, and use the team's label in GitHub. If there is no relevant team for a GitHub issue, please label with SDK Support Hero. Feel free to try to fix things yourself before tagging the team.

Next, please work on SDK tickets in Zendesk, and GitHub issues labelled SDK Support Hero (and unlabelled, but please label these!). You can use your own judgement to decide which issues to work on but please consider effort / reward / urgency / your skill set. For example, posthog-js usually has the most issues, but if you're a Python expert, you might want to focus on posthog-python.

For mobile SDK issues, prioritize accordingly — rolling out fixes on mobile apps may take weeks or even months, so faster turnaround on these is important. Make sure, however, to validate changes carefully, avoid breaking changes and think through edge cases before shipping, since our ability to correct mistakes after release is significantly constrained.

At the end of the week, please write a public handover message in #support-client-libraries, to let the next person know what work is in progress, let the team know how the support rotation is going in general, and to share any learnings or feedback.

Security Best Practices

Engineering | Source: https://posthog.com/handbook/engineering/security

GitHub

SSH Keys

Connecting to GitHub requires an SSH key (unless using HTTPS). Traditional SSH keys live as text files on your filesystem, making them vulnerable to theft or misuse by malware. We explicitly prohibit the use of SSH keys stored on your filesystem.

Use Secretive or 1Password to generate and store your SSH key. We have a slight preference for Secretive because it stores your key in the macOS Secure Enclave, ensuring the key can never be exported or extracted, even by malware. Always use ECDSA or Ed25519 — don't use RSA.

Setting up with Secretive
  1. Open Secretive and click the + button to create a new key.
  2. Name your key "GitHub SSH" and select Notify in the Protection Level dropdown.
  3. Go to Secretive > Integrations in the menu bar.
  4. Select your shell on the left side and set the SSH_AUTH_SOCK environment variable as instructed. For zsh, add the following to your ~/.zshrc:
   export SSH_AUTH_SOCK=~/Library/Containers/com.maxgoedjen.Secretive.SecretAgent/Data/socket.ssh

Then run source ~/.zshrc to apply it.

  5. Click on your new key in Secretive and copy the public key.
  6. Go to your GitHub SSH keys settings and add a new SSH key. Paste your public key and set the key type to Authentication Key.
  7. Test it by running:
   ssh -T git@github.com

You should see a message like "Hi username! You've successfully authenticated".

Setting up with 1Password

Follow the 1Password SSH key management guide.

Commit signing

A git commit's Author field is completely user controllable and can be forged. Signing your commits cryptographically proves you authored them, preventing impersonation and confusion.

You can sign commits with either Secretive or 1Password. We have a slight preference for Secretive because it stores your key in the macOS Secure Enclave, ensuring the key can never be exported or extracted, even by malware.

Setting up with Secretive
  1. Open Secretive and click the + button to create a new key.
  2. Name your key "Git signing key" and select Notify in the Protection Level dropdown.
  3. Go to Secretive > Integrations in the menu bar.
  4. Click Git Signing and select "Git signing key" from the Secret dropdown.
  5. Copy and paste the ~/.gitconfig and ~/.gitallowedsigners snippets into their respective files
  6. Select your shell on the left side of Secretive and set the SSH_AUTH_SOCK environment variable as instructed. For zsh, add the following to your ~/.zshrc:
   export SSH_AUTH_SOCK=~/Library/Containers/com.maxgoedjen.Secretive.SecretAgent/Data/socket.ssh

Then run source ~/.zshrc to apply it.

  7. Your ~/.gitconfig now has a signingkey pointing to a file. Copy your public key to the clipboard:
   cat <path-from-signingkey> | pbcopy
  8. Go to your GitHub SSH keys settings and add a new SSH key. Paste your public key and set the key type to Signing Key.
  9. Test it by creating an empty commit on a new branch:
   git commit --allow-empty -m "test signing"

Push the branch to GitHub — you should see a green Verified badge on the commit.

Image: Signed commit

Setting up with 1Password

Follow the 1Password git commit signing guide.

After setup

Once commit signing is configured, enable the option in your GitHub Profile to "Flag unsigned commits as unverified".

Troubleshooting

GitHub Actions

Great care should be taken when writing or modifying a GitHub Actions workflow. Actions can access (and exfiltrate) secrets scoped to the repo. We scan workflows with Semgrep and CodeQL for common misconfigurations.

Authentication

Most Actions use the default GITHUB_TOKEN, whose permissions can be scoped via the permissions property. However, GITHUB_TOKEN cannot trigger other workflows — so commits or PRs created by an Action won't run CI, leaving PRs unmergeable without manual intervention. The workaround is a Personal Access Token (PAT) or GitHub App. We use GitHub Apps because PATs are tied to an individual user and break when that user leaves PostHog.

Scope each GitHub App to its use case and ideally a single repo. Prefer creating a new App over expanding an existing one's permissions, otherwise every Action using that App inherits permissions it doesn't need.

Send a message in #team-security if you need help setting up a new GitHub App.

External contributors

In public repos, Actions may run against PRs written by external contributors. These PRs should be reviewed thoroughly before approving workflows to run against them. Otherwise, a malicious PR could gain access to and steal all of the secrets available to the repo.

Managing secrets

AWS

Application secrets are stored in AWS Secrets Manager. To modify an app's secrets, use our secrets tool.

GitHub

Secrets used by GitHub Actions are stored in GitHub secrets. All secrets should be stored in our GitHub org rather than in an individual repo. This allows us to more easily reuse secrets across repos, and also provides a holistic view of all of our secrets. The org secret should be scoped to the specific repos that need it.

Reporting a security issue

If you believe we've been hit by a security issue, raise an incident. In the best case, it'll mean security folks look at it ASAP. In the worst case, it's a false positive and we can close the incident.

Session replay architecture

Engineering | Source: https://posthog.com/handbook/engineering/session-replay/session-replay-architecture

Session recording architecture: ingestion → processing → serving

1. Capture (client-side)

PostHog-JS uses rrweb (record and replay the web) to:

Events include metadata: $window_id, $session_id, $snapshot_source (Web/Mobile), timestamps, distinct_id

2. Ingestion pipeline

Phase 1: Rust capture service (recordings mode)

rust/capture/src/router.rs:235 and rust/capture/src/v0_endpoint.rs:342

Kafka sink (rust/capture/src/sinks/kafka.rs):

Phase 2: Blob ingestion consumer (Node.js/TypeScript)

plugin-server/src/main/ingestion-queues/session-recording-v2/

SessionRecordingIngester consumes from Kafka and:

  1. Parses gzipped/JSON messages (kafka/message-parser.ts)
  2. Batches by session via SessionBatchRecorder
  3. Buffers events in memory per session using SnappySessionRecorder:
    public recordMessage(message: ParsedMessageData): number {
        // ... (earlier lines elided; rawBytesWritten is initialized above)
        if (/* elided condition */) {
            this.endDateTime = message.eventsRange.end
        }

        for (const [windowId, events] of Object.entries(message.eventsByWindowId)) {
            for (const event of events) {
                const serializedLine = JSON.stringify([windowId, event]) + '\n'
                const chunk = Buffer.from(serializedLine)
                this.uncompressedChunks.push(chunk)

                const eventTimestamp = event.timestamp
                const shouldComputeMetadata = eventPassesMetadataSwitchoverTest(
                    eventTimestamp,
                    this.metadataSwitchoverDate
                )

                if (shouldComputeMetadata) {
                    // Store segmentation event for later use in active time calculation
                    this.segmentationEvents.push(toSegmentationEvent(event))

                    const eventUrl = hrefFrom(event)
                    if (eventUrl) {
                        this.addUrl(eventUrl)
                    }

                    if (isClick(event)) {
                        this.clickCount += 1
                    }

                    if (isKeypress(event)) {
                        this.keypressCount += 1
                    }

                    if (isMouseActivity(event)) {
                        this.mouseActivityCount += 1
                    }

                    this.eventCount++
                    this.size += chunk.length
                }

                rawBytesWritten += chunk.length
            }
        }

        this.messageCount += 1
        return rawBytesWritten
    }
  4. Flushes periodically (max 10 seconds buffer age or 100 MB buffer size)

Persistence (sessions/s3-session-batch-writer.ts):

Metadata written to ClickHouse via Kafka:

3. Storage schema

ClickHouse tables

session_replay_events (primary, v2):

session_id, team_id, distinct_id
min_first_timestamp, max_last_timestamp
block_first_timestamps[], block_last_timestamps[], block_urls[]
first_url, all_urls[]
click_count, keypress_count, mouse_activity_count, active_milliseconds
console_log_count, console_warn_count, console_error_count
size, message_count, event_count
snapshot_source, snapshot_library
retention_period_days

session_recording_events (legacy):

PostgreSQL

PostgreSQL writes happen when:

  1. User pins to playlist → Immediate write
  2. User requests persistence → Immediate write + background LTS copy task
  3. Auto-trigger on save → Background LTS copy task (via post_save signal)
  4. Periodic sweep → Finds recordings between 24 hours and 90 days old without an LTS path, queues background tasks

Note: Regular session recordings (not pinned/persisted) do NOT write to PostgreSQL - they only exist in ClickHouse session_replay_events table until explicitly pinned or persisted as LTS.

posthog_sessionrecording model:

S3 object storage

4. Playback/Retrieval

API Flow (posthog/session_recordings/session_recording_api.py)

GET /api/projects/:id/session_recordings/:session_id/:

  1. Loads metadata from ClickHouse session_replay_events or Postgres
  2. Returns: duration, start_time, person info, viewed status

GET /api/projects/:id/session_recordings/:session_id/snapshots: Two-phase fetch:

  1. Phase 1: Returns available sources: ["blob"] or ["blob", "realtime"]
  2. Phase 2: Client requests ?source=blob

Source resolution:

Query (queries/session_replay_events.py):

SessionReplayEvents().get_metadata()  # metadata
SessionReplayEvents().get_block_listing()  # S3 blob locations

Returns block listing:

[{
  "blob_key": "s3://bucket/path?range=bytes=0-1000",
  "first_timestamp": "...",
  "last_timestamp": "...",
  "first_url": "...",
  "size": 1000
}, ...]
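As an illustration of how a consumer might use such a listing, the sketch below selects the blocks whose time range overlaps a requested playback window. Field names follow the listing above; the ISO 8601 timestamp format is an assumption:

```python
from datetime import datetime

def blocks_in_window(listing: list[dict], start_iso: str, end_iso: str) -> list[dict]:
    """Return blocks whose [first_timestamp, last_timestamp] range overlaps the window."""
    start = datetime.fromisoformat(start_iso)
    end = datetime.fromisoformat(end_iso)
    overlapping = []
    for block in listing:
        first = datetime.fromisoformat(block["first_timestamp"])
        last = datetime.fromisoformat(block["last_timestamp"])
        # Two intervals overlap iff each starts before the other ends
        if first <= end and last >= start:
            overlapping.append(block)
    return overlapping
```

Only the matching blocks then need to be fetched from S3 via their pre-signed byte-range URLs.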

Frontend playback

frontend/src/scenes/session-recordings/player/

  1. sessionRecordingPlayerLogic fetches snapshot sources (only blob_v2 now, except for hobby)
  2. For each snapshot source fetches snapshots
  3. Decompresses Snappy blocks
  4. Parses JSONL: [windowId, event] per line
  5. Feeds to rrweb-player for DOM reconstruction
  6. Renders in iframe with timeline controls
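Steps 3 and 4 above amount to splitting the decompressed block into lines and parsing each line as a `[windowId, event]` pair. A minimal Python sketch of the parsing step (the real player does this in TypeScript; the output shape here is illustrative):

```python
import json

def parse_snapshot_block(jsonl: str) -> list[dict]:
    """Parse a decompressed session block: one [windowId, event] pair per line."""
    snapshots = []
    for line in jsonl.splitlines():
        if not line.strip():
            continue  # tolerate trailing blank lines
        window_id, event = json.loads(line)
        snapshots.append({"windowId": window_id, **event})
    return snapshots
```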

Metadata (playerMetaLogic.tsx):

Key optimizations

Data flow summary

Browser (rrweb)
  → POST /s/ with $snapshot events
  → Rust Capture validates & produces to Kafka
  → Node.js Blob Ingestion buffers & compresses
  → Writes to S3 (session blocks) + ClickHouse metadata (via Kafka)
  → Frontend fetches metadata from ClickHouse
  → Frontend fetches blocks from S3 via pre-signed URLs
  → rrweb-player reconstructs & renders

Tech talks

Engineering | Source: https://posthog.com/handbook/engineering/tech-talks

We encourage engineers to give tech talks on topics they're interested in/knowledgeable about.

Recording links are only accessible by the PostHog team.

Here are our talks so far:

How we track and manage usage

Engineering | Source: https://posthog.com/handbook/engineering/usage_reports

Tracking and managing usage is one of the core responsibilities of the . If we do it wrong, we don't get paid.

Each organization's usage is calculated once per day and saved in a usage report. This usage report is sent to the billing service, which saves the report and sends the usage along to Stripe for the customer's subscription, if one exists.

Usage reports

Usage reports are largely generated within posthog/posthog - because that's where the usage happens. Every day at midnight BST a cron job runs in each instance (US and EU) to calculate usage for every single organization in the instance.

Occasionally the cron will get interrupted - when this happens the billing service won't receive or store any of the reports, and usage won't be sent to Stripe. You'll notice that usage reports have failed in two ways:

  1. When looking at the Revenue dashboard in Metabase, you'll see that there are fewer reports than previous days, and one of the instances (generally US) will show 0 reports sent.
  2. When looking at the Usage report insight on the Growth dashboard you'll see a big dip in an otherwise steady trend.

We don't currently have a way to automatically re-run failed usage reporting, so we have to do it manually. To do so, you'll need to follow the instructions to connect to PostHog Cloud infra. Once you do so you can run a management command to re-run the usage reports for a specific date:

python manage.py send_usage_report --date YYYY-MM-DD --async 1

where the date is the day that the usage report would have been run, so is one day past the date where usage reports are missing. For instance, if we had 0 usage reports on May 11, the date you'd use in the command is actually May 12 (because usage reports are reporting usage for the previous day).
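Since this off-by-one is easy to get wrong, the rule can be written down as a one-line helper (illustrative only, not part of the codebase):

```python
from datetime import date, timedelta

def send_usage_report_date(missing_day: date) -> str:
    """The --date to pass when reports for `missing_day` are missing.

    Reports cover the previous day, so re-run with the day after.
    """
    return (missing_day + timedelta(days=1)).isoformat()
```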

It is recommended to run async using the --async 1 option so you don't need to wait for all the billing requests to complete synchronously. If you use this option, it'll finish with Done!. When using this option, it's important to go back afterwards and ensure it has completed with no errors or ClickHouse timeouts.

If you run the command without the async option it can take a while to run, and if it gets interrupted (eg because pods were turned over with a deploy) it'll fail again with command terminated with exit code 137. Simply reconnect and try again. If it's successful, you'll get a log like 21262 Reports sent!.

Visiting customers as an engineer

Engineering | Source: https://posthog.com/handbook/engineering/visiting-customers

As a product engineer, you’re encouraged to visit customers at their offices to gather feedback and ship features or improvements on the spot.

While PostHog is fully remote - we optimize for async work, write things down, and talk to users remotely – the reality is that occasional in-person time with customers is highly valuable. Some products are hard to dogfood properly, and it can be tricky to fully grasp specific workflows or see how high-scale users actually operate.

In-person visits let you notice things that don’t surface on calls: team dynamics, tools they rely on day to day and small but important friction points that get lost in remote conversations. People also tend to hold back or polish their feedback when writing it down – they might dismiss a detail as unimportant, or assume you already know something when you don’t. Seeing it all unfold in real life can surface insights you’d never get otherwise. All of this makes a customer visit time well spent.

Which customer should you visit? Sometimes, when interacting with a customer on Slack, you’ll notice an obvious click with someone. You’ll know when this happens – they’re friendly, proactive with feedback, and genuinely interested in making the product better. They might also be driving heavy usage, with multiple power users, or even adoption across different teams. If you come across a customer like this and feel curious to dig deeper into how they use PostHog, that’s a strong sign they’d be a good candidate for a visit. Of course, this makes the most sense for customers whose teams are at least partly in-office – otherwise you won’t get the real benefit of seeing how they work together day to day.

Here’s one way to organize a great customer visit. None of this is set in stone, so feel free to adapt, and pay attention to what the customer is comfortable with.

1. Identify the biggest areas for improvement

Review the customer's Slack channel and pull together a list of the most pressing issues from the past few months. For larger customers, there may also be lots of context in BuildBetter and past recorded calls, as well as information from their sales or CS person. Focus on understanding what they’re struggling with most and which upcoming features matter the most to them.

2. Lock in dates and the point of contact

Find a single point of contact to help organize the visit. Share with them the list of topics and pain points, and explain that you’d like to meet in person to get a deeper understanding. Don’t underestimate this step – even if the company is very engaged on Slack, people are busy and organizing something optional like this isn’t always their top priority. Having one person on their side makes it much easier to get dates confirmed, which is the main thing you need at this stage. You can sort out the details and a more precise agenda later.

As for the duration, two to three days is usually a sweet spot – enough time to spend quality time with the team without overstaying or being a distraction in their office. Use your best judgment here and agree on the timing with your point of contact. A nice option can be to tie the visit onto a small-team offsite – if the team is already traveling, extending for a couple of days can allow some extra time for this.

For large customers with an account manager assigned, it’s super valuable to bring them along. They often already have a strong relationship with the customer, can ask additional questions you might not think of, and can keep things like scheduling, follow-ups, and expectation-setting smoother.

3. Book the travel

Book flights and tickets as early as possible to avoid high prices. Reach out to the People & Ops team to get a budget approved.

4. Plan the agenda

Work with your point of contact to set the agenda. Aim to get dedicated time with several people who use the product. A one-hour session is usually enough to go deep and uncover useful insights. Also offer other formats, like a company-wide training or Q&A session, and let your POC guide you on what’s most valuable for their team.

5. Deep prep

A few days before traveling, take a deep dive into how their users are actually using the product. Watch session recordings, look at the kinds of records they create, and try to spot patterns. For example, in the case of experiments, check which types of metrics they use most often, how many they create per week, and whether there are obvious points of friction. Alongside this, prepare a list of questions that can help uncover deeper insights during the sessions.

At the onsite

Run the sessions! Let users show you how they use the product in real time and as naturally as possible. Give them space to talk, ask questions to dig deeper, and don’t be afraid to let the conversation go off on tangents, as those often reveal the most interesting insights.

In between the sessions you’ll have opportunities to code. It makes sense to prioritize small improvements you can ship and demo on the same day – this kind of quick turnaround leaves a strong impression. It’s also common for team members to approach you with questions, sometimes even unrelated to your product area. Be ready for this and go out of your way to help. Solving problems in real time is one of the best parts of being there in person.

After the onsite

Revisit the transcripts of all sessions – you should record everything in Buildbetter or a similar tool. Share what you’ve learned with your team and discuss if any quarterly goals need to be re-prioritized based on the learnings. Summarize the features or improvements you shipped during or right after the visit and send a thank-you message to the customer's Slack that lists them clearly.

For visibility, it's important to note on the account that a customer visit took place. Do this in Vitally by creating a new note on the account with the "On-site" category, and copy in any known people on the account so they're aware of the note content. This can be helpful in many ways even if no primary person at PostHog is managing their account today. BuildBetter calls should automatically attach to the account along with any existing email conversations.

Just as important is following up in the weeks after. Customers should feel the visit was worth their time, and a big part of that is quickly actioning the items they raised, even if it’s just small fixes or clear updates on progress. This builds excitement and goodwill, shows immediate impact, and shows them it was worth investing their time with you. Most importantly, you’ll walk away with a much deeper understanding of how your product is really used.

Working with PostHog AI

Engineering | Source: https://posthog.com/handbook/engineering/working-with-max-ai

PostHog AI lets users interact with PostHog's products through a chat interface and other shortcuts throughout the platform. The is responsible for building and maintaining the AI platform.

What is PostHog AI?

PostHog AI enables users to:

We want Max to work with _every_ PostHog product, so that we can at some stage make chat (or even complete automation with approval steps as necessary) the default experience for people using PostHog, with clicking through the UI as the backup option for _most_ tasks.

How to work with the PostHog AI team

Products already integrated with Max always have a supporting engineer assigned on the Max team.

This naturally distributes AI knowledge throughout the organization, while ensuring high-quality AI features that integrate properly with the platform. Implementing something new, missing features in the Max API, or seeing failures? Your supporting engineer is your go-to! Tag them directly on your PRs and questions.

If your team is integrating AI features for the first time – the PostHog AI team will do their best to assign a supporting engineer.

Just message about your plans in the #team-max-ai channel in Slack.

| Product             | Supporting engineer on the PostHog AI team |
| ------------------- | ------------------------------------------ |
| Product analytics   | Emanuele Capparelli                        |
| Data warehouse      | Michael Matloka                            |
| Session replay      | Alex Lebedev                               |
| CDP                 | Georgiy Tarasov                            |
| \[Insert your team] | Shoot #team-max-ai a message!              |

Getting started

If you need AI capabilities for your product area:

  1. Reach out early: Contact the PostHog AI team lead at #team-max-ai in Slack to discuss your requirements
  2. Define the use case: Be specific about what AI functionality you need, or consult with us if you're trying to flesh out ideas
  3. Plan the collaboration: Work with the PostHog AI team to determine the best approach (e.g., sending an engineer to your team for a sprint or a few sprints vs. building the feature in PostHog AI directly without your involvement, or you just being able to do it solo)
  4. Coordinate sprints: Align on timing and resource allocation if needed. This shouldn't feel like a heavyweight process - if it does, we should change it

Best practices

Resources

Contact

For questions about working with PostHog AI, ask in the #team-posthog-ai Slack channel.

Writing docs (as an engineer)

Engineering | Source: https://posthog.com/handbook/engineering/writing-docs

Product engineers are responsible for writing and maintaining documentation for their products. This page is a guide to help you do this.

Ownership

High-quality docs require the expertise and context of the engineers building them, which is why you own your product's documentation.

Docs are extra important in the age of AI. All of our docs eventually make their way into the training data of newer foundation models. The quality and accuracy of your docs directly affect how people discover your product through LLMs.

AI search is our fastest-growing channel for user signups by far. So remember to update your docs and keep them up to date.

What about the so-called Docs & Wizard team?

The can help you, but they can't write docs for everyone. They are responsible for improving the docs as a knowledge base. This means:

If you want their input, hit them up in #team-docs-and-wizard or tag @team-docs-wizard in GitHub.

They've also created a comprehensive self-serve guide on how to write product docs for you to use.

When should I start writing product docs?

Three great times to write docs:

  1. When you ship a new user-facing product or feature. Write docs for big product launches before they release (during early access or beta). Smaller features and updates can wait until after they are shipped.
  2. When you recognize a confusion or gap in users' understanding of your product. This could be based on support tickets, sales requests, or just user feedback.
  3. When you update product behavior or interfaces. Check if the docs need to be updated with new information or instructions.

Basically, if users could self-serve and use your product, but aren't, you should write docs to help them do so. Write the obvious docs before users start asking you obvious questions.

What about features behind a feature flag? If you are releasing a product to users, even a small number of them, you should write docs for it. Include what stage the feature is at (private alpha, beta, etc. is fine). This helps ensure users can successfully use it and provides the added benefit of drumming up demand from those who discover it.

What should I write docs about?

Docs should help people:

  1. Get started with your product or feature. Installing it, setting it up, and finding it in PostHog.
  2. Understand what your product does, including as complete a list as possible of its features and their details.
  3. Make the most of your product by detailing common use cases, concepts related to your product, answering common questions, and more.

Write the docs you would want to read if you were a user.

The has created a guide on how to write product docs for you to follow. It walks through how to structure and write your product docs in detail. Start there.

Where do docs live?

Nearly all our docs live on posthog.com/docs. You can find the repo to add and edit docs in the contents/docs directory of posthog.com. It uses file-based routing, so the folder and file structure is the same as the URL path. You can learn more about developing the website here.
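Because routing is file-based, the mapping from a docs file to its URL is mechanical. A hypothetical sketch of the convention (the actual site generator may handle more cases):

```python
def docs_url(file_path: str) -> str:
    """Map a file under contents/docs to its posthog.com URL path."""
    path = file_path.removeprefix("contents/")
    for ext in (".mdx", ".md"):
        path = path.removesuffix(ext)  # strip the markdown extension
    if path.endswith("/index"):
        path = path[: -len("/index")]  # index files map to the directory URL
    return "/" + path
```

For example, a file at `contents/docs/session-replay/installation.mdx` would be served at `/docs/session-replay/installation`.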

Most docs should go somewhere in your product's section. Product docs usually have sections on installation, basic set up, key features, troubleshooting, common questions, and more. Docs for platform features like SDKs, data types, and PostHog AI live in the Product OS section.

Don't know where a doc or feature should go? Ask in #team-docs-and-wizard.

What about internal docs?

If you can make something public, you should. Being open source is a core value of PostHog. We try to avoid "internal" docs as much as possible.

If it deals with private information, like security, customer data, or competitor analysis, use one of our internal repos like product-internal.

You can learn more about this in the communications handbook entry.

How do I document code?

Code should be self-documenting. If it's complicated to figure out, you probably need to make it simpler. This is especially important for APIs and interfaces that other teams will interact with.

For cases where code isn't self-documenting or easy to understand, include a README.md file in the directory that is closest to the entry point of the code.

This README should:

For an example, see the PostHog AI README.

Further reading

All-hands topic of the day

Exec | Source: https://posthog.com/handbook/exec/all-hands-topics

James presents a topic of the day each week in the company all hands. The main objective of this is to repeat and reinforce key messages:

An element of repetition is important because a) we are regularly adding new people to the team, and b) just hearing a message once is not enough for it to stick. 'Repetition' does not mean literally saying the same words over and over again - it's more about finding examples of things people are doing or working on, and showing how those tie back to the bigger picture of what's important at PostHog.

We generally avoid using the topic of the day to announce new things, as these should be done async. Sometimes he will go deeper on a recent announcement, e.g. why we cut pricing for X.

Important topics to revisit regularly

Annual planning process

Exec | Source: https://posthog.com/handbook/exec/annual-planning

This is the schedule for how we run various planning processes at PostHog throughout the year, together with explanations for what each thing is and who takes part. We intentionally keep things as light as possible, but have started introducing some slightly more structured processes around longer lead things like hiring and deciding which products to build.

Besides _very_ high level financial forecasting, we don't plan out any further than 12 months, because things change and we don't want to feel locked into something that doesn't make sense. All 12 month plans are rolling, and we update them every three months at least.

Changes to the plan can happen outside of this schedule - this is a rough guide, not a strict timetable.

| Month     | Week 1                   | Week 2                | Week 3               | Week 4                  |
| --------- | ------------------------ | --------------------- | -------------------- | ----------------------- |
| January   | Monthly accounts review  | 12 month product plan | 12 month hiring plan | Monthly accounts review |
| February  | Board meeting            |                       |                      | Monthly accounts review |
| March     |                          | Q2 goal setting       |                      | Monthly accounts review |
| April     |                          | 12 month product plan | 12 month hiring plan | Monthly accounts review |
| May       | Board meeting            | Whole company offsite |                      | Monthly accounts review |
| June      | H2 financial re-forecast | Q3 goal setting       |                      | Monthly accounts review |
| July      |                          | 12 month product plan | 12 month hiring plan | Monthly accounts review |
| August    | Board meeting            |                       |                      | Monthly accounts review |
| September |                          | Q4 goal setting       |                      | Monthly accounts review |
| October   |                          | 12 month product plan | 12 month hiring plan | Monthly accounts review |
| November  | Board meeting            |                       |                      | Monthly accounts review |
| December  | Financial forecast       | Q1 goal setting       |                      | Holidays - keep empty   |

What each meeting does

12 month product plan

What: We update our rolling 12 month plan, which tells us what products to build next. This then tells us who we need to hire to support the plan. The output of this plan feeds into quarterly goal setting (below).

Who:

12 month hiring plan

What: We update our rolling 12 month hiring plan, which tells us who we need to hire beyond the current quarter. The hiring plan lives in Pry.

Who:

Board meeting

What: Quarterly meeting to update the board on progress and talk through 1-2 strategic topics. Board packs are stored in Google Drive.

Who: , occasional guest presenter

Org tidy up

What: Go through all the small teams, make sure everyone is happy and in the right place, make any changes needed to support new products/general scaling.

Who:

Financial forecast

What: Review the 3 year financial forecast, add another year. Tweak based on past performance, then check it is realistic, and keeps us on track. The forecast lives in Pry.

Who: Fraser,

H2 financial reforecast

What: Midway through the year check that the 3 year forecast makes sense, tweak if necessary

Who: Fraser, Charles

Monthly accounts review

What: Review last month's management accounts against budget. November's accounts review happens in January, due to the holidays in December, as we typically get our monthly accounts around the 21st of the following month.

Who: Fraser, Charles

Pay reviews

What: We run these 3 times a year. Not in the calendar as the times shift year to year and we want flexibility as we grow.

Who:

Quarterly goal setting

What: Blitzscale pre-meeting, then individual teams meet to run their own processes.

Who: , then team leads

Whole company offsite

What: Hopefully somewhere warm.

Who: Everyone

Blitzscale responsibilities

Exec | Source: https://posthog.com/handbook/exec/responsibilities

This page lists which teams each Blitzscale team member is responsible for.

| Person        | Teams |
| ------------- | ----- |
| James Hawkins | PostHog AI, Signals, LLM Analytics, Conversations, Website, Code |
| Tim Glaser    | DevEx, Growth, People & Ops, Talent, Billing, Support |
| Paul D'Ambra  | Product Analytics, Analytics Platform, Web Analytics, Replay, Client Libraries, Platform UX, Query Performance (currently forming) |
| Ben White     | Batch Exports, Infrastructure, Workflows, Ingestion, Error Tracking, ClickHouse, Logs, Feature Flags, Flags Platform, Security |
| Raquel Smith  | Experiments, Managed Warehouse, Surveys, Platform Features, Modeling, Data Tools, Warehouse Sources |
| Charles Cook  | Marketing, Demand Gen, Product Led Sales East, Product Led Sales West, New Business Sales, Customer Success, Onboarding, Docs Wizard, Editorial, YouTube, Forward Deployed Engineering, IRL Events |

Meetings

Getting Started | Source: https://posthog.com/handbook/getting-started/meetings

We are anti-meeting by default.

However, while we default to written and asynchronous communication, we find that a few regular touch points where the whole team comes together on a call are useful for sharing certain types of information, strengthening our culture, and discussing more dynamic issues in real time.

We keep these minimal in terms of time expectation - no more than 2hrs total per week. They are usually scheduled around 8.30am PDT/4.30pm GMT to allow people across multiple timezones to attend more easily. We default to cameras on, it’s nice to see real faces since we don’t get many in-person moments. If you need yours off, just give the team a quick heads-up why.

You should have been invited to any relevant meetings as part of your onboarding.

Weekly schedule

The all-hands

The Monday all-hands features a few regular sections and is recorded in this document.

How to give a good demo

Demos are a great way to share what you've been working on and keep everyone in the loop. A little prep goes a long way toward making them useful and respectful of everyone's time. It also stops the meeting length from getting out of hand!

When in doubt: a short, clear demo that explains the "so what?" beats a long one that leaves people wondering why it matters.

No meeting days (Tuesdays & Thursdays)

We try to keep these days focused on deep work. Therefore, we run no planned meetings on these days.

However, speaking ad-hoc to your teammates on these days is fine - especially:

If ad-hoc meetings are regularly happening, consider improving the agenda of another regular meeting so there isn't as much context switching in people's days.

People in customer-facing roles where being on calls is a bigger part of your job don't need to stick to this as much, but please don't loop in engineers to customer calls on these days if you do by default.

Sprint planning

Each small team runs its own sprint planning meetings on whatever schedule you feel is most useful. Some teams do this on a Monday, others on a Wednesday, and sprints are usually 1-2 weeks long. We split into Small Teams for these. If a product team, your team's exec will also attend.

All sprint planning meetings are open to anyone to attend - if you are not a member of that small team, then we ask that you sit in as a non-speaking observer only.

Customer billing configurations

Growth | Source: https://posthog.com/handbook/growth/billing/customer-billing-configurations

This document outlines all possible billing configurations for customers at PostHog. The goal is to ensure the team is on the same page about the different configurations we support, so things move smoothly as we scale. These configurations need to be supported by the billing repo, dashboard, usage reports, revenue reporting, etc.

Below are the main 5 configurations we support right now. Each outlines how the Stripe accounts and billing are set up and how we account for revenue on them.

  1. Free plan customers
  2. Paid plan customers
  3. Start-up plan customers
  4. Enterprise customers (yearly credit purchase)
  5. Enterprise customers (amortized credit payment)

Legacy configuration

Note: the above list focuses on the creation of new customers going forward - there are many existing configurations not covered directly in this document.

Unsupported configurations

While we don't currently fully support these, we would like to soon:

Cross sell motions

Growth | Source: https://posthog.com/handbook/growth/cross-selling/cross-sell-motions

Problem statement

We haven't had a specific playbook/motion/plan on how to do cross sell, until now!

CS & TAM managed accounts historically have only been slightly better than average when it comes to product adoption. We can change this. We have the technology. We have the power.

Described here are some of the current goals and tactics to improve effective cross-selling measures when working with customers accompanied with specific how-to guidance.

Goals

The main objective is to get existing PostHog customers to adopt more of the platform. We firmly believe that adopting more products leads to a better experience and higher satisfaction with PostHog.

For a TAM today, a quantitative goal is to move from an average of 4.8 products adopted currently to 6 products adopted for AM managed accounts over the next two quarters. Accounts may be promoted to CSM coverage and continue with the adoption plan.

AEs and CSMs goals might be polar opposites. Where AEs may want wide exposure, it's also important to establish the right time for product adoption, rather than overselling something that may not apply to the initial implementation.

CSMs are not here to push new products and features; CSMs are here to ensure customers successfully use PostHog and get the most value for their business. Remember, we want to help our champions look like heroes at their companies!

Results

Cross-selling has clear expected outcomes:

  1. Increase Revenue / product usage
  2. Increase stickiness
  3. Offer real value of the "platform" to users

Measuring success

Successful expansion strengthens customer relationships and increases account stickiness. Each additional product that delivers value makes PostHog more integral to the customer's operations. There are a number of things we can look at that deliver these results.

Since we have different folks at different stages of ramp and onboarding, instead of making these metrics flat or percentage based, we are looking for an increase quarter over quarter.

Accounts that are a good fit

As a part of this process you are determining whether they are a good opportunity for cross-sell motions and if they are representative of an ideal candidate for growth. Here are some key qualities we've found to date:

Optimal timing for discussions

Why Now? All good opportunities have this and are timeline driven.

Ideal moments:

Times to avoid or pause cross-selling:

Optimal timeline if a customer is on an annual contract:

Hypothetical approach

One way of approaching this that we have seen work is a research -> QBR -> recommended cross-sell flow:

  1. Account research and understanding the business
  2. Some sort of engagement (QBRs?) to understand business priorities and tie them to PostHog
  3. Make specific recommendations around what to adopt and how it will help with business priorities

For example, a B2C SaaS customer has a business model selling subscription plans. Dig in to understand the differentiators of the plans and review their custom events to ensure they are collecting the appropriate data. Come to the QBR ready to discuss the particulars of their situation. You may or may not have the info you need to make a recommendation on the call, but at the very least, you should have a direction to suggest. You could recommend customer/revenue analytics, experiments for plan adoption, and surveys for user feedback given what you know about their business model.

This doesn't always need to be a formal QBR process. Some form of research -> discovery/interaction/recommendation is the basic flow here.

The why evolve framework for cross-selling

  1. Document Results
  2. Highlight Evolving Pressures
  3. Share Hard Truths
  4. Emphasize Risk of No Change
  5. Describe Upside Opportunity

The new cross-sell motions playbook

You've been here a while and just want the script. We get it. The following sections describe the actual approaches that fit well within PostHog for a cross-sell motion, and can be pitched grouped by feature or by user needs.

Remember that where possible, we're providing solutions and outcomes rather than features. Today we have clear examples with the Error tracking product where customers have found success, with more direct playbooks in the works.

Common cross-sell and expansion paths: Adoption paths are good to frame products as point in time or natural progression to their implementation. For example:

Here are some other known examples that aren't necessarily 0 to 1 linear adoption, or working backwards from what a customer is using outside of PostHog:

Bundle "Features"

Bundling is another good way to position products by customer type and stage. The following product stacks match certain types of user needs with value.

The "early stage growth" stack

Products: Analytics + Session Replay + Surveys Value story: "You know what users do, see how they struggle, and can ask them why" Ideal for: B2C companies with conversion optimization focus

The "ship fast without breaking" stack

Products: Feature Flags + Error Tracking + Experiments Value story: "Roll out safely, catch issues instantly, measure impact scientifically" Ideal for: High-velocity teams with continuous deployment

The "revenue optimization" stack

Products: Analytics + Experiments + Revenue Analytics (via Data Warehouse) Value story: "Track user behavior, test pricing changes, measure revenue impact" Ideal for: B2B businesses focused on LTV/CAC

The "vibey AI startup" stack

Products: Analytics + Flags + LLM Analytics + Error tracking Value story: "Tie user behavior to run cost, launching features that are both user requested and revenue generating" Ideal for: AI-focused startups optimizing for cost efficiency and user engagement

What to look for when cross selling

We've already seen general indicators that are worth paying attention to for cross-selling success. Here is an expanded list, with what to do next for each.

  1. New PostHog product launch - did we launch a product that is a good fit for their use case? Did we add a new data pipeline source or destination?
     * Reach out with details on the new product
     * Offer to credit them back their first month of usage so they can try it out risk free
  2. Raising funding - did the customer just raise money?
     * Congratulate the founder on the raise
     * Lead with a product that can capitalize on their opportunity to bring in more revenue / usage, e.g. if they are B2C, pitch experiments to maximize conversion
  3. PostHog price change - did we change pricing to make adoption more palatable?
     * Let your main point of contact know how much they will save with the new pricing if they currently use the product
     * If they don't, suggest adoption based on the new rate and offer credits to offset the learning curve
  4. Revenue increase - is the customer seeing an increase in revenue?
     * Depending on how you know about it, either congratulate them (or don't)
     * Recommend a product that would capitalize on that revenue, e.g. error tracking to clean up issues, or feature flags to launch new user features
  5. New customer product launch - is the customer launching a new product that could benefit from additional PostHog goodness?
     * Check out the product yourself (if applicable)
     * Congratulate them on the new launch
     * Suggest products that would help with the success of the new launch, e.g. surveys for feedback, feature flags for new features
  6. Competitor drops (or lacks) SDK support - does a competitor lack critical support or have they dropped support?
     * Reach out proactively to the main technical contact if there is overlap
     * Mention our support (and the competitor's lack of it)
     * Send any pertinent docs
     * Follow up regularly with status updates and additional resources
  7. Eng/marketing hiring - is the customer hiring more technical roles? Could we spot this through LinkedIn?
     * Prep PostHog onboarding for the new user
     * Offer a call / support for getting them up to speed
     * Suggest products that make the new hire's life easier, e.g. error tracking to figure out where the gremlins are
  8. New users from other business units - are we aware of / seeing people from other parts of the business asking about (or even using) PostHog?
     * Make note of who the new users / units are
     * Ask for a warm intro from the current main point of contact
     * Reach out 1:1 to new users to get feedback / offer help
  9. Customer expanding into new geography / territory - is the customer moving into a market they weren't previously in?
     * Ensure they are capturing the correct custom events / properties
     * Pitch products that help with differentiating the location experience, e.g. feature flags for unique features based on GeoIP
  10. When an owner leaves PostHog or a new owner is added - is the new owner open to other products that can help solve the problems they care about?
     * Reach out to the new owner to understand their priorities
     * Revisit any products that were previously suggested to the other owner
     * Offer credits for adoption of the new product
  11. Shift in customer business model - is the customer introducing a new type of subscription, going from on-prem to cloud, or changing their fundamental offering?
     * Dig in to understand the changes
     * Suggest flags / experiments as a good way to get feedback / modify the experience for the new model
Alerts

What alerts would be helpful to have as indicators of good cross-sell opportunities? Continue to question what would be useful to follow in order to positively influence timing.

  1. Could we use our PostHog to flag when an account's revenue is increasing on their end? (not spend with PostHog, but their actual revenue)
  2. Could we use signals in Vitally / PostHog to notify about new power users?
  3. Could we get an alert when an account tries a new product for the first time?

Discovery through conversation

Effective discovery focuses on understanding customer challenges rather than pushing products.

Example questions to ask

The questions below are designed to spark thoughtful conversations with customers. They help uncover how teams are currently solving problems and whether there might be simpler or more effective ways to do so using PostHog.

Use these questions in preparing for calls and use them as examples for developing your own questions. Each includes the question, the pain revealed, and the PostHog advantage.

High-value discovery questions for upsell/cross-sell:

These questions will naturally surface use cases for session replay, feature flags, experiments, and other products. We should also identify opportunities programmatically through the other data sources we have to supplement the conversation approach. It's not a recommendation to ask each and every one of these questions on a call. These are simply a guide and an example of the types of questions that will help surface opportunities.

Error tracking

| Question | Pain Revealed | PostHog Advantage |
| --- | --- | --- |
| When an error occurs, how easy is it for you to see exactly which user actions led up to it and how it affected the experience? | Debugging often relies on reproducing the error. | Error Tracking tied directly to replays makes root cause and impact obvious. |
| If you’ve built your own error tracking, how much effort goes into maintaining and correlating it with analytics? | Time wasted maintaining infra, blind spots in analysis. | Lightweight SDK that's tightly integrated with other products. |
| How do you decide which errors to fix first? | Prioritizing by gut feeling or frequency, not business impact. | Error Tracking + Product & Revenue Analytics can show which errors have the greatest impact. |

For more recommendations, look at the Error tracking motions

Session replay

| Question | Pain Revealed | PostHog Advantage |
| --- | --- | --- |
| When debugging, how often do you rely on logs or secondhand reports to reconstruct what happened? | Time lost piecing together events. | Session Replay shows exact user journey, reducing guesswork. |
| How do you confirm if a bug is isolated or widespread across users? | Hard to prioritize fixes without scope clarity. | Replays + analytics show impact. |
| How do you identify user friction today? | Lacks visibility into real interactions without PM background. | Session Replay gives direct user perspective for product calls. |

Feature flags

| Question | Pain Revealed | PostHog Advantage |
| --- | --- | --- |
| When launching a new feature, how do you manage the risk of rollouts failing? | “Big bang” releases increase risk + stress. | Feature Flags enable safe, gradual rollouts & rollbacks. |
| How do you measure whether users actually engage with a feature once it’s enabled? | No feedback loop between rollout and usage metrics. | PostHog connects flags directly to analytics & experiments. |
| What’s your process for debugging an experiment if users drop off unexpectedly? | Experiments may fail without clarity on root cause. | Session Replay + Error Tracking pinpoint where the experience broke down. |
| How do you currently measure the business impact (e.g., revenue, retention) of an experiment? | Results limited to engagement metrics, missing real business outcomes. | Revenue Analytics + Product Analytics + Data Warehouse show both engagement and business impact. |

Customer/Revenue analytics

| Question | Pain Revealed | PostHog Advantage |
| --- | --- | --- |
| How do you measure the direct revenue impact of your features? | Work disconnected from business outcomes. | Revenue Analytics ties feature usage to revenue & LTV. |
| How do you weigh roadmap decisions against revenue impact today? | Guesswork in prioritization. | Revenue Analytics reveals which features drive business outcomes. |

LLM analytics

| Question | Pain Revealed | PostHog Advantage |
| --- | --- | --- |
| When your LLM-driven features underperform, how do you pinpoint why? | No clear visibility into model errors or user friction. | LLM Analytics shows usage, performance, and cost data together. |
| How do you know which LLM features are helping vs hurting users? | No clear way to measure LLM impact on user behavior or business outcomes. | LLM Analytics + Session Replay shows which interactions drive value vs cause drop-offs. |
| How do you evaluate your LLM analytics in the context of broader product goals? | Standalone tools miss product context. | Integration ties LLM performance to actual product outcomes. |

Surveys

| Question | Pain Revealed | PostHog Advantage |
| --- | --- | --- |
| When analyzing survey responses, how easy is it to connect them to specific user behaviors or outcomes? | Responses are siloed, making it hard to correlate feedback with analytics or events. | Surveys integrate natively with Product Analytics and Session Replay, linking responses to user journeys and metrics. |
| How do you target surveys to the right users without manual segmentation or guesswork? | Less targeted surveys lead to low relevance and response rates. Custom targeting requires dev time. | Display conditions use cohorts, feature flags, and events to show surveys only to specific users, with built-in response limits. |

Expansion within existing product usage - and up-selling

It's worth calling out a question again: are we selling more of the thing, a more expensive thing, or a new thing? Cross-sell and expansion opportunities can have significant overlap in product plays.

If we're planning expansion, the best way to do this is to replicate usage of an existing product with _new_ teams at the same company. This is more straightforward conceptually, but may be harder to execute because you're likely to be starting with a new team from scratch.

You may want to consider expanding usage of the same product within the same team if there is obvious scope to do so here. This can also be difficult as it depends on the individual success and growth of their product, which you can't control.

Trial/Evaluation incentives

If we want customers to use more products, we should incentivize new product adoption. This could be in the form of credits for a specific timeframe to cover adoption and usage of the specific product. For example, if a customer wants to try out data warehouse, we offer 2-3 months of credit for any data warehouse usage as they figure out how they would use it and where it provides additional insight.

We have opportunities to get creative with how we incentivize new product adoption with users. A few ideas are:

Error tracking cross sell

Growth | Source: https://posthog.com/handbook/growth/cross-selling/error-tracking-cross-sell

Identifying Error Tracking cross-sell opportunities

Our first example here is for cross-selling Error tracking, which generally has the below requirements. Feel free to copy this as a format for other bundle motions where applicable.

Product specific pre-reqs

Motion

  1. Qualify whether an account is suitable for this motion by understanding how they detect, prioritise, and fix errors today. If your stakeholder can't answer or isn't interested, find another stakeholder. If the answer is that they don't have a tool to do this, that they don't link their errors to impact on key user actions within their app, or that they prioritise based on error volume or another metric that is equally uncorrelated with impact on UX, this is a good opportunity for this motion.
  2. Suggest that they turn on exception capture for the relevant environment, with a billing limit or free trial set so there is no cost. In exchange offer to find the impactful errors they are missing and help them move towards a UX based methodology for error prioritisation by combining errors with PostHog product analytics data.
  3. Create dashboards of the new error data that correlates errors with drop-offs in conversion events (signups, checkouts, whatever is relevant here)
  4. Share your analysis as a Loom or other low time commitment format for review, emphasising the uplift in conversion events if these errors were prioritised. If required present these findings back to the stakeholders.
  5. If required, help your stakeholder build a business case for the additional spend by linking the missed conversion events to value. For example, if the average LTV of a signed-up user is $50, multiply the drop-off in sign-up events by 50 to get a rough ROI of finding and fixing these errors.
  6. Pitch this value as a reason to remove the billing limit and expand usage of error tracking.
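The business-case math in the steps above can be sketched as a quick calculation. The event counts and LTV below are made-up numbers for illustration, not real customer data:

```python
# Rough ROI of fixing an error, per the motion above.
# All inputs are hypothetical examples.

def error_fix_roi(monthly_dropoff_events: int, avg_ltv_per_signup: float) -> float:
    """Estimate monthly revenue lost to an error blocking sign-ups."""
    return monthly_dropoff_events * avg_ltv_per_signup

# e.g. 120 lost sign-ups a month at an average LTV of $50
lost_revenue = error_fix_roi(120, 50.0)
print(f"Estimated revenue at risk: ${lost_revenue:,.0f}/month")  # → $6,000/month
```

The point of the sketch is that the pitch reduces to two numbers the customer already believes: their own drop-off count and their own LTV.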

Product usage signals

Customers don't always ask for Error Tracking directly, but their usage patterns can indicate a potential need. When you review customer accounts and chat with users, look for these signals:

Chat with users

Engage with users. Look for cues that signal gaps that Error Tracking can fill:

Website signals

Discovery questions

When reviewing accounts, ask:

Demonstrate the value

Once you've identified customers who'd benefit from Error Tracking, show them value in ways relevant to them.

Product Analytics

A few good starting points:

  1. Track error trends over time: Create a trends insight for $exception events and create alerts when errors hit specific thresholds. You can get both historical trends and real-time notifications on high-impact exceptions to prioritize engineering work.
  2. See if errors are affecting conversion: Combine errors with funnels to figure out if drop-off is happening because of errors – especially if errors are blocking users from getting through critical flows. You can tie this to customer lifetime value to show potential revenue loss. This is also useful for experiments - you want to make sure your variant didn't underperform because of a bug rather than the actual feature you're testing.
  3. Measure retention impact: Track whether users who hit errors come back less frequently.

For all of these, you can layer on data like $exception_types, $exception_values, or $exception_sources to figure out which errors are most common and how they're impacting users.
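As an illustration of that kind of breakdown, here's a sketch that ranks exceptions by how many distinct users they affect. The events are invented sample data and the property key is illustrative; in practice you'd run this as a trends insight or HogQL query in PostHog:

```python
from collections import defaultdict

# Hypothetical exported $exception events (in PostHog you'd query these
# directly rather than exporting them).
events = [
    {"distinct_id": "u1", "$exception_type": "TypeError"},
    {"distinct_id": "u2", "$exception_type": "TypeError"},
    {"distinct_id": "u1", "$exception_type": "NetworkError"},
    {"distinct_id": "u3", "$exception_type": "TypeError"},
]

# Count distinct affected users per exception type.
affected = defaultdict(set)
for e in events:
    affected[e["$exception_type"]].add(e["distinct_id"])

# Rank by user impact, not raw event volume.
ranking = sorted(affected.items(), key=lambda kv: len(kv[1]), reverse=True)
for exc_type, users in ranking:
    print(exc_type, len(users))
```

Ranking by distinct users rather than raw event count is exactly the shift from "prioritise by volume" to "prioritise by impact" that the motion argues for.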

Session Replay

Session Replay and Error Tracking work wonderfully together – probably the strongest integration we have. You can watch recordings of what users are doing in your app and get clear signals of errors they're hitting. You can search for specific events, jump straight to a given issue, and see what happened before and after – all of which provide valuable context for debugging.

When viewing a session, use the "Only show matching events" toggle to filter by exception-related events. You can use $rageclick to identify UI frustration that correlates with errors – this helps highlight silent frustrations users are experiencing that otherwise aren't communicated. You can also create dynamic cohorts of impacted users and take action on them.

Other use cases

PostHog vs other error tracking

Historically, error tracking is something only engineering teams use. With PostHog, there's deliberate value for other teams. For example, marketing can figure out why conversions dipped and look at Session Replays tied to errors. This is incredibly valuable to quickly identify blockers. Other error tracking tools might give you clarity on bugs and errors, but PostHog gives you the complete picture of the user journey.

Common blockers

“This increases costs that we didn’t budget for”

We should proactively give credits so customers can trial a new product. For example:

“My champion doesn’t make decisions on this product”

You should first try to build a relationship with the persona that will be users of the product. For error tracking, this will be engineers who work on areas where exceptions are critical (link to persona page).

Ask your champion how they are currently tackling the common use cases. Identify team members you want an introduction to and ask your champion for a warm connection. You can position it as a learning opportunity, asking for feedback, or a pitch (if you have a really strong understanding of the specific value-add). Help your champion with the internal pitch.

For error tracking, these questions are helpful to start the conversation:

If you’re not sure who the persona should be, ask the product team!

“I don’t have the resource or time to implement error tracking”

Position implementation as simple, especially if you’re asking your customer to try out a product for the first time. This is where you shine as a technical success person. Help your customer cut through the cognitive load of figuring out implementation.

Error tracking can be implemented with one click, or two lines of code (depending on the SDK). Hyperlink to the project settings to enable exception autocapture or share the snippet addition for the SDK they’re using. Follow up with a rough plan that is tied to their needs, such as:

  1. Enable exception autocapture – see events flow through
  2. Assess the errors, issue groupings – decide if you want to customise default properties so you’re getting higher quality signals
  3. Work with errors - update the status, view stacktraces, watch session replays and assign to teammates
  4. Set up alerts

You can also create dashboards to help your customer understand the value of the product.

Action items

Technical recommendations

How we upsell and cross-sell

Growth | Source: https://posthog.com/handbook/growth/cross-selling/how-we-upsell-and-cross-sell

How to cross-sell and upsell additional PostHog products

Cross-selling is a primary focus across all growth oriented teams. In fact, "cross-sell" is mentioned here as many times as success... applying to customer-facing roles including AEs, CSMs, and TAMs.

Success at PostHog comes from identifying genuine customer needs and demonstrating how additional related products solve real problems. The never-ending objective is helping customers extract more value from PostHog, which naturally leads to increased product adoption.

Equally important is the cross-selling exposure that comes from teams such as product and marketing. If the product, brand, onboarding, and what we are telling customers day to day are inconsistent, we're going to have a bad time.

What does cross-selling mean?

The descriptions below cover the overall who/what/why, and we will evolve the specific motions people have found useful to cover the how/when. As a baseline, we will use these general definitions, with PostHog-specific context following:

There are many ways a customer can signal or be primed for growth. We will lay them all out here; they may come in the form of usage, the customer voicing it directly, or you introducing the right products at the right time.

Wait! What should the relationship look like before you attempt cross-sell?

The relationship comes first.

Leading with a cross sell motion is a bit like coming out the gate with offers related to contract billing terms and credits. While there are some limited circumstances where this makes sense, we should almost always start by focusing on the relationship.

The best way to build that relationship is to help the customer. That could be leaning in on a support ticket, offering recommendations around getting more out of PostHog, reducing spend, or even helping clarify docs.

Customers genuinely like PostHog, so engage them like a friendly acquaintance. Being hyper responsive to requests is a great way to build up good will. Another way is to own problems and follow them through to conclusion. Even if support is taking the lead, stay engaged and tie up any loose ends.

Here is a general checklist that should be met before putting together a plan to cross-sell. Specific product motions may have additional pre-requisites.

Growth best practices

Do:

Generally, this is all to provide them with a solution that will make their life better (and make them look better!). It’s a win-win.

Don't:

We are friends, now what?

As you build a relationship with the account, learning about who they are, how they make money, and what they care about should naturally happen. Even so, you may need to dig in further, especially if their business is complex. Doing everything you can on your end to understand their business before asking business questions is another way to establish your expertise and build that good will.

There's a balance here: any time you put additional burden on your champion or a stakeholder, you are less likely to help them achieve a positive outcome, for us or for them. This is common, as additional products require additional work to implement.

Then, opportunities!

Identifying growth opportunities

We use a combination of proactive outreach, insights, and automated alerts in tools such as Vitally to identify cross-sell opportunities. Below are some examples and we will go in more detail on specific motions.

You can use these signals which are documented on health-tracking alongside regular customer interactions to prioritize outreach.

The best opportunities connect products to customer outcomes using their terminology and context.

Example signals of cross-sell opportunities

Strong expansion signals

How to run a cross-sell process

You made it here! You have the relationship, and you have the hunch (clear signals) that a customer is a good fit for cross-sell. Let's put it into standard practice by following and building upon cross-selling motions.

Here's a taste of what follows:

Pro tip - if a customer isn't using a PostHog product and there is no obvious reason why they shouldn't, ask them directly why they're not using it!

Tracking cross sells

Growth | Source: https://posthog.com/handbook/growth/cross-selling/tracking-cross-sells

Cross-sell opportunity tracking

TAMs create Salesforce opportunities for cross-sell deals they're actively working. This is how we measure whether TAMs are driving multi-product adoption vs. benefiting from organic growth, and how we learn which motions actually work.

This is measurement only - it doesn't change how commission works.

What counts as a cross-sell opportunity?

All four must be true:

If a customer spontaneously starts using and paying for a product, you don't need to retroactively create an opp. This is for intentional motions only.

How to create a cross-sell opportunity

Use the existing Salesforce record type: New revenue - existing customer

Opportunity stages

| Stage | What it means |
|-------|---------------|
| Discovery | Identified use case, talking to stakeholders about the problem |
| Demo | Showing them the product, connecting it to their specific needs |
| Trial | Customer is actively testing the product (see trial guidelines below) |
| Closed Won | Customer is paying ≥$100/month on the product |
| Closed Lost | Didn't convert - document why |

Not every deal needs every stage. If a customer already knows the product and just needs help getting started, skip to Trial. The stages exist for tracking, not bureaucracy.

When closing an opp (won or lost), do it manually even though Vitally may auto-close goals when a revenue threshold is met. Consciously closing the opp shows you're on top of the account and creates a clean intent-to-outcome link in our data.

Trial guidelines

If you're giving a customer extended time or capacity to try a product before paying, use one of these approaches:

Option A: Billing limit

Option B: Trial

Either way

What this is NOT

What we're measuring

Cross-sell metrics are tracked in the sales growth review. After each quarter, we should be able to answer:

  1. How many cross-sell opps did TAMs create?
  2. What was the win rate?
  3. Which products had the highest/lowest conversion?
  4. What was the average deal size?
  5. What was the average cycle time (discovery → closed)?
  6. What reasons are we seeing for Closed Lost?

Growth reviews

Growth | Source: https://posthog.com/handbook/growth/growth-engineering/growth-sessions

Now that PostHog has found product market fit, we hold regular growth reviews where we plan work to accelerate our growth and drive revenue. These sessions are only worth running _after_ you've found product market fit because until then you need to focus on building a solution users feel they really need. Once you've found product market fit, growth sessions can help you optimize.

We've established a successful pattern for running these meetings every four weeks and actions we've taken from them have led to some significant increases in monthly revenue growth.

Attendees

We find it's important to bring a mixture of technical people, and those with wide context of what the business is working on. Regular attendees include...

Running a growth review

Before the meeting

Examples:

During the meeting

After the meeting

Growth session template

## Agenda

* 10 mins to read through / add or edit hypotheses
* 20 mins to review session recordings
* 4 hours to work through hypotheses and to create actions

## Hypothesis

Why aren't we growing faster?

*

## Actions

*

Relevant docs

Per-product activation

Growth | Source: https://posthog.com/handbook/growth/growth-engineering/per-product-activation

Because PostHog offers so many products, and people sign up with all sorts of different needs, we track activation separately for each product.

Every product should have activation criteria - these are used to determine if a user has activated for a specific product yet. If they haven't, and they've shown intent for that product, we can nudge them in the right direction. These are also used to understand what retention looks like for the product, and to figure out what PostHog can do to offer a better experience!

How we track activation, and how to set up an activation query for a new product

This is the basic structure of our activation queries:

  1. An organization triggered a 'product intent' -> This is the 'upfunnel' metric
  2. An organization met the 'activation criteria', usually one, multiple, or a set of qualifying events in a given time period (e.g. 14 days) -> This is the 'downfunnel' metric
  3. An organization triggered an event correlating with product usage 3 months after they showed product intent -> This is the retention / survived metric

Here is an example structure (shown as an image in the live handbook).

You can find all per-product activation queries on <PrivateLink url="https://us.posthog.com/project/2/dashboard/130345"> this dashboard</PrivateLink>.
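The three-step funnel above can be sketched as a small computation. The org flags below are invented sample data; the real queries run over PostHog events, as described in the SQL section further down:

```python
# Hypothetical per-organization funnel flags: did they show product intent,
# meet the activation criteria, and still use the product 3 months later?
orgs = [
    {"intent": True, "activated": True,  "retained": True},
    {"intent": True, "activated": True,  "retained": False},
    {"intent": True, "activated": False, "retained": False},
    {"intent": True, "activated": False, "retained": False},
]

upfunnel = sum(o["intent"] for o in orgs)                    # product intents
downfunnel = sum(o["intent"] and o["activated"] for o in orgs)  # activated
# Only orgs that both activated AND retained count as survived.
survived = sum(o["activated"] and o["retained"] for o in orgs)

print(f"intent → activation: {downfunnel / upfunnel:.0%}")   # 50%
print(f"activation → survived: {survived / downfunnel:.0%}")  # 50%
```

The same intent → activation → survived shape holds for every product; only the qualifying events change.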

Picking the right activation criteria

The ideal activation metric strikes a balance: enough companies should reach activation (so it's not too restrictive), while those who activate should have high retention (so it's not too easy). To find a couple of potential definitions, you want to look at product usage and think about what behavior could correlate with successful activation (aka the "aha-moment"). This could be things such as:

  1. Has done a key event once (such as launched an experiment)
  2. Has done a key event multiple times (such as analyzed 2 insights)
  3. Has done a combination of key events (such as watched 5 recordings, and set a recording filter)

To pick the best activation definition, it's recommended to write the activation queries for multiple potential activation definitions (~5-10), and compare the activation and retention numbers. This leads to much higher confidence in the activation metric than just picking your best guess.

Which definition is the best indicator of long-term retention? You want to pick a definition that gets a sizable number of organizations to activate, but also to retain. But be careful: if you pick an activation definition where only 1% activate, and 100% of those 1% retain, your activation metric is too narrow!
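That comparison can be sketched as follows. The candidate definitions and rates are made-up numbers purely to illustrate the trade-off; the real comparison comes from running the activation queries:

```python
# Hypothetical results for three candidate activation definitions.
candidates = {
    "launched 1 experiment":       {"activation_rate": 0.40, "retention_rate": 0.20},
    "analyzed 2 insights":         {"activation_rate": 0.25, "retention_rate": 0.55},
    "watched 5 recs + set filter": {"activation_rate": 0.01, "retention_rate": 1.00},
}

def usable(d, min_activation=0.05):
    # Too-narrow definitions (almost nobody activates) are rejected even
    # if their retention looks perfect.
    return d["activation_rate"] >= min_activation

# Among usable candidates, prefer the one where the most intent orgs
# end up both activated and retained.
best = max(
    (name for name, d in candidates.items() if usable(d)),
    key=lambda n: candidates[n]["activation_rate"] * candidates[n]["retention_rate"],
)
print(best)
```

Here the "watched 5 recs + set filter" definition is discarded despite perfect retention, because a 1% activation rate makes it the too-narrow case the warning above describes.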

Note on the retention / survived definition: For this, it's recommended you pick whatever tells you they are an active user. It can be the same as your activation definition, or something a bit simpler, as long as it is closely related to the user actually using the product (e.g. in replay, activation is currently defined as analysing 5 recordings AND setting a filter, usage is simply defined as having analysed one or more recordings).

If you haven't already, make sure you also track product intents for your product. It's worth noting that adding new product intents will impact your activation rates (e.g. an existing user intent might be stronger or weaker than an onboarding intent). If you are comparing activation rates historically, it might be worth filtering for intents that rarely change, such as "onboarding product selected".

Read this blog post for a deep dive into how we first came up with our activation definitions.

Structure of the SQL query

Our activation SQL queries consist of two parts: a materialised view to count the eligible events, and a SQL query on top of the materialised view to compute the conversion percentages. We use materialised views to make these queries more performant.

We store the activation logic in SQL queries and not in code to make it easier to see our activation definitions, to experiment with new definitions, and to drill down to understand why a certain bucket might not perform so well.

The following activation logic is stored in the materialised views:

  1. Count only the first product intent per organization (since product usage intents can be triggered multiple times by the same org), as well as filter out cross sell product intents
  2. Check if an organization meets the activation definition within 30 days of showing product intent
  3. Check if an organization meets the retention / survived definition within 3 months of showing product intent

Here is an example <PrivateLink url="https://us.posthog.com/project/2/sql?open_view=01966c82-9958-0000-7959-1728ad7dd6d4"> materialised view query</PrivateLink>. To write your own, we recommend copying the query and changing the product & event filtering criteria as needed.

The following logic is stored in the SQL query:

  1. Check if an organization is both activated AND retained to be counted in retention / survived
  2. Calculate the conversion percentages from product intent -> activation -> retention / survived

To write your own, we also recommend copying one of the existing queries. All our activation queries follow the same structure, which we should also follow for new products. Once you've found a good definition of activation for your product, please do add the final activation query to this dashboard.

Tracking activation in the code

We use SQL queries to analyze activation. In addition, we track product intents and activation in the code. We do this so that in the future we can act on it - e.g. someone showed intent, but they didn't activate? Show them an in-app banner or send them an email.

To add a new product to this, you can add the activation criteria in the product intent model.

This code is run every time an intent is updated. For example, if the activation criteria is "save 4 insights", and we send a product intent every time someone clicks "new insight", we'll also check at that time if they have 4 insights saved, and if so mark them as activated.
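A minimal sketch of that check, using the "save 4 insights" example. The function and model names here are hypothetical stand-ins, not the actual PostHog codebase:

```python
from dataclasses import dataclass

# Hypothetical stand-in for the product intent model described above.
@dataclass
class ProductIntent:
    product: str
    activated: bool = False

def saved_insight_count(team_id: int) -> int:
    """Stub: in real code this would query the team's saved insights."""
    return 4

def on_intent_updated(intent: ProductIntent, team_id: int) -> None:
    # Re-check the activation criteria every time the intent fires,
    # e.g. "save 4 insights" for product analytics.
    if intent.product == "product_analytics" and not intent.activated:
        if saved_insight_count(team_id) >= 4:
            intent.activated = True

intent = ProductIntent(product="product_analytics")
on_intent_updated(intent, team_id=1)
print(intent.activated)  # True, since the stub reports 4 saved insights
```

The key design point the sketch shows: activation is evaluated lazily on each intent update, rather than by a separate batch job.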

Why does this matter?

Tracking activation is important because it tells us how many companies start using our products successfully each month, and how many of them are retained. Measuring it month over month allows us to see trends, and whether improvements to the product actually made a difference.

If the activation metrics look good, it gives us the peace of mind to focus on new feature development. But if they trend downwards, it's probably a good time to look into our onboarding and "first time user" funnels to see in which areas our UX can be improved.

Product intents

Growth | Source: https://posthog.com/handbook/growth/growth-engineering/product-intents

Because PostHog offers so many products, and people sign up with all sorts of different needs, we track activation separately for each product.

To learn more about what activation is and how we measure it, check out this blog post.

To make sure we're measuring activation properly, we need to know when someone is interested in using a product. If we put everyone who signs up into the activation funnel for each product - without even knowing if they are actually interested in it - then we'd end up with a super large top of funnel, murky metrics, and a dismal activation rate for all products.

So instead of just putting all people into each activation funnel, we try to identify people who are interested in using a product. In other words, we identify people who show _intent_ to use a product, and we record these people (and the moment the intent happened) as events. These types of events are called _product intents_. The product intents mark the top of the funnel for each product's activation funnel.

Product intents are:

When should I start capturing product intents?

As soon as your product is in any sort of public beta, you should start tracking product intents. This is _not_ because you should be hyper-focusing on your product's activation numbers at this stage - instead, it is so that we can start collecting data for later on, when we want to determine a good activation metric.

So, collecting product intents should be a precursor to any sort of public release.

What makes a good product intent?

People click around in the UI a fair amount, so generally you want to find something sufficiently deep, or something that happens multiple times, before saying someone has shown intent. Here are a few examples:

Is the onboarding product intent good enough for my product?

Nope. Lots of people join PostHog with a single product in mind - and then later realize that we offer other products they also want to use. Each product should record product intents somewhere past onboarding, so it isn't missing out on data about these post-signup customers.

How do I use these product intent things?

Generally we've made the plumbing such that recording these product intents is quite easy.

  1. Figure out where you think the product intent event should happen.
  2. When someone clicks that button / views that page / does that thing, then simply call addProductIntent in the teamLogic. That fires off an API request that records the product intent in the database and sends the event for you. You don't need to send the event yourself - it's all handled.

It's worth noting that adding new product intents will impact your activation rates (e.g. an existing user intent might be stronger or weaker than an onboarding intent). If you are comparing activation rates historically, it might be worth filtering for intents that rarely change, such as "onboarding product selected".

Cross sells

As well as understanding what actions users take when trying out a product, it's also useful to encourage users to try out other products that would be helpful for them. If you are using product analytics for example, session replay is a really helpful way to understand why a metric is what it is. If you are creating an onboarding funnel to understand your conversion, running an experiment to improve that conversion would be helpful.

We track cross-sells within the product using the same product intent framework. There is a helper for this in the teamLogic called addProductIntentForCrossSell, which you can use to track cross-sells. You can find these in analytics using the usual product intent event (user showed product intent) and filtering by type=cross_sell.

Why does this matter?

It's important that we understand whether people who are trying to use our product are actually successful in doing so. This is a likely imperfect, but better-than-nothing, way to do that. If people aren't having the success we'd expect for a mature product (i.e. no large feature gaps with competitors), then we should probably look into why - and this gives us a cohort of people to examine, talk to, and track.

Giving credits to customers

Growth | Source: https://posthog.com/handbook/growth/revops/credits

Sometimes we might want to offer a customer one time credits to cover an upcoming invoice, for example when accommodating a trial for a new product or offering compensation for a recent incident. Here’s how to do that.

Things to keep in mind

Lead assignment tracker

Growth | Source: https://posthog.com/handbook/growth/revops/lead-assignment-ooo

The Lead Assignment Tracker in Salesforce is the source of truth for who's in the round robin, how leads are weighted, and how to manage assignments. This page explains how to use it self-serve.

Understanding the tracker

Navigate to the Lead Assignment Tracker section in Salesforce. There you'll see every person who's part of the round robin, along with the following columns:

User, Role, Territory: These identify who the person is, whether they're a TAE or TAM, and which region they cover.

Priority: This column controls how many leads a person receives relative to others. The default value is 1. If you set someone's priority to 3, it means for every 3 leads the rest of the team receives, this person gets 1 — so a higher number means fewer leads. Use this if you want to throttle lead volume for a specific person (for example, if they're ramping or handling a reduced workload).

Manual Leads Adjustment: This column lets you calibrate a person's total so the round robin stays fair. The most common use case is adding someone mid-month: by the time they join, others in the same region may already have leads assigned to them (let's say 50). Without an adjustment, the system would send that new person a flood of leads to "catch up". To prevent this, add 50 to their Manual Leads Adjustment column to bring their baseline in line with the rest of the team, so the round robin distributes fairly going forward. This column is also used to rebalance after time off (see below).

Is Active: This checkbox controls whether someone is included in the round robin. Uncheck it to temporarily exclude someone, for example if they want to pause new lead intake outside of a scheduled vacation. Check it again when they're ready to receive leads.

Note: For planned time off, there's automation in place that handles toggling people on and off based on their calendar. The Is Active column is mainly for cases outside of that — like when someone's OOO isn't on their calendar, or they want to pause for another reason.
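The columns above might combine into a weighted round robin along these lines. This is an illustrative Python sketch; the exact order in which Salesforce applies Priority and Manual Leads Adjustment is an assumption:

```python
def next_assignee(people):
    # Pick the active person with the lowest priority-weighted total.
    # Priority scales the effective count (priority 3 => roughly 1 lead for
    # every 3 the rest receive); Manual Leads Adjustment raises the baseline
    # so mid-month joiners aren't flooded to "catch up".
    active = [p for p in people if p["is_active"]]
    return min(active,
               key=lambda p: (p["leads"] + p["adjustment"]) * p["priority"])["user"]
```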

Adding a new person

  1. Click New in the Lead Assignment Tracker
  2. Select the Salesforce user you want to add
  3. Select their territory from the multi-picklist
  4. Set their role (TAE or TAM)
  5. Add a Manual Leads Adjustment if they're joining mid-month (see above)
  6. Click Save

That's it, they're automatically added to the round robin.

Monthly reset

The Manual Leads Adjustment and total assignment counts reset at the start of each month so you don't need to redo these calibrations on an ongoing basis.

Lead assignments during time off

For scheduled time off, automation handles turning people off and back on based on their calendar — you don't need to do this manually. The steps below apply when someone's OOO isn't reflected in their calendar, or when you need to turn off lead assignments for a reason other than vacation.

In Salesforce → Lead Assignment Tracker

In Default app → Routing → Queues

Important: Even if someone marks themselves as "Out of Office" in their Default personal settings, that does not stop lead assignments. You still need to manually toggle them off in the queues.

When to take these actions

Calibration after time off

While someone is inactive, others in the queue continue receiving leads — so their totals rise. When the person returns, you'll need to rebalance so the round robin doesn't immediately dump a backlog of leads on them.

Lifecycle analysis

Growth | Source: https://posthog.com/handbook/growth/revops/lifecycle-analysis

Understanding how our revenue moves through different lifecycle stages helps us identify the specific drivers behind our growth, not just the net change in revenue. We use lifecycle analysis to see how much growth comes from new customers, expansions, contractions, and churn. We analyze this at both total revenue and per product levels to understand each component of our business.

Customer lifecycle stages

new

Customers in their first month of paying us.

reactivated

Customers who previously churned but have returned with monthly spend > 0. This signals customers who may be using our services on and off.

flat

Existing customers whose monthly spend remained exactly the same as the previous month. This represents our stable, predictable revenue base.

growing

Existing customers whose spend increased compared to the previous month. This shows how successful we are with our upsell, cross sell, and product expansion efforts.

shrinking

Existing customers whose monthly spend decreased but didn't reach zero. This could be due to usage-based fluctuations, but it can also be an early warning indicator of customer dissatisfaction or competitive pressure. We pay close attention to this amount and do deeper analysis to understand the reasons behind it.

How we calculate lifecycle components

New revenue: total monthly revenue from customers in their first month

SUM(CASE WHEN growth_lifecycle_stage = 'NEW' THEN mrr ELSE 0 END)

Reactivated revenue: total monthly revenue from customers who returned after churning

SUM(CASE WHEN growth_lifecycle_stage = 'REACTIVATED' THEN mrr ELSE 0 END)

Retained revenue: baseline revenue that continuing customers maintained

SUM(CASE
    WHEN growth_lifecycle_stage = 'FLAT' THEN mrr
    WHEN growth_lifecycle_stage IN ('GROWING', 'SHRINKING') THEN prev_month_mrr
    ELSE 0 END)

Expansion revenue: Additional revenue gained from existing customers through increased usage

SUM(CASE WHEN growth_lifecycle_stage = 'GROWING' THEN mrr - prev_month_mrr ELSE 0 END)

Contraction revenue: Revenue lost from existing customers due to reduced usage (negative value)

SUM(CASE WHEN growth_lifecycle_stage = 'SHRINKING' THEN mrr - prev_month_mrr ELSE 0 END)

Churned revenue: Revenue lost from customers who went to $0 (negative value)

-SUM(CASE WHEN prev_month_mrr > 0 AND mrr = 0 THEN prev_month_mrr ELSE 0 END)
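The component definitions above can be expressed together. This Python sketch mirrors the SQL; the `CHURNED` stage label is an assumption standing in for the "previous MRR > 0 and current MRR = 0" condition:

```python
def lifecycle_components(rows):
    # rows: (growth_lifecycle_stage, prev_month_mrr, mrr) per customer
    c = dict.fromkeys(
        ["new", "reactivated", "retained", "expansion", "contraction", "churned"], 0)
    for stage, prev, curr in rows:
        if stage == "NEW":
            c["new"] += curr
        elif stage == "REACTIVATED":
            c["reactivated"] += curr
        elif stage == "FLAT":
            c["retained"] += curr
        elif stage in ("GROWING", "SHRINKING"):
            c["retained"] += prev  # baseline kept from last month
            c["expansion" if stage == "GROWING" else "contraction"] += curr - prev
        elif stage == "CHURNED":   # prev MRR > 0 and current MRR = 0
            c["churned"] -= prev   # negative value
    return c
```

A useful sanity check: new + reactivated + retained + expansion + contraction equals total current MRR, while churned separately records what was lost.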

Rate calculations

Baseline revenue

This is the total monthly revenue at the end of the previous month, which is the denominator for our rate calculations:

Key rates

New rate: How much new revenue we acquired relative to our baseline revenue

new_rate = (new_revenue / baseline_revenue) × 100

_Example_: 10% new rate means we acquired new revenue equal to 10% of our baseline revenue

Expansion rate: Growth from existing customers as a percentage of base

expansion_rate = (expansion_revenue / baseline_revenue) × 100

_Example_: 5% expansion rate means expansion from existing customers added revenue equal to 5% of baseline revenue

Contraction rate: Revenue decrease from existing customers due to lower usage

contraction_rate = (contraction_revenue / baseline_revenue) × 100

_Example_: -3% contraction rate means we lost 3% of revenue from reduced customer usage

Churn rate: Percentage of baseline revenue that was completely lost

churn_rate = (churned_revenue / baseline_revenue) × 100

_Example_: 2% churn rate means 2% of our baseline revenue churned completely
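The signed rates compose naturally. A minimal sketch, assuming the components are computed as in the lifecycle section (reactivated revenue omitted for brevity; summing the rates to a net figure is an illustrative addition):

```python
def growth_rates(components, baseline_revenue):
    # Each rate is a signed component as a percentage of baseline revenue;
    # their sum gives the net monthly revenue growth rate
    rates = {k: 100 * components[k] / baseline_revenue
             for k in ("new", "expansion", "contraction", "churned")}
    rates["net"] = sum(rates.values())
    return rates
```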

Notes on data

Overview

Growth | Source: https://posthog.com/handbook/growth/revops/overview

How RevOps works

RevOps at PostHog is the Product Manager for Sales, Marketing, and Executive teams. Just as PMs help engineering teams build better products by connecting user needs with technical solutions, RevOps helps go-to-market teams make better decisions by connecting different parts of the business together.

We do this by combining data from marketing, sales, product usage, and customer success to show what's working and what isn't. While individual teams deeply understand their specific areas, we provide insights about how different parts of the business affect each other and help teams see these connections to drive revenue growth for PostHog.

RevOps values

  1. Make data simple
  2. Build for self service
  3. Automate relentlessly

1. Make data simple

PostHog has data everywhere: product usage, sales pipelines, support tickets, revenue numbers. With this wealth of data comes complexity. We turn this scattered data into clear insights teams can use.

This means:

One early step in this direction was creating views that show total monthly MRR, per-product MRR, and per-product usage, filtering out anomalies so analyses are accurate and consistent across teams. Another was unifying data from our billing system, Salesforce, and Vitally into "biggest gainers and losers" queries that show full context on which customers' spend changed the most, so we can quickly take action when needed.

2. Build for self service

Teams should get the information they need without waiting for RevOps. Just as engineers ship without PM approval, go-to-market teams should be able to analyze and act on data without asking us.

This means:

For example, we built a self-managing lead pool where leads automatically move if they haven't been touched in 7 days. Instead of leads getting stuck with specific AEs, any sales team member can now pick up and run with these potential opportunities. This keeps leads fresh and moving while giving everyone on the team a chance to work with promising accounts, no RevOps intervention needed.

3. Automate relentlessly

Manual work wastes time and doesn't scale. If someone has to do something twice, we automate it. We rely on teams to tell us what's not working because they see the problems first.

If a team at PostHog struggles with revenue operations, we've probably:

For example, we built an automated workflow that identifies product qualified leads in real time. When a company hits key milestones (like having 5+ active users and using multiple products) and matches our ICP, it's automatically flagged as a new lead in Salesforce along with its usage data. The sales team can instantly see which customers would benefit from outreach and why, instead of having to piece this information together themselves.

RevOps vision

Things we want to be brilliant at

Standardize key metrics: Own and maintain clear, consistent definitions for our most important business metrics including:

This ensures everyone across the company uses the same language and measures success the same way.

Connect the dots: Help teams understand how their work impacts others, things like:

Rapid insights: Build self service tools that help teams quickly answer their own questions:

Things we want to do next

Revenue attribution: Understand how customers move from free to paid, including what features they use, how long it takes, and what influences faster conversions. When a customer upgrades or buys more, know exactly why: was it reading docs? using a specific feature? talking to support?

Predictive analytics: Build on our work around identifying expansion signals to get ahead of customer behavior, find customers likely to buy more before they ask, and identify unhappy customers before they leave.

Things we don't want to spend time on

Being the "data police": We don't want to spend time enforcing data formats or policing how teams use tools. We focus on making it easy to do the right thing, not on enforcing rules.

Running reports for people: If someone regularly needs data, we should teach them how to get it themselves.

Clean up projects: If we're constantly cleaning up data problems, we've built the wrong systems, and should fix the source problems instead.

Responsibilities

What RevOps owns

Revenue insights:

Sales tech stack including:

SalesOps section in handbook has more information.

What RevOps supports but doesn't own

Revenue reporting and forecasting: RevOps provides recommendations and improvements but does not own implementation and maintenance. This is currently owned by the .

Marketing operations: Marketing owns their campaigns and analytics, we help connect marketing data with revenue outcomes.

Product operations: Product teams own their metrics and experimentation, but we help track how they impact overall revenue.

What RevOps doesn't do

Retention metrics

Growth | Source: https://posthog.com/handbook/growth/revops/retention-metrics

We use Net Dollar Retention (NDR) and Gross Dollar Retention (GDR) to track how well we're retaining and growing customer revenue over time. We use adjusted revenue to calculate these retention metrics for a more accurate picture of our business. This way, we get clearer signals about retention by removing the noise from spikes, trials, and organizational shifts.

How we calculate

We use a rolling time period approach that compares customer revenue from a base month to the current month:

NDR (Net Dollar Retention)

NDR shows total revenue retention including expansions, contractions, and churn.

Formula: Sum(current_month_mrr) / Sum(base_month_mrr)

If NDR > 100%: We're growing revenue from existing customers (expansions outpace contractions/churn)
If NDR = 100%: We're maintaining the same revenue from existing customers
If NDR < 100%: We're losing revenue from existing customers

GDR (Gross Dollar Retention)

GDR shows how much of our base revenue we're retaining, without counting expansions.

Formula: Sum(MIN(current_month_mrr, base_month_mrr)) / Sum(base_month_mrr)

GDR caps each customer's current revenue at their base month amount so it only measures downgrades and churn.
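Both formulas can be sketched directly; `pairs` here is an assumed list of per-customer (base month MRR, current month MRR) values for the cohort:

```python
def ndr(pairs):
    # Net Dollar Retention: total current revenue over total base revenue
    return sum(curr for _, curr in pairs) / sum(base for base, _ in pairs)

def gdr(pairs):
    # Gross Dollar Retention: cap current revenue at the base amount,
    # so only downgrades and churn count
    return sum(min(curr, base) for base, curr in pairs) / sum(base for base, _ in pairs)
```

Note that GDR can never exceed NDR (or 100%), since each customer's contribution is capped at its base value.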

Why $5K+ ARR customers?

Why rolling retention?

Cohort based retention for lifecycle insights

We also track cohort based GDR and NDR as well as cohort based usage retention to:

Revenue adjustments

Growth | Source: https://posthog.com/handbook/growth/revops/revenue-adjustments

Raw revenue numbers can sometimes be misleading due to various factors that don't reflect the true health of our business. Our adjusted revenue methodology helps us account for these factors to get a clearer picture of our growth.

Why we adjust revenue

  1. Create a more accurate representation of our business performance
  2. Identify growth patterns by removing short term noise due to the nature of usage based revenue
  3. Easily spot any anomalies in growth patterns
  4. Standardize our reporting methodology

Where we use adjusted revenue

We continue to report unadjusted revenue for our top line reporting and overall growth metrics. We use adjusted revenue for retention metrics (NDR/GDR) and other business lifecycle analysis to get a clearer picture of our growth. This way we maintain standard financial reporting (unadjusted revenue) while getting a better understanding of our performance (via adjusted revenue).

Adjustments

We make the following primary adjustments to our revenue data:

1. Trial adjustments

Revenue from customers who are testing our platform with the intention of potentially moving to self-hosted or another solution.

Why: Including trial revenue in our regular metrics could create an artificially inflated view of sustainable ARR, especially if we know the customer is likely to leave.

How:

2. Revenue spike adjustments

One time significant increases in customer spending that don't represent sustainable revenue growth. This could be due to bot attacks, implementation errors, or sudden unexpected increase in volume on the customer side.

Why: These spikes can significantly distort our monthly growth metrics and don't represent sustainable revenue we can count on going forward.

How: We define a spike when all these conditions are met:

3. Annual plan adjustments

Accounting for the full spending potential of customers on annual plans who receive discounted credits.

Why: This gives us insight into the actual usage value customers are receiving, which is often higher than what they pay due to annual discount incentives.

How: Calculate what customers would have paid without the annual plan discount (annual_mrr_value = total_mrr / (1 - discount))
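The formula worked through in Python (the 25% discount figure is illustrative):

```python
def annual_mrr_value(total_mrr, discount):
    # Back out the undiscounted usage value from the discounted MRR:
    # annual_mrr_value = total_mrr / (1 - discount)
    return total_mrr / (1 - discount)
```

For example, a customer paying $7,500 MRR on a 25% annual discount represents $10,000 of monthly usage value.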

4. Account consolidations

Reconciling multiple accounts that belong to the same organization.

Why: Organizations sometimes have multiple accounts that should be viewed as a single customer for accurate revenue analysis. This way we can make sure we're tracking true customer retention rather than treating internal movements as churn.

How:

5. One time credit / refund adjustments

Revenue credits that temporarily drop a customer's billed MRR to zero (e.g. a one-time promo credit, an incident credit, etc.)

Why: If we count one time large credits as churn in our retention math we’ll understate our true net revenue trends. Excluding these prevents misleading dips and spikes in monthly growth patterns.

How: Check for the following criteria:

If all three are satisfied, we override that month's revenue to equal the prior month's revenue.

Account allocation and handover

Growth | Source: https://posthog.com/handbook/growth/sales/account-allocation

We have different roles within the team who manage customers at various stages in their lifecycle. Customers will typically sign up and start paying for PostHog themselves, or land as customers via a Technical Account Executive. Once customers hit $20k a year in spend with us they should have a dedicated Technical Account Manager or Customer Success Manager.

TAM vs CSM

Technical Account Managers (Sales Team) and Customer Success Managers (Customer Success Team) are the primary owners of customers spending $20k a year and above, and we aim to have full coverage of those customers across the two teams and roles. When deciding whether a customer should be with a TAM or CSM, we take into account their usage of our primary products.

Primary products are the set of billable main products which we believe that all engineers should be using, not including add-ons or platform features. Our current set of primary products are:

We track whether a customer is paying for each product in Vitally using the Paying for <Product Name> trait.

This allocation may vary depending on team capacity - for example, there may be some accounts with only 1 or 2 paid products allocated to a CSM rather than a TAM when there is more capacity in the CSM team.

---

Quarterly book planning

At the start of each quarter, TAMs should prepare their book of business with the following constraints in mind:

Target book size: 10-15 accounts with a combined ~$1.5m ARR. This gives TAMs enough focus to actually move the needle on expansion and credit pre-purchases.

Maximum book size: 15 accounts. New leads or handoffs from CS/Onboarding/TAEs will push a TAM above this throughout the quarter, but if you're starting a quarter at 18 accounts, you need to find a way to get to 15 or fewer.

Accounts to remove from your book

Before the quarter starts, review each account and remove those that meet any of the following criteria:

What is NOT a valid reason to hand off

Low engagement or an account being "difficult to work with" is not a reason to pass them off. That's literally your job. Specifically:

If an account is struggling on these dimensions, that's a signal you need to invest more effort – not hand them off. You should only hand off accounts that are in a good state.

---

Doing the allocation

It's Simon's job, with input from Charles and Team Leads, to review the list of $20K accounts without an owner, as well as accounts which need to be handed over from TAEs and TAMs. We use the criteria above to figure out which team should own a customer, and then use Vitally data to understand which region they are primarily based in. Looking at the user list in Vitally will show you where the most users are, so make a judgement call on where the TAM or CSM should be based to best support and engage with the customer. Once this has been decided, the New Owner trait is populated with one of the following:

And then it is down to the Team Leads to figure out which team member is taking on the customer.

Quarterly allocation process

At the start of each quarter, Simon (with input from Charles and Team Leads) reviews:

  1. $20K accounts without an owner – accounts that need to be assigned
  2. Accounts flagged for handover from TAEs, TAMs, and CSMs
  3. TAM books exceeding 15 accounts – identifying accounts that should move to CSM or another TAM
  4. CSM accounts with expansion potential – identifying accounts that should move to a TAM

Once Simon determines whether an account belongs with a TAM or CSM (and which region), the New Owner trait is populated, and Team Leads assign the specific team member.

Mid-quarter changes

Account removals should only happen at the end of the quarter so that quota can be calculated correctly. However, accounts can be added to your book at any time if you're confident there's growth potential.

If you're assigned an account with a previous owner, work with them on a proper handover. If the customer isn't in a healthy state (usage and engagement-wise), push back and ask the previous owner to get them to a good state first.

New accounts with no previous owner come with a 3 month grace period – if they churn in that initial period, they won't count against your quota. Don't ask for the AM Managed segment to be added until you're confident there's growth potential.

---

Top 40 account management

Our highest-spend customers (~Top 40 by ARR) get special consideration for ownership decisions. Simon and Charles regularly review these accounts to:

For Top 40 accounts, ownership changes (TAM→CSM or CSM→TAM) are decided directly by Simon and Charles, not through the standard Team Lead allocation process.

---

Handing over customers

To help the new owner of a customer hit the ground running, we should make sure that the customer is in a good state and that a warm introduction happens. Typical handoffs between roles are:

| Transition | Typical timing | Condition |
|------------|----------------|-----------|
| TAE → TAM | When onboarded, typically 3 months after initial credit purchase OR 12 months after initial credit pre-purchase if the account is retained by the TAE | Customer onboarded to 1-2 primary products |
| TAE → CSM | When onboarded, typically 3 months after initial credit purchase OR 12 months after initial credit pre-purchase if the account is retained by the TAE | Customer onboarded to 3+ primary products |
| TAM → CSM | After expansion completes | All 3 core products adopted, discount agreement in place, no remaining expansion levers |
| CSM → TAM | When expansion opportunity identified | Customer not fully expanded and has clear growth potential |

For accounts who will be landing at $100k+ a year or have high expansion potential after the initial deal, we should involve a TAM early in the process to ensure a smooth transition. See the section further down this page on how this works.

For handover to take place there should be an Account Plan (saved as a note on the account in Vitally) and the customer should have been onboarded properly to the products they are currently paying for.

All open invoices should also have been paid before handing over. It makes sense to use existing relationships to chase payments, rather than the new owner's first action needing to be chasing payments/suspending access for non-payment.

For TAE accounts being handed over, set the New Owner to Ready to move in Vitally and then flag this with Simon directly. There's no need to wait for the end of the quarter to do this. He will review the plan and current state of the customer and then work with TAM or CSM leads to assign a new owner.

Account Plan

Every account being handed over should have an up-to-date Account Plan saved as a note in Vitally. The existing owner should ensure that this is current and schedule a handover call to walk through it with the new owner. Feel free to push back and ask for it as the new owner if this doesn't happen! Ask your team lead or Simon for help with this if you're not getting the information you need from the previous owner.

Product Onboarding

Before handing over a customer, the existing owner needs to ensure that the customer is onboarded properly to the products they are paying for. We should first ensure that they are only paying for what they need to as detailed in the health checks section of the handbook and then ensure the following steps have been completed, depending on the products they are paying for:

This is an initial pass at what good onboarding looks like for each product. We will refine this and add it to Vitally as a checklist to work through with the customer.

General principles
Product analytics
Session replay
Feature flags
Data warehouse
Error Tracking

---

Account handover checklist

Every account handover should include a 15-30 minute call between the outgoing and incoming owner. This checklist helps you prep for that call and make sure nothing falls through the cracks.

When to use this

Before the handover call

The incoming TAM should prepare by reviewing the following in Vitally and SFDC before the call, so the handover conversation can focus on context that isn't in the data.

Self-serve research (do this first)

Prepare questions based on gaps in the data. The handover call should focus on things you can't learn from Vitally.

Handover call agenda

This isn't an exhaustive list and not every item needs to be covered every time. Use your judgment based on what's relevant to the account.

1. Relationships & people

This is the most valuable part of the handover – relationship context doesn't live in any tool.

2. Commercial context
3. Technical & product state
4. Risks & opportunities

After the handover call

Immediate actions (within 1 week)

Tips for a good handover

Unassign yourself in Vitally

Once the handover is complete, the outgoing owner should unassign themselves from the account in Vitally. This ensures the new owner is the sole point of contact and avoids confusion about who is responsible for the account.

---

Receiving an account as a CSM

CSM accounts should generally be in a steady state — they're using the products they need, they're engaged, and there aren't major unresolved issues. When you're taking an account from a TAE or TAM, it's worth looking beyond the surface to make sure that's actually the case. These aren't a rigid checklist. They're things to dig into that can surface problems which are otherwise easy to miss.

Billing and commercial

Product adoption

Engagement

Account documentation

Lower priority

Worth being aware of, but less likely to be blockers:

Pushing back

If you're seeing multiple flags — declining usage, no engagement, concentrated users, missing account plan — push back. An account with several of these signals isn't in steady state and probably needs more work from the previous owner before it's ready for CSM. Talk to Dana if you're unsure whether to accept an account.

---

High potential customers

For TAE-led customers who will be landing at $100k+ a year or have high expansion potential into new product areas, we should introduce a TAM earlier on than normal.

The prime time for this is when the technical win is confirmed - the TAM should be introduced to the customer by the TAE in an evaluation or POC wrap-up call when we know that the customer has selected PostHog.

The introduction is purely for relationship building and continuity purposes so that the TAM can hit the ground running with the customer after the initial credit pre-purchase is signed. It's still on the TAE to work with the customer on the deal, and as such only the TAE will be recognized on the initial deal for commission purposes. After the initial deal is closed won the TAM will take over the account in their book of business.

The TAE and TAM should use their overlapping time to work with the customer on a documented onboarding plan per the above guidance.

Account planning

Growth | Source: https://posthog.com/handbook/growth/sales/account-planning

This account planning framework is designed to help you quickly get up to speed on accounts and consistently think through a set of questions that deepen the partnership with your accounts. It encourages a proactive approach to identifying expansion and cross-sell opportunities, driving growth and customer success. This template was initially developed for managing a book of business primarily consisting of smaller startups, so it will likely need modification to be useful for larger, more enterprise accounts.

Here are some times where it may help:

Account planning template

I. Business info:

A. Business stage:

Businesses have different goals and constraints based on funding and stage. A venture-backed early-stage company may be very cash-conscious and focused on product-market fit, while a more established enterprise may be more focused on scaling, efficiency, and locking in multi-year discounts.

B. How they make money:
C. Vibe-based matrix:

Refer to the vibe-based sales matrix (internal only) - where does this account fit?

II. Product impressions:

You should always try out a customer's product where possible, as it can give you useful context. It helps you identify best practices we can recommend, understand their user experience firsthand, and spot potential cross-sell or value-add opportunities for PostHog.

III. Hiring roles / goals:

Review their careers page, LinkedIn job postings, and any announcements about team growth. Hiring trends can tell us what the business will be focused on in the next 12-24 months, what skills they're prioritizing, and what type of growth the business is forecasting.

IV. Business objectives:

Collect as much context as you can about the customer and their goals. Look for opportunities to align PostHog to those goals.

V. Stakeholders and power users:

VI. Current PostHog products in use:

This can be easily checked in Vitally. Thinking through this in a structured fashion may be helpful when taking on a new account.

A. Underutilized products & cross-sell mapping:

In Vitally, you can often easily identify which products are underutilized or not adopted. It may be beneficial to map these out in a more general sense.

B. Optimization opportunities (for existing products):

VII. Context / suggestions from others at PostHog:

Check Active Conversations in Vitally, support ticketing, Slack channels, CRM history.

VIII. Open requests and feedback

Has the customer submitted any feature requests or other relevant feedback?

IX. Risks:

Accounts overview

Growth | Source: https://posthog.com/handbook/growth/sales/accounts-overview

This is a high-level overview of where leads and customer accounts go at different stages of their interactions with us. We use various criteria to figure out the best place for a customer to go. You can find further details in this section of the handbook.

As we grow, this will keep changing!


flowchart TB
    A@{ label: "<b>New business leads<br></b>- Booked a demo (organic, paid ads)<br>- Emailed sales@<br>- Used &gt;50% startup credits + invoice &gt;$5k<br>- 'Cool company' in ocean.io<br>- Using PostHog, $0 spend, trigger for hiring increase, web/social increase, fundraise" } --> C["TECHNICAL ACCOUNT EXECUTIVE"]
    B["<b>Expansion leads</b><br>-MRR $500-1,667, &gt; 50 employees, &gt; 7 users, ICP country, paying &gt; 3 months<br>- High ICP score + Scale plan<br>- Off startup plan in next 2 months + last invoice &gt;$1500<br>- &gt;$1k MRR + &gt;50% change"] --> D["TECHNICAL ACCOUNT MANAGER"]
    n1["<b>Manual leads</b><br>- Anyone can create - use your discretion"] --> C
    n2@{ label: "<b>Onboarding leads</b><br>- First bill of $100 + business email<br>- Not otherwise a lead" } --> n8["ONBOARDING SPECIALIST"]
    C --> n4["$20k+ potential? (TAE decides)"]
    n4 -- No --> n3["SELF SERVE"]
    n4 -- Yes --> n5["Close then nurture for 12 months"]
    n5 --> n7["Using replay + flags + error tracking AND expanded to full potential? (Simon decides)"]
    n7 -- No --> D
    n7 -- Yes --> n6["CUSTOMER SUCCESS MANAGER"]
    n8 --> n9["$20k+ potential? (Onboarding/BDR decides)"]
    n9 -- Yes --> C
    n9 -- No --> n3
    n3 -- "Organic growth to<br>$2k+ MRR" --> n14["Account review<br>(Simon decides)"]
    n14 -- "TAM criteria met" --> D
    n14 -- "Not yet" --> n3

    D --> n10["Expanded to full potential? (TAM proposes, Simon signs off)"]
    n10 -- Yes --> n6
    n11["BDR"] --> n9
    n3 -- "Some become..." --> B
    n12["<b>Manual leads</b><br>- Anyone can create - use your discretion"] --> D
    n6 -- If drops <$20k --> n3

    A@{ shape: rounded}
    B@{ shape: rounded}
    n1@{ shape: rounded}
    n2@{ shape: rounded}
    n8@{ shape: rect}
    n4@{ shape: diam}
    n3@{ shape: rect}
    n5@{ shape: rounded}
    n7@{ shape: diam}
    n6@{ shape: rect}
    n9@{ shape: diam}
    n10@{ shape: diam}
    n11@{ shape: rect}
    n12@{ shape: rounded}

Automations

Growth | Source: https://posthog.com/handbook/growth/sales/automations

Automations

As Vitally connects all of our Product, Stripe, Zendesk, and HubSpot data together, it's the best place to trigger automations via Playbooks. These Playbooks can call a webhook after Accounts or Users meet certain criteria. This allows us to call out to Zapier and use its built-in actions to update Zendesk, HubSpot, Slack, and more. We use Zapier extensively throughout the company for automation. There is a shared Zapier login in 1Password.

Connecting everything together

Vitally requires a consistent external_id to be present to link everything together. For Accounts, we use the posthog_organization_id and for Users it's their email.
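
Illustratively, the linking rule can be written as a tiny helper. This is a sketch only; the function and field names are hypothetical and not Vitally's actual API schema:

```python
def vitally_external_id(record_type: str, record: dict) -> str:
    """Pick the external_id Vitally uses to link data sources together:
    Accounts key on the PostHog organization ID, Users key on their email."""
    if record_type == "account":
        return record["posthog_organization_id"]
    return record["email"]

# Accounts link via the org ID; Users link via email.
print(vitally_external_id("account", {"posthog_organization_id": "org-abc"}))  # org-abc
print(vitally_external_id("user", {"email": "jane@example.com"}))  # jane@example.com
```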

Vitally Segmentation

Vitally uses Playbooks to put Accounts and Users into Segments, which are useful for reporting as well as targeting of Playbooks. We have the following Segmentation Playbooks defined:

  1. Segment Name: $60K ARR
  2. Segment Name: $20K ARR
  3. Segment Name: Startup Plan
  4. Segment Name: Annual Plan
  5. Segment Name: Active Trial
  6. Segment Name: First payment forecasted this month

Vitally -> Zendesk/HubSpot Automation

As Vitally has Subscription/Segment information, it's the best place to drive Zendesk tagging and other activities.

Ensuring that Vitally Accounts have corresponding Zendesk Organization and HubSpot Companies associated with them

The New Orgs to Zendesk and HubSpot via Zapier playbook triggers on Accounts where there is _no associated Zendesk ID_ but there _is a Stripe Customer email_, so that we can look up the contact and company information in HubSpot. When these criteria are matched the playbook sends the following traits to a webhook which triggers the Vitally Webhook to New Zendesk Org and HubSpot Zap:

The Zap then:

  1. Tries to find a HubSpot contact matching the email
  2. If successful, looks up the associated HubSpot Company
  3. Sets the posthog_organization_id property in HubSpot so that Vitally will be able to link to the Company
  4. Creates a Zendesk Organization with the following properties:

There are some scenarios (e.g. gmail signups) where HubSpot doesn't have an associated company record and as such there won't be a domain to supply to Zendesk. In this case the automation completes but also adds the Email and Zendesk org information to the Zendesk Orgs Without a Domain table. For each row there are two buttons:

Tagging Zendesk Organizations based on Segment and Subscription information

As Vitally is the best source of truth for Active Subscription / Payment information which informs our Zendesk ticket prioritization, there are a number of Vitally playbooks which will trigger the webhook associated with the Vitally Webhook to Zendesk Tags Zap, passing along the following traits:

The Zap then updates the specific Zendesk Organization with the requested tags.

  1. Zendesk Tag: priority_customer
  2. Zendesk Tag: paying_customer
  3. Zendesk Tag: non_paying
  4. Zendesk Tag: churned
  5. Zendesk Tag: startup_plan
  6. Zendesk Tag: trial

Setting the correct organization in Zendesk for new tickets

Zendesk uses the requester's email domain to associate the ticket and the requester with an organization in Zendesk. To mitigate the problems this causes, we have a Zap named Set user's correct organization ID which:

[Deprecated] HubSpot and Zendesk tagging

The Zaps in this folder have been mostly turned off in favour of the Vitally automations above, however there are some which are still enabled as we need to figure out how to handle them via Vitally.

Billing trial activated event -> HubSpot and Zendesk

_This needs to be moved to Vitally_

This Zap is triggered when a trial is activated in the Billing UI (triggered by the Billing trial activated action).

  1. Looks up the associated email in Clearbit
  2. Continues only if there is an associated Company in the Clearbit payload
  3. Calls the Update tags on Zendesk org Sub Zap to create/update a Zendesk org with the Name and Domain from Clearbit and Organization ID as the Zendesk External ID (so that it will link the Zendesk org to the Vitally Account)
  4. Finds the Company by name in HubSpot
  5. Sets the Organization ID and Trial end date in HubSpot.
  6. Creates an engagement (Task) in HubSpot for Simon reminding him of the trial end date.

HubSpot Automation

There are three zaps in this folder which create follow-on deals when any hands-on pipeline deal is closed:

  1. HubSpot Inbound Hands-on Deal Closed to Renewal Deal (Zapier automation details)
  2. HubSpot PQL Hands-on Deal Closed to Renewal Deal (Zapier automation details)
  3. HubSpot Renewal Deal Closed to Renewal Deal (Zapier automation details)

Each Zap is triggered by a deal closing in its respective pipeline. It figures out the new deal's close date based on the term of the existing deal (1, 2, or 3 years) and then creates a new deal in the renewal pipeline, with amounts and ownership copied over too.
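
The close-date arithmetic amounts to adding the term to the existing close date. A sketch under that assumption (the function name is made up; the real logic lives inside the Zaps):

```python
from datetime import date

def renewal_close_date(existing_close: date, term_years: int) -> date:
    """New renewal deal close date: existing close date plus the
    contract term (1, 2, or 3 years). Feb 29 falls back to Feb 28."""
    try:
        return existing_close.replace(year=existing_close.year + term_years)
    except ValueError:  # Feb 29 in a non-leap target year
        return existing_close.replace(year=existing_close.year + term_years, day=28)

# A 1-year deal closed on 2024-03-15 renews on 2025-03-15.
print(renewal_close_date(date(2024, 3, 15), 1))  # 2025-03-15
```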

Sales Pipeline Events to PostHog

This folder contains Zaps which ensure we are tracking pipeline updates as PostHog events, so that we can model our sales pipeline as a funnel.

Calendly Event Scheduled to PostHog

This Zap is triggered when a new event is created via Calendly. It:

  1. Looks up the PostHog Distinct ID via the email address of the person
  2. Captures a calendly.event_scheduled event in PostHog with either the Distinct ID above or email address as the Distinct ID if there wasn't a match.

HubSpot Deal Stage Changes to PostHog

This Zap is triggered when a deal stage is updated in HubSpot. It:

  1. Transforms the HubSpot ID of the Pipeline and Stage to the names via lookup tables and only carries on if matches are found
  2. Gets the Deal Contact and Owner information
  3. Captures a <pipeline-name> <stage-name> event in PostHog with the Contact email as the Distinct ID

Annual Plan Automation

To ensure consistency in the setup of annual plans we have Zapier Automation to take care of all of the Stripe-related object setup.

Load Contract Details to Annual Plan Table

Once an Order Form is closed in PandaDoc, this Zap will add a new row to the Annual Plan Table with the following information set:

  1. Order Form ID
  2. Customer Email
  3. Customer Address
  4. Company Domain
  5. Contract Start Date
  6. Contract Term (months)
  7. Credit Amount
  8. Discount
  9. Price

Create or Update Stripe Customer

If the Customer has an existing record in Stripe (e.g. they are already subscribed to PostHog), copy their Customer ID (starts with cus_) from Stripe to the _Stripe Customer ID_ column. If they don't have an existing Customer in Stripe, click the _Create Stripe Customer_ button in the table to trigger a Zap that creates one. The Zap also automatically adds the ID to the table.

Create Invoice

Once you click the Create Invoice button this Zap will create a Stripe Invoice in draft format. The following table fields need to be populated for this to work so check them before clicking the button:

  1. Start Date
  2. Term (months)
  3. Credit Amount
  4. Price

Once it's completed it'll populate the table with the Invoice ID and Link. Review this in Stripe, and when you are ready send the Invoice to the customer.

Note: You need to send the invoice to the customer before you apply the credit below. If you apply the credit whilst the Invoice is in a draft state, it'll just pay the invoice with the credit, which defeats the purpose.

Apply Stripe Credit / Zendesk Tags

Here you can click _Apply Credit_ to trigger a Zap which applies the Stripe Credit and Zendesk tags using the corresponding Sub Zaps. It will apply the priority_customer tag if the price is above $20k, and paying_customer otherwise.

Schedule Subscription

If the customer doesn't already have a running monthly subscription this Zap will create one with the desired configuration of paid products. Select the products you want to include and then click the Schedule Subscription button. It'll create a Subscription which is either Scheduled if the Start Date is in the future, or live if it is in the past.

Remember to update the Subscription in the Billing Admin Portal

Note: It has the current default Stripe Price IDs hardcoded in the Zap so if we update those we need to remember to update them in this Zap too.

YC Program

This process is documented in the YC Onboarding section of the handbook.

PostHog for Startups

_Work in progress_

Sub-Zaps

These are used in a few different places to ensure we do things in a consistent manner. It also ensures repetitive tasks are easy to update if needed.

[Deprecated] Update tags on Zendesk org

_Mostly deprecated as we use Vitally for this now_

This Zap ensures that a Zendesk org is created and tagged correctly

  1. Accepts the following inputs:
     - Company name (required)
     - Domain (required)
     - Tags
     - Organization ID
     - Instance
     - Startup plan or Trial ends at
  2. Formats tags and startup/trial ends at in case of missing data
  3. Formats startup/trial end as YYYY-MM-DD
  4. Creates or updates an organization with the information above

Apply Stripe Credit

This Zap applies credit and associated metadata to a Stripe Customer object

  1. Accepts the following inputs:
     - Duration (e.g. 1 year or 6 months)
     - Stripe Customer ID
     - Amount (dollars)
     - Description (optional)
     - Credit start date
     - Is startup credit
  2. Calculates the credit end date from the Start Date + Duration
  3. Converts dollars to cents (for Stripe)
  4. Adds the credit balance via the Stripe API
  5. Updates the following metadata on the Customer object:
     - credit_expires_at
     - is_startup_plan_customer
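
The end-date and currency steps are simple arithmetic. A sketch assuming the duration has already been normalized to months (the real Zap uses Zapier formatter steps; these function names are illustrative):

```python
import calendar
from datetime import date

def dollars_to_cents(amount_dollars: float) -> int:
    """Stripe amounts are integer cents; round to avoid float drift."""
    return int(round(amount_dollars * 100))

def credit_end_date(start: date, duration_months: int) -> date:
    """Credit end date = start date + duration (e.g. '1 year' -> 12 months),
    clamping the day when the target month is shorter."""
    total = start.month - 1 + duration_months
    year, month = start.year + total // 12, total % 12 + 1
    day = min(start.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

print(dollars_to_cents(20000.00))              # 2000000 cents for a $20k credit
print(credit_end_date(date(2025, 1, 31), 12))  # 2026-01-31
```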

Billing

Growth | Source: https://posthog.com/handbook/growth/sales/billing

Managing billing

This section explains how PostHog's billing system works. Most billing operations described below are handled exclusively by the billing team and are not self-serve. Sales should coordinate with the billing team for any billing modifications, pricing changes, or technical billing tasks rather than attempting to implement these directly.

All PostHog instances talk to a common external Billing Service. This service is the single point for managing billing across PostHog Cloud US, PostHog Cloud EU (and, formerly, self-hosted customers).

The Billing Service is the source of truth for product information, what plans are offered on those products (e.g. a free vs a paid plan on Session Replay), and feature entitlements on those plans. Our payment provider Stripe is the source of truth for customer information, invoices, and payments. The billing service communicates with Stripe to pull all the relevant information together before responding to customer requests.

Credit-based Plan Automation

To ensure consistency in the setup of credit-based plans we have Zapier Automation to take care of all of the Stripe-related object setup.

Loading contract details

Once an Order Form is closed in PandaDoc, Zapier will add a new row to the Credit-based Plan Table with the PandaDoc ID of the document. The table will have the following information automatically filled in: PandaDoc Order Form, Company Name, Customer Email, Credit Amount, Discount, Price, Start Date, Term, PostHog Org ID.

Upfront Payment Setup
Step 1: Update Zapier table with existing Stripe ID

If this is a new contract for an existing customer, you will need to add their existing Stripe Customer ID manually to the table. You can find this information in Vitally under Traits. If this is a brand new customer, click the “Create Stripe Customer” button to assign them a new ID.

Step 2: Create invoice
Step 3: Verify invoice details and send

Do not proceed to the next steps until the invoice is finalized. Any credits added to an account get automatically applied to outstanding invoices. If you add credits before payment is completed, the credits will settle any existing debts, and the customer will not be able to make a payment.

For customers using Bill.com for payment: when they submit the invoice to the Bill platform, it strips out the Stripe virtual account details. You'll need to ask them to follow the instructions in this help article to set the correct bank details for us in the Bill.com platform; the account details are provided in the invoice we send over. They also need to use the original contact information (not your email, if you're set as the signer on the contract) so we can process payments into the right account. If they don't do this, the money will go to a default customer account we keep on Stripe. If this happens, mark their invoice as paid manually and then generate a new one against our default customer account to use the funds.

Step 4: Apply credits
Step 5: Schedule subscription

Failed/late payments

We define late payments as follows:

  1. For credit-based customers who have not made payment on an invoice by its due date. The first invoice is usually due 30 days from the contract start date (Net 30), though this can differ based on other contractual terms. This rule applies to all payment terms, including (but not limited to) annual and quarterly, regardless of whether there are still credits available.
  2. For pay-as-you-go usage-based customers, we will attempt 4 automated payments using the card we have on file. Each failed payment sends an alert to the #sales-alerts Slack channel. After 4 failed payments we will stop attempting to take further payments.

In either of the above scenarios, the account owner as defined in Vitally needs to take action to ensure that payment is made. If there is no owner in Vitally, Simon will handle this process. If you are an AE, remember this also has an impact on your commission, as we don't pay out until the customer has paid their invoice.

You can find a list of failed and overdue payments in PostHog.

Step 1 - On the day their payment becomes late

As the account owner you will be assigned a risk indicator in Vitally, as well as being tagged in an alert in #sales-alerts. For unmanaged accounts with a failed payment of $1500 or more Simon and Dana are tagged instead.

You should reach out to any known contacts, as well as any finance email addresses we have in Stripe asking for payment to be made immediately. For credit-based customers, you can download the Invoice PDF from the Stripe invoice page, and for monthly customers you can get the payment link from the Stripe invoice page. To get a payment update link, click on the subscription, then click actions in the top right corner and choose share payment update link. Make it easy for them to make payment by including these details in your email.

Make it clear in this outreach that if we don't receive payment in the next 7 calendar days, their user access will be suspended. If they come back to you with genuine reasons why they need more time, use your discretion with the next steps.

Step 2 - 1 day before suspending user access

Reach out to all active users on the account, and let them know that access will be suspended tomorrow due to the failed payment. This often creates urgency and will get any late payment resolved.

Step 3 - Suspending user access

To prevent users from being able to log in you need to go to the Django admin panel for their organization, then set the "Active" field to "No", with the reason selected from the dropdown: "Access revoked due to an unpaid balance." Then, hit save.

After completing this, email or Slack all users in the organization letting them know that access has been suspended and what they can do to rectify the situation. Also make it clear that if this isn't resolved within the next 7 days, we will revert them back to the Free tier and they will be subject to the usage limits of that tier (e.g. they are likely to lose tracking data).

If they do pay after this point make sure to re-enable user access by reversing the above in Django admin.

Step 4 - 1 day before cancelling their subscription

Reach out to all contacts letting them know that due to the failed payment we will be terminating their subscription tomorrow.

Make it clear in this outreach that once the subscription is terminated they will be subject to the free tier usage limits and we won't store any data above that limit.

Step 5 - Cancelling their subscription

You can cancel their subscription in Stripe - navigate to their Stripe customer page, and then click the ... next to their active subscription to find the Cancel option.

At this point they will be notified about this automatically via the billing service.

Repeated failed payments

After three consecutive missed payment periods, the customer must provide advance payment covering three months of service based on their typical usage before account access is restored. If the customer disagrees or fails to make the advance payment, the account may be reverted to the Free Tier.

India-based customers

Withholding taxes

PostHog Inc is a US-incorporated company and a US tax resident, and we do not claim benefits under any Double Taxation Avoidance Agreements (DTAA). To support this, we provide:

These documents are available in the shared Finance drive. You can share them with the customer on request.

The full invoice amount is due. Any tax withheld must be paid in addition to the invoice amount, not deducted from it; any shortfall will be treated as outstanding.

Stripe Products & Prices

⚠️ Product and price modifications are restricted and handled exclusively by the billing team. These changes are only made in rare cases and require billing team approval and implementation. Do not attempt to modify products or prices directly - contact the billing team for any pricing-related requests.

Each of our billable Products has an entry in Stripe with each Product having multiple Prices. We use a billing config file to determine what is shown in the UI and how billing should behave. We use very limited metadata on some of these prices to allow the Billing Service to appropriately load and offer products to the instances:

Image: Stripe products

Custom metadata

On Stripe Products

On Stripe Product Prices The following keys are used to manage Startup prices:

Working with pricing

Each Product has multiple prices that can be used in a subscription. Which price is default depends on the billing config file. The default price in Stripe does not affect the actual default price for a product. This is instead defined in the billing config. In general, if coming from the UI, a customer will subscribe to certain prices depending on the config. There are special prices named Free which can be used to give a product for free. These can be added manually and are typically used for Enterprisey customers who pay a flat fee up-front and $0 for the actual usage (which we still want to track but not charge for).

Types of billing plans we support

We generally support the following types of billing plans:

If at all possible, it's best to stay with these types of billing plans because we already support them, and adding extra stuff will increase complexity. If you do need to add a different type of billing plan, chat with the billing team before agreeing to anything with a customer to make sure it's possible!

Coupons and Discounts

As much as possible the existing prices should be used in combination with Coupons to offer custom deals to customers. Coupons are applied to the _Customer_ in Stripe, not to the customer's subscription.

  1. Visit the customer in the Stripe dashboard.
  2. Select Actions -> Apply Coupon.
  3. Select the coupon to apply.
  4. The UI should soon reflect the change. If you need it to reflect immediately, use the "Sync selected customers with Stripe" action in Django Admin.

When calculating usage limits, discounts are taken into consideration _before_ the limit is calculated. This means that if the customer sets a billing limit of $200 and has a 20% discount, they will get charged $200 for _$250 worth of volume_.
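
Put another way, the discount stretches how much list-price volume fits under a billing limit. The worked example above ($200 limit, 20% discount, $250 of volume) checks out:

```python
def usage_covered_by_limit(billing_limit: float, discount_pct: float) -> float:
    """Volume (at list price) a customer can consume before hitting their
    billing limit, since the discount is applied before the limit check."""
    return billing_limit / (1 - discount_pct / 100)

# A $200 limit with a 20% discount covers $250 worth of volume.
print(round(usage_covered_by_limit(200, 20), 2))  # 250.0
```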

Creating new or bespoke prices
  1. Go to the appropriate product in question (do not create your own Product)
  2. Click "Add another price"
  3. Important: For metered products (e.g. Product Analytics, Session Replay), set up the price as follows:
     1. Expand the additional options and add a straightforward Price Description like Custom - {date of creation}
     2. Add the tiers as you see fit
  4. Add custom metadata if needed.

Plans

⚠️ Plan modifications are handled exclusively by the billing team. Do not attempt to modify plans directly; contact the billing team for any plan-related requests.

You can find a list of available plans in the billing repo. These are found inside constants/plans, divided by folder. Each plan can have a list of features and a price. Features are used to infer which features are available in the product for a customer on that plan. You can manually change the plan for a customer by updating the plans_map in the billing admin panel.

Employees can get access to paid features (like Boost) on personal or side projects. Ask in #team-billing with your organization ID and someone can set this up. There are two approaches for platform add-ons:

  1. Special billing-only plan: Add a plan like boost-addon-20250602 to the customer's plans_map in the billing admin. These plans exist only in the billing system and grant features without a Stripe subscription.
  2. Long trial: Create a trial that does not auto-convert with a long expires_at date. This works well for temporary access or when you want a clear end date.

Updating subscriptions

Stripe subscriptions can be modified relatively freely, for example when moving a customer to a custom pricing plan.

Image: Stripe subscription update

  1. Look up the customer on the Stripe dashboard using their email address or Stripe ID (this can be found in the Billing Service admin under Customers).
  2. Click on the customer's current subscription.
  3. Click on _Update subscription_.
  4. Remove the old item from the pricing table and add the new item.
  5. Click on _Update subscription_. Do not schedule the update for a later time; there will be unintended side effects if the changes are not applied immediately.
  6. Do not prorate the subscription.
  7. The changes should be reflected for the user within a few minutes.

NOTE: Removing a metered product price (events, recordings) and adding a new price will likely reset the usage. This is fine as the Billing Service will update it during the next sync.

Self-hosted differences

Self-hosted billing is no longer supported, except for legacy customers who were using the paid Kubernetes deployment.

Billing for data pipelines

For information about data pipeline pricing and billing, please visit our pricing page.

Communication templates for new feature adoption (TAMs)

Growth | Source: https://posthog.com/handbook/growth/sales/communications-templates-feature-adoption

Purpose

Marketing drives awareness at scale. TAMs help customers get value in their own projects. This page gives a simple plan to turn new launches into real outcomes for specific customers.

See also: How we work.

How this differs from marketing comms

Cadence

Before new feature launch: internal heads-up. TAMs get notified early about a launch. Understand why it matters, what it does, and who benefits. Write a few short notes for target accounts.

Launch day. Marketing sends the announcement.

After the marketing launch: personal nudge. Send a short note right after the marketing announcement to build on the awareness. The TAM message should tie the feature to the customer’s stated goal and give one clear action they can do now in their own PostHog project.

Example

“Last quarter you set a goal to reduce activation drop-off. We just shipped [new feature]. You can turn it on in your project and try it on your onboarding flow. I recorded a 30-second Loom in a demo project: {loom_link}. If helpful, here is the direct link: {project_link}.”

A week or two after launch: data-triggered follow-up. Look at usage in their project. Follow up based on what happened.

“Looks like [new feature] is on in your project. Is it moving {goal metric}? If you have notes, I will pass them to the product team.”

“Was thinking about your goal to achieve [goal metric] and how [new feature] might help with that. So I wanted to send a nudge in case it fell off the list. You can switch it on here: {project_link}.”

Next QBR after launch. Bring the feature as a solution to the overall strategy, not a simple list of new features to go through. “Given your target to improve {goal}, we can be more relevant with these improvements: [selected new items].”

Realistic examples

Experiments → no-code experiments “You said you want to lift {goal}. No-code experiments lets PMs ship A/B tests without engineering. Turn it on here: {project_link}. Start with {page_or_flow} where we saw friction. Check {metric} in this view: {report_link}. Short Loom with the steps: {loom_link}.” See: no-code web experiments and getting started with experiments.

Feature flags → quick holdout “You’re planning to roll out {feature_or_copy_change} to improve {goal}. Keep a {holdout_percent}% holdout on the flag so you can see real lift before full rollout. Flag settings: {project_link}. First results show here: {report_link}.” See: feature flags and holdout testing tutorial.

Session replay paired with an insight “We saw a drop-off at {step_or_page}, which blocks {goal}. Create this funnel {event_sequence} and open replays linked to step {step_number}. Start here: {report_link}. This pairs the number with the clips so you can see why.” See: session replay.

Workflows “Follow-ups after {event} are manual today, so {goal} slips. Turn on a workflow that triggers on {event_or_property} and sends {message_or_action}. Enable here: {project_link}. First run appears here: {report_link}. Adjust, then expand.” See: workflows – start here.

LLM analytics “You’re aiming to increase {success_rate_metric} for {ai_feature} and keep {cost_metric} in check. Turn on LLM analytics: {project_link}. Watch prompts, responses, success rate, and cost per {n} prompts. First view to check: {report_link}.” See: LLM analytics – start here.

Heatmaps “People hesitate on {page}, which hurts {goal}. Open heatmaps: {project_link}. Compare {version_or_date_range} to see what changed. First view: {report_link}. Use this to pick the next tweak.” See: heatmaps.

Potential measurement

Some product areas collect feedback in-product. Watch for those notifications from your accounts.

Potential user segmentation for message adjustment in the future

Power users and beta candidates “You are a heavy user of {area}. [New thing] is ready. You can turn it on here: {project_link}. If you want a head start, I can share a tiny checklist.”

Flag users without experiments “You already ship behind flags. Add a small holdout on the next release so you can prove lift before rollout. Here is the page in your project: {project_link}.”

Single-product users with nearby needs “You use {current module} to hit {goal}. {Adjacent module} helps with the next step. Short Loom with setup: {loom_link}. Direct link in your project: {project_link}.”

Adoption laggards on the core path “Checking in on [new feature]. You can enable it here: {project_link}. If it is not a focus right now, all good.”

High-traffic, low-conversion areas “{page or step} has volume and a drop-off. Try [new feature] here first. I can share a minimal setup so you see a signal this week.”

Automation ideas

Communication templates for funding and exits

Growth | Source: https://posthog.com/handbook/growth/sales/communications-templates-fundraising

Purpose

When a PostHog customer raises a new round of funding, gets acquired, or goes public, it’s a major moment for their team. Our goal is to celebrate the milestone in a way that’s genuine, brief, and human, not transactional or opportunistic. We avoid product pitches, feature plugs, or follow-up asks in the initial message.

1. Communication principles

Keep it human, not salesy The first message should feel like a note from one person to another—not a brand announcement. Congratulate them sincerely, acknowledge their achievement, and stop there.

Example: “Massive congrats on the Series C! I imagine this is a huge moment for you and the team — hope you’re all taking time to celebrate.”

Timing matters

Personal > Personalized We avoid template-like phrasing. If we can’t find something personal to say about the customer or their journey, it’s better to say less.

Channels

2. Follow-up framework (3–4 weeks later)

Use their own public statements as the hook. Reference their goals, product direction, or challenges expressed in press coverage or announcements, and connect them meaningfully to where PostHog can help.

Structure:

  1. Open by referencing their recent announcement and one or two specifics (quote, metric, or goal).
  2. Briefly connect that to how PostHog can support those goals.
  3. Ask a question or offer to share something relevant (not a demo or pitch deck).

Example: “Hey — congrats again on the Series B! I read about Heidi’s plans to scale your AI work globally and tackle latency head‑on. We’ve been working on similar challenges at PostHog around speed and reliability for teams deploying AI at scale. Let’s talk about what’s worked for other customers in keeping things fast and cost‑efficient as usage grows — when’s a good time to connect?”

3. Key takeaways

Communication templates for incidents

Growth | Source: https://posthog.com/handbook/growth/sales/communications-templates-incidents

When things go wrong, our priority is simple: keep customers informed, quickly and clearly.

This section covers how we communicate during service disruptions, from small hiccups to major outages. We aim to be transparent, human, and proactive — sharing what we know (and what we don't) in plain English.

For the engineering incident response process, see Handling an incident.

PostHog customers rely on us to power their products, so we provide honest, timely updates through the right channels — usually Slack or email, and occasionally SMS for high‑touch accounts.

Core principles

1. Transparency > Perfection

Share what we know, when we know it, clearly and without “status-speak.”

2. Human-centric

Messages come from people, not “The PostHog Team.” Show empathy and ownership (“I know this might interrupt your work; here’s what we’re doing.”)

3. Consistency

Use a consistent structure and timing so customers know what to expect.

4. Proactive by default

Reach out before customers ask, even if it’s just to say, “We’re aware and investigating.”

Severity levels

| Level | Description | Examples | Channels | Cadence |
| ----- | ----- | ----- | ----- | ----- |
| SEV 0 – Emergency | Existential service failure; all or most customers impacted with no workaround. CMOC sends immediately via broadcast. Account owners do not gate comms. | Full platform outage, data loss, security breach with active customer impact. | Pylon broadcast → Email → DM/SMS | Immediate broadcast, then every 15–30 min; postmortem within 24h |
| SEV 1 – Critical | Major outage or data loss; widespread impact. | API unavailable, ingestion halted, login failures. | Slack → Email → (DM or SMS if needed) | Every 30–60 min; postmortem within 48h |
| SEV 2 – Major | Partial degradation or downtime; workaround available. | Replay or query delays >30 min, flag evaluation slow. | Slack or Email | Every 1–2 hrs |
| SEV 3 – Minor | Limited impact or slow recovery. | Billing sync delays, isolated org issues. | Slack | Start and close |
| SEV 4 – Informational / Planned | Maintenance or recovered incidents. | DB upgrade, scaling events. | Email or Slack broadcast | Before + after window |

Templates

Emergency (SEV 0)

This overrides the standard workflow. CMOC sends directly to all affected customer channels via Pylon broadcast without waiting for account owners. Account owners follow up individually once online.

Initial broadcast (Pylon):

We're investigating a major incident affecting [feature/service]. [Symptom — e.g., "Event ingestion is fully stopped" or "The PostHog app is unreachable."]

Our engineering team is actively working on a fix. We'll post updates here every 15–30 minutes until this is resolved.

Update template:

Update on [feature/service]: [Status — e.g., "Root cause identified. Fix is being deployed." or "Still investigating. No ETA yet, but narrowing it down."]

Next update in ~[15/30] minutes.

Resolution template:

[Feature/service] is back online as of [time UTC].

Root cause: [one-line summary]. Duration: [start–end]. Impact: [brief description of what customers experienced].

We're monitoring closely. A full postmortem will follow within 24 hours.

If you experienced data gaps or have concerns about impact to your project, reply here and your account owner will follow up directly.

Critical

Subject: PostHog Outage – We’re investigating

Hey [Name/Team],

We’re investigating a major outage affecting [feature]. You may see [symptom]. Engineers are on it — updates every 30 minutes until resolved.

We know this may disrupt your work — thanks for your patience while we get things back online.

— [Your Name], PostHog

Follow-Up (Resolution): Good news — the issue is resolved. Root cause: [summary]. Duration: [start–end]. Impact: [brief effect].

We’re monitoring and will share a full write-up within 48 hours.

Major

Subject: Performance issues in [Feature]

Hey [Name],

We’re seeing performance issues in [component]. You might notice [impact]. We’re mitigating and will update within the hour.

Thanks for your patience! — [Your Name], PostHog

Minor

Subject: Slower performance in [area]

FYI — we’re seeing slower performance in [area]. This shouldn’t block you, but we’re monitoring closely. I’ll update once it’s stable.

Planned maintenance

Subject: Maintenance – [Service/Region]

Heads up — maintenance on [system] from [time window]. No downtime expected, but queries or replays may be briefly delayed. We’ll confirm once complete.

Tone and voice

| Principle | Example | Avoid |
| :---- | :---- | :---- |
| Direct | “Event ingestion is paused.” | “We are experiencing an issue affecting a subset of users.” |
| Empathetic | “I know this blocks work; it’s our top priority.” | “We apologize for the inconvenience.” |
| Plain English | “Dashboards might not update.” | “You may experience degraded query latency.” |
| Ownership | “We identified a config issue on our side.” | “A third-party dependency caused an issue.” |

Coordination within GTM

Engineering manages detection and resolution (see engineering incident handbook). GTM ensures clear, consistent customer updates, without duplication or coverage gaps.

Goals

Roles & responsibilities

| Role | Responsibility |
| :---- | :---- |
| Communications Manager On-Call (CMOC) | Activated for any incident requiring GTM notification. Drafts all comms using handbook templates. Coordinates with engineering for context and keeps a central log of who’s been notified. Manages regional handoffs if incidents span time zones or owners are offline. |
| AM/AE/CSM | Sends comms to their accounts using CMOC drafts. If offline (PTO, off-hours, or time zone), CMOC assigns a regional backup. |
| Regional Backup (Americas / EMEA / APAC) | Covers accounts when owners are offline. Takes handoff from CMOC, sends comms, and ensures follow-up continuity. |
| Engineering Incident Lead | Owns technical response and provides updates to CMOC for accurate messaging. |

All coordination between CMOC and Account Owners should happen in #group-cs-sales-support transparently so that everyone who manages customers is in the loop.

Workflow

SEV 0 override: For SEV 0 incidents, the CMOC skips steps 4–5 and sends the initial message directly via Pylon broadcast to all affected customer channels. Account owners are notified in #group-cs-sales-support simultaneously, and take over individual follow-up threads once online. The CMOC continues to own broadcast updates until the incident is resolved or downgraded.

  1. Incident declared (Engineering).
  2. CMOC activated, notified of impact.
  3. Assess customer impact; this insight (or this Google Sheet as a backup) will help you understand which customers are using which components in which cloud environment.
  4. CMOC drafts the initial message and shares it with the Account Owners in #group-cs-sales-support.
  5. AM/AE/CSM sends to accounts; backup sends if primary is offline.
  6. Updates drafted by CMOC (30–60 min for SEV1, 1–2 hrs for SEV2).
  7. Regional handoffs coordinated by CMOC.
  8. Resolution: CMOC drafts closure; AM/AE/CSM (or backup) sends.
  9. Post-incident: CMOC archives thread; GTM logs feedback and follow-ups.
  10. Postmortem: Engineering writes technical summary; GTM adds comms learnings.

Example Slack workflow (Critical)

  1. Incident created: #inc-2025-11-05-posthog-feature-flags-error.
  2. SRE posts summary; CMOC coordinates comms.
  3. CMOC drafts message and shares with the Account Owner (the person responsible for the affected accounts).
  4. Account Owner sends the message to their customers. Example outbound: “We’re investigating an outage affecting event ingestion. Updates every 30 minutes.”
  5. During: “Root cause identified (Redis queue saturation). Fix in progress.”
  6. Resolution: “Resolved at 11:42 UTC. Write-up soon.”

Using Pylon for broadcasts

Communications are best shared directly by the account owner. However, if speed is of the essence (e.g. for a SEV 1 or security issue) and some folks are not yet working or are on PTO, the CMOC can use Pylon to send a broadcast to all customer Slack channels en masse:

  1. Log into app.usepylon.com with your PostHog Slack account.
  2. Go to the _Broadcasts_ link on the left-hand side of the navigation.
  3. Click _Create Broadcast_ in the top right-hand corner of the UI.
  4. Enter the message you want to send, ensuring the formatting looks correct in the preview on the right-hand side.
  5. Ensure the _Send as_ option is set to PostHog, not your own user (unless you want to handle 450+ separate threads).
  6. Click _Next_ in the top right-hand corner of the UI.
  7. Select your audience. You can use the filters to select all channels not owned by specific people, e.g., those who are currently online and communicating 1:1 with customers.
  8. Make sure you click _Add to Audience_ to add the selected channels to the broadcast.
  9. Click _Next_ in the top right-hand corner of the UI.
  10. Set the engagement notification channel to #support-customer-success.
  11. Check that you're happy with the message and audience and click _Send Now_ in the top right of the UI.
  12. Ask everyone online to monitor #support-customer-success for replies and respond where necessary.

Communications templates

Growth | Source: https://posthog.com/handbook/growth/sales/communications-templates

We've put together suggested communications templates that TAMs can draw on for various situations like startup plan roll-off, incidents, increased churn risk, or new feature rollouts.

These templates are not meant to be restrictive, but a general idea of how we communicate with customers.

Available templates

Contract rules

Growth | Source: https://posthog.com/handbook/growth/sales/contract-rules

We are transparent about how we contract with customers, including what discounts are available. It's better for them, and better for us. We are allergic to the phrase 'let me talk to my manager and see what we can do' - we follow a principled approach.

Discounts

We don't offer discounts to customers paying monthly, irrespective of commitment.

Although our standard monthly pricing has volume discounts built in, it's common practice when negotiating software contracts for the customer (and their procurement team) to ask for a discount. We follow the 4 discount levers framework, being transparent about what drives our discounting:

The 4 discount levers & why they matter to us

Our general principle is that discounts are earned, not given. Each lever represents real benefit to both parties:

  1. Volume: The amount of credits purchased - Larger deals have economies of scale. Our cost to serve a $500k customer is not 10x a $50k customer, so we can share those savings
  2. Timing of Cash: When we receive payment - Money today is worth more than money tomorrow. Cash in hand lets us invest in product, hire engineers, and grow the business faster
  3. Length of Commitment: Contract duration - Longer commitments reduce our customer acquisition costs. We spend less on people doing renewals and can invest more in product development
  4. Ability to Forecast: Mutual agreement to timing - When both parties commit to specific dates (contract close, renewal timing), it helps us plan resources and helps customers secure budget

Every discount reflects a value exchange that provides a sound basis for our pricing. We don't offer unilateral concessions - better pricing comes from moving on one or more of these levers. This framework gives both sides a clear frame of reference for what drives pricing decisions.

How our discounts work

In our consumption-based pricing model, the first way for a customer to reduce spend is to ensure that they are only sending data to us which is valuable to them. There is different guidance here depending on which product(s) they are looking at.

Beyond optimization, we offer discounts based on four levers:

1. Volume discount (based on credit purchase amount - Customers must qualify for this discount before receiving discounts 2 through 4)
2. Length of commitment discount (additive)
3. Timing of cash discount (additive)

Note: We require upfront payment for all discounted contracts. Quarterly or split payment terms are not available as they impact our cash flow and add administrative overhead. If the full projected amount exceeds budget, customers can purchase fewer credits upfront at the corresponding discount tier and then add more later.

4. Ability to forecast - mutual commitment to timing (additive)

For monthly-to-annual conversions & net new agreements: +5% additional, non-recurring discount

For renewals: +5% additional discount

If timelines change: We will handle these on a case-by-case basis, but the default is to withdraw the additional discount if the customer does not sign an order form by the originally agreed time.
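If it helps to see how the additive structure above plays out, here is a minimal sketch. Every percentage value is a hypothetical placeholder, not one of PostHog's actual discount tiers.

```python
# Hypothetical sketch of the additive discount levers described above.
# Every percentage here is a made-up placeholder, NOT a real PostHog tier.

def total_discount(volume_pct: float, commitment_pct: float,
                   cash_timing_pct: float, forecast_pct: float) -> float:
    """Levers 2-4 stack additively on top of a qualifying volume discount."""
    if volume_pct <= 0:
        # A customer must qualify for the volume discount before
        # any of the other levers apply.
        return 0.0
    return volume_pct + commitment_pct + cash_timing_pct + forecast_pct

# Example: 10% volume + 5% commitment + 5% upfront cash + 5% forecast
# (all placeholder numbers) -> 25% total.
print(total_discount(10, 5, 5, 5))
```

The gating `if` reflects the rule that the volume discount must be earned before levers 2 through 4 can be added.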

You shouldn't offer discounts above the levels outlined here. If you go outside of these rules without clearing it with Ben Bradley (TAEs), Simon Fisher (TAMs or CSMs), or Charles Cook (as backup), you should assume by default that the deal will not count toward your quota.

Why we require up-front payment for credit purchases

We've found that split payment terms create friction for both teams – customers chasing internal approvals, us chasing invoices, nobody focused on delivering value. When customers pay quarterly or monthly, they often consume credits faster than they pay for them, effectively turning us into a line of credit. We are vendors, not lenders. Our focus is on building the best product, not managing accounts receivable. Up-front payments keep everyone focused on customer success and let us invest cash immediately into building features and support. If a customer needs payment flexibility, we're happy to adjust the credit amount and discount, per guidelines above, to fit their budget while maintaining up-front payment.

Credit for case studies

We don't offer additional discounts in exchange for a case study, as paying for case studies can devalue them. We should be working to get our customers to a state of happiness such that they are willing to tell everyone how great PostHog is without needing to pay for it.

Self-serve discounts

We also offer a way for customers to receive discounts on their usage without talking to sales or being on an Enterprise plan. In PostHog, if a customer meets certain criteria, they will see a banner on their billing page with a call-to-action (CTA) to "buy credits". The form they fill out will be auto-populated with an estimated number of credits based on their last 3 months of usage, but they can adjust this value as needed. They will have the option to pay with the card on file or to receive an invoice. Credits will be applied to their account once the invoice is paid.
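As a rough illustration of how such a pre-filled estimate might be derived (this is a guess at the logic, annualising the trailing three-month average; the actual form may calculate it differently):

```python
# Hypothetical sketch: pre-fill a credit estimate from the last 3 months
# of usage by annualising the average. All figures are invented examples.

last_3_months_usage = [1_800, 2_100, 2_400]  # hypothetical monthly spend ($)

avg_monthly = sum(last_3_months_usage) / len(last_3_months_usage)
estimated_credits = avg_monthly * 12  # projected 12 months of usage

print(round(estimated_credits))  # -> 25200
```

The customer can then adjust this starting value up or down before submitting the form.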

Requirements for self-serve discounts:

Additional notes on self-serve discounts:

Non-profit discounts

We offer additional discounts to non-profits:

We use tax law in the country of origin to determine what qualifies as a not-for-profit entity. If a customer can provide proof they fit their country's definition, the discount is applicable subject to the guidance above.

When evaluating a discount, it’s important to review our margin calculations to ensure we remain margin positive, especially for larger accounts.

To set up the non-profit discount in Stripe, follow these instructions.

Non-profit discounts only provide an additional 5% on top of standard volume discounts, and only for credit purchases between $25,000 and $100,000.

Legacy discounts

You might see some customers with a 30% discount on their monthly Stripe subscription. These were added when the only way we billed for PostHog was through event pricing. This was originally designed to offset the cost versus competitors who had unbundled Group Analytics or Data Pipelines. These customers will typically be on a higher per-event price plan, so we should look to get them migrated to standard pricing as soon as possible.

Startup plan discounts

For customers on our startup plan, we offer two months of free credit when signing a prepaid deal. This encourages startups to use their credits to understand usage, and then commit to a longer-term plan with PostHog. This offer is available until the first billing date after the credits expire. If a customer has used up their credits before the expiration date, they still have until the original expiration date to decide and claim the offer. The amount of free credits is determined by how much they purchase on a prepaid plan. By default, we work with customers on prepaid plans that will cover their usage for the next 12 months.

Important clarification: operationally this is implemented as free credits applied before the contract start date, not as extra credits inside the contract term unless a specific dollar amount for the free credits is explicitly included under Special Terms.

You should follow the same inbound sales process and work with the customer on understanding and optimizing their usage. Then follow these additional steps to present the prepaid plan + free credits option(s):

  1. Review the customer's average monthly cost
  2. Estimate the prepaid equivalent for 12 months of coverage (e.g. [monthly cost x 12])
  3. Inform them they can take advantage of this offer, which allows them to:
  1. If the customer wants to purchase fewer credits than the option above, then they will receive an additional 1/6 of the amount they wish to purchase for free.
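The credit arithmetic above can be sketched as follows, using a hypothetical average monthly cost:

```python
# Sketch of the startup roll-off offer: two months of free credit on a
# 12-month prepaid deal, i.e. free credits = 1/6 of the amount purchased.
# The monthly cost is a hypothetical example value.

avg_monthly_cost = 2_000  # hypothetical average monthly spend ($)

prepaid_12_months = avg_monthly_cost * 12  # suggested prepaid purchase
free_credits = prepaid_12_months / 6       # two months free = 1/6 extra

print(prepaid_12_months, free_credits)  # -> 24000 4000.0

# The same 1/6 ratio applies if they purchase a smaller amount:
print(12_000 / 6)  # -> 2000.0
```

Two months out of twelve is the same as one sixth of the purchase, which is why the smaller-purchase rule uses the 1/6 ratio.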

All free credits associated with startup plan roll-offs are one-time only, and should be denoted in the special terms of the contract as "An additional credit in the amount of XXXXX (offered to customers in exchange for rolling off the Startup plan) to be applied to Customer's account upon signature with the same expiration date."

For contracting purposes, these free credits should either be applied before the contract term or included in the 12-month credit amount. If they are being applied before the contract term, adjust the contract date to start 2 months later; the one-time credits can then be applied to cover the 2 invoices before the contract start date. In this case, the credits do not need to be called out in the contract, and the opportunity owner can add them as a one-time credit in billing admin.

Margin negative deals

In exceptional circumstances, we may explore providing additional discounts which eat into our operating margin for the following cases:

  1. They are a strategic logo we'd like to land as a brand-new customer.
  2. We are taking their business from a competitor.
  3. We are preventing them from churning to a competitor.

For the avoidance of doubt, these types of deals are very rare (~1 per year), and not offered to customers with standard usage volumes.

If you believe you have a customer who falls into one of these categories and would like to provide additional credit/discount then in the first instance run through the opportunity details including margin calculation with your manager, who will then clear it with Simon (TAMs/CSMs) or Charles (TAEs).

Additional credit purchase

Because it's often difficult to right-size the credit needed for a longer-term plan, as standard we offer to honor the discount provided in the original purchase for any additional credit purchased in the first half of a contract term (e.g. the first 6 months of an annual plan). Given our billing usage reports, within the first 6 months we should be able to predict whether the customer is going to run out of credit. There are also alerts set up in #sales-alerts to help notify account owners about this.

Price guarantees & lock-ins

We do not offer price guarantees for the following reasons:

  1. We regularly lower prices, which would result in higher costs for customers who've locked in a price
  2. We occasionally split or restructure products (e.g. Data Pipelines unbundled), which makes guarantees administratively complex
  3. Customers are in full control of their usage and can thus adjust their spending patterns as needed

This request most often comes from procurement teams unfamiliar with our pricing philosophy. Address it proactively in commercial discussions, but if there is pushback, reference the above points. As an example:

"We've dropped Events pricing [X]% over [timeframe]. A price guarantee would have cost you more. We're committed to matching the cheapest at every scale—if we're not, tell us. Our prepaid credits for usage-based pricing give budget control without betting against our commitment to low prices."

Multi-year credit allocation

If the contract is paid in full upfront, we will allocate all the credit purchased to the Stripe account when the contract is signed. As above, they can purchase additional credit in the first half of the contract term and take advantage of the same discount as specified in the original contract.

If the contract is invoiced annually, we will allocate the credit for that year to the Stripe account when the contract is signed, and then again when subsequent annual invoices are raised.

If a customer wishes to use subsequent year's credit early they must agree to pay the invoice for that year early before the credit is transferred.

The additional credit purchase applies to each year separately, e.g. they can purchase additional credits at the same discount level in the first 6 months of each year.

You can see a signed multi-year contract set up in this way by navigating to Documents -> Examples (folder) inside of PandaDoc.

Uptime SLA

Customers only get an uptime SLA if:

  1. They have subscribed to the Enterprise add-on; or
  2. You agree it with them as a special term as part of their contract if they are spending $100k+ ARR _post_ discount (i.e. $ spend, not credit usage).

An uptime SLA is not available to customers outside of these cases. You should certainly not agree to an SLA for customers on regular monthly contracts, and even for annual contracts it is not a given - it's one of multiple pieces you may have in play as you negotiate terms (much like a case study).

More details on how exactly the uptime SLA works can be found in our terms.

Payment method

For customers paying monthly, we only accept credit card payments, which will be taken automatically via Stripe at the end of their monthly billing period.

For customers purchasing credits upfront, we only take bank transfers because:

You should confirm ahead of the customer signing the order form that they are happy and set up to pay by bank transfer. If they are absolutely unable to accommodate bank transfer we can accept credit card payments under the following conditions:

If your customer must pay via credit card, you absolutely _need_ to let Mine (Simon as backup) know ahead of the order form being signed as there is a lot of manual work needed up front to make this work.

We absolutely do not allow payment by check. This is made clear on order forms.

Contract buyouts

Are you a potential customer who wants to speak to us about a contract buyout? Get in touch with the Sales team via your shared Slack channel, or reach out directly.

Sometimes customers will be locked into a contract with a competitor, but want to switch to PostHog when their contract is up. In this case, we are willing to let them use PostHog for free for up to 6 months. This is beneficial to PostHog as well, as we can get them set up and using PostHog sooner, capitalizing on the momentum of their interest today, and giving them more time to get comfortable with the platform.

Some rules:

Normal commission rules apply here - commission is paid in the quarter in which the customer pays their invoice.

New business renewal credits

If a customer is currently _not_ a paying user of PostHog, but is a user of one of our competitors, about to renew, and is shopping for a better deal, we are willing to significantly undercut the quoted renewal price. This is because those customers are not that likely to move over to us anyway, and quoting them a lower price works out in our favour either way:

  1. If the competitor matches our much lower offer, and the customer accepts, we've significantly reduced the competitor's revenue.
  2. If the customer accepts, we've gained net new revenue we otherwise would have missed out on, and we have the opportunity to sell more.

In order for this to not mess up later renewals, the way we do this is by giving them credit for the first year in order to reach a total discount of 40%. For example, if the quote from the competitor is $50k, and the total cost for our product (including other discounts) is $40k, we will give them additional credits worth $10k, in order to undercut the total quote by 40%.
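Using the figures from the example above, the first-year credit top-up works out as:

```python
# Worked example from the paragraph above: undercut the competitor's
# renewal quote by 40% using first-year credits.

competitor_quote = 50_000  # competitor's renewal quote ($)
our_total_cost = 40_000    # our price after other discounts ($)

target_price = competitor_quote * (1 - 0.40)   # 40% below the quote
extra_credits = our_total_cost - target_price  # first-year credit top-up

print(target_price, extra_credits)
```

Because the extra amount is given as first-year credits rather than a price cut, the list price on later renewals is unaffected.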

In order to qualify for this, the customer needs to send us the full quote document from the competitor.

Credit over/under usage for contracts

When they don't have enough credit to cover their term

We have CreditBot alerts set up in #sales-alerts when a customer is going to run out of credit before their contract term ends, with the estimated runway remaining. The Vitally account owner (AE or CSM) will be tagged in this message. It's best to be proactive here so that the customer is right-sized well before the credit runs out:

For any of the above scenarios you should use our discounting principles which apply to the credit purchase amount.

In scenario one above, if their expansion credit purchase takes them into a higher volume discount tier, we should include this discount tier for them in the expansion contract. We won't issue a refund for the difference in discount when the expansion order form discount tier is greater than the discount tier of the original order form.

When they will end the contract term with credit remaining

We can roll up to half the amount of credit from the original order form to a new contract term, provided that the customer signs a renewal contract of equal or higher annual spend than the original contract.
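One way to read that rule, as an interpretive sketch rather than official policy logic (all figures are hypothetical):

```python
# Interpretive sketch of the roll-over rule above: up to half of the
# ORIGINAL order form's credit can roll into the new term, and only if the
# renewal's annual spend is equal or higher. All figures are hypothetical.

def rollover_credit(original_credit: float, remaining_credit: float,
                    original_annual_spend: float,
                    renewal_annual_spend: float) -> float:
    if renewal_annual_spend < original_annual_spend:
        return 0.0  # renewal must be of equal or higher annual spend
    return min(remaining_credit, original_credit / 2)

# $100k original credit, $30k left, renewal at the same spend:
print(rollover_credit(100_000, 30_000, 100_000, 100_000))  # -> 30000
# With $70k left, only $50k (half the original) could roll over:
print(rollover_credit(100_000, 70_000, 100_000, 100_000))  # -> 50000.0
```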

When a customer doesn't renew their credit purchase

When a customer chooses not to renew a prepaid credit contract we automatically remove any remaining credits on the expiry date. Their account will then roll onto our standard monthly plan and they'll be charged for usage. It's the customer's responsibility to stop sending us events or cancel their subscription and downgrade to the free tier if they don't want to keep paying.

Varying contractual terms

When we vary terms

If a customer wants to vary either our standard template DPA, BAA, or MSA terms, it is a substantial effort for our legal team to review these suggested changes (also known as "redlines").

At a minimum, we will only do this for contracts above $20k a year, and we should expect even higher amounts of committed revenue if they are asking for big changes (e.g. changing significant provisions, adding Service Level Agreements, etc.). A customer needs to either be spending this amount at present, or agree to commit to this spend via an annual contract, in order to initiate legal review of suggested changes. We evaluate all requested changes proportionally against their annual committed spend with PostHog. A customer's annual committed spend needs to be defined before proceeding to a negotiation over legal terms, otherwise there is no frame of reference for the negotiation.

We also sometimes receive unsolicited requests to vary our terms. In these instances the legal team will redirect the customer to work with their PostHog contact person for this, as we will only review redlines for a managed customer or opportunity where the potential annual revenue is understood.

See the guidance below if the customer asks to use their own contract instead of ours.

How customers should suggest requested terms

The customer should redline the current .docx version of the document in question. You can find the latest versions of the templates in the Team Internal Info tab in the #team-sales Slack channel (do not save versions locally).

We don't accept redlines on our standard terms of service, and if a customer has proposed this, you should share the correct templates with them before involving legal.

Once they have returned the redlines, first check that they have used the template you provided, then share the document for review in the #legal channel. There will usually be a few rounds of back and forth as we converge on an agreement. You will continue to represent PostHog's position to your customer throughout the negotiation. Please work with #legal on the appropriate responses and communicate clearly with our customers.

What customers should expect

PostHog evaluates legal risk assumed against annual revenue received. In other words, contractual terms will be varied in proportion to the customer's committed annual spend with PostHog.

To illustrate with examples:

At any potential level of annual spend, PostHog will not proceed under unreasonable legal terms. Certain suggested terms may be non-starters for PostHog.

Varying terms for trials and proofs of concept (POCs) for prospective customers

We don't vary PostHog's standard terms for trials and proofs of concept (POCs) for prospective customers.

All prospective customers are welcome to try PostHog for free and under our standard terms (including our standard DPA and BAA, if applicable).

We don't negotiate terms for trials and POCs for three reasons:

  1. Unlike many of our competitors, an annual subscription is not required to access PostHog, so a negotiated agreement is not necessary to use our services. Our product-led motion is designed to support customers trying PostHog.
  2. When evaluating custom legal terms, PostHog evaluates legal risk assumed against annual revenue received.
  3. Because prospective customers are paying us $0 for a free, sales-led trial or POC, there is no frame of reference for us to evaluate any potential custom terms. Spending our time and legal resources negotiating these terms is premature when a prospective customer doesn't know that they want to proceed with PostHog at all, much less at a qualifying level of annual usage.

Once the trial concludes, and per our guidance on varying terms, we will be happy to evaluate custom legal terms for an otherwise qualified PostHog customer.

Using non-PostHog contracts

If a customer requests to use a non-PostHog drafted contract for documents such as a DPA, MSA, Order Form, or BAA, we generally decline, except in special circumstances (see 'When we vary terms for customers'). We avoid doing this as it adds too much risk for us, and also because reviewing and negotiating non-standard terms introduces significant operational inefficiencies and doesn't scale well as we continue to grow. We typically do not even consider using customer paper unless the deal is over $200k annually or involves an extremely blue-chip company. It is best to manage this expectation early and avoid entertaining the idea with customers.

We are somewhat more flexible when it comes to NDAs. That said, since we contract through our U.S. entity, we require customer NDAs to be governed by U.S. law. This is necessary to maintain consistency and ensure we’re not taking on legal or operational risk in jurisdictions where we don’t operate or fully understand the legal landscape. This is mainly about ensuring we can review and manage agreements efficiently with our limited legal resources.

Managing contracts

Growth | Source: https://posthog.com/handbook/growth/sales/contracts

Creating and managing contracts

For customers who want to sign up for an annual (or longer) plan there is some additional paperwork needed to capture their contractual commitment to a minimum term, and likely custom pricing as well. At a minimum, they should sign an Order Form which references our standard terms and privacy notice. In addition, they may want a custom Master Services Agreement (MSA) or Data Processing Agreement (DPA).

What about monthly customers?

Anyone on a monthly plan simply agrees to our terms and privacy policy when they sign up.

QuoteHog pricing calculator

While we offer transparent pricing available to all, you can use QuoteHog for customers who need a "formal quote," or who have very high volumes, or otherwise have bespoke needs.

Sign into QuoteHog via your PostHog Google account/SSO. Upon login, you will see a list of existing quotes, sorted by the created date. You can view a previously created quote or create a new quote using the "New Quote" button at the top right.

The quoting interface is intuitive and, of course, uses the same pricing we display publicly. Feel free to involve a customer in creating a quote if the opportunity presents itself and you think it would build trust.

Be sure to always click the "Save" button after making changes to a quote. QuoteHog does not autosave.

Quotes can be shared externally or embedded in an external source. Click the Dot Menu from a Quote and click "Share". If someone asks for a PDF version of a quote, you can view the external version and print it to PDF.

QuoteHog also provides Stripe reported usage and spend for existing customers. To do this, you need to first connect QuoteHog to Salesforce from the profile page. As you build a quote, click "Add customer info" and search for your customer account. This also allows you to link the quote to an existing Salesforce opportunity.

When building a quote for an annual plan conversion or renewal, consider:

  1. How is usage trending? Looking at the past 6 months of usage (usage history tab in QuoteHog):

Note: QuoteHog's input expects monthly volume, so after estimating annual volume, don't forget to convert it to monthly volume.

  2. Is there opportunity to adopt additional products? How does that affect future usage?

You can create quotes with multiple options: e.g. one based on current usage, one with a higher tier to account for growth potential.
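Since QuoteHog's input expects monthly volume, any annual projection has to be divided back down before entry. A trivial sketch with made-up numbers:

```python
# Hypothetical example: QuoteHog expects monthly volume, so a projected
# annual volume (e.g. estimated from the usage history tab) must be
# divided by 12 before being entered into the quote.
projected_annual_events = 120_000_000  # made-up annual projection

monthly_volume = projected_annual_events // 12  # value to enter in QuoteHog
print(monthly_volume)  # 10000000
```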

The legacy pricing calculator is available here.

Order form

An order form is a lightweight document that captures the customer details, credit amount, discount, term, and signatures from both PostHog and the customer. They are either governed by our standard terms or a custom MSA (see below).

You will likely need to use QuoteHog to get the correct credit amount to be included in the order form.

Creating an order form

We use PandaDoc to handle document generation, routing, and signature. Ask Mine Kansu or Simon Fisher for access if you don't have it.

  1. The order form template to use is titled [Client.Company] PostHog Cloud Order Form - <MMM YYYY>
  2. When looking at the template, click the link to Use this template in the top bar.
  3. In the Add recipients box which pops up:
  4. Replace <MMM YYYY> with the month and year the contract starts (e.g. March 2023)
  5. Add the Client email, first and last name
  6. Add the PostHog Signer email - normally the team member who is responsible for the customer (AE or CSM).
  7. Click continue
  8. In the pricing table, set the total amount of credit in the Amount box next to PostHog Cloud Credit
  9. Remove the Enterprise Plan line item if not needed.
  10. At the bottom of the pricing table, set the Discount % just above the Total
  11. On the right of the screen there is a sidebar, select the Variables tab and populate them as follows:
  12. If they are paying monthly, change:
  13. If an MSA is being used rather than the standard terms you will need to replace the following text:

PostHog Cloud License Terms appearing at: https://www.posthog.com/terms and Privacy Policy appearing at: https://posthog.com/privacy (collectively the “Agreement”)

with either

PostHog Cloud License Terms entered into by and between the Parties on or about the date hereof and Privacy Policy appearing at: https://posthog.com/privacy (collectively the “Agreement”).

or, if the Customer insists on including the exact date of the MSA to remove ambiguity,

PostHog Cloud License Terms entered into by and between the Parties on or about [INSERT DATE OF EXECUTION OF MSA] and Privacy Policy appearing at: https://posthog.com/privacy (collectively the “Agreement”).

  14. You should link the order form to the opportunity record in Salesforce using the Contract Link field in the "Opportunity Closure Details" so that we have a reference to the completed paperwork from our CRM.

Routing an order form for review and signature

  1. When viewing the order form, check the recipients tab in the sidebar. The Client and PostHog roles should be filled in.
  2. A signing order should also be set, with the Client signing first (so they can review it before we sign).
  3. Ensure Document forwarding and Signature forwarding are set to on so that the Client contact can re-assign the document if needed.
  4. Click Send at the top of the document and add a message explaining the context of the order form.
  5. Once the Client and then PostHog have signed it you should get an email to confirm completion.
  6. Don't forget to link to an opportunity in Salesforce and mark the associated opportunity as Closed Won.
  7. Zapier will automatically add a record in the Annual Plan Table with the PandaDoc Order Form ID.
  8. Celebrate!

Manual upload of signed order form

We prefer to keep all signatures in PandaDoc, but sometimes clients may prefer to sign a PDF copy. One way to minimize this is to send contracts for initial review via PandaDoc when possible. It is ok to have multiple drafts in PandaDoc as long as we have the final signed copy in there as well. When a client signs an order form outside of PandaDoc, please follow these steps to complete the process:

  1. If you have previously created a draft, find the document from the Documents page in PandaDoc (note: you cannot change the status from Home or inside a document).
  2. If no draft exists, upload the signed document directly as a new document in PandaDoc.

Once the signed form in PandaDoc is marked as complete and the Salesforce opportunity status is set to Closed Won, the RevOps team will get a notification and handle setting up the subscription and invoicing. See the Billing page for more information on how the billing setup works.

Master Services Agreement (MSA)

Occasionally, customers will want to sign an MSA instead of referencing our terms in an order form.

  1. Download a copy of the PostHog Cloud MSA as a Word Document (legal teams prefer this format) and share it with your Customer contact.
  2. They may want to propose changes (also known as 'redlines'). Work with Hector or Fraser to get these agreed.
  3. Create a new document in PandaDoc; you can either import from Google Drive or upload from your local machine. This should be the clean, non-redlined document as agreed by both parties.
  4. Change the name to be PostHog Cloud MSA - CUSTOMER LEGAL NAME.
  5. Add the Client and your name and PostHog email as roles.
  6. Add a Signature, Name and Title field for both PostHog and the Customer.
  7. Check the signing order (Client, then PostHog normally).
  8. Send for signature - so long as any proposed changes have been reviewed and approved by Hector or Fraser, you are free to sign on behalf of PostHog

Sometimes large customers will ask for changes to our MSA. We keep a list of the kinds of changes we will/won't consider in the company-internal sales contract changes directory of a private repo; you can generally agree to the listed changes without the Ops team reviewing. However, if you are ever in doubt, ask in #legal in Slack.

Business Associate Agreement (BAA)

We offer HIPAA compliance on PostHog Cloud, so health companies will require us to sign a Business Associate Agreement with them. As this means we take on increased financial risk in the case of a breach, we ask that, at a minimum, they subscribe to one of the platform packages, which guarantees a monthly payment. A maximum of one BAA per organization will be signed. Under most circumstances, it should be with the company that owns the org/pays us.

  1. Ask the customer to subscribe to the platform add-on (as well as any other paid plans they wish to use). You can verify this in Vitally by ensuring that they are in the Teams Plan segment.
  2. Create a new document from the PandaDoc template.
  3. All you need to do is set the Client.Company variable and then send it to them for review and signature.
  4. It has been pre-signed by Fraser and will automatically add today's date as the date of signature for PostHog.
  5. You'll get a notification when everybody has signed it - we have automation in place to ensure that the HIPAA BAA Signed Date property on the customer's Salesforce Account record is updated.

We only provide our default BAA for platform add-on subscribers - customization requires >$20k annual spend. The BAA only remains active for as long as the customer is subscribed to a platform add-on - if they unsubscribe, we send them a message that their BAA will become inactive at the end of the month in which they cancelled. A customer who is on a platform add-on trial (with a credit card in PostHog) is eligible to sign a default BAA, but you should make it clear to them that the default BAA will be voided if/when the platform add-on subscription lapses. If the lead is not sure whether they will need a custom BAA and their usage wouldn't put them at $20K, then it is worth pushing them to get legal feedback by sending them our BAA before moving forward, else you risk spending a lot of time on an evaluation that ends up at $450/month.

Non-disclosure Agreement (NDA)

In some cases, prospective or current customers require a mutual Non-disclosure Agreement (MNDA) in place before conversations or product activity can proceed. Our terms already specify confidentiality, but if a documented agreement is still requested, this can be easily accommodated.

Trust center approvals

Requests that originate from the Trust Center are automatically sent an NDA from SafeBase via PandaDoc. Once the document is fully signed, access will automatically be granted.

Managing our CRM

Growth | Source: https://posthog.com/handbook/growth/sales/crm

Overview

We use Salesforce as our customer relationship management ('CRM') platform. If you need access, you can ask Mine Kansu for an invite.

As a first step, make sure you connect your Gmail account under your Salesforce settings. Go to Settings → Connected Accounts → Gmail and connect it. This ensures all your customer emails sync automatically with Salesforce. Next, make sure your Gmail account is connected in Vitally. This is essential so that we capture the full customer context and avoid duplicate or conflicting outreach.

As a general principle, we try to make sure as much customer communication as possible is captured in Salesforce rather than in individual email inboxes so that we make sure our users are getting a great experience (and not confusing or duplicate messages from different team members!). You should use the channel that suits the user, not us. Just make sure you keep Salesforce up to date with your interactions. We’ve found Slack messages usually get better response rates than email.

For existing customers, you'll sometimes send emails directly from Vitally. To ensure these also make it to Salesforce, first look up your _Email to Salesforce Address_ from the personal settings page in Salesforce, and then add it to your Vitally gmail settings.

All Slack messages sync up with the corresponding account in Salesforce. We use Pylon for this sync, so make sure Pylon is added to the customer Slack channel integrations and the channel is linked to the Salesforce account properly for the sync to work smoothly.

You are most likely to use the following regularly:

Salesforce offers a ton of resources if you want to dig deeper.

Managing our CRM

People currently come into Salesforce through one of the following ways:

New PostHog signups

When a "user signed up" (Cloud signup) event is ingested into PostHog we use the Salesforce App to sync contact data into Salesforce. We also populate the following Salesforce properties if they are set in the PostHog event:

Completed contact form

We have a contact us form on posthog.com where users can get in touch with us. The sales@ alias gets an email notification, and a notification is also sent to #sales-leads in Slack when one of these forms is submitted.

These submissions are processed through the Default app and routed into Salesforce as tasks. Tasks are then automatically assigned to the right team member based on account ownership and territory (see below).

If the submission is clearly a support or billing request, you don’t need to reach out manually:

Zendesk Integration

If you add the "sf-lead" tag to a ticket in Zendesk, a new lead will be automatically created in Salesforce. This helps streamline the process of converting support questions or tickets into potential sales opportunities directly from Zendesk.

If you see "Zendesk" as the lead source, please review the ticket under the Zendesk widget in Salesforce, which lets you view the full context without leaving Salesforce. The last request will also appear in the sales_form_message field for quick review before the Zendesk ticket is converted to a lead.

Forwarding sales opportunities

If you are not in the sales team but are engaged with a client and identify a sales opportunity, forward the email chain to sales@posthog.com. A new lead will be automatically created in Salesforce and assigned to the appropriate AE based on existing criteria. This way we can smoothly hand off potential opportunities and track things properly!

Important: The email must be forwarded (not replied to), and sales@posthog.com must be in the "To:" field—not CC or BCC—for the automation to work correctly.

Task assignment logic

When a new task is created, we first check whether the associated account already has an owner:

This ensures we avoid double assignments and maintain clear ownership.

Territories

Each territory runs its own round robin assignment for new, unowned accounts.
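As a rough illustration of the assignment logic described above (the real automation lives in Salesforce, and the rep names here are placeholders), an owner check followed by a per-territory round robin might look like:

```python
from itertools import cycle

# Illustrative sketch only: each territory keeps its own rotation of
# reps, and new tasks on unowned accounts go to the next rep in that
# territory's cycle. An existing account owner always wins.
territory_rotations = {
    "AMER": cycle(["rep_a", "rep_b"]),
    "EMEA": cycle(["rep_c", "rep_d"]),
}

def assign_task(account_owner, territory):
    """Return who a new task should be assigned to."""
    if account_owner is not None:
        return account_owner  # avoids double assignment
    return next(territory_rotations[territory])

print(assign_task("rep_existing", "AMER"))  # rep_existing
print(assign_task(None, "EMEA"))            # rep_c
print(assign_task(None, "EMEA"))            # rep_d
```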

Stale task reassignment

If a task is assigned to someone but remains untouched for 10 days, it will be automatically reassigned once via round robin. If it remains untouched after reassignment, it will be automatically disqualified with the Stale – Autoclosed reason.
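The stale-task rule above can be sketched roughly as follows (illustrative only; the actual flow runs in Salesforce):

```python
from datetime import datetime, timedelta

# Sketch of the stale-task automation: untouched for 10 days triggers a
# one-time round robin reassignment; still untouched after that, the
# task is auto-disqualified as Stale – Autoclosed.
STALE_AFTER = timedelta(days=10)

def next_action(last_touched, already_reassigned, now):
    if now - last_touched < STALE_AFTER:
        return "keep"
    if not already_reassigned:
        return "reassign via round robin"
    return "disqualify: Stale – Autoclosed"

now = datetime(2026, 1, 20)
print(next_action(datetime(2026, 1, 15), False, now))  # keep
print(next_action(datetime(2026, 1, 5), False, now))   # reassign via round robin
print(next_action(datetime(2026, 1, 5), True, now))    # disqualify: Stale – Autoclosed
```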

Converting tasks to opportunities

If a task represents a qualified opportunity:

Use the following criteria (loosely based on traditional BANT qualification) to determine when a task should be converted to an opportunity:

All of the above criteria should be met before creating an opportunity. By doing so you drastically increase the odds of bringing them onboard as a successful customer.

If you aren't able to confidently say that you have covered the above, you should keep them as a Lead in the Nurturing stage.

Task disqualification reasons (reference)

When you disqualify a task, choose the picklist reason that best matches the situation. Salesforce groups reasons into categories; full definitions are on the field in Salesforce. This table is a quick map so the team uses the same buckets.

| Category | What it means | Reasons (picklist) |
| --- | --- | --- |
| Auto-Dispositioned | System closed the task or sent auto emails without hands-on sales triage. | Below Threshold – Auto; Stale – Autoclosed |
| Re-route | Send the conversation to another PostHog team. | Support Request; Billing Support Request; Existing Customer Inquiry; Event Request; Partnership Request |
| Not a Lead | Not a commercial sales opportunity for this path. | Spam; Duplicate Lead; Non-Commercial; Startup Plan / YC; Self-Hosted Requirement; Business Closed; Feedback; BAA / DPA Request |
| Unreachable | You cannot reach them or they stopped responding; split by whether they are worth revisiting. | No Response – Pass; No Response – Prospect; Invalid Contact Info |
| Fit | You could assess fit; outcome is ICP, product, technical sponsor, or competitive situation. | Below Sales Assist Threshold – Pass; Below Sales Assist Threshold – Prospect; Not a Good Fit; No Technical Resource; No Product Fit; Using Competitor / Unsolicited RFP |
| Timing / Economics | Fit may be fine later; budget, timing, or capacity block a deal now. | No Budget; No Current Need; Resource Constraints |
| Other | None of the above; use sparingly. | Other |

Splits and follow-ups (pick Pass vs Prospect carefully so reporting and nurture stay accurate):

Manual entry

If you meet a potential customer elsewhere (e.g., events, introductions, referrals):

Support and billing routing

For support or billing questions submitted via the sales channel, disqualify with Support Request or Billing Support Request as in Completed contact form (Zendesk ticket automation). If you still see legacy lead records from older flows, the same reasons apply; ticket creation may use this Zapier path for some automations.

Below Threshold – Auto (Customer.io)

When you should route someone to self-serve onboarding instead of hands-on sales, mark the task Below Threshold – Auto. That triggers the automated onboarding flow in Customer.io, which guides them without manual outreach. Manual TAE judgment under the sales-assist threshold uses Below Sales Assist Threshold – Pass or Below Sales Assist Threshold – Prospect (see splits above), not this auto reason.

Spam

These mostly come into the sales inbox rather than the contact form. Whilst there is a Spam disqualification reason in Salesforce, we can also prevent users from emailing the group again by banning them in the Sales Google Group. If you do ban someone, bear in mind they won't be able to email our sales email until the ban is lifted, so only use this for genuine spam (e.g. people trying to sell us competitor user lists).

Lead qualification criteria

Best practices

Handling time off (PTO)

By default, when you are out, leads will still be routed to you, and as we have no expectation of you being available whilst on PTO, leads may be missed and not followed up on. To mitigate this, we need to temporarily remove you from the lead round robin:

Opportunities

Opportunities track potential deals in Salesforce. Managing opportunities effectively is important for tracking progress, forecasting revenue, and ensuring accurate reporting. In our sales process, for each lead conversion we create an Opportunity. Correctly identifying the appropriate opportunity record type is important to optimize our processes.

Opportunity record types

New Revenue: Select this type when engaging with a customer who has never paid us before. This includes new customers and startup customers transitioning to a paid plan for the first time.

New Revenue – Existing Customer: Choose this type for additional credits to a customer who is already paying us. This includes upsells, cross sells, or expansion within the same account.

Existing - Convert to Annual: Choose this when discussing an annual contract with a pay-as-you-go customer.

Renewal: Choose this type when an existing customer is renewing their contract or subscription for our products or services. We automatically create a renewal opportunity if an 'Annual Plan' type opportunity is Closed (more on these later).

Opportunity types

Annual Plan: Select this type when the customer agrees to pay for a year-long+ subscription to our services.

Monthly Plan: Choose this type when the customer opts for a month-to-month subscription to our services. Amount field still reflects ARR here.

How to create an opportunity

Convert a task

If you're working a lead and want to create an opportunity from a task, simply check the Create New Opp checkbox and select the appropriate Opportunity Record Type from the dropdown.

This ensures the Lead Source is correctly carried over to the new Opportunity, and the task and opportunity remain linked for full visibility.

Creating an opportunity from scratch

You can also create an opportunity directly from scratch, but make sure to connect it to a Contact and an Account so all relevant data is linked properly. To do so:

Opportunity stages

Stages will differ depending on the chosen Opportunity Record Type. The following stages are for the New and Existing Business Record Types:

  1. Problem Agreement - Buyer explicitly acknowledges they have a meaningful problem that can be qualified (e.g. "What happens if you don't solve this problem?")

Exit criteria:

  2. Solution Agreement - Buyer confirms our solution is best suited to solve their problem. Can be as simple as "We think PostHog will work for us"

Exit criteria:

  3. Priority Agreement - A senior decision-maker acknowledges the problem as a priority and validates our solution.

Exit criteria:

  4. Commercial Agreement - Mutual agreement is reached on price and all contractual terms.

Exit criteria:

  5. Vendor Approval - Buyer completes internal processes (legal, security, procurement) and contract is executed.

Exit criteria:

  6. Closed Won (100%) - They have signed the contract and are officially a PostHog customer.
  7. Closed Lost (0%) - At some point in the pipeline they decided not to use us. The Loss Reason field is required for any opportunity to be marked as Closed Lost.

Bolded exit criteria indicate the minimum standard for the opportunity to advance stages (for typically smaller, more transactional deals). More detail is available on the stages and the exit criteria for each stage in this spreadsheet.

Forecast categories

Commit: PostHog is integrated and the buyer has stated an intent to purchase within the Close Date quarter.

Best case: PostHog is or is being implemented, volume justifies an annual commitment, and the buyer has stated interest in purchasing within the Close Date quarter.

Pipeline: Buyer is actively evaluating PostHog or intends to evaluate PostHog within the Close Date quarter and early volume/discussion indicates an annual contract could be justified.

Omitted: Not used. You can omit from Forecast by moving the Opportunity to a new quarter or marking it as Closed - Lost.

Forecast categories should be re-evaluated on an ongoing basis. While it is not ideal for Opportunities to move to an earlier category, we should do so if this reflects reality, especially as quarter end approaches. In addition, we should think about what we can do differently in future to make the forecast more accurate.

Renewal pipeline

When an opportunity with Annual Plan type is Closed Won, a Salesforce flow will create an opportunity associated with the contact and account from the original opportunity. The following fields will also be set:

The renewal pipeline stages are:

  1. Qualification (10%) - They have just become a PostHog customer and we're helping them get set up.
  2. Meeting booked (20%) - They have reached a steady state where we consider them self-sufficient with PostHog.
  3. Product Evaluation (50%) - This step becomes relevant if decision makers have changed in the organization or if new teams within the company are considering using us.
  4. Commercial & Legal Review (80%) - We are now working with them on contractual items such as custom pricing, MSAs etc.
  5. Closed Won (100%) - They have signed the contract.
  6. Closed Lost (0%) - At some point in the pipeline they decided not to renew. We should make a note as to the reasons why and optionally set a reminder task to follow up with them if we have improvements that could change their mind on our roadmap.

Opportunity notes

The "Opportunity Notes" section is for tracking key actions and next steps to manage an opportunity and avoid missed follow-ups. It has the following fields:

Opportunity closure details

This section adds additional information for opportunities that are won or lost, capturing the context and details needed to set up the customer account correctly:

Self-serve opportunities

If you feel a customer doesn't fit a hands-on flow, you can mark the lead or opportunity as self serve. There are two ways to do this:

1. Self serve - no interaction

Use this checkbox when you decide to move a new lead to the automated self serve flow without any personal interaction or discussion. You can use this checkbox when a lead does not meet the necessary qualifications for direct engagement and the automated self serve emails would be sufficient for successful onboarding.

How to use:

2. Self serve - post-engagement

Use this checkbox if you have engaged with the lead in some form, such as a demo or discussion, but you believe they can proceed without further involvement.

How to use:

Important notes:

Points to consider when marking leads as self serve:

When moving someone to self-serve we should set them up for success by using the Post Demo route to self-serve. This encourages them to sign up to PostHog Cloud and provides some helpful getting started pointers. If there were any follow-up questions from the initial meeting, we should answer those in this email as well.

If you move an opportunity to self-serve then it won't be included in your quota retirement/commission calculation (as you aren't working on it).

All done - now what?

This is just the beginning of what will hopefully be an awesome relationship with a new customer!

We are just getting started here, but a few things that you should do:

CSM + TAM rules of engagement

Growth | Source: https://posthog.com/handbook/growth/sales/csm-tam-overlay-coverage

Some accounts have both a CSM and a TAM. The point is depth: two people sharing the load so each can focus on what they're best at, and the customer gets a better experience than one person stretched across everything.

Both roles have a real relationship with the customer. Both are in the Slack channel. Both know what's happening on the account. The difference is _focus_, not ownership.

The customer should never have to figure out who to contact. They reach out to either person, and PostHog sorts it out internally.

What each role focuses on

CSM

TAM

Both

What good looks like

What bad looks like

How to respond to frequently asked questions

Growth | Source: https://posthog.com/handbook/growth/sales/customer-faqs

Here's how to respond to common customer requests. These usually arrive in the form of new contact form submissions but may also be asked by existing customers too.

Can you increase my rate limits?

Here's how we'd break down use cases:

flowchart TD
    A{Is the API hitting <code style='padding: 4px; border-radius: 8px;'>/query</code> rate limits?}
    B{Does the use case fit endpoints?<br />(i.e. B2B2C user-facing analytics, data-powered APIs, internal home-grown dashboards)}
    C["Explain the use case in #team-data-modeling <br />(we're keen to talk to beta users!)"]
    D{Should they use batch exports instead?}
    E[Redirect them to start paying for batch exports.]
    F[1. Assume we're not increasing rate limits.<br />2. Reach out to #team-clickhouse with query details.]
    G["Go to the relevant team for that API<br />(Feature flags, Surveys, ..)"]

    A -->|Yes| B
    A -->|No| G
    B -->|Yes| C
    B -->|No| D
    D -->|Yes| E
    D -->|No| F

See RFC #438 for more context.

Do you have plans to add more hosting options outside of the US and EU?

Right now, no. The vast majority of our customers are happy to host on one or the other, with the EU being preferred for GDPR compliance. This is not a "never", just not in the near future.

Do you have a dummy account we can mess around with?

No, the best way to trial PostHog is to start sending your own data into it. When a trial is filled with dummy data that isn't relevant to the specific team, the overall engagement and success of the trial is lower.

Does PostHog follow the MEDDPICC sales methodology?

Yes! But like everything we do here, it's not what you would expect. At PostHog, MEDDPICC means "Make every deal a delightful PostHog implementation - Charles Cook"

New customer onboarding

Growth | Source: https://posthog.com/handbook/growth/sales/customer-onboarding

Sales-led

Day -1 - Session: Initial demo

Our moat is that we have a fully-integrated tool that allows customers to go across Analytics, Recordings and Experimentation easily. We want new customers to see the value of this as quickly as possible when evaluating us against other solutions.

For high-touch prospective customers the following process will get them onboarded quickly so that they can experience the value we provide using their own product data.

The process should run for 2 weeks by default, but can be extended if we think it's worth the additional effort.

The aim is for the customer, by the end of the evaluation, to have:

  1. Sent in auto-captured or custom-captured event data
  2. Enabled session recordings
  3. Created a trend chart tracking User Acquisition
  4. Created a funnel tracking Activation
  5. Added the above to a dashboard

At the end of the demo call

If at the end of a demo call we think a customer qualifies for high-touch onboarding we should outline our suggested evaluation approach. If they aren't quite ready to kick the evaluation off then we should follow up with a templated email reminding them of the process, then check in with them after they've had some time to regroup.

High touch criteria

As a small team we have limited bandwidth to run customer evaluations so we need to focus on potential customers who:

  1. Are likely to contract above $20k with us.

(Ideally we qualify this by giving indicative pricing in the demo)

  2. Are likely to enter into an annual contract.

(This is quite a high-effort process for people just going month to month)

  3. Are ready to get hands-on with PostHog and will make a decision in weeks, not months.

Expectations of the customer

We'll need them to be able to demo their product to us, as well as attend two or more zoom calls where we scope out the data and help them get set up.

Ideally we will also have them in a Slack Connect channel so that we can provide responsive support and expose them to the wider PostHog team.

Some customers may wish to use MS Teams rather than Slack - we can sync our Slack with Teams via Pylon to do this. First you will need an MS Teams licence - ask Simon for one. Then, follow the instructions on the link here to get set up. Before adding the customer into the channel, remember to test it on both sides to ensure the integration is working correctly.

Day 0 - Session: Kick off

At the start of the evaluation, we want to review their product to understand and advise on the best approach to tracking, as well as address any privacy concerns associated with session recordings.

By the end of the call we should have a plan for event capture/opt-out capture and an agreed timeline to get that set up.

Prerequisites

The customer should come prepared to demo their product to us, where we can help figure out the key tracking events needed for the evaluation to be successful.

If they don't already know about AARRR we should share our AARRR blog post and Tracking Plan and ask them to review it before the call.

Structure

  1. Review goals and structure of this session
  2. Review key concepts
  3. Have the customer demo their app to you, focusing on where the above information is captured
  4. Recap and agree the tracking and other implementation details
  5. Agree the timeline to have tracking implemented and set up the following call (ideally 3 days after capture is implemented)

Deliverables

  1. A partially filled-in tracking plan detailing Activation and Acquisition
  2. A code snippet showing them how to implement tracking for their product (including Identification and Groups if applicable)
  3. Elements and pages to add opt-out-capture to

Day 3 - Session: Using PostHog

The aim of this call is to get the customer familiar with navigating PostHog as well as:

As much as possible, the customer should share their screen and drive the session; by teaching them to fish, you make them comfortable and self-sufficient with PostHog.

Prerequisites

Tracking should be set up in line with what was shared after the previous call.

Structure

  1. Review goals/agenda
  2. Have them share their screen and guide them through:
     1. Live events
     2. Creating actions
     3. (Optionally, if using autocapture) the toolbar
     4. Creating cohorts
     5. Creating their Acquisition trend insight
     6. Creating their Activation funnel
     7. Adding insights to a dashboard
     8. Navigating from a dashboard to an insight to recordings
     9. Noting any inconsistencies or missing tracking information, and planning follow-up to help get that set up
     10. Showing them the billing page and their projected usage (pricing discussion)

Deliverables

  1. Updated tracking guidance based on issues discovered in the guided demo
  2. Updated pricing quote based on volume

Next Steps

Every trial should have an end date by which time we expect the customer to make a decision on whether PostHog is right for them. If they need more time, we first need to understand what they haven't yet seen, so we can proactively help them see everything they need to make a decision (within reason).

If they do become a customer (yay!) then we should agree a regular check-in call cadence with them from the start (it's much harder to do once they're in the steady state).

Customer Success-led

1-hr onboarding call

Customers with a platform add-on (and up) are entitled to a one-hour kickoff/implementation call. This could include a high-level discussion of how PostHog fits into their stack, troubleshooting issues they've hit so far, or, for simple setups, walking them through as they code it up. In practice we only need to worry about this for product-led customers / customers who haven't talked to sales before. [TO DO] Include a team calendly link in the welcome emails for Teams purchases or set up a separate campaign to email from a CSM.

Ongoing Training

Enterprise customers will also receive 1-2 hours of bespoke training per quarter according to their needs. This can be delivered in a few formats depending on where the customer is in their PostHog journey:

  1. A deep-dive on a specific topic of their choosing.
  2. Question and Answer session with their CSM.
  3. An intro/set-up session for a PostHog product they've not used yet.

Sales to CSM "handoff"

Customer lifecycle handoff/ownership is described in the sales and customer success overview.

In-person sessions with customers

Growth | Source: https://posthog.com/handbook/growth/sales/customer-onsites

In-person visits work best for accounts paying or likely to pay $50k+/year where you've identified specific opportunities for expansion, are navigating renewal conversations, or need to overcome significant technical or organizational hurdles. They're a high-effort, high-reward play - use them strategically.

This outline of what to consider and how to plan an in-person visit is designed to provide ideas and inspiration, and to help you avoid common pitfalls. It is neither a framework nor a definitive guide - your intuition and experience should ultimately guide the who, what and how of in-person meetings.

Account manager driven vs customer driven visits

Account manager driven

Sometimes you'll be the driver for the visit happening, whether by pitching a specific outcome, or offering to 'drop in' when you're around. When this is the case, it's important that you have a tight, well defined schedule and some goals (whether shared with customer or not) for the visit. Don't leave your champion(s) to do the work of planning and scheduling the sessions - you want them to be able to join the session, have a positive and productive experience, and continue evangelising us without adding to their to-do list.

Customer driven

Sometimes the purpose of an onsite is defined clearly by the customer, for example overcoming a specific technical challenge, or building an analytical framework for a new product. In this situation, definitely schedule significant time to address their primary goal, but do not underestimate the opportunity to create less formal contexts with smaller audiences, especially your champions.

Formal Sessions

This section will cover the length, audience, purpose and content of sessions that you could include as part of your onsite - pick and choose like a menu. At a minimum you'll want a broader session with a big audience and a narrow session with the key stakeholder(s) and champions to progress a relationship with an account.

PostHog user analytics jam

Length: An hour, possibly longer with a small focused audience

Audience: Users of PostHog, the more the better, armed with laptops and logged in

Purpose: To level-up how users engage with our platform, spark new ideas and inspiration for customer teams, and expand usage and impact by enabling the audience on our tools.

Content: Set an explicit goal that you think is achievable within the session for you to work towards with the users. Invite ideas and suggestions to get the jam going, but make sure you have some solid ideas in your back pocket to get things started. Building a compound score (e.g. customer effort score, time to value, onboarding friction) can be a good one if the customer has no ideas. Do not start demoing, but do show users useful shortcuts (PostHog AI, Actions, Cohorts, Workflows, Realtime destinations, etc.) if relevant. Conclude by summarising progress and suggesting some follow-up and continuation tasks to take it to the next level.

Check-in and planning with champions

Length: 30 minutes to an hour

Audience: The champions of PostHog at your account

Purpose: To level set about PostHog's reputation and role within the customer, and unearth any opportunities or risks, while building stronger relationships with champions. Also a chance to test the water for any cross-sell proposals, and ensure champions are aware of all of our products.

Content: Discovery and planning with the champions, potentially over a meal or in another less formal context - certainly not in front of their teams or boss, if relevant. Give the champions time to air any frustrations, and ask direct questions about renewal, their roadmap, any current or future needs in their team or beyond. This is a good time to find out where you really are with a customer and what organizational challenges you may need to navigate, as well as expand their perception of what we can do. Some good questions to ask:

- "Do you intend to renew with PostHog? If not, why not?"
- "What tools are other teams using alongside PostHog data to get the full picture of user behavior?"
- "What goals and focus areas are on the table for the next year? How does PostHog fit into those?"

New user demo on your own data

Length: 40 minutes with time for questions

Audience: Customer employees who do not yet use PostHog, or are very new.

Purpose: To increase the number of PostHog users at the customer, and expand laterally into teams that may otherwise not use us.

Content: A demo on the customer's data. Tailor this and create insights or show features that'll be a good jumping off point for the audience to go further on their own. Make sure to note who your audience is and check-in with any that don't start logging in within a week or two.

Executive summary

Length: 30 minutes

Audience: Senior folks - C-suite or VPs

Purpose: To improve how senior people perceive our value, while discovering how we're positioned and regarded at the decision-making layer of the customer.

Content: A punchy, direct delivery of information focused on value that connects PostHog to the key goals and objectives of our customer.

Example: Connect PostHog usage to a metric the exec cares about: "Your team shipped 12 features last quarter. Using PostHog's feature flags and analytics together, the product team can now measure impact within 24 hours instead of waiting for your monthly business review. This means faster iteration and less risk of shipping things that don't move the needle on [their key metric]."

Manufacturing Informal Access

Why this matters

The formal sessions are theater - everyone is in meeting mode, and the group is usually too large for real candor. Deeper information comes out over lunch, walking between buildings, or after a drink: you need to create contexts where people forget you're "the vendor" and just talk to you like a colleague.

Making the invitation

Bad: "I'd love to take you to dinner to discuss PostHog's roadmap"

Good: "I'm grabbing dinner at [specific place] after our session - you're welcome to join if you're around"

Notice: You're doing it anyway, specific location, low pressure, no stated agenda.

Common mistakes to avoid

Follow-up

Make sure you take advantage of goodwill and being front of mind with the customer after the visit to follow up on any outstanding goals, move any paused commercial conversations forward, or ask for access to any teams or key people you weren't able to reach while in person.

Running product training sessions

Growth | Source: https://posthog.com/handbook/growth/sales/customer-training

Purpose - Get new PostHog customers self-sufficient and primed to explore more of PostHog.

Format - Two live group sessions for up to 30 people across product, marketing, and engineering.

Total TAM time - ~2.5–3.5 hours live delivery, plus ~one hour prep per session.

This document tells you what to cover in a training session and why it matters. It's not a script, nor the only way to approach training. Consider it a good baseline, but always run it in your own voice, in whatever order fits the customer's needs. Product training is a separate and optional activity you can run with an account if you believe it will increase usage and help them derive more value from PostHog.

PostHog AI and MCP should be woven into every demo and practical. Don't teach them as standalone features. Use them as the way you build things in front of the room. The goal is for everyone to leave thinking AI-assisted analytics is the normal way to work.

Pre-session work

Do this before Session 1. The prep is what separates a useful session from a product tour that is easily forgotten.

Understand what will make the session valuable

Gather answers to the following questions (you may know the answers but it's still worth asking the customer directly):

Decide what to show

Prep a dashboard

Prep a session summary

Confirm attendees have access

If possible, schedule both sessions in the same week. Monday/Thursday or Tuesday/Friday. A long gap between sessions kills momentum.

Session 1: PostHog fundamentals

Audience - Everyone. Product, engineering, marketing, data, leadership.

Duration - 90 minutes (65 min content, 15 min Q&A, 10 min buffer).

Deliverable - A working dashboard in their project. Every attendee knows how to build an insight and watch a replay.

Why it matters

This is the only session that's guaranteed. If they never show up for Session 2, this has to be enough to make PostHog stick. Every topic here maps to the most-visited pages in our docs.

Topics to cover

Intro to PostHog (5 min)
The data model (10 min)

Events

Persons and properties

Cohorts

Pause for questions. Allow for some awkward silence.

Building insights (15 min)

The core of the session. Don't teach insight types in the abstract. Build them around a real question from the pre-session prep.

Trends

Funnels

For smaller audiences (~10 people), encourage attendees to build an insight themselves by prompting PostHog AI or clicking through the UI. Try: "Show me a funnel from page_view to sign_up to first_project_created in the last 30 days." For larger groups, this gets chaotic – demo it yourself and save the hands-on exercise for Session Replay.

Session Replay (15 min)

Connect Session Replay to the funnel you built. Show the numbers, then show the human behind the numbers.

Ask the audience to build a Session Replay filter using PostHog AI or the UI.

Dashboards (10 min)
Quick overview of what else exists (5 min)

Don't demo any of these. Name them so the room knows what's available and that they're tied to events.

Q&A (15 min)

Open floor. If nobody asks any questions, mention some of the below examples as commonly asked questions. This may make people feel more comfortable.

After Session 1

---

Session 2: Pick your track

Duration - 60 minutes (40 min content, 15 min Q&A, 5 min buffer).

Offered as two tracks. The customer picks one, or runs both if they have the headcount. Schedule it two to four days after Session 1.

Session 2 is optional. Hype it up, but don't treat it as a dealbreaker. If a customer only does Session 1, they're still in solid shape.

Track A: Product + engineering

Audience - PMs, engineers, data scientists. Anyone who ships features.

Deliverable - A live feature flag targeting a real user segment, plus a draft experiment with a defined hypothesis.

Feature Flags (15 min)

The gateway to Experiments. Nail this first.

Experiments (10 min)

Start from a hypothesis, not a feature. Ask the room: "What's something you're debating shipping right now?"

LLM Analytics (10 min)

Big for any team building AI features. Clustering in particular can provide insights that are otherwise hard to come by.

If they're not building AI features, skip this. Spend the time on Feature Flags and Experiments instead.

MCP
Bonus topics if time allows

Track B: Marketing + growth

Audience - Marketing, growth, content, demand gen. Anyone who cares about acquisition and conversion.

Deliverable - A Web Analytics dashboard configured for their site, plus a live survey draft targeting a real user segment.

Web Analytics (15 min)
Advanced funnels for marketing (10 min)

Build on what they learned in Session 1, applied to marketing use cases.

Surveys (7 min)
Session Replay for marketing (5 min)
Workflows (3 min)

After Session 2

---

Engagement tips

For the "too busy" crowd

Offer a 15-minute micro session. If someone reschedules twice, don't push. Offer to screenshare and build one thing while they watch. Low commitment, high value. Most people who do a micro session rebook the full one.

For $80k+ customers who are in-office – pitch a half-day onsite. Frame it as "we'll sit with your team and build your analytics stack together." Informal one-on-one time at someone's desk is worth 3x a scheduled Zoom.

Gather feedback with Surveys

Set up a PostHog survey targeting training participants after each session. This does two things: it collects real feedback on the training, and it shows attendees a live example of Surveys in action on themselves. Good dog-fooding moment.

Retention, Expansion & Cross-sell

Growth | Source: https://posthog.com/handbook/growth/sales/expansion-and-retention

As a Technical Account Manager, you'll spend as much time managing your existing book of business as you will closing product-led leads. Your first priority is retaining them - this counterbalances an upwards Land, Expand, Retain motion. We have to work twice as hard if we're trying to close new deals and make up for lost customers. You'll typically be assigned a bunch of customers who are paying monthly - this means they could turn off PostHog at any time.

Once you're confident that a customer isn't going anywhere, you'll want to think about how you can expand their usage. Usually (but not always) this is after they've signed a prepaid credit contract.

In order of priority, your objectives should follow all the points of "REREE":

The reason why we put cross-sell so high up the list is that we have seen that by _far_ the happiest and best-retained PostHog users, including from a revenue retention perspective, are those who have adopted 2+ products. It makes sense - it's relatively straightforward to replace PostHog if you're just using product analytics, but it's much tougher if you're using analytics + experiments + session replay.

Retention

Your objectives are to:

  1. Get people to talk to you
  2. Get a longer term commitment (maybe!)

1. Get people to talk to you

We have a handy guide to this in the getting people to talk to you playbook.

2. Get a longer term commitment (maybe!)

Once you've established contact, you basically want to get them into the same flow as if they were a new customer (and give them the same level of attention). You will be doing a combo of discovery and commercial evaluation, as the customer will want to figure out whether a prepaid credit contract with PostHog makes sense vs. what they've already got.

Don't push for a discounted, credit-based plan no matter what - consider what actually makes sense here! Some customers are very likely to stick with PostHog even if they're paying monthly, e.g. if they have many users regularly logging in, lots of product activity, multi-product adoption, etc. Don't turn up to a new customer and let the first thing they hear from you be 'would you like to pre-purchase credits?'

You'll also go through the same contracting process with them. We usually find that convincing a customer who is happily paying monthly to switch to prepaid credits is quite difficult, especially if they're a fast-growing startup (who tend to value flexibility over pure cost savings). This means the discounts may not be as effective. If you're finding this is the case, you can get them on a prepaid credit plan but paying monthly or quarterly, and halve the discount you offer.

Steady state retention

These are customers that are happily using PostHog long term, and are neither a churn risk nor likely to have expansion potential. Managing this group is much more automated and taken care of by CSMs, who do things like tracking usage and setting up alerts in Vitally to trigger outreach from us when a customer changes their usage behavior (either up or down).

An important part of retention here is also to ensure support issues are fixed in a timely manner. We deliberately don't want to invest a huge amount in hands-on customer success here, because that can often paper over cracks in the product experience or quality of our customer support, so staying hands-off here is an intentional strategy. In the future, we will build out this playbook a lot more.

Expansion & cross-sell

Note: AEs and CSMs also do expansion at PostHog, so this is not a Product-Led Sales TAM-only approach. This is because we're all constantly on a sales footing with customers - for the most part, we don't do steady-state account management with an arbitrary 10% uplift at renewal time.

An overview of how to drive expansion with a customer can be found in the cross-selling pages.

Principles for visiting customers

If you offer to do a meeting in person with a customer, they'll then feel obliged to introduce you to other people to make good use of your time. Trying to get them to adopt more products can be a good trigger, but generally you should match the cadence of in-person meetings to the size of the contract (i.e. more regular for Very Large, less regular for Large). If necessary you can request a budget for travel and accommodation in Brex.

Generally speaking you should be trying to regularly see customers in your book of business who are $100k+ annually, or could get there. Occasionally you can pull in James/Tim if they are traveling to SF/NY especially, or if the customer is in London.

If you regularly visit customers, you can (and should) take some sweet merch. You can self-serve this using a discount code pinned in our team Slack channel to get 100% off your order.

Make sure to log notes in Vitally when customer visits take place. This can be done by creating a new note with the "On-site" category and describing any key details and takeaways.

Expansion strategies

Growth | Source: https://posthog.com/handbook/growth/sales/expansion-strategies

The Expansion and Retention page lays out the REREE priority order for managing your book: retain, expand (cross-sell), retain (commit), expand (new teams), expand (same team). The cross-sell motions page tells you what to sell. The use-case-selling playbooks tell you how to frame it. This page covers the layer underneath: how you structurally grow an account.

Not every account grows the same way. A 30-person startup with one engineering team is a completely different expansion motion than a 500-person company with four business units. You need to pick the right approach for the account you're working with, and sometimes run multiple strategies in parallel.

These four strategies are not mutually exclusive. Most accounts will involve a combination over time. But being deliberate about which one you're running right now, on this account, this quarter, makes it much easier to focus your effort and measure progress.

Overview

| Strategy | Core idea | Best for |
|----------|-----------|----------|
| Go deeper | Layer more products onto the team already using PostHog | Accounts with 1-2 products adopted, strong engagement, same team |
| Build champions | Grow usage and advocacy from individual power users up | Accounts where you lack executive access, or where adoption is engineer-driven |
| Expand into new teams | Replicate PostHog usage in a different team or business unit | Larger orgs with multiple engineering teams, product lines, or workloads |
| Move upward | Engage leadership to drive org-wide adoption or commitment | Accounts with strong bottom-up usage ready for credit purchase or org-wide rollout |

Strategy 1: Go deeper on the existing team

The team already uses Product Analytics. You help them adopt Session Replay, then Experiments, then Error Tracking. Same people, same workload, more products.

When to use it

How to execute

  1. Review their current product adoption against the use-case-selling framework. Identify which use case they're closest to completing and what product fills the next gap.
  2. Tie the recommendation to something they've already told you. "You mentioned spending time reproducing bugs from user reports — Session Replay shows you exactly what happened" is better than "you should try Session Replay."
  3. Offer a trial incentive if needed. 2-3 months of credited usage for a new product removes the risk for them. See trial/evaluation incentives.
  4. Follow up with hands-on help. Don't just suggest the product — help them set it up, build their first dashboard or workflow, and show value in week one. A product training session can accelerate adoption if the team is large enough to justify it.

Signals that it's working

Common mistakes

Each additional product above 1 adds 0.2x to the quota multiplier (from a 0.7x base). Going from 1 to 3 paid products moves the multiplier from 0.7x to 1.1x on the same ARR. This is the most direct way to improve your quota math.
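The multiplier arithmetic above can be sketched as a tiny helper (a hypothetical function for illustration; the 0.7x base and 0.2x-per-additional-product figures come from the text):

```python
def quota_multiplier(paid_products: int) -> float:
    """Quota multiplier: 0.7x base, plus 0.2x per paid product beyond the first."""
    BASE = 0.7       # base multiplier with a single paid product (from the text)
    PER_EXTRA = 0.2  # added for each additional paid product (from the text)
    return BASE + PER_EXTRA * max(0, paid_products - 1)

# Going from 1 to 3 paid products on the same ARR:
print(round(quota_multiplier(1), 2))  # 0.7
print(round(quota_multiplier(3), 2))  # 1.1
```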

Strategy 2: Build champions from the bottom up

You identify 2-3 power users inside the account who are getting serious value from PostHog, and you invest in making them successful. They become your internal advocates, and their enthusiasm pulls in more users and more products organically.

When to use it

How to execute

  1. Identify power users. Check who's logging in most frequently, who's creating dashboards and insights, who's asking questions in your Slack channel. These are your champions, whether they know it yet or not. If you're struggling to make initial contact, the getting people to talk to you playbook has specific tactics.
  2. Invest in them directly. Share tips specific to what they're building. Point them at features they haven't found yet. Help them build something impressive they can show their team. The goal is to make them look like heroes internally.
  3. Equip them to sell internally. When your champion wants to bring in Session Replay for their team, give them the ammunition: a short summary of what it does, rough cost estimate, and how to pitch it to their manager. The cross-sell motions page has product-specific discovery questions and value stories you can adapt for this.
  4. Ask for introductions. Once you've built trust, ask your champion to introduce you to other people in the org. "Are there other teams that might find this useful?" is a low-pressure way to open the door to multi-team expansion.

Signals that it's working

Common mistakes

Strategy 3: Expand into new teams

Engineering Team A uses PostHog for product analytics. You get introduced to Engineering Team B (different product line, different business unit, different workload) and replicate the adoption. Same org, net new usage.

When to use it

How to execute

  1. Map the org. During discovery with your existing contacts, ask: "How many products or apps does your company maintain?" and "Which teams have their own engineering org?" Each product/app is a potential new workload. Your account plan should explicitly document known workloads and which teams own them.
  2. Get a warm introduction. Cold outreach to a new team inside an existing account almost never works. Ask your champion to introduce you, or use in-person visits (people feel obligated to introduce you to others when you're physically there).
  3. Treat the new team like a new customer. They have different needs, different stakeholders, different technical contexts. Don't assume that what worked for Team A will work for Team B. Run fresh discovery and consider offering a training session to get the new team up to speed.
  4. Start with the use case that fits, not the product the other team uses. Team A might use Product Analytics heavily, but Team B might need Error Tracking first. Let the use-case-selling framework guide the conversation.

Signals that it's working

Common mistakes

New team adoption is often the biggest single expansion lever in larger accounts. A new workload can mean an entirely new use case stack, which adds both ARR and product multiplier simultaneously.

Strategy 4: Move upward through stakeholders

You've built strong usage and advocacy at the IC and team lead level. Now you engage a VP Engineering, CTO, or Head of Product to drive an org-wide commitment: annual contract, standardization on PostHog, top-down mandate to adopt across teams.

When to use it

How to execute

  1. Build the business case before you ask for the meeting. Pull together usage data, product adoption, number of active users, and any concrete outcomes your champions have shared. Leadership doesn't care that Session Replay is cool. They care that it reduced bug reproduction time by 50% and saved 10 engineering hours a week.
  2. Get introduced, don't cold-call. Ask your champion to set up the meeting. "Would it make sense to loop in [VP] so we can talk about how PostHog fits into the broader engineering org?" Your champion's internal credibility is what opens the door.
  3. Frame the conversation around their priorities, not yours. Leadership cares about consolidation (fewer vendors, fewer contracts), cost predictability (annual plan vs. monthly surprises), and organizational efficiency (one platform for all teams vs. five point solutions). Lead with those.
  4. Have a specific commercial proposal ready. Don't go in with "we should do an annual deal." Go in with "based on your current usage of $X/month across these teams, here's what an annual commitment would look like, including the discount and what that saves you." See contract rules for discount structures, and remember that even after an annual deal is signed, additional usage beyond the annual run rate still counts toward your quota.
  5. Use the meeting to also open multi-team expansion. "Are there other teams that should be using PostHog but aren't?" is a natural question when you're talking to someone with org-wide visibility.

Signals that it's working

Common mistakes

Choosing the right strategy

There's no formula here, but some patterns hold:

| Account situation | Start with |
|-------------------|------------|
| Small team, 1-2 products, strong engagement | Go deeper |
| Low executive access, engineer-driven org | Build champions |
| Large org, multiple teams or products | Expand into new teams |
| Strong bottom-up usage, approaching renewal or budget cycle | Move upward |
| New account, first 90 days | Go deeper (always start here) |

For most accounts under $40k ARR with a single team, go deeper is the right default. You're adding products to the team that's already bought in.

For accounts over $60k ARR with multiple teams, expand into new teams is usually where the biggest growth lives. You can only go so deep with one team before you hit a ceiling.

Build champions and move upward are not standalone strategies — they're how you enable the other two. You build champions so they can pull you into new teams. You move upward so leadership can mandate adoption across the org. They're force multipliers, not end goals.

The best TAMs are running 2-3 of these in parallel on their largest accounts. One team is going deeper on products. A champion in that team is introducing you to another team. And you're building toward an executive conversation that ties it all together into an annual commitment.

Getting people to talk to you

Growth | Source: https://posthog.com/handbook/growth/sales/getting-people-to-talk-to-you

Product engineers, our ICP, are very self-serve: they're happy to implement PostHog themselves and read the docs without ever interacting with anyone unless they have support queries.

Why is it helpful for someone to talk to you?

The reasons have to be _genuinely helpful_ ones - just 'having a point of contact' is not enough. Reasons include:

If you go down the 'saving money' route, bear in mind two things:

How to get people to talk to you

This is usually the most difficult bit! Sometimes customers will proactively reach out to us because they see their bill rocketing, but we have many customers who have happily self-served to a very high level of spend without feeling any need to talk to us. In particular, engineers have no interest in jumping on a call with you 99% of the time.

Before you do any of this stuff, get to know your customer as well as you possibly can. Don't do clickbaity things or trick people into talking to you - it'll just annoy them. And definitely don't just offer a generic checkin 'to see how things are going'!

Ideally you want to get multiple people into a shared Slack channel, as we've found this enables the best communication and allows us to provide them with great support. Just adding a bunch of people to the Slack channel is also a legit tactic - forgiveness, not permission.

Your first message where we've never had contact

Despite the organization using PostHog, they may not recognize you/PostHog, or may not even be the correct person to talk to about PostHog, which means your message needs to be well crafted.

When crafting a message, consider the following:

  1. Your initial outreach isn't about you - it's about them. Lead with customer-centered comms. Avoid leading with being attached to their account or telling them how you are there to help them. Tim has some great thoughts on this subject.
  2. Avoid fluff. "I'm just reaching out to", "I just wanted to" etc. are empty phrases that take longer to get to the point. Before you hit send, reread and see if there is anything you can cut out.
  3. Lead with value within the first sentence. If it takes a paragraph to get there, you won't get responses.
  4. Ask yourself: if this email landed in the sales@ inbox, would I engage with it? Would I even give it a second look?

Some examples of good emails that have worked:

Hello [name],

It looks like your Product Analytics usage has increased over the past month and I wanted to ensure that the increase was expected.

Here are some tools you can use to ensure you are collecting the correct events and getting valuable insights from them. We have a whole host of tutorials and guides to help you get the most out of PostHog.

If you have any questions, don't hesitate to ask.

[First],

Wanted to reach out direct since I noticed the [Company] team ramp up usage in PostHog recently.

We'll typically reach out to help with optimizing event capture and make recommendations with regards to instrumentation + querying in PostHog.

Up for a chat? Here's my calendar, feel free to grab a time that works best for you.

Cheers,

Asking for introductions

If you feel like you have done a good job with a customer, and have genuinely been helpful, it's ok to ask for a favor back. You can be specific and ask for a direct introduction to a person you want to talk to, or go a bit broader and ask the person if they know anyone who would benefit from some help with PostHog. Either way, a warm introduction from a colleague is always going to be better than reaching out on your own. Something like "Hey Leon, our session last week seemed to have landed well. I'm glad you found it useful. I was wondering if you could help me out. Your team is growing really quickly, and there's a bunch of new folks starting to use PostHog. I imagine not all of them are super comfortable with the platform yet and could use a helping hand. Could you introduce me to Simon, Charles and Scott?"

Just been handed an account?

Sometimes you'll get a customer in your book who was previously working with someone else on the PostHog team. A pre-existing relationship can help, but it's not guaranteed they'll want to talk to you.

We've found a message like this in Slack/email works well after the intro:

Thanks [PostHog team mate]

Hey [customer] :blob-wave: Excited to be working with you! As I take over, it would be a big help if we could schedule a quick 15–20 minute intro call [link to your Calendly]. Just a chance for me to learn more and figure out how I can best support you going forward. Let me know if you'd be open to that.

We've found most people will respond to this.

Have you been ghosted?

If you've had a conversation with someone, there was interest on their side, and then they suddenly went dark, the John Barrows Ghosting Sequence can revive them.

  1. After 2 weeks of valuable follow-up without hearing back, reply-all to the latest email thread.

Change the subject to: "Still interested?"

And put in the body:

[Name]

Still looking at options like PostHog to solve [business problem they previously acknowledged]?

Let me know either way.

That last line is very important because it gives them a safe option to say "no". About half will respond.

  2. If there's no response again after another week, change the subject again to "Did I lose you?"

Leave the body empty. This will pick up about 80% of people who go dark. If not, close out the opportunity 3 days after this final message.

LinkedIn Sales Nav

To get notified about new hires and other changes to the accounts you manage, you can set up lists of accounts to track in LinkedIn Sales Nav.

  1. Search for an account you want and click on their profile.
  2. Click the star icon on the left, and then choose a list to add them to.
  3. Optionally, tailor the notifications you get in LinkedIn.

You will now be notified whenever a senior hire joins one of your accounts, which is helpful for tracking folks to reach out to and gives advance signals around potential data science hires.

Historical import

Growth | Source: https://posthog.com/handbook/growth/sales/historical-import


Our system does not experience huge variance in incoming traffic aside from the occasional instrumentation bug, so it's important to give the pipeline team a heads up in advance of a historical import, since we may rate limit the requests. Additionally, we need to clarify a few commercial and technical points before giving the green light.

  1. Make sure they have their product questions answered first, ie, they are not relying on historical import data to validate their use case. It's ok for this to be a contingency of them using the product/paying us, but we should be pretty sure that they are committed so we can avoid asking pipeline team to spend (sometimes considerable) effort managing an import only to have a user decide we're not a good fit.
  2. Customer should answer the following:

If the count of events for a given distinct_id is too high, we may relax the constraint that events for a single distinct_id are always sent to the same Kafka partition, which means these events might not be processed in the correct order. This can be problematic for merging events, where order of ingestion matters (e.g. an alias event arriving before the identify event it depends on). This will need to be communicated to the customer.

Load testing

If a customer mentions load testing, get answers to the above and then alert the pipeline team asap, so that accommodations can be made, as this may require scaling up to handle properly. If a customer plans to send a large volume of single capture requests all at once, rather than ramp up to a peak over some time period, that is not a load test but more like a denial of service (DoS).

How to do discovery

Growth | Source: https://posthog.com/handbook/growth/sales/how-to-do-discovery

Discovery

The discovery mindset

Discovery isn’t walking through every PostHog feature. It’s having real conversations with customers to figure out if PostHog will be a good fit for them. Learn their problems, see how they solve things today, and find the people who’ll get excited enough to bring us in.

This is meant to be a guide, not a rule-set. Each person has their own unique style. The goal here is to surface the right insights by providing a framework for how to go about _asking the right questions_ vs. a talk-track for how to run discussions with customers.

Core principles:

Discovery isn't one-sided questioning - it's give and take. You learn something, you show something, you ask questions, repeat. The goal is understanding what customers are trying to accomplish so we can focus on relevant features rather than discussing everything PostHog can do.

Why discovery matters

PostHog is a broad product suite with common combinations depending on the use case. Discovery can help us provide customers with a better experience by understanding their specific needs so we can:

Timeline

We don't need to cram every question into the first call. Discovery is always happening and we have many customers who stay with us long term. Use each touchpoint to learn something new.

Other channels

Beyond the 1st call, there are other spaces where we frequently communicate with customers:

Most customers prefer Slack, while others like email/Zoom. Slack is central to communications at PostHog and tends to be a great place to offer real-time support and ask questions.

Before your 1st call

Prep work

Discovery includes preparation. Before speaking with any new customer interested in engaging further with PostHog, it's helpful to gather some basic knowledge to help with demoing relevant features and determining if it's a good fit.

Examples:

Asking questions

Discovery is about understanding the real problem through natural conversation. The goal is to be genuinely curious about their situation, not to interrogate them.

Question principles:

Understanding customer goals

Instead of asking about room for more PostHog products, ask about what the customer is trying to accomplish. Questions like "what's coming up in your roadmap over the next few months?" get better intel without feeling like an upsell, and make for a much more natural conversation.

When you understand their goals, you can frame PostHog around outcomes instead of features. For example, if you learn they're launching a jobs board and their GTM leans on niche SEO, you can shape the demo around using web analytics to nail that launch. You're telling a story where PostHog helps them succeed, not just showing what buttons do.

Goal-discovery questions:

When to use these:

Important: This only works if you're genuinely curious. It's not a checklist item for every call — forced interest is gross and salesy. But when the connection is there, it's a much better foundation for framing what we offer.

What makes PostHog different

The demo

Demoing PostHog is an important part of our sales process and how we first introduce PostHog to customers. It brings immediate value to a call, is consistent with other messaging and builds credibility with technical folks.

A demo can also be a great format where questions bubble naturally.

Principles:

Examples:

Other questions you could ask while demoing:

Qualifying

A key component of discovery is qualifying customers to ensure they are a good fit and whether they're speaking with the right people at PostHog. You can find more about how we qualify at PostHog in the new sales qualification guide.

Qualifiers:

Disqualifiers:

Identifying your champion

Champions aren't just customers you're friendly with - they're people who will actively sell PostHog internally. While you won't always find a champion, working with one when possible can streamline deals and provide us with valuable feedback along the way.

Examples

Questions to identify champions:

Characteristics to listen for:

Follow up questions for champions:

While you can start identifying potential champions early in the process, building the relationship is an ongoing effort.

Discovery call structure

Give yourself enough time to demo - it can make all the difference!

1. Opening & understanding the situation (~5-7mins)

Goal: Get rapport, learn about their setup, and uncover any frustrations.

Potential questions to flow between:

2. PostHog demo (~15-20min)

Goal: Show PostHog, focus on relevant features, establish technical credibility, get feedback and ask questions.

Reference the demo section above for how you can incorporate discovery into your demo and learn more about how we do sales in the initial demo playbook.

3. Closing and next steps (~3-10min)

Goal: Establish timeline, confirm mutual fit and next steps.

We like to keep things conversational - if you're genuinely curious about their situation, this should all come naturally!

Summary

Discovery can help with addressing gaps in your knowledge about a customer and makes efficient use of both your time and theirs. By understanding their actual needs, challenges, and decision-making process upfront, you can:

Helpful docs for more learning:

How we work

Growth | Source: https://posthog.com/handbook/growth/sales/how-we-work

This page covers more of the operational detail of how our team generally works - for a broader overview of roles and responsibilities, visit the overview page.

Roles

We have three types of roles:

Technical Account Executives

TAEs work with:

As we start to generate cold outbound leads, these will be routed to TAEs to work with as well. Customers move off of a TAE to a TAM or CSM 3 months after closing on a prepaid contract (usually annual) - you have to ensure they are well set up, not just contract signed!

TAE Territory Review

In addition to the weekly sprint planning meeting on a Monday, we do a weekly territory review standup on Wednesday. A Technical AE is picked at random, and we spend 30min going through:

  1. Brief, mid-week announcements (if any)
  2. For one random Technical AE, as chosen by the wheel of names - SFDC hygiene check: is the deal value, stage, and close date accurate? Are the next steps up to date? No story time here, just data.
  3. Biweekly, we review all larger ($50k+) opportunities across all Technical AEs. For each opportunity, the person reports and discusses:
  4. On alternate weeks from the larger deal review, we run the wheel of names again (excluding the Technical AE selected for the hygiene check), and the selected Technical AE reports and discusses the opportunities in their pipeline, including:

The objective of the meeting is to hold each other to account, provide direct feedback, and also support each other. It is a great place to ask for help from the team with thorny problems - you should not let your teammates fail.

Technical Account Managers

Each TAM is assigned up to 15 existing customer accounts to work with. Additionally, you will manage inbound leads as they are assigned to you in your territory. Overall, the hard cap on existing book + new leads is 25 accounts, so staying extremely focused is important.

We use the "AM Managed" Segment in Vitally to show that an account is part of somebody's book of business and therefore included in individual and team quota calculations. AMs should not assign this themselves (that's up to Simon or Charles), but can add themselves as the Account Executive in Vitally to make it easier to track the things they're working on.

For Product-led leads we will only add them to your book for quota purposes if you have a solid plan in place for conversion to prepaid credit or cross-product adoption. Account Owners can use the "Leads" Segment in Vitally to separately track these from the main managed book.

At the end of each quarter we will review your accounts and look to hand off some to bring your focus account list back down to 10. Simon and Charles will also review everyone's accounts each month proactively to make sure that the balance of accounts across the team makes sense.

TAM Territory Review

In addition to the weekly sprint planning meeting on a Monday, we do a weekly territory review standup on Wednesday. A Technical AM is picked at random and runs through the following for each customer in their book of business in Vitally:

  1. Rate your relationship with them (no connection yet/made contact/answering their questions in Slack/trusted advisor)
  2. What's your next step with that customer (annual plan, cross-sell etc).
  3. Are they a churn risk and why?

The objective of the meeting is to hold each other to account, provide direct feedback, and also support each other. It is a great place to ask for help from the team with thorny problems - you should not let your teammates fail.

Handing off customers to Technical CSMs

We want to ensure the expansion potential of a customer has been thoroughly exhausted before moving to a Technical CSM for more steady-state retention. When you want to move a customer off your book you should talk it through with Simon. Here are the things we will be looking at:

  1. Have you tried multiple times to make contact with all of the active users in an account?
  2. Are they using all PostHog products?
  3. Is there an opportunity to cross-sell to a different team?

If the answer to any of the above questions is 'no' then it's likely that there is more work to be done with a customer, but we will use a common sense approach here.

A customer being negative/difficult to work with isn't a reason to remove them from your book. It's your job to turn them around to being a happy customer (AKA be their favorite).

How commission works - Technical Account Executives

General principles

This plan will almost certainly change as we scale up the size and complexity of our sales machine! This is completely normal - we will ensure everyone is always treated fairly, but you need to be comfortable with this. For now we are generally trying to optimize for something straightforward here so it’s easy for PostHog (and you) to calculate commission. Fraser runs this process, so if you have any questions, ask him in the first instance.

Variables

Performance expectations for Technical Account Executives

There are cultural and role-based expectations for TAEs at PostHog. We also now have enough data to define minimum performance expectations for TAEs relative to the annual commission targets.

After your ramp period, you should expect to have a performance conversation with your lead and Charles if:

These standards are likely to change as the TAE role evolves. Any changes will be reflected in the handbook. We will always consider any relevant context when having these conversations with you - quota does not exist in a vacuum!

How commission works - Technical Account Managers

General principles

This plan will almost certainly change as we scale up the size and complexity of our sales machine! This is completely normal - we will ensure everyone is always treated fairly, but you need to be comfortable with this. For now we are generally trying to optimize for something straightforward here so it’s easy for PostHog (and you) to calculate commission. Fraser runs this process, so if you have any questions, ask him in the first instance.

Variables

Your quota and assigned customers are likely to change slightly from quarter to quarter. In any case, your quota will be amended appropriately (up or down) to account for any movement. We will also be flexible in making changes mid-quarter if it's obviously the sensible thing to do. If you inherit a new account, you have a 3 month grace period - if they churn in that initial period, they won't be counted against your quota.

If you have a customer you converted from monthly to annual under the old, non-usage-based commission plan, you won't _also_ get recognized for additional usage beyond their annual run rate in the first year - no double dipping!

If you believe there is a justifiable reason to vary these rules, then in the first instance talk them through with your team lead. Simon (Charles as backup) will be the decider here.

You can see how we are tracking on the TAM Quota Tracker dashboard.

TAM book of business rules

  1. Only accounts with the AM Managed segment in Vitally will be counted towards your quota. Simon adds this manually after reviewing with you and your team lead.
  2. All accounts in the AM Managed segment need an account plan in Vitally, which is updated and reviewed with your manager regularly.
  3. If you are assigned an account with no previous owner, you have up to 3 months to figure out whether they should be in your book or not. Don't ask for the AM Managed segment to be added until you're happy that there is growth potential there.
  4. If you are assigned an account with a previous owner, work with them on the handover process. If the customer isn't in a healthy state usage and engagement-wise, feel free to push back and ask for the previous owner's help in getting them to a good state before taking ownership. If you really can't resolve this, then talk first to your team lead. If you can't resolve it, Simon will be the tie breaker. It may be that we need you to work on the account regardless but will treat it as a lead with the same rules as point 3 above.
  5. Accounts which you've previously been paid quota on need to stay in your AM Managed book until they are handed over as per 3 above, or until they churn/fall below $20K ARR. In this case, we will keep them in the AM Managed segment for quota calculation purposes and then remove them after the quarterly calculations are complete.
  6. Nominally, you should have 15 accounts/around $1.5m in ARR in your AM Managed book. There is some wiggle room here, but if you find yourself with 25+ accounts, it's unlikely that you'll be able to give them the level of focus we expect from a TAM, so you should be prepared to hand some over to another team member.
  7. You can have accounts added to your book at any time, if you are comfortable that there is growth potential there. Removal of accounts should only happen at the end of the quarter so that quota can be calculated.
  8. If you actively work to reduce a customer's spend with us by optimizing their usage, we may exclude that usage drop from quota calculation. We will review this on a case by case basis but at the very minimum you'll need documented evidence of the work you did to optimize their usage before it dropped. This should first be reviewed with your team lead who will then ask for approval from Simon. To make the process easier, drop the details of your optimizations as a note on the customer record in Vitally.

We have a bunch of accounts that are declining for reasons that have nothing to do with a TAM’s actions. We also have a bunch that are growing in the same way. Across hundreds of accounts these even each other out, if anything in favor of the latter. If they fit the criteria for having a TAM assigned, you should be prepared to continue to manage both types of customers in your book, as churn prevention is a key part of the TAM role too.

How commission works - BDRs

General principles

This plan will almost certainly change as we scale up the size and complexity of our sales machine! This is completely normal - we will ensure everyone is always treated fairly, but you need to be comfortable with this. For now we are generally trying to optimize for something straightforward here so it’s easy for PostHog (and you) to calculate commission. Fraser runs this process, so if you have any questions, ask him in the first instance.

Variables

Team lead quota

From your first full quarter as a team lead in Sales, you will move to a 60% base / 40% commission split in reflection of your new player/coach role. This will be based on your team's quota attainment, although you will still have your own individual quota target.

Your individual quota will be lower than others in the team as you'll be spending more time on managing the team, but we still want you to demonstrate the sales individual contributor skills to your team. You should aim for 80% team management, 20% IC work, and the quota will reflect that.

To calculate the team quota, we combine the quota of all team members with proration applied if they are still ramping:

Example: With a flat quota of $250,000 and 3 fully ramped people, and 1 ramping, the team quota would be $875,000 (($250,000 * 3) + $125,000)
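The calculation in the example above can be sketched in a few lines of Python. This is an illustration only — the proration rate of 50% for ramping team members is inferred from the example, and the real rate may differ:

```python
# Sketch of the team quota calculation described above.
# Assumption: a ramping team member counts at 50% of the flat quota
# (inferred from the $125,000 figure in the example).

FLAT_QUOTA = 250_000
RAMPING_PRORATION = 0.5

def team_quota(num_ramped: int, num_ramping: int,
               flat_quota: float = FLAT_QUOTA,
               proration: float = RAMPING_PRORATION) -> float:
    """Combine individual quotas, prorating anyone still ramping."""
    return flat_quota * num_ramped + flat_quota * proration * num_ramping

# The handbook example: 3 fully ramped people plus 1 ramping.
assert team_quota(3, 1) == 875_000
```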

If someone leaves the team, we may recalculate the team quota depending on how their accounts and opportunities are reallocated to others in the team. If someone joins the team, we don't change the team target, and don't count their contribution towards the existing target, to keep it simple.

Travel to see customers

You are likely to need to travel a lot more than the typical PostHog team member in order to meet customers. Please make sure that you follow our company travel policy and act in PostHog's best interests. We trust you to do the right thing here and won't pre-approve your travel plans, but we do keep track of what people are spending and the Ops team will follow up with you if it looks like you are wasting money here. We are not a giant company that pays for fancy flights, accommodation, and meals so please be sensible.

Working with engineering teams

We hire Technical AEs. This means you are responsible for dealing with the vast majority of product queries from your customers. However, we still work closely with engineering teams!

Product requests from large customers

Sometimes an existing or potential customer may ask us to fix an issue or build new features. These can vary hugely in size and complexity. A few things to bear in mind:

Finally, if you are bringing engineers onto a call, brief them first - what the call is about and who will be there. Afterwards, summarize what you talked about. This goes a long way to ensuring sales <> engineering happiness.

Complicated technical questions

You will run into questions that you don't know the answer to from time to time - this is ok! Some principles here:

Working with customers in Slack

Most of our customers use Slack, and it's a great way for us to be responsive to them. Everyone has the permission in Slack to create a Connect channel with a customer, and you should do this as early as possible in your relationship with them.

When you've created the channel you should also add Pylon, which is used to sync Slack conversations with Zendesk so that our Support and Engineering teams can work on customer issues in a familiar context.

To add Pylon to your customer channel:

  1. In the Slack desktop app, click the channel name.
  2. On the Settings tab, click Add apps.
  3. Type Pylon and click Add.
  4. In the popup that appears in the Slack channel, select Customer Channel.
  5. Add yourself as the Account Owner.
  6. Click Enable.
  7. Add Tim, Simon, Charles, and Abigail to the channel.

Once enabled, you can add the :ticket: emoji to a Slack thread to create a new Ticket in Zendesk. Customers can also do this. Make sure that a Group and Severity are selected or the ticket won't be routed properly.

It's your job to ensure your customer issues are resolved, so make sure you follow up with Support and Engineering if you feel like an issue isn't getting the right level of attention.

Lead routing & scoring

Growth | Source: https://posthog.com/handbook/growth/sales/lead-scoring

Lead routing

Generally speaking, companies already using PostHog and spending money will be routed to the product-led sales team. Leads where the customer is earlier in their lifecycle with us, e.g. using PostHog but not spending money, will go to the new business sales team.

We frequently tweak these rules and experiment with different signals to see which work best. Generally you should be aiming for a 20% conversion rate from these types of leads.

They follow the normal territory assignment rules in Salesforce, and are routed either to Technical Account Executives or Technical Account Managers depending on the type.

Product-led sales team

  1. Customers with MRR between $500-1,667, employee count > 50, user count > 7, based in ICP country, and has been paying for at least 3 months
  2. Customers who have high ICP score and subscribe to the Scale plan
  3. Customers with MRR >$1K and >50% forecasted spend increase this month
  4. Unmanaged customers with >$20K ARR who raise a support ticket
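The four rules above can be sketched as a single routing predicate. This is purely illustrative — the field names (`mrr`, `employee_count`, and so on) are hypothetical, the real rules live in Salesforce, and we tweak them frequently:

```python
# Illustrative sketch of the product-led routing rules listed above.
# Field names are hypothetical; the real logic lives in Salesforce.
from dataclasses import dataclass

@dataclass
class Account:
    mrr: float
    employee_count: int
    user_count: int
    in_icp_country: bool
    months_paying: int
    high_icp_score: bool = False
    on_scale_plan: bool = False
    forecasted_spend_increase: float = 0.0  # fraction, e.g. 0.6 = +60%
    arr: float = 0.0
    is_managed: bool = True
    raised_support_ticket: bool = False

def routes_to_product_led(a: Account) -> bool:
    """True if any of the four product-led routing rules match."""
    rule_1 = (500 <= a.mrr <= 1_667 and a.employee_count > 50
              and a.user_count > 7 and a.in_icp_country
              and a.months_paying >= 3)
    rule_2 = a.high_icp_score and a.on_scale_plan
    rule_3 = a.mrr > 1_000 and a.forecasted_spend_increase > 0.5
    rule_4 = (not a.is_managed and a.arr > 20_000
              and a.raised_support_ticket)
    return rule_1 or rule_2 or rule_3 or rule_4
```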

New business sales team

  1. Completed the book a demo form (organic inbound, paid ads campaign, or outbound)
  2. Onboarding specialist referral
  3. First signup from a company with 500+ employees who have ingested at least 1 event and invited at least 1 person
  4. Customers who have used 50% or more of their startup credits and had a last invoice greater than $5000
  5. Customers set to roll off startup plan in the next ~100 days with last invoice between $2k–$5k
  6. Customers who are set to roll off the startup plan in the next two months and had a last invoice greater than $1500
  7. AE named lists

_Ben experiments to find more winners:_

  1. Emailed sales@

BDR team

Campaigns are all tracked in Lemlist - these change week-to-week.

Lorena's focus:

  1. Engineers + Engineering Managers who follow us on LinkedIn but are not (yet!) customers
  2. Event attendees - Stripe Sessions/AI Tinkerers São Paulo
  3. Website showed intent -- Clay sheet from PostHog DW

Backlog:

  1. Filled contact sales but then went silent, never talked to an AE (next: AE Campaign to warm back up)
  2. Tried PostHog but did not convert - signed up but went inactive, never paid, never talked to an AE in DW (next: more filtering on this list)
  3. Closed lost opportunities (new biz _and_ renewals) 5+ months old where reason was 'unresponsive'
  4. Churned accounts that churned 5+ months ago
  5. Companies with recent fundraising activity - good opportunities, but very noisy

Automated (Abhischek):

  1. Warmbound - $100-499 MRR at some point in the account's history
  2. Job switchers
  3. High spenders in Stripe network with <$500 PostHog MRR that don't trigger a TAE/TAM lead
  4. (Coming soon) Requests for Trust Center access that require an NDA

Anyone at PostHog can also manually flag an account as a high potential lead. This includes new or low spend accounts with strong net new potential or existing paying customers with credible expansion potential. To create a lead, go to the customer's Vitally record and add a Segment for AM referral (product-led sales) or AE referral (new business).

Demo booking

Customers who want to book a demo and show strong ICP fit signals are automatically shown a booking link for a demo with a TAE. Those <20 are for the TAE to manually review and schedule. By default, our contact form submission routing system manages this.

Lead scoring

We calculate lead scores in Salesforce to help us prioritize our inbound book of business. Put simply, the higher the score the higher value a potential contract with a customer should be. We use Clearbit to enhance our contact information as it is created and then compute a score out of 70 in Salesforce based on the following parameters:

Note that we also calculate an ICP score in Salesforce. This is more marketing aligned and designed to show us whether we are capturing who we are building for as inbound leads.

| Metric | Value | Score |
|----------------|------------------------------------------------------------------------------------------------------------------|-------|
| Employee Count | 1-10 | 0 |
| | 11-1000 | 10 |
| | 1000+ | 20 |
| Ability to pay | Estimated Revenue $0m-$1m | 0 |
| | Estimated Revenue $1m-$10m | 5 |
| | Estimated Revenue $10m-$100m | 10 |
| | Estimated Revenue $100m+ | 20 |
| Role | engineering | 10 |
| | product | 10 |
| | leadership/founder | 10 |
| | marketing | 5 |
| | other | 0 |
| Sub-role | data_science_engineer | 10 |
| | project_engineer | 10 |
| | software_engineer | 10 |
| | web_engineer | 10 |
| | founder/ceo | 10 |
| | other | 0 |
| Country | Austria, Canada, France, Germany, Japan, Norway, Sweden, UK, USA | 10 |
| | Australia, Belgium, Estonia, Finland, Georgia, Guernsey, Netherlands, New Zealand, Poland, Portugal, Singapore | 5 |
| | Other | 0 |
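The table above can be sketched as a scoring function. The real computation happens in Salesforce on Clearbit-enriched data; the parameter names here are illustrative, and boundary handling (e.g. exactly 1,000 employees) is an assumption:

```python
# Hedged sketch of the 70-point lead score from the table above.
# Field names and boundary handling are assumptions; the real
# computation is done in Salesforce on Clearbit-enriched records.

def lead_score(employees: int, est_revenue_usd: float,
               role: str, sub_role: str, country: str) -> int:
    score = 0
    # Employee count (max 20)
    if employees > 1000:
        score += 20
    elif employees >= 11:
        score += 10
    # Ability to pay, via estimated revenue (max 20)
    if est_revenue_usd >= 100e6:
        score += 20
    elif est_revenue_usd >= 10e6:
        score += 10
    elif est_revenue_usd >= 1e6:
        score += 5
    # Role (max 10)
    if role in {"engineering", "product", "leadership/founder"}:
        score += 10
    elif role == "marketing":
        score += 5
    # Sub-role (max 10)
    if sub_role in {"data_science_engineer", "project_engineer",
                    "software_engineer", "web_engineer", "founder/ceo"}:
        score += 10
    # Country (max 10)
    tier_1 = {"Austria", "Canada", "France", "Germany", "Japan",
              "Norway", "Sweden", "UK", "USA"}
    tier_2 = {"Australia", "Belgium", "Estonia", "Finland", "Georgia",
              "Guernsey", "Netherlands", "New Zealand", "Poland",
              "Portugal", "Singapore"}
    if country in tier_1:
        score += 10
    elif country in tier_2:
        score += 5
    return score  # out of 70
```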

New starter onboarding

Growth | Source: https://posthog.com/handbook/growth/sales/new-hire-onboarding

Your first few weeks

Welcome to the PostHog Sales team! We only hire about 1 in 400 applicants, so you've done well to make it here! Unlike a lot of companies, we don't have a super-long onboarding process and would prefer you to be up and running with your customer base as quickly as possible. Here are the things you should focus on in your first few weeks at PostHog to help you achieve that.

Ramping up is mostly self serve - we won't sit you down in a room for training for 2 weeks. If you're not sure who is supposed to make something below happen, the person responsible is almost certainly you!

How to fail

But first...

Sales at PostHog isn't like most other software companies! These are some of the things that you _shouldn't_ do:

Technical Account Executive ramp

Day 1

Rest of week 1

Week 2

In-person onboarding

Ideally, this will happen in Week 3 or 4 with a few existing team members (depending on where we do it), and will be 3-4 days covering:

Weeks 3-4

How do I know if I'm on track?

By the end of month 1:

By the end of month 2:

By the end of month 3:

Technical Account Manager ramp

Day 1

Rest of week 1

Week 2

In-person onboarding

Ideally, this will happen in Week 3 or 4 with a few existing team members (depending on where we do it), and will be 3-4 days covering:

Weeks 3-4

How do I know if I'm on track?

By the end of month 1:

By the end of month 2:

By the end of month 3:

By the end of month 4:

Getting your equipment setup right

In addition to following the guidance in the spending money section of the Handbook, there are a few things you should do to make sure you're set up to give high quality demos that look professional:

Alerting setup (for team leads)

We have certain automations in Vitally that your team lead needs to add you to - please ask them to do this.

New hire frequently asked questions

How does my quota work during my ramp period?

Your first three months of commission are paid at 100% fixed OTE, calculated based on your start date. If you start before the 15th of a month, that month counts: you get 100% fixed OTE for it and the two subsequent months. For example, if you start on Jan 13th, you get 100% fixed OTE for Jan, Feb & Mar. If you start on or after the 15th (e.g. Jan 17th), the ramp shifts by a month: you would get two months of 100% fixed OTE in Q1 and one month in Q2, in addition to two months of your quota'ed commission.
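
The rule above can be sketched as follows (an illustration of the policy, not an official payroll calculation):

```python
from datetime import date

def fixed_ote_months(start: date) -> list[tuple[int, int]]:
    """Return the (year, month) pairs paid at 100% fixed OTE.

    Starting before the 15th counts the start month itself plus the
    next two months; starting on or after the 15th shifts the
    three-month ramp to begin the following month.
    """
    year, month = start.year, start.month
    if start.day >= 15:
        month += 1  # the start month doesn't count toward the ramp
    months = []
    for _ in range(3):
        if month > 12:  # roll over into the next calendar year
            year, month = year + 1, month - 12
        months.append((year, month))
        month += 1
    return months
```

A Jan 13th start yields Jan, Feb & Mar at fixed OTE; a Jan 17th start yields Feb, Mar & Apr, i.e. two fixed-OTE months in Q1 and one in Q2.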

<img src="https://res.cloudinary.com/dmukukwp6/image/upload/q_auto,f_auto/shapes_at_26_01_14_12_45_06_b0c13c36a5.png" alt="New hire quota ramp visual" className="my-6 rounded-md shadow-md" />

How does support work at PostHog?

Can I login as a customer?

Are there any influential folks in our space I should read/listen to?

New business sales

Growth | Source: https://posthog.com/handbook/growth/sales/new-sales

We build PostHog for product engineers. While many non-technical folks use PostHog successfully every day, our sales process is built with technical folks in mind. Once implemented, a customer may use PostHog for all manner of things (and we hope they do!).

Three other general principles to bear in mind:

Maximizing your chance of success

Selling software, especially to larger companies, can be a complex process with lots of stakeholders involved. When moving your deal along you should aim to know as much about the following as possible given where you are in the process (inspired by MEDDPICC):

These are presented in the most likely order that you will be able to discover them, although that is not a hard and fast rule.

They are also available as Opportunity fields in Salesforce and as such you should keep them up to date when you learn more.

Always follow the lead to opportunity conversion guidelines when creating opportunities in Salesforce

Sales process

This is an overview for what you should actually be doing with a customer at each stage of the sales process. For details on how to manage this in our CRM, visit our Salesforce docs. The steps are:

  1. You get a lead
  2. You qualify
  3. First call (30 minutes) - Discovery & initial demo
  4. Second call (60 minutes) - Technical deep dive (if needed)
  5. Product evaluation
  6. Security & legal review (only if asked - skip otherwise)
  7. Commercial evaluation
  8. Closed - Won or Lost

1. You get a lead

We're constantly experimenting with the best lead types, documented here. Info on _how_ leads are assigned can be found here.

2. You qualify

Once you have been assigned a lead, you'll want to qualify them before scheduling a call. Things to consider:

Most companies add friction here by making customers jump on a call first to qualify them. We don't do this when we are confident that product engineers are, or will become, involved in the sale. We may ask for a 2nd call with the engineers involved if we're confident that PostHog can help and want to make sure they agree.

It's also totally fine to ask a customer questions over email in advance of the demo to make sure you're making the best use of their time - just be specific. A few clarifying questions is fine, a 30 question survey is not.

Examples of good discovery questions

- What is the problem? What is this problem affecting?

- What metric is impacted as a result of this? What metric would be improved as a result of PostHog?

- How important is this problem to the wider team?

- What attempts have been made to fix this so far? Why has no attempt been made to fix this?

- Why fix this now? Why fix this ahead of the other important things happening at your company?

- Are you looking at {product} as a point solution that would slot into the rest of your stack, or are you looking to consolidate multiple tools and have a single source of truth?

- What does the rest of your stack look like? What other tools or data would you want PostHog data to connect to?

- Who owns that data stack? Do you have a data team or data engineers?

- Who will be the consumers of PostHog data? How are they currently answering their questions, and how easy is it for them to do so with existing tooling?

If you're pretty sure that they should be qualified out of our sales process, you should still be helpful over email - some customers just use the form to get in touch and don't actually want a demo (e.g. they have a billing question or are asking about compliance things like HIPAA). There is a Claude skill that can help draft genuinely helpful responses to folks like these - it will put together helpful resources from our docs and blog posts that can help customers get started, even if they don't qualify for a call with a salesperson.

Requests for Proposals (RFPs)

There are two types of RFPs:

If it's an unsolicited RFP where we haven't had any prior contact or usage from the company then it is highly likely that you will burn a lot of time for nothing and you are free to decline. If you find the unsolicited RFP otherwise compelling and want to proceed, the suggested approach here is to see if anyone from the company has recently signed up to PostHog. If so, then make contact with them to see if they are aware of the RFP and can provide more information on PostHog's inclusion.

If you can't identify anyone who has recently signed up to PostHog, then ask the person who sent you the RFP for a call to gather more context before making a decision on whether to fill it in. If they aren't willing to get on a call then it's likely that we are not their vendor of choice, and they are using us to make up the numbers in a tender process. As such, we shouldn't spend time on this kind of activity. If you choose to spend time with these, timebox your effort to ensure you are not devoting a week to a 500 question RFP where we have very slim chances of success. Your time is your most valuable asset.

If it's a solicited RFP, you're free to proceed so long as the opportunity is qualified as a whole and you carefully balance the level of effort required in the RFP against the opportunity for you & PostHog. Again, a 500 question RFP may not be worth it if they plan on spending <$20k for PostHog (a 50 question RFP may not even be worth it in this instance)! Use your best judgement, and it is generally still wise to timebox your effort.

Bigger opportunities at bigger companies

When you're working a deal north of $100k at a larger company, the playbook shifts. Generally, expect to challenge their stated evaluation criteria early, and to sell to multiple people and functions within the organization. You need to dig past the surface-level requirements they may list and get to the real decision drivers. Question the "why" and "how" behind their stated criteria, because committee-driven procurement processes can hide the real reasons they will buy (or disqualify) behind "just so" rubrics.

On the relationship side, you need a strategy for engaging their leadership and developing champions at multiple levels within the account. If a key leadership stakeholder goes dark, escalate to PostHog leadership to help re-engage. If needed, don't be afraid to translate PostHog titles to something they would understand (e.g. Generic Exec Person = COO). For deals this size, on-site presence can also matter — you should attempt to build relationships in person, not just over Zoom.

Lastly, take a prescriptive and consultative approach to their evaluation process. The larger the opportunity, the more proactive you need to be about controlling the process. Ask for help from your lead, your team, and in Slack. These opportunities take a team effort.

Startups

If they're eligible for the Startup Plan, route them to the application form and disqualify them as it's not an immediate opportunity (but we sincerely hope they grow into loyal PostHog customers). If their usage will burn through their credits quickly, you should feel free to switch their lead status to Nurture and keep close tabs on them. Per our usual approach to sales, we want to make sure they're successful in this "high-use" scenario and are building with us for the long-term.

You can also redirect them to use the In-app support modal if they have a product-related question - this will then be routed to the right team, as well as showing them CTAs to upgrade for high priority support.

Leads below the sales assist threshold (less than $20K ARR)

We often get requests for demos from leads or existing customers who are below our sales assist threshold, and who don't have a defined use case for PostHog. It usually comes in the form of "show me all the features" or "I need someone to demo to me." These can be large time sinks because they are non-technical, don't have a clear idea of what they want, and are unlikely to ever grow into a sales-assist level customer.

We also want to be helpful to our current or potential customers, regardless of spend. Time permitting, we can offer a demo if they are willing to give us the information we need to put something together:

This makes the demo actually valuable and can be an opportunity for you to learn more and get some demo practice. You'll also find that 90% of these requesters never respond because they are either unable or unwilling to engage with the questions, which allows you to avoid the biggest time sinks.

If you realize that they will be too small (<$20k) to go through our sales-led process and you are unable to get this information from them, you should route to self-serve.

3. First call (30 minutes) - Discovery & initial demo

Your goals on this call depend on who shows up. You should know who's coming ahead of time and be prepared to change your approach based on the actual attendees.

The ideal outcome is getting engineers to be hands-on with PostHog as quickly as possible.

Path A: Engineers are present on the first call

When you have engineers on the call from a qualified company (ICP fit or otherwise highly qualified), your goal is to get them using PostHog immediately.

Structure:

  1. Intro & Qualification (5-10 min)
  2. Technical Demo (15 min)
  3. Call Close (5-10 min)

Success looks like:

Path B: No/minimal engineers present (non-technical stakeholders)

When engineers aren't on the call, your goal is to earn a second call with their engineering team, while also being helpful to the non-technical stakeholders in discussing PostHog.

Structure:

  1. Intro (5 min)
  2. Qualify or Disqualify (10 min) - we do this politely and constructively. The customer's time is valuable and we know best who succeeds with PostHog, so we're driving the sale.
  3. Demo (10 min)
  4. Call Close (5 min)

Success looks like:

Important: If you can't get a second call scheduled, be skeptical of the opportunity. Keep the task in nurture status until it's on the calendar - only convert to an opportunity after the call is confirmed.

General demo tips

We have various slide templates - ask someone on the Sales team for an invite to our Pitch account. Use the deck as scaffolding, pulling out relevant slides. Do not spend the demo presenting a deck with an engineering team - most people at PostHog spend 90% of the demo call actually in product or talking to the customer about their needs. But sometimes, there is a legitimate need for a deck.

Before you demo, make sure there is enough data to properly showcase our features. If needed, you can use Hogbot to generate more synthetic data. This is built by the sales team for the sales team, so if you see anything you want to improve, don't hesitate to submit a PR!

You should give a relevant and pointed demo - don't just throw everything in, as the customer will get overwhelmed. If you don't show what's important first, people on the call will become distracted.

For example, a customer may say "we need to see how our customers are using our platform". In this case, a good approach is to go straight to Session Replay, then tie Replay into Analytics, then go from there.

Start with what their biggest problem/request is, stay there until they are happy, then move on to point two. We don't want to fall into the trap of doing the same demo for each customer regardless of what they say at the beginning.

Make sure you cover:

4. Second call - Path B (60 minutes) - Technical deep dive

This call happens when engineers weren't on the first call. Your goal is to qualify the opportunity through the engineers and get them hands-on with PostHog.

Structure:

  1. Intro (5 min)
  2. Discovery (15 min)
  3. Technical Demo (30 min)
  4. Call Close (10 min)

Success looks like:

BANT

By the end of either the 1st or 2nd call with a customer, you should have a defined idea about:

  1. Budget - Calculate and share a rough ballpark figure based on which products they'll use and their expected usage. Articulate the process by which a sales-led trial will help them refine the estimate.
  2. Need - Is PostHog a good fit? Be politely honest if we're not, to avoid wasting everyone's time.
  3. Authority - Who will make the decision at the customer organization? Who holds the budget?
  4. Timeline - When does the trial start? When are they looking to make a decision/have a contract in place?

It's really easy to convince yourself that you've got a well-qualified opportunity after a demo goes well. Everybody has been laughing and having fun so they must love PostHog right? You need to be more objective than that - ask the AI in the call recording to rate you on BANT qualification to see whether you actually got all of the information you need to confirm that a real opportunity exists here. If you are missing any qualification information, don't be afraid to go back and ask your champion for additional context here. It'll save you wasting a whole bunch of time helping a customer in an evaluation where they aren't serious about buying PostHog, and the inevitable Closed - Lost which comes as a result of that.

5. Product evaluation

Once qualified, and if you think they are a good prospect for our sales-led process, your first priority is to try and get them into a trial of PostHog with a shared Slack channel as quickly as possible. If you close them, a shared Slack channel will also be their primary channel for support. Add the Pylon app to the channel and it will automate the support bot and channel description. React with a 🎫 to customer messages or tag @support to create a ticket in a thread. Generally it's better to seek forgiveness than ask permission when adding people to a Slack channel - use your judgement.

Some customers may wish to use MS Teams rather than Slack - we can sync our Slack with Teams via Pylon to do this. First you will need an MS Teams licence - ask Simon for one. Then, set up a Slack channel. Then, follow the instructions here to get set up. Before adding the customer into the channel, remember to test it on both sides to ensure the integration is working correctly.

You should then follow up with a standard email/Slack message that:

Probably as a separate message, you should set out the criteria for the product evaluation to be considered a success - the evaluation will almost certainly fail if you just leave the customer to noodle around trying PostHog.

If the customer isn't super clear on how to articulate the success criteria then use the following as inspiration:

Don't be over-reliant on support during the evaluation. As the AE, you should be highly focused on customers during their evaluation to maximize your chance of success. We deliberately hire people we know customers will love working with, so now is your time to shine.

  1. Guide them on how to set up tracking depending on their app, paying attention to common points of friction such as:
  2. Guide them on creating insights, either based on:
  3. Once you have a week's worth of data in, calculate pricing based on their actual usage and proactively share this.
  4. A week before the trial period ends, have a wrap-up call to ensure that they have seen everything they need to see, identify any last remaining areas you can help them with, and agree next steps after the trial ends.

In an ideal world this involves multiple calls per week during the trial period so that you can build a trusted relationship with the customer, but don't force that if they prefer to use Slack/Email.

If non-technical people such as Product Managers, Marketing, etc. are involved we know from prior experience that the PostHog UI, while powerful, can be overwhelming, especially if they have used similar tools in the past. You should be prepared to run multiple remote or in-person sessions with these people to ensure that they get what they need out of the evaluation.

We usually set up the following trials depending on likely contract size:

6. Security & legal review

Most customers don't need this beyond sharing our existing documentation. This step often occurs in parallel with product evaluation. Usually only bigger companies ask for this.

You do not need an NDA to share PostHog internal policies - by default most of these should be publicly available in the Handbook anyway, though some are only stored in Drata. If a customer asks you to sign their NDA, you can sign, but have our counsel review it first. As a starting point it must be governed by US law, and mutual.

If the customer requires a vendor or security questionnaire, it's best for the AE involved to try and fill it out. If a company reaches out initially with this request, first try to understand whether they intend to pay, or at least grow into a paying customer, before investing a lot of time in it. If any questions are unclear, post the specific question in the #team-people-and-ops channel. It is easy to get driven into filling out security questionnaires for accounts that would come in below the sales assist threshold. If a lead is pushing for security review without having had any commercial discussions, be transparent up front: let them know that we only do security reviews for accounts at $20k annual spend or greater, and that we are happy to work with them to understand their usage and, at that point, either entertain further security discussions or point them towards a self-serve path.

Some customers may need payment details up front as part of their vendor onboarding process. Stripe allows you to generate these ahead of them signing the contract — you can see how to do it in the billing guide for applying credits.

If you need help with anything data privacy or MSA-related, ping Fraser for help.

7. Commercial evaluation

The Contracts page has full guidance on the nuts and bolts of how to put together a commercial proposal - we use PandaDoc.

Don't be the AE who gets to this point and suddenly realizes you have no idea who the buyer is! You should already know this, their budget, their purchasing process, etc. from your discovery - if you're finding out now, hopefully it's not too late...

By this point, you may have run into some additional objections. These are the most common, and how to handle:

Ahead of the contract being signed, you'll also need to understand the customer's invoicing process. Companies will typically have a Finance or AP team who should be the billing contact in Stripe. Make sure you are also aware of any special invoicing requirements (e.g. a Purchase Order number) well ahead of the invoice being generated. Follow our contract rules here - e.g. no payment by check, ever.

8. Closed - won

Hooray! This is defined as when the contract is signed by _everyone_. 'They're about to sign' - NOT CLOSED. 'I've sent a DocuSign' - NOT CLOSED EITHER.

If an opp moves forward with PostHog on a month-to-month basis, but is below $20k annual spend, change the type to "Monthly Contract" and mark it as closed - won in Salesforce.

Once the contract is signed, it lives in PandaDoc. Next step - get them set up with billing.

Now it's time to set up an onboarding plan. We will templatize this, but for now you should send them something in the first week that includes:

Here is a minimum checklist of things that we find customers should know how to do:

Post-onboarding, you'll want to change gears to start thinking about retention, expansion and/or cross-sell.

Simon and Charles review accounts every month to see if/when it makes sense to reassign accounts once they've closed.

8. Closed - lost

Oh no! It's ok - the most important thing here is that we learn. You should capture the reason in the Salesforce opportunity - this could be:

Add detailed comments as well, including what, if anything, we could have done differently (even if not realistic - e.g. build an entirely new product).

For certain categories, you should create followup tasks:

Share info about closed-lost people internally where it will help us learn - this may be with the sales team, relevant product team, or the company as a whole in Slack. The important thing is not to blame each other for losses, it's to find opportunities to do better next time!

Outbound sales

Growth | Source: https://posthog.com/handbook/growth/sales/outbound-sales

Woah woah woah, we're doing outbound?

Yes! But do not be afraid:

_So why are we doing it now? I thought our inbound pipeline was good?_

Outbound sales is something we will need to get really good at as we continue to scale PostHog, as 100% inbound eventually dries up. We are not going to be the first company in history to build a huge SaaS business with zero outbound, and most companies like us start thinking about outbound at around our ARR. Even the largest, most beloved devtool products of all time do this - they just do it in a smart way.

We want to start doing outbound now because, if we wait until inbound slows down, we’ll panic and make bad decisions, trash the brand, and copy and paste what other boring companies have done in a short-sighted way that doesn't work for our audience.

Outbound is helpful because it is a good way to generate more leads in a semi-predictable way - and there are lots of cool ways to do it in 2025 using GTM engineering, agents etc. We should view outbound as a type of hyper-focused _marketing_ that generates sales opportunities.

Let's get on the same page - what _is_ outbound?

‘Outbound’ means a few different things. This is how we think about it in relation to customers:

  1. Using PostHog and spending a lot of money
  2. Using PostHog and spending a little money
  3. Using PostHog with good engagement/high ICP, but not spending any money
  4. Person signed up at some point, but not really using, usually just kicking the tires
  5. Not signed up, but has heard of PostHog
  6. Not signed up, never heard of PostHog

None of these people are currently talking to us - that's why they are under the umbrella of 'outbound'.

We’ll call 1-3 ‘warm’ outbound and 4-6 ‘cold’ outbound.

What we're doing today

Our model is:

Our focus today is on inbound leads, getting much better at warm outbound (we have a huge number of leads that we could be converting better), and experimenting with colder outbound. Lorena Viana is leading our experiments today.

Check out the leads page for more detail on lead types, where they go, and the specific outbound campaigns we're running. These are changing very frequently as we figure out what does and doesn't work.

How TAEs talk to outbounded prospects

Remember, we contacted them — be transparent about our process and who we build for. How well we do discovery in our initial conversations will dictate how well (or poorly) we position PostHog.

If they’re interested, we’ll show them how to try PostHog and help them along the way; if they’re not a fit, we’ll say so honestly. We need to earn the right for each step and not assume their interest.

So, what does that mean for a first conversation? We:

  1. Do research & get context
  2. We are human & transparent when we meet them
  3. Explore their role & current state/stack
  4. Qualify or disqualify
  5. With explicit permission, give a brief PostHog pitch
  6. Ask the hard question
  7. Provide a relevant next step & schedule it on the call
  8. Action the task
  9. Rinse, lather, and repeat

Goal: help them decide if PostHog solves a real problem, not close in one call.

In order:

1. We do research & get context

Do basic account research:

Use this to form a call hypothesis.

2. We are human & transparent when we meet them

We contacted them. This call only makes sense if we can solve a real problem for them. Start with:

"Hey [name], thanks for making the time. I know this was a cold outreach from [Dmytro/our team], so I really appreciate you giving me 30 minutes."

"Before we dive in, I'm curious - what made you decide to take this call?"

Often this is enough. If they’re vague or skeptical, get specific with your pre-prepared hypothesis:

"[Dmytro mentioned/I saw] that [specific trigger - e.g. you're growing team/launching new product/scaling analytics]. We work with companies at your stage who struggle with [specific pain - e.g. fragmented analytics tools/poor data quality/lack of actionable insights]. I wanted to understand if that's actually a problem you're facing."

"If it is, I can share how other companies like yours have solved it. If it's not a problem, I'll tell you honestly (or you can tell me) and we'll keep this short. Will that work for you?"

If they answer clearly, set a simple agenda:

"Got it. Here's what I was thinking for today: I'd love to understand how you're handling [your role/the use case behind the trigger] now, what's working and what's not, and then share how other companies like yours have approached it. If it seems relevant, we can go deeper. If not, I'll tell you honestly and we'll keep this short. Sound fair?"

3. Explore their role & current state/stack - find the pain

As Charles Cook says, companies don’t buy software; humans do. Start with their role/team.

"Tell me more about your role and team."

Then move to the trigger/use-case:

“How are you thinking about [use-case/trigger] in that role/team? What do you need to understand about your product/users/customers?"

Other prompts:

"What are you using for [use-case/trigger] today? And how'd you end up with that setup?”

"What do you love about it? What drives you crazy?"

You’re digging for pain, urgency, and priority in this part of the conversation. Drill in as needed:

4. Qualify or disqualify

Run a quick mental evaluation of their answers on the call. Assess four factors:

If unclear, ask directly, e.g. timeline:

"Why is this a problem you're trying to solve this quarter or next?"

Their answer tells you if this is a priority.

If you have fewer than four, disqualify politely.

"Based on our conversation, and being completely honest, I don't think we're the right fit because [reason]. My recommendation: [alternative]. If [use-case/trigger] changes, do please reach out."

If you have all four, ask permission to pitch.

Disqualify outbound tasks that won’t convert.

Bonus: end early if they’re disqualified or disinterested. If highly qualified and eager, skip the pitch and go straight to a next step.

5. With explicit permission, give a brief PostHog pitch

Open with what you heard and ask for permission to pitch:

"Based on what you shared - [their pain] - let me tell you how PostHog works and you tell me if it's relevant. Does that work for you?"

Pivot to a tailored elevator pitch (below is generic):

"PostHog makes dev tools that help product engineers build successful products. These include many discrete tools that help with user behavior and analytics, product engineering, communication and data - all in one platform.”

"Companies switch for three reasons: (1) tired of fragmented tools, (2) want engineers and product teams to have direct access to data, (3) our transparent, usage-based pricing."

6. Ask the hard question

Ask:

"Does that sound like it solves the problem you described?"

If they’re uncertain, emphasize the free trial:

"Knowing that we offer folks like you a free trial period to evaluate PostHog for yourself, does it sound like PostHog solves the problem you described?"

Wait. Embrace the pause. And, get their answer. If we don't solve a problem for them, this isn't worth continuing.

7. Provide a relevant next step & schedule it on the call

If qualified and interested, propose a next step and book it on the call:

"What makes sense as a next step? Demo? Trial? Talk to your team?" "Okay, I'll [take action]. Let's reconnect on [book specific date/time now]."

If hesitant or marginal, ask:

"Here's what I'm hearing: [summary]. Not sure if we're a fit yet. What would help you figure that out?"

If they disqualify themselves post-pitch, disqualify:

"Based on our conversation, and being completely honest, I don't think we're the right fit because [reason]. My recommendation: [alternative]. If [use-case/trigger] changes, do please reach out."

8. Action the task in PostHog's Salesforce

This is internal hygiene. Track tasks to reflect the opportunity:

9. Rinse, lather, and repeat

You should always aim to get them into a shared Slack channel or establish a regular communication cadence with them (call/email). Nothing will happen if we aren't talking.

Where else you take a qualified outbound sales opportunity is dependent on the specifics of your conversation.

Your process may resemble later stages of the new business sales process.

Otherwise, you can:

What won’t change: qualify each step, solve a real problem, and don’t assume interest just because a task became an opportunity. Stay focused on their pain and you’ll earn the right to keep moving.

How our outbound data pipelines work

So far we run three automated pipelines that enrich accounts, surface timely signals, and qualify targets. Abhischek Thottakara manages these.

Salesforce enrichment (weekly)

Every week, we pull all Salesforce Accounts and enrich them via the Harmonic API with company info like funding history, headcount, and traction metrics. Our single source of truth (SSoT) for Accounts is the Salesforce Accounts table.

Before enriching, we filter out personal email domains (Gmail, Yahoo, etc.) and normalize website domains so matching is consistent.
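The cleanup step above can be sketched roughly as follows. This is an illustrative sketch, not the production pipeline: the function names and the personal-domain list are assumptions.

```python
from urllib.parse import urlparse

# Illustrative subset; the real pipeline's blocklist is longer.
PERSONAL_DOMAINS = {"gmail.com", "yahoo.com", "hotmail.com", "outlook.com"}

def normalize_domain(website: str) -> str:
    """Strip scheme, 'www.' prefix, path, port, and case so matching is consistent."""
    host = urlparse(website if "//" in website else f"//{website}").netloc
    host = host.lower().removeprefix("www.")
    return host.split(":")[0]

def enrichable(account: dict) -> bool:
    """Skip accounts whose website domain is a personal email provider."""
    domain = normalize_domain(account.get("website", ""))
    return bool(domain) and domain not in PERSONAL_DOMAINS
```

For example, `https://www.PostHog.com/pricing` and `posthog.com` both normalize to `posthog.com`, so they match the same Account record.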

Job switchers → Clay (daily)

A daily query (ClickHouse + Customer.io) detects job-change signals — someone who was at a company using PostHog just moved to a new role. Only changed or new records are sent to a Clay webhook so we stay within Clay's submission limits.

Why this matters: when someone who already knows PostHog changes jobs, that's a timely outreach moment. They're evaluating tools at their new company and already have context on what we do.
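The "only changed or new records" delta sync can be sketched like this. A minimal sketch under assumptions: the `person_id` key, the hash-based state store, and the function names are hypothetical, not the actual pipeline code.

```python
import hashlib
import json

def record_hash(record: dict) -> str:
    """Stable fingerprint of a record so we can detect changes between runs."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def changed_records(todays_rows: list[dict], previous_hashes: dict[str, str]) -> list[dict]:
    """Return only rows that are new or changed since the last sync.

    Mutates previous_hashes in place so the next run sees today's state.
    Sending only this delta keeps us under the webhook's submission limits.
    """
    out = []
    for row in todays_rows:
        key = row["person_id"]
        h = record_hash(row)
        if previous_hashes.get(key) != h:
            out.append(row)
            previous_hashes[key] = h
    return out
```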

Product-led outbound → Clay (daily)

First, a daily Warmbound query pulls a base set of target accounts filtered by revenue band (MRR $100–$499), company size (50+ employees), and company type.

Then a second qualification step filters those accounts against product signals. Only accounts that pass both steps and have changed since the last sync are sent to Clay.

An account passes the second step if it shows buying intent through signals like:

This focuses outbound on accounts that are already engaged with the product (i.e. warmbound), not just random companies that match a firmographic profile.
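The two-step filter might look something like this in outline. The MRR band and employee threshold come from the text above; the product-signal names are hypothetical stand-ins for whatever the pipeline actually checks.

```python
def passes_firmographics(acct: dict) -> bool:
    """Step 1: revenue band ($100-$499 MRR) and company size (50+ employees)."""
    return 100 <= acct["mrr"] <= 499 and acct["employees"] >= 50

def passes_product_signals(acct: dict) -> bool:
    """Step 2: any buying-intent signal qualifies (signal names are illustrative)."""
    return any(acct.get(s) for s in ("hit_billing_limit", "multi_product", "growing_usage"))

def qualified(accounts: list[dict]) -> list[dict]:
    """Only accounts passing both steps are sent on to Clay."""
    return [a for a in accounts if passes_firmographics(a) and passes_product_signals(a)]
```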

Where data lives and flows

```mermaid
graph LR
    SF[Salesforce Accounts - SSoT]
    H[Harmonic API]
    PG[Postgres]
    CH[ClickHouse]
    Q[Qualification]
    CL[Clay]
    LL[Lemlist]

    H -- "enrichment (weekly)" --> SF
    SF -- "accounts" --> Q
    PG -- "billing data" --> CH
    CH -- "usage & signals" --> Q
    Q -- "qualified targets (daily)" --> CL
    CL -- "selected accounts" --> LL
```

| System | Role | How often updated |
|---|---|---|
| Salesforce | Account records, opportunity tracking, enriched fields | Weekly (enrichment), real-time (sales activity) |
| Harmonic | Company enrichment data (funding, headcount, traction) | Weekly via enrichment pipeline |
| ClickHouse | Product usage data, job-change signals, ICP scoring | Daily via pipeline queries |
| Postgres | Organization and billing data | Continuous |
| Clay | Outbound qualification and personalization | Daily via webhook syncs |
| Lemlist | Email sequencing and outreach delivery | Via Clay |

Appendix: full GTM data flow

This is the broader picture of how data moves across all our GTM systems, not just the outbound pipelines above.

```mermaid
graph TD
    subgraph PostHog
        PH[New signup in PostHog]
        CDP[CDP Destinations - Events, Person Info, Org Info]
        CF[Contact Form]
        BILLING[New customer in billing]
    end

    subgraph Enrichment
        HARMONIC[Harmonic]
        CLEARBIT[Clearbit]
        CLAY[Clay]
    end

    PH -- "PH destination" --> SF
    PH -- "new org" --> BILLING
    CDP --> DEFAULT[Default app]
    CF --> DEFAULT
    DEFAULT --> SF

    SF[Salesforce - Contacts + Accounts]

    SF <--> HARMONIC
    SF <--> CLEARBIT
    SF <--> CLAY
    BILLING --> SF

    SF --> VITALLY[Vitally]
    CIO[Customer.io] --> VITALLY
    ZENDESK[Zendesk] --> VITALLY
    PYLON[Pylon] --> VITALLY
    GMAIL[Gmail] --> VITALLY
    STRIPE[Stripe] --> VITALLY
```

Overview

Growth | Source: https://posthog.com/handbook/growth/sales/overview

Our primary focus is on making our paying customers successful, not forcing sales through. This mostly means an inbound sales model, but we are also running some outbound sales experiments.

While this means working with a smaller number of users than typical B2B SaaS companies, we know that the people we talk to are mostly already pre-qualified and genuinely interested in potentially using PostHog.

The Sales team act as genuine partners with our users. We should feel as motivated to help and delight users as if we were on their team. In practical terms, this means:

Sales team vision

Things we want to be great at

Things we're interested in trying out

Things we don't want to spend time on

How to work with different types of customer

We look after customers who are paying or could pay $20k+/yr. This means sometimes we will work with existing smaller accounts if we see potential to grow them into larger ones.

We've written an internal playbook for how to manage different types of customers - this goes into a lot more detail about company style, how they work, likely PostHog adopters, how to communicate etc.

'Enterprise' customers

As we get bigger, we're getting more inbound demand from larger organizations which have a very different buying process from our smaller customers. If we want to reach our ambitious revenue goals, we'll need to get good at selling to this segment of customer. However, we need to do this without compromising our focus on building a great product for our ICP.

To prevent us from going down the wrong path with deals like these, we follow 4 simple principles:

We'd typically define a deal as a large deal if it has most of the following:

Who the Sales team are

Our small team page is maintained on the Sales & CS team page. In addition to people who share PostHog's culture, we also value:

Product enablement

Growth | Source: https://posthog.com/handbook/growth/sales/product-enablement

Overview

PostHog has a broad and growing set of products, and folks in GTM roles need to develop and maintain deep product knowledge for each individually, as well as understand how they work together. This deep understanding helps us drive initial adoption of PostHog as well as cross-sell and expansion. Without a structured enablement process, we face several challenges:

Rather than hiring an external sales enablement person (who would need significant time to ramp up on PostHog and wouldn't speak with customers regularly enough), we're leveraging internal expertise to build and maintain our enablement program.

How it works

Subject-Matter Experts (SMEs)

Each product area has a designated SME from the Sales, CS, or Onboarding teams. The SME is responsible for:

Important: SMEs are enablers, not gatekeepers. The goal is to level up the entire team, not to create dependencies on specific individuals.

Content areas

For each product, SMEs should develop and maintain content covering:

Content should be primarily recorded (Loom, Gong) or visual (Pitch) to support our async, distributed team.

New hire onboarding

New joiners to Sales, CS, and Onboarding teams go through PostHog GTM Academy, which incorporates product training content from SMEs in a structured learning path. This ensures consistent foundational knowledge across the team.

Staying current

It's on the SME to schedule a regular (nominally monthly, but this may vary by product - use your judgement here) update call with the GTM team and someone from their product team to cover:

We're a global team. Try to schedule the meeting to get as many folks live as possible (8-10 AM Pacific time is an ideal slot here), but also ensure it is recorded.

Content storage

All content should include a "last updated" date so team members know they're working with current information.

As we develop content, we can link directly to it from this page.

Product areas and SMEs

| Product Area | SME | Last Content Update |
|--------------|------------|---------------------|
| Product analytics | Ben Smith | - |
| Web/Customer/Revenue analytics | Jon | - |
| Session replay | Dana | - |
| Feature flags | Sachin | - |
| Experiments | Sachin | - |
| Error tracking | Christophe | - |
| Surveys/Product tours | Leon | - |
| Data pipelines (batch and realtime) | Ryan | - |
| Data warehouse | Ryan | - |
| LLM Analytics | Leo | - |
| Workflows | Phil | - |
| PostHog Code | Landon | - |
| Logs | Sean | - |

For SMEs

Getting started as an SME

If you've volunteered to be an SME for a product area:

First of all, thank you!

  1. Connect with the product team - Consider joining sprint planning calls to stay informed
  2. Audit existing content - Review what training materials already exist
  3. Identify gaps - Determine what content needs to be created or updated
  4. Recruit help - Enlist others (team members, product team, etc.) to help create content
  5. Establish a cadence - Plan regular content reviews and updates

Best practices

What this is NOT

This enablement program is not:

New products

When a new product is approaching customer availability:

  1. Identify an SME early in the development process (bonus points if you self-identify with a handbook PR)
  2. SME coordinates with the product team to understand capabilities and use cases
  3. SME develops initial training content before general availability
  4. SME delivers a new product training session to the wider GTM team
  5. Product area is added to the table above

Switching SMEs

If you don't want to be an SME for a product area anymore, or want to switch with someone else to stay fresh, first identify someone else who is willing to step in and make the change as a PR to this page.

PLG lead qualification

Growth | Source: https://posthog.com/handbook/growth/sales/product-led-lead-qualification

Most product-led leads land in your queue because they crossed an automated threshold (MRR, ICP score, employee count), came through a manual referral from Onboarding, Engineering, or elsewhere, or met some other qualification. Once a lead lands in your task queue, your job is to decide whether the account meets the criteria for TAM ownership.

This covers the things TAMs look at to make an informed decision. Note: Disqualification can happen really quickly, but often it takes a bit more time to decide if it's qualified. Some accounts will have immediate opportunities and be ready to engage, and some may require more nurturing.

A note on existing resources: Several handbook pages cover the diagnostic tools you'll use during qualification. The Metabase account analysis playbook walks through event composition, billing breakdowns, and project mapping. Checking the health of a customer's deployment covers $identify/$groupidentify patterns, autocapture noise, and implementation quality. Those pages answer "how healthy is this account's implementation?" This page answers a different question: "given what I see, should I invest my time here?"

What you're actually deciding

TAMs have a soft cap of 15 managed accounts. Every lead you take on means less time for the rest of your book. You're asking one question: does this account have a realistic path to $20k+ ARR and meaningful expansion potential across multiple products?

If the answer is no, simply mark the task as "disqualified" with your reasoning. If the answer is "not yet," change status to "nurturing" and come back. If the answer is yes, change status to "in progress" and move fast.

Step 1: Size the opportunity

Open the account in Vitally. You need three things:

Current MRR and trajectory. Check current MRR, forecasted MRR, and the delta between them. A $600 MRR account growing 30% month over month is more interesting than a flat $1,200 account. Look at the last 3 months of invoices, not just the current snapshot.
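As a back-of-envelope sanity check on trajectory, you can project how long growth at the current rate would take to reach the $20k ARR bar. This is an illustrative sketch using the figures quoted above; the function and its defaults are not part of any PostHog tooling.

```python
import math

def months_to_target(mrr: float, monthly_growth: float, target_arr: float = 20_000) -> float:
    """Months until MRR * (1 + g)^n reaches target_arr / 12; inf if not growing."""
    target_mrr = target_arr / 12
    if mrr >= target_mrr:
        return 0
    if monthly_growth <= 0:
        return math.inf
    return math.ceil(math.log(target_mrr / mrr) / math.log(1 + monthly_growth))

# A $600 MRR account growing 30% MoM crosses $20k ARR in about 4 months;
# a flat $1,200 account never gets there on its own.
```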

Revenue composition. Go to Metabase and look at the per-product spend breakdown (see the Metabase playbook for how to navigate this). You're looking at whether revenue is concentrated in one product (fragile) or spread across multiple (sticky), and whether the spend comes from intentional usage or a misconfiguration.

Credit and contract status. Are they on startup credits? How much remains and when do they expire? Monthly plan and growing? That's an annual conversion opportunity. See startup plan roll off for how to handle those accounts specifically.

Quick filters to deprioritize:

Step 2: Evaluate the company

Engineer count and company size. Check Harmonic/Clearbit data in Vitally (employee count, headcount growth). Companies with 50+ employees and a meaningful engineering org are more likely to expand. A 10-person startup spending $800/mo might get to $20k ARR eventually, but the timeline is long.

Growth trajectory. Recent funding (harmonic_last_funding_date), aggressive hiring (compare harmonic_headcount vs harmonic_headcount_180d), and early-stage companies with significant capital can qualify even if current spend is below threshold. The new business team learned this the hard way: their inbound lead skill was incorrectly disqualifying well-funded, engineer-heavy companies because it relied too heavily on stated MAU. They added a growth trajectory override for exactly this reason.
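The headcount comparison above amounts to a simple growth check. A hypothetical illustration: the Harmonic field names come from the text, but the 20% threshold and the override logic are assumptions, not the actual qualification rule.

```python
def growth_override(acct: dict, min_growth: float = 0.20) -> bool:
    """Qualify despite weak current signals if headcount grew sharply in 180 days.

    min_growth (20%) is an assumed threshold for 'aggressive hiring'.
    """
    now = acct.get("harmonic_headcount") or 0
    then = acct.get("harmonic_headcount_180d") or 0
    return then > 0 and (now - then) / then >= min_growth
```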

Business type and use case fit. The company type tells you which expansion path to lead with. The cross-sell motions page lists the profile of accounts where cross-sell works best: smaller/startup-size without existing tooling, engineer-heavy with direct technical contacts, heavily engaged users pushing the limits of PostHog. The use-case selling guide helps you map teams/roles to the problems they are trying to solve with PostHog.

ICP score. Use it as one signal among many. ICP score is rigid and data completion is always an issue. A -5 ICP score on a well-funded, engineer-heavy company should not stop you.

Step 3: Check the implementation

The Metabase account analysis playbook and deployment health checks cover the mechanics of each diagnostic in detail. Here, the question is different: you're reading these signals to decide whether to invest your time, not to diagnose a support issue. If high spend is the result of a poor configuration generating unnecessary volume, you risk investing time where there isn't future growth. Helping customers is never a bad thing, but it won't be a high-ROI activity for you as a TAM. That said, there may be opportunities to offer them FDE services (for a fee).

Event composition. High autocapture percentage with zero Actions means they haven't invested in instrumentation. That's both a risk (they might not be getting value) and an opportunity (optimization advice is the strongest opening message you can send). High custom event percentage often means they are more serious, and more importantly, have engineering resources available to invest into PostHog.

Products activated vs. products paying. Check paidProducts in Vitally. Single-product users with obvious cross-sell fit are prime targets. Also check if they've turned on products they're not yet paying for. Experimentation with products, even if in the free tier, shows intent.

Billing limits and conservatism. Fewer billing limits means less friction to growth. Limits set very close to current usage means they're cost-conscious, which gives the annual discount conversation a natural hook. It's also an opportunity to reach out to let them know they are close to possibly losing valuable data.

Data destinations. Data flowing out to a competitor (Amplitude, Mixpanel) is a risk signal. Data flowing to a warehouse (Snowflake, BigQuery) or an ad platform is a stickiness signal.

Project count and workloads. Multiple active projects mean multiple workloads, which means a bigger expansion surface area.

Step 4: Check for existing engagement

Before you reach out, confirm nobody else is already working this account:

Product-led leads can overlap with onboarding referrals, TAE pipeline, and CSM accounts. The sales handover page documents how the onboarding team evaluates these same accounts from their side. If someone is already engaged, coordinate before reaching out.

Step 5: Is it qualified?

Qualify and start working when:

Qualifying a lead means you're investing time in it. It does not mean adding it to your managed book yet. Add yourself as the Account Executive in Vitally and use the "Leads" segment to track it separately. You have up to 3 months to figure out whether a new lead belongs in your book.

Add to your managed book when you have traction:

The AM Managed segment is what triggers quota tracking. Adding an account too early, before you have real traction, locks you into carrying it against your 15-account cap without a clear path to quota credit. Simon reviews and approves AM Managed additions, so come with evidence, not just potential.

Track as a "nurture" but don't add yet when:

Pass or disqualify when:

For accounts you qualify, see getting recognized on the deal and getting people to talk to you for next steps. You need to demonstrate concrete sales activity to get the account added to your book. Sending a couple of emails and one call is not enough.

What makes a strong opening message

When you qualify a lead, the signals you found during qualification are your conversation starters:

See the communication templates for feature adoption for message structures that work.

Product-led Sales

Growth | Source: https://posthog.com/handbook/growth/sales/product-led-sales

A large proportion of our paying customer base sign up to a paid plan without ever talking to the Sales team. We don't want to force these customers through a sales process if they don't need it, but we also know that having a human on hand to help them through the process is likely to maximize the chances of retaining them as a paying customer long term. Longer term, we know that customers who have worked with a member of the PostHog sales team retain better, and are much more likely to expand their usage through cross-sell.

Product-led lead generation

Product-led leads can be generated in different ways - see this page for more info.

Some product-led leads might have already chatted with someone on our team. Before reaching out, take a quick look in Vitally to see if there’s any prior activity, and check in with the AE or team member who was involved to get the full picture if needed. Every lead has a "Vitally account URL" field in Salesforce which links directly to their Vitally profile for easy review.

Working with the customer

Just as with the inbound sales process, it's on you to decide how you qualify the lead. If you think they have potential to end up paying more than $20k a year then you should reach out to introduce yourself and offer help. As they have likely done a lot of research themselves, they may not need a demo so a 30-minute discovery is probably more appropriate here.

Getting people already happily using PostHog to talk to you can be challenging - here are a few things you might want to try.

If it's a viable opportunity then you should convert the lead to an opportunity and then follow the New sales process. Bear in mind that you can join it at any point depending on where the customer is at in their buying journey (e.g. you might skip product evaluation if they are ready to buy). If they are eligible for a shared Slack channel and they do not already have one, set one up.

Even if after speaking with them you think they may not end up at $20k+, you should educate them on how to get help, as well as the value of adding our Scale, Boost, and Enterprise plans.

Startup plan roll off

Customers who are rolling off the startup plan present a unique opportunity, as they are already using PostHog and may well be spending >$20k annually. Just like any other customer, we want to help them reduce spend and get the most out of their existing usage, while also educating them on the savings involved with a discounted, credit-based plan.

For customers that may have started implementation late, or ran into issues during their startup period, at our discretion, we can extend the life of their credits by 3 months. To do so, visit that customer's billing admin page (linked in Vitally), scroll to the bottom, and click the extend startup plan button.

Getting recognized on the deal

As they have already shown intent by signing up/subscribing, you will need to demonstrate that you have actively worked on the opportunity to include it in your book of business. We will use a common sense approach here but sending a couple of emails and 1 call won't be classed as 'actively working'. We want to ensure there is concrete sales activity going on with this customer. Simon will make the call here, escalating if needed.

Professional services

Growth | Source: https://posthog.com/handbook/growth/sales/professional-services

Some potential customers expect to pay for professional services to help them get set up. There are others who don't ask for this, but where we can tell it would be helpful for them.

For now this is only a service we offer to potential customers by default, so this will mainly be of interest to the .

Who we can offer professional services to

A good candidate for this probably has some combination of the following:

If you are working with someone where this might be applicable, ping Charles and Simon first, as we can offer to send a forward deployed engineer to work with them to help get set up. Please don't just offer this to anyone without checking in, as we don't have unlimited capacity.

For ongoing training, this is something that we are solving for separately, but is not within the scope of professional services at the moment.

What work is included

Typically, we will send a forward deployed engineer to work with a customer for a week in person. What we charge depends heavily on the nature and scope of the implementation, but in any case starting at $10k. Simon will work with you to figure out the relevant scope of work and contractual terms.

We don't offer this for free, because it is a valuable service that customers expect to pay for. We also don't offer it as a freebie negotiation tactic, because that devalues it for all other customers.

The specific checklist of what will be implemented depends on the customer, but the following sections detail the broad topics we can cover.

Implementation scoping

This should be conducted ahead of time to ensure that we deliver services according to the customer's needs. We will document a plan for:

At the end of this session we should have a good plan and understanding of any onsite work which needs to take place, as well as who in the customer our engineer will be working with, and what level of access to customer systems they require.

Technical implementation

Here we make sure that PostHog is correctly integrated into the codebase using one or more of our SDKs. We should also:

Getting started training

Once data is integrated, we should provide an intro to PostHog session to the customer to teach them the basics of how to use PostHog. This will be tailored to their needs but should provide a baseline understanding of how to navigate the UI, where to find events, create insights, filter replays, etc.

Whilst Sales and CS folks also provide ongoing training to customers in their book of business, it's important to ensure they have a basic understanding of PostHog, especially if they are brand new.

Migration

Whilst it's crucial to get live data flowing in to PostHog, the customer may also want to bring over historic data to PostHog from their previous tools. This will normally be product analytics data, and we have both managed and manual processes for this depending on the incumbent tool.

Longer term it's expected that forward deployed engineers will own the managed migration tools and also build out that capability.

Once data is migrated, we may also need to implement dashboards and insights from the previous tool. There's no automated way to do this currently, so we will need a login to the previous tool to understand and recreate the visualizations they need to move over.

If the customer is replacing a feature flagging tool and has existing feature flags in place, we will need to migrate the flags in the codebase and ensure that the flags and targeting are set up correctly in PostHog.

Data connectivity

This will normally involve the setup of realtime and batch export destinations for other tools in the customer's stack. We'll need API keys for the relevant tools as well as an agreed criteria for choosing which events will go to which destination.

This may also involve getting data into PostHog without using our SDKs, for example, by using the Webhooks or Data Warehouse sources.

SQL Query implementation

Some customers aren't able to write SQL themselves and don't want to rely on PostHog AI. We can scope and write the SQL queries that they need, as well as create the relevant views and person joins so that all of their data is connected.

Statement of work

Before any onsite work we will need to document a Statement of Work (SOW) which outlines the scope of work and the agreed terms of service. We should incorporate what we learn in the scoping phase into this to ensure we have all the customer's needs covered and allocate the right amount of time to the project.

Refunds

Growth | Source: https://posthog.com/handbook/growth/sales/refunds

We know things happen and sometimes you might need to issue a refund. Here’s how we handle common scenarios:

Learning curve

The customer just rolled off the startup plan, or a new customer accidentally ran up a lot of usage.

We issue refunds or credits in this category if this is the first bill >$1 and/or they meet eligibility criteria as explained below.

Unexpected stardom

Side project sudden volume spike

We issue refunds or credits in this category if this is the first bill >$1 and/or usage spiked by >200% compared to their average usage over the past three months, and the company doesn't have any revenue, or is a hobby project.

Under attack

Bot spike/abusive user drove traffic which in turn increased PostHog usage

We flag accounts with unusual activity spikes for review, and refund or issue credits to cover the overage amount once the issue has been resolved. The issued amount covers any amount exceeding the average usage of the three months preceding the spike.

Wrong setup

New feature trial with incorrect configuration

We issue refunds or credits in this category if the customer was charged for features they didn't intend to use due to default settings or configuration errors, and this is the first occurrence of unintended usage charges.

Eligibility criteria

Customer must meet the following criteria to get a refund:

Repeat incidents

For first incident response, we follow standard policy above and provide guidance for preventing future incidents (e.g. ask them to implement billing limits)

Subsequent incidents:

Request channels and processing

Refund requests can come through different channels:

In-app ticket

Contact sales form or email to sales@posthog.com. Account Executives can direct these to the Support team using the ticket emoji in the #website-contact-sales Slack channel to auto-create a Zendesk ticket.

Large account requests

Processing credits or refunds

How to calculate overage amount

Review customer usage

Before doing a refund, review customer's usage. Some useful sources:

What's "normal" vs "weird" usage:

Identify baseline usage

Calculate overage

For event-specific overages (optional):

If you want more precision when a single event type is inflated, use the 'Event counts by type last X days' insight in the Metabase dashboard:

  1. Change the lookback days to find the baseline period before the spike
  2. Identify the inflated event type and compare its spike volume to the baseline
  3. The difference is your overage amount for that event type
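The baseline-and-overage arithmetic above reduces to a few lines. A minimal sketch, assuming you've already pulled the spike-month usage and the three preceding months from the Metabase insight; the function name is illustrative.

```python
def overage(spike_usage: float, preceding_three_months: list[float]) -> float:
    """Amount exceeding the average usage of the three months before the spike.

    Returns 0 if the 'spike' is actually below baseline.
    """
    baseline = sum(preceding_three_months) / len(preceding_three_months)
    return max(spike_usage - baseline, 0.0)

# e.g. a 10M-event month against a 2M/3M/4M baseline is a 7M-event overage.
```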

Calculate the amount to refund/credit

Refund or credit?

How to issue refunds or credits

Prerequisites

You need Support specialist level access to Stripe; ask Simon for access.

Issuing credits

  1. Go to billing admin
  2. Next to 'Credits', click on 'Add'
  3. In the 'Customer' field, use the drop-down menu to find your customer
  4. In the 'Amount' field, set an amount of credits you wish to issue for this customer
  5. In the 'Reason' field, select a reason which best describes why you're issuing the credits
  6. Add an optional note in the 'Notes' field
  7. Include an optional link in the 'Reference link' field, e.g. Zendesk ticket, Slack message link, etc.
  8. Click 'Save and view'
  9. Confirm that the credits were successfully added to the customer's balance in Stripe under 'Customer invoice balance'

Issuing a refund

Refunds are now initiated through Billing Admin and finalized in Stripe via a credit note.

There are two ways to reach the Add Refund screen.

Option A:

  1. Navigate to Billing Admin → Customers.
  2. Find the right Customer (search by organization ID or customer ID).
  3. Once in the Customer view, scroll down to the _Related invoices_ section. Find the right one (you can identify it by its ID, dates, or amount).
  4. Click on "Start refund"

Option B:

  1. Navigate to Billing Admin → Invoices.
  2. Find the right invoice (search by invoice ID, organization ID, etc).
  3. Click and open the invoice view.
  4. Once in the view, click on the top right button "Start refund"

Once you do that (through any of the two options), you'll land on the "Add refund" screen. From there, you can continue with the refund:

  1. Allocate refund amounts per product. Refunds must be issued per product, so enter the refund amount for each affected product. You may need to do more math here: an event-spike refund may span Product Analytics, Person Profiles, and Group Analytics. Billing Admin does not automatically split refunds across products; you must do the math and allocate amounts manually. As you enter per-product amounts, the total refund amount updates automatically.
  2. Select refund reasons: Choose a Stripe refund reason (required) and select an internal reason (used for internal reporting and analysis)
  3. Add any relevant notes or context (e.g. Zendesk ticket, Slack link, short explanation)
  4. Once you review everything and all looks good save the refund in Billing Admin. This will issue a Stripe credit note, which is processed as a refund to the customer’s default payment method. Stripe automatically sends a notification email to the customer.
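If you want a consistent way to split one refund total across products, a proportional allocation is one option. This is an assumption, not the official policy: Billing Admin only requires per-product amounts, and the right split is whatever matches the actual per-product overage.

```python
def allocate_refund(total: float, product_overages: dict[str, float]) -> dict[str, float]:
    """Split `total` across products in proportion to each product's overage amount."""
    whole = sum(product_overages.values())
    return {p: round(total * amt / whole, 2) for p, amt in product_overages.items()}
```

For example, a $100 refund against overages of $60/$30/$10 across three products allocates $60, $30, and $10 respectively.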

Fixed fee product refunds

For fixed-fee subscriptions (e.g. Boost plan), Stripe’s default proration behavior can cause double crediting.

Example: A customer subscribes to a fixed-fee add-on by accident and requests a refund. After we issue a credit note, they cancel their subscription. When this happens, Stripe automatically creates a prorated “unused time” line item on the next upcoming invoice. This results in the customer being credited twice:

To prevent overcrediting, we need to manually delete the pending invoice item that Stripe creates after the subscription cancellation.

Steps:

  1. Find customer profile in Stripe (you can search by organization id)
  2. Locate the proration adjustment under Pending Invoice Items.
  3. Manually delete the line item.
  4. Add a note in Zendesk documenting that the proration line was removed to avoid double crediting.

Spotting suspicious stuff - watch out for:

When to escalate to RevOps

Tag Mine Kansu in Zendesk and share what you checked, what you think we should do, and any other relevant context. RevOps will review usage trends and customer lifecycle (e.g. new client, high-value account) to figure out next steps.

Our approach

We'd rather fix unexpected usage issues than have customers pay one massive invoice and then reduce spending or leave us. The goal is to maintain a fair, transparent relationship that works for everyone in the long term.

Trial periods and usage spikes

Risk mitigation and churn prevention

Growth | Source: https://posthog.com/handbook/growth/sales/risk-mitigation-and-churn-prevention

If you're actively thinking about churn prevention in response to a customer churn threat or major red flag, it's already way too late.

Churn prevention is best done through early and frequent risk-mitigation practices.

We should default to flagging "at risk" accounts using the "Churn Risk" segment in Vitally well before the customer has told you they are exploring alternatives. If you have the slightest inkling that something may look off or something has you feeling a bit uncomfortable, flag it. This could be anything from not taking action on a recommendation you gave them for too long, down-trending volume with no apparent seasonality cause, only one or two core users of the platform, or no Slack activity for an extended period. To name a few.

There are a few risk-mitigation strategies you'll want to incorporate that serve as early detection and proactive mitigation, as well as a process for what to do when an account is actively at risk.

---

Risk mitigation (proactive)

Risk mitigation is about building habits that surface problems before they become emergencies. If you're doing this well, you'll rarely need the reactive playbook below.

Quarterly account planning

Every AM-managed account should have an Account Plan note created in Vitally once per quarter, and you should review it weekly with your manager. This forces you to step back and evaluate the account holistically rather than just reacting to whatever's in front of you.

Title format: Q[X] Account Plan - [Company Name]

Use the Account Plan template in Vitally, which auto-populates key fields from the account record. The template covers:

Account overview

Business objectives

Stakeholders and users

Current usage and cross-sell

Risks

Action items

If you can't fill out most of this template, that's a signal you need to dig deeper into the account. An incomplete account plan usually means incomplete understanding of the customer.

Early warning signals

These aren't emergencies yet, but they should make you pay closer attention. Many of these are also tracked automatically as risk indicators in Vitally.

| Signal | Why it matters |
|--------|----------------|
| Recommendation not actioned for 2+ weeks | They're not engaging with your guidance |
| Down-trending event volume (no seasonality) | Usage decline is a leading indicator of churn |
| Only 1-2 active users | Single point of failure, low organizational buy-in |
| No Slack activity for 14+ days | Relationship is going cold |
| Billing page visits without context | They're evaluating costs, possibly shopping around |
| Champion changed roles or left | Your internal advocate is gone |
| Support tickets spiking | Something is broken or frustrating |
| Single product usage | Low switching cost, easy to replace |
| Data exported to external warehouse only | PostHog becomes a pipe, not a destination |

When you notice any of these, don't wait. Reach out, dig in, and address it before it compounds.
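The "when in doubt, flag it" rule can be summed up as: any single tripped signal is enough to warrant attention. A minimal illustrative heuristic - the signal names below are made up for the sketch, and this is not how Vitally's automatic tracking is implemented:

```python
# Illustrative heuristic only: decide whether an account should be added to
# the Churn Risk segment based on boolean early-warning signals.
# Field names are invented for this sketch; Vitally tracks these automatically.

SIGNALS = [
    "recommendation_stalled_2w",
    "event_volume_downtrending",
    "fewer_than_3_active_users",
    "no_slack_activity_14d",
    "billing_page_visits_no_context",
    "champion_left",
    "support_tickets_spiking",
    "single_product_usage",
    "warehouse_export_only",
]

def should_flag(account: dict, threshold: int = 1) -> bool:
    """Flag on any single tripped signal by default: 'when in doubt, flag it'."""
    tripped = sum(bool(account.get(signal)) for signal in SIGNALS)
    return tripped >= threshold

print(should_flag({"champion_left": True}))  # True
print(should_flag({}))                       # False
```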

Drive adoption of behavioral products

Not all PostHog products carry the same switching cost. Customers who primarily use PostHog as a data pipe to an external warehouse are structurally riskier. If their analysts query Snowflake or BigQuery, PostHog becomes invisible and replaceable.

Behavioral products create stickiness because they're used directly in PostHog's UI and embed into day-to-day workflows:

| Product | Why it's sticky |
|---------|-----------------|
| Surveys | Active feedback collection tied to user segments. Hard to replicate externally. |
| Cohorts | Saved user segments used across insights, feature flags, and experiments. Accumulated investment. |
| Workflows | Automated actions triggered by product events. Operational dependency. |
| Feature flags & experiments | Engineering teams build release processes around them. Deep integration. |
| Session replay | Qualitative context that doesn't exist in a data warehouse. Unique value. |

When you see a customer heavily reliant on data pipelines and external analytics, proactively introduce behavioral products. Frame it as expanding what they can do, not replacing their warehouse workflow. The goal is to make PostHog the place where decisions get made, not just where data passes through.

Practical moves:

If they're doing all their analysis in Looker or Mode, ask why. Sometimes it's habit, sometimes it's a gap we can fill. See communication templates for new feature adoption for outreach examples.

Implementation health

A lot of customers self-serve without ever talking to a PostHog human. This means they can implement PostHog in ways that cause problems down the road: inflated bills, inaccurate data, or features that don't work as expected. Left unchecked, these issues lead to avoidable churn.

Proactively check for common implementation issues, especially for newer accounts or accounts that haven't had a technical review. See checking the health of a customer's deployment for the full checklist.

Billing waste:

Tracking issues that erode trust:

Feature flag resilience:

When you find implementation issues, don't just tell them what's wrong. Help them fix it. A customer who had a billing problem you solved is more loyal than one who never had a problem at all.

De-risking common churn scenarios

Most churn follows predictable patterns. See common churn reasons for the full list. Here's how to de-risk the scenarios we have some control over:

| Churn scenario | De-risking strategy |
|----------------|---------------------|
| Champion leaves | Multi-thread relationships across teams. The more users actively in PostHog, the less one departure matters. |
| Champion isn't the decision maker | Identify and build relationships with actual decision makers. Your champion can help with introductions. |
| Customer builds internally or switches to competitor | Drive multi-product adoption. Harder to replace five products than one. |
| Poor customer experience | Stay on top of open issues proactively. Circle back before they have to follow up. Rebuild trust through responsiveness. |
| Customer can't extract value | Offer workshops, training, or hands-on help building specific insights. Don't wait for them to ask. |
| Missing critical feature | Loop in the relevant PM and engineering team. Be transparent about what we can and can't do. Make sure the request is also tracked in Vitally. |
| PostHog isn't trusted as source of truth | Dig into data discrepancies. Often an implementation issue. If they're exporting everything to another tool, they're one step from leaving. |
| Privacy/compliance concerns | Help them understand data controls, masking, privacy controls, cookieless tracking, and data deletion options. Often they assume they can't use features when they actually can. |

For scenarios outside our control (acquisition, company shuts down, not ICP fit), document what happened and share learnings with the team. There's usually something we can learn even when we couldn't have changed the outcome.

---

Churn prevention (reactive)

When an account is actively at risk (they've told you they're evaluating alternatives, usage has cratered, or you've lost a champion) you need to move fast and follow a clear process.

When to flag an account as at risk

Add the account to the Churn Risk segment in Vitally if any of the following are true:

When in doubt, flag it. It's easier to remove a flag than to explain why we didn't see churn coming.

Internal process

1. Add churn risk segment in Vitally

When you flag an account as at risk, add a note in Vitally with:

The churn risk bot should automatically post this in the #customer-churn Slack channel. This keeps the team informed and surfaces accounts that might need additional support or visibility.

2. Weekly at-risk account review

We hold a weekly team meeting to review all accounts in the Churn Risk segment in Vitally. Come prepared to:

The objective is accountability and support. If an account has been at-risk for 4+ weeks with no improvement, we need to either escalate for additional support or accept the loss and document learnings.

3. Escalation

Escalation means getting support at a higher level, not handing off the account. You remain the owner and primary contact. Use your best judgment on when to pull in additional resources:

The goal of escalation is to get the right people involved to help you save the account, not to pass the problem to someone else. You're still driving the relationship and the recovery plan.

Recovery playbook

Once flagged, your job is to diagnose and act:

Diagnose the root cause. Is this price, product, relationship, implementation, or business change? You can't fix what you don't understand. Use the churn scenarios above as a checklist.

Get on a call. Don't try to save accounts over email. Get face-to-face (or video) time to have a real conversation.

Listen more than pitch. Understand their perspective fully before proposing solutions.

Be honest about gaps. If we can't do something they need, say so. Credibility matters more than closing a save. This aligns with our sales principles: we don't care about losing deals if we'd have to compromise on our principles.

Create a recovery plan. Document specific actions with dates and owners. Share it with the customer so they know you're taking this seriously.

Follow up relentlessly. A save isn't done until the risk is resolved and usage is stable. Check in weekly until you're confident.

When churn happens

Not every at-risk account can be saved. When a customer churns, write a retro and share it in #customer-churn as soon as possible while the details are fresh. See learn from churn for the template and guidance.

---

Summary

| Activity | Cadence |
|----------|---------|
| Account Plan note in Vitally | Quarterly |
| Implementation health check | At onboarding + annually |
| Early warning signal monitoring | Ongoing |
| Behavioral product adoption push | Ongoing (especially for warehouse-heavy accounts) |
| #customer-churn posts | As needed when flagging risk |
| At-risk account review meeting | Weekly |
| Recovery calls with at-risk accounts | Within 48 hours of flagging |
| Churn retros in #customer-churn | Within 1 week of churn |

The best churn prevention is never needing to prevent churn. Build the habits, check implementation health, drive behavioral product adoption, and catch problems early.

Running trials

Growth | Source: https://posthog.com/handbook/growth/sales/running-trials

Running trials at PostHog

A trial creates space for you and the customer to validate technical fit, agree on success criteria, and close the deal. This guide covers when to offer trials and how to run them effectively.

When to offer Trials

Offer a trial when:

Skip the trial when:

Trial length: 2 or 4 weeks?

We default to 2 weeks ($20-$60K ARR) unless there's a compelling reason for 4 weeks.

Extend to 4 weeks when:

Trial "must-haves" and "should-haves"

Every trial _must_ include:

  1. SDK installed and events firing - Can't trial without data. This is non-negotiable and should be completed as a "Day 0" item before proceeding to any of the recommended timeline steps below.
  2. Documented success criteria - Define what "winning" looks like for them. What do they need to see for the trial to be considered a success? Leaving them to their own devices and hoping they envision their success with PostHog the way we do is not likely to work.
  3. Documented timeline - Define the "when" for each of the success criteria in #2.

Every trial _should_ include:

  1. Kickoff call - Use this time to align on the "must-haves": success criteria and timeline. Understanding "how" they will evaluate is just as critical as knowing what you can do throughout the trial to help check off the success criteria. Using this time to collaboratively build the success criteria ensures alignment and mutual understanding.
  2. Shared Slack channel - Set up before kickoff if possible, so that there's a more "live" way to communicate that comes with better and more accessible support. See Shared Slack Channels with Customers for additional guidance.
  3. Onboarding success plan - Use the 30-day onboarding success plan template as a starting point, then iterate where appropriate. Adapt for trial length, and share with the customer as a Slack canvas in the shared Slack channel.

Suggested timeline

Note: Per above, the trial shouldn't progress past this point until the SDK is installed and event data is being sent to PostHog.

Days 1-3: Kickoff & Review

Days 4-7: Initial validation

Business process happens in parallel

As customers are technically validating PostHog, you should also start to work with them on initiating procurement, reviewing legal requirements and aligning on price.

Days 7-14: Close

Support approach

Different customers may need and/or request different levels of support during their trial. We should match the customer's energy accordingly:

Hands-on (larger deals, less technical teams):

Self-serve (technical teams, smaller deals):

Most trials fall somewhere in-between. It's up to us to read the room and adapt.

Monitoring engagement

It's important to have visibility into a customer's usage and engagement in order to validate whether or not they will be successful with PostHog.

These signals are not guaranteed to always indicate success. Some teams are chattier than others, some teams like to keep comms over email, others enjoy regular Zoom meetings - see above for notes on Hands-on vs. Self-serve approaches.

Pro tip: You can always use session replay to also check their activity and learn how they are using PostHog! Leverage PostHog AI to help you analyze multiple user sessions. Look for things like:

High engagement signals:

Low engagement signals:

If engagement drops, be proactive:

Your time is valuable. If timing isn't right, it's okay to pause the trial and reconnect with them at a later date.

Extensions

It's common for a customer to need more time to validate PostHog. People get sick, take vacations, priorities change. There are a number of reasons why a customer may need an extension, and we're happy to accommodate them while getting a good understanding of the path to trial end, win or lose.

When considering granting extra time (7-30 days), we should ask for something in return. Examples include:

Always understand: Why do they need more time? What specifically needs to be accomplished?

Wrapping up the trial

Whether it's a 14- or 30-day trial, by the halfway point you should already be getting clear signals that the customer will choose PostHog. Schedule a feedback session to confirm the technical win and understand what else needs to happen before we make things official.

In your feedback session, be sure to:

Don't wait for the trial to end to start closing conversations. If success criteria are met early, no need to wait. You can always start moving towards the required closing steps (order form, amount, where to send invoices, etc.) at the customer's pace.

Sales & CS Tools

Growth | Source: https://posthog.com/handbook/growth/sales/sales-and-cs-tools

Sales, CS & Onboarding Tools

Here are the common tools the Sales, CS, and Onboarding teams use daily.

Tools through Google and Single Sign-On (SSO)

Tools by invitation

Mine or Simon can help you with access or invites for the following tools:

Tools that you may find useful but are not required

Useful Slack channels

Sales operations

Growth | Source: https://posthog.com/handbook/growth/sales/sales-operations

Overview

This page outlines how we manage customers, differentiating those who make contact via booking a meeting with us (hands-on) versus those who sign up and get going themselves (self-serve).

If you are looking for guidance on how to manage customers in HubSpot specifically, visit our CRM page.

Hands-on Process

  1. Customer will either:
     - Fill in the contact form on the contact page, which captures what they are interested in as well as metrics such as MAUs, event count, etc.
     - Email us directly at sales@
  2. We'll do some ICP scoring and either route them to self-serve or email them introducing ourselves and answering any questions they've shared, as well as offering up a call/demo to discuss their needs further.
  3. On the initial call we'll spend some time understanding what they want, then optionally give a demo if that's what they are there for.
  4. Ensure call notes go into HubSpot against the contact/company/deal so that they are shared amongst the wider team.
  5. If they are ready to get started with PostHog, we should either:
     - For lower-volume customers, send them a getting-started templated email which provides pointers on how to get set up as well as where to get help if they get stuck.
     - For higher-volume customers, create a Slack Connect channel in our Company Slack; this allows us to provide more focused support to ensure that they are successful.
  6. As a priority we should get them sending data in from production (even just a small part of their app) so that they can see the value of PostHog quickly (decreasing time to revenue) - see how we do this in the Onboarding section below.

Self-serve Process

For customers who sign up themselves and begin using the product, we provide a number of self-serve resources, including:

  1. Docs
  2. Tutorials
  3. Pre-recorded demo
  4. Community page

Additionally, all users can contact us for support/bugs/feedback using the ? icon in the top right of the PostHog app. This is routed to the appropriate team in Zendesk.

Ensuring customers see value quickly

Most potential customers will show up because they want to replace an existing analytics product, or start doing product analytics from scratch. In either case, we should show them the power of PostHog as quickly as possible. To that end, getting live production data through our pipeline and available for analysis should be our top priority.

  1. Help them get set up with tracking their production site/app using one of our client or server libraries.
     - JavaScript / Autocapture is the easiest - also make sure to turn on Session Replay.
  2. If people aren't sure what they want to track, AARRR is a great framework to use and will give people a good taste of the types of insight they can see. We have a number of supporting resources:
     - A blog post on getting started with the framework.
     - A sample AARRR tracking plan which we can give to customers to fill in. It shows how we do things at PostHog and may help inspire people who don't know how to get started.
  3. Encourage them to create dashboards to show off PostHog in the wider organization.
  4. Keep on top of any support requests / blockers they may have.

Free trials?

Generally speaking we don't need to do anything around free trials as our free tier has a generous 1m events, 1m feature flag calls, and 5k sessions. If a customer is going to go over this limit pretty quickly then we can agree to give them 2 weeks of free usage - this can be done in the billing service. See the billing page for more info (and the latest on this).

Figuring out the best solution for a customer

Assuming PostHog is the best solution for a customer, you should look at their level of scale and if they have any specific security needs to determine the most appropriate plan for them.

In general, PostHog Cloud is the best option for customers. It is much more scalable than self-hosted instances, doesn't require devops time to configure, monitor, and run, and is also the only way to use all of PostHog's paid features. In certain cases, the open source / free product may be the best choice, if customers are very technical, and also have a strong data control requirement.

What about Open Source?

Open Source will be appealing to customers who want to self-host, but are happy with 1 project only and community-based support.

By contrast, paid has premium features around collaboration - such as user permissions so people can't delete everything, multiple projects to keep data tidy, basically functionality to keep things running smoothly when you have lots of logins.

Okay, they're using PostHog. Now what?

Congratulations, this is the best part! Now we focus on making customers successful at unlocking insights into their product.

Read about how we do this in the dedicated handbook section, Ensuring Customer Delight at PostHog.

How we measure revenue

We typically use two top-level metrics when looking at revenue: MRR (monthly recurring revenue) and NRR (net revenue retention).

The easiest way to see these is on the go/revenue dashboard. These queries were built by Tim.

FAQs

_Can I give a customer a discount?_

Generally, no need - we already have usage-based pricing which is _heavily_ discounted at higher volumes, and we only bill month-to-month, so customers don't need to feel locked into a longer-term contract. If it's high volume (B2C) we can do this on an ad-hoc basis.

_How do I work with a customer who wants to sign an MSA?_

This occasionally happens when we are dealing with very large companies, who may prefer to sign an MSA due to their internal procurement processes or to have the security of a locked-in contract from a pricing perspective. We have a contract version of our standard terms and conditions that we can use for this - ask Charles.

We'd only really look to do this with people spending $20k+ per year - we don't do it below this value because of the legal effort required.

_How do I find out a customer's usage?_

The tool we currently use for this is Pocus, which combines revenue, PostHog, and HubSpot data in one place. You can search for the org/user/domain using cmd + k and the popover should give a deep dive into usage across products, revenue, engagement, etc.

_Can a customer transfer from self-hosted (e.g. Open Source) to Cloud?_

Yes! See migration tools repo for events and the migrate meta repo for everything else.

_Can a customer transfer from Cloud to Self-hosted?_

Yes! Raise a support ticket in the app under Data Management.

_What if the customer knows their user volumes but has no idea about number of events?_

A good approach is to point them to our Downsampler app and set it to capture only, say, 1% of users. If they then go to their billing page, they can see the events count. Multiplying this by 100 will indicate their likely actual volume, without creating a ton of risk that they spend too much money.

We also did a study on PostHog Cloud, and most companies were within the range of 50-100 events per user per month.
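Both estimation methods boil down to simple arithmetic. A minimal sketch - the function names and example figures are illustrative, not part of any PostHog tooling:

```python
# Two rough ways to estimate a customer's monthly event volume.

def estimate_from_sample(sampled_events: int, sample_rate: float = 0.01) -> int:
    """Scale up events captured under Downsampler sampling (e.g. 1% -> x100)."""
    return round(sampled_events / sample_rate)

def estimate_from_users(monthly_users: int,
                        events_per_user: tuple[int, int] = (50, 100)) -> tuple[int, int]:
    """Heuristic range: most companies land at 50-100 events per user per month."""
    low, high = events_per_user
    return monthly_users * low, monthly_users * high

# A customer sampling 1% of users who sees 12,000 events on their billing page:
print(estimate_from_sample(12_000))   # 1200000, i.e. ~1.2m events/month
# A customer with 40,000 monthly users:
print(estimate_from_users(40_000))    # (2000000, 4000000), i.e. 2m-4m events/month
```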

_What privacy features does PostHog offer?_

_What apps are available?_

We have the full list here. We also accept apps built by the community, which we audit first before adding to the list.

Procuring and selling via AWS

Growth | Source: https://posthog.com/handbook/growth/sales/selling-via-aws

PostHog is now available on AWS Marketplace for SaaS products. We've chosen not to make it a public offering: it is "listed" but only available via AWS "Private Offers", which means we create custom order forms for each customer through AWS that they accept via their portal.

AWS Marketplace lets vendors use their own terms and MSA. For now, PostHog team members set the price as a lump sum credit purchase for annual pre-payment only. Down the road, if we change our listing to public on the marketplace, we could set up usage-based billing through AWS (but that's future state).

Why this matters

  1. Our ICP lives in AWS - Product engineers already have AWS access and budget. Adding PostHog to their AWS bill just makes sense since we're part of their product infrastructure stack
  2. Procurement bypass - Organizations have bigger, more flexible AWS bills. Way easier to add a line item there than set up a whole new vendor
  3. Customer kickbacks - Buyers get ~3% of purchase price back as AWS credits (sweet deal for them)
  4. TAM incentives - AWS TAMs get SPIF'd for marketplace sales (we should apply to ISV Accelerate to fully capitalize on this)

Current requirements

For now, we're keeping it simple:

Using Clazar for private offers

Since AWS Marketplace can be a pain to navigate, we're using Clazar to manage this. Clazar ties private offers directly to Salesforce (something AWS doesn't do natively). Future state: would be nice if QuoteHog could create these directly too!

Initial setup

Before you start:

Creating a private offer via Clazar

Option 1: Via Salesforce

  1. Open the opportunity in Salesforce
  2. Navigate to the AWS Private Offers widget (should be on the opportunity page)
  3. Click "Create Private Offer" - Clazar pre-fills most fields from the opportunity
  4. Fill in the required fields:
  5. Configure pricing:
  6. Choose EULA type:
  7. Review and submit - takes ~45 minutes to generate in AWS (yes, it really takes that long sometimes)

Option 2: Via Clazar platform

If you need more control or Salesforce isn't cooperating:

  1. Log into Clazar at app.clazar.io
  2. Navigate to Private Offers in the main menu
  3. Click "Create New Private Offer"
  4. Fill in buyer details:
  5. Configure offer details:
  6. Set dimensions and pricing:
  7. Legal terms:
  8. Review the status tab - should be green when ready
  9. Submit the offer

After creating the offer

  1. Wait for generation (~45 minutes)
  2. Clazar sends notifications when the offer is live
  3. Share with customer:
  4. Track in Salesforce - status syncs automatically via Clazar

Common issues & solutions

Customer can't see the offer:

Offer needs changes after creation:

Payment not showing up:

Pro tips

Shared Slack Channels with Customers

Growth | Source: https://posthog.com/handbook/growth/sales/slack-channels

We offer shared Slack channels to customers and prospective customers in several circumstances:

We use shared Slack channels to provide timely support and to build relationships with those at our customers shipping things with PostHog.

Shared Slack channels allow many folks at PostHog to support our customers, but a shared Slack channel must be configured correctly for this support to work.

Setting up a Shared Slack Channel via Slack Connect

We use Slack Connect to share Slack channels with our customers.

To get a shared Slack channel going, follow these steps:

  1. Create a new Slack channel - the expected syntax for the name is posthog-[customername].
  2. When determining the [customername], make sure to make it searchable (avoid acronyms, if possible).
  3. Obviously, invite the relevant customer folks! Be sure that you're inviting them to the channel you've created and not our Slack workspace.
  4. Invite certain leaders who want to help monitor the channel, including: Tim, Charles, Abigail, Simon, your team lead and anyone else internal who may be connected to the customer. PostHog folks will sometimes join the channel if they're interested in the customer or the use-case
  5. Invite Pylon to ensure those from PostHog and the customer can create support tickets from Slack threads - use the /invite @pylon slash command in Slack. Pylon will join and prompt you with some questions. Note that this is a customer channel, and select yourself as the channel owner. If your name is not available in the dropdown, you can log in to Pylon and add yourself as the owner from the Pylon UI. You can check that the connection is established in the "Account Mapping" section in Pylon.
  6. Set your preferences to "Get notifications for all messages" in the channel -- this will ensure you don't miss a message and allow for speedy support.
  7. Ensure that the Slack channel name is recorded on the relevant Salesforce Account record in the Slack Channel field (Pylon should sync this automatically) -- If the [customername] in Slack is different from the Account record name in Salesforce, Pylon will not automatically match the two.
  8. Grab the Admin Panel link (from Vitally under PostHog Default Dashboard) and in the channel add this as a new link. Name it Org Link and add a new folder called Support. This is helpful to our Support Team for quickly accessing the customer's account when questions are posted in Slack.
  9. Add your role and title to the channel description (e.g. Technical CSM: FirstName LastName). This will help team members identify who's the main point of contact for this customer.

If you have any questions as you go, ping your colleagues for support in your team channel.

Editing a Pylon integration with Slack

If you accidentally set the wrong channel for a feed or mess up some other Pylon settings, you need to log into the Pylon admin to change it. You can SSO via Slack; just put in your PostHog email address. Once logged in, click on Accounts in the left rail, find the account you need to change, click on it, and in the right rail you will see the Slack integration settings.

Using MS Teams via Pylon

Some customers may wish to use MS Teams rather than Slack - we can sync our Slack with Teams via Pylon to do this. First you will need an MS Teams licence - ask Simon or Dana for one. Then set up a Slack channel according to the instructions above, and follow these steps:

  1. In Teams go to "See all your teams" and then "Create team". When naming the team, the expected syntax is `CustomerName-PostHog`. Make sure to set the team type to Public and name the first channel "Shared", then finish creating the team.
  2. Now go to the team you created and open the Apps tab. Click "Get more apps" and open the PostHog Team app that's under "Built for your org". Select the Shared channel from your newly created team.
  3. On this page in Pylon you should see your new team listed; use the search dropdown to connect it to the Pylon account associated with the customer.
  4. Before adding the customer to the team, test it on both sides to ensure the integration is working correctly. After you test it, invite the relevant customer folks by adding them as members to the team!

Onboard Your Customer to Slack Support

Welcome them to the channel when they join!

Set context for the channel's purpose and timing (if applicable). Let them know that they may hear from anyone at PostHog who is monitoring the channel, and also don't miss the opportunity to train them how to open a ticket with the Pylon app.

A message like this one does wonders to help them understand how to open a ticket if you're not online to help yourself:

We also have an app here that will open a Zendesk ticket if I'm sleeping. You only have to add the :ticket: emoji to the thread and it will open a Zendesk ticket automatically, and capture the back and forth in the specific Slack thread that received that emoji. You can also @support in a thread to open a ticket. It's a good habit to get into in order to make sure our distributed team can help.

The New sales playbook has more on ensuring that the customer is set up for success.

TAM excellence

Growth | Source: https://posthog.com/handbook/growth/sales/tam-excellence

A question we often get asked is: "What makes an excellent TAM?" This touches almost every stage of the role, from candidates (what would I have to do to succeed at this potential role?) to new hires (where should I be spending my time?) to folks in their first year (how do I know I'm doing well?) to old-timers (how do I keep up with the rest of the team?).

An excellent TAM:

| General Principle | Specific Examples |
|---|---|
| Distinguish yourself from other vendors with personalized outreach | Record personalized videos, submit PRs for a customer's software, send personalized food or merch, sign up for their product, invite them to events, or make donations in their name |
| Dig into a customer, find what will be valuable to them, and surface it in a way that deepens the relationship | Get the content right — saving money, expert recommendations, ideas for success beyond PostHog — and deliver it in a way that earns attention |
| Take ownership of a customer problem and see it through to resolution | Even when a full fix isn't possible, owning the issue and driving it to conclusion builds trust — people notice when you see things through |
| Build relationships for the long term so that today's work pays off a year from now | Don't write off a quiet or unresponsive customer — the relationship being built now is for next year's expansion, not this quarter's |
| Balance cross-sell with non-sales value so customers feel helped, not sold to | Avoid making every interaction a revenue conversation — give them value that doesn't require opening the wallet |
| Show sincere interest in the customer's business and back it up with real knowledge | Congratulate them on product launches, give feedback on their product, leave them reviews — and actually know their product well enough to do so |
| Have the technical depth to get hands dirty and help with technical questions | You shouldn't be implementing for customers as a rule, but being capable of it demonstrates you understand enough to be genuinely useful |
| Don't accept "we're good" as a final answer — keep engaging to find where you can help | "Talk to us in 6 months" usually means "don't upsell me" — respond with concrete suggestions on cost reduction or real pain points instead |
| Regularly share learnings — both wins and failures — publicly with the team | When you learn something (especially from a mistake), sharing it helps others avoid the same issues and raises the whole team's effectiveness |
| Develop a sense for when an account is at risk and act proactively | Be close enough to your accounts that you can detect when something feels off, and address it before it becomes a problem |
| Balance directness and transparency with knowing when to give a customer space | Understanding what the right balance looks like for each individual customer is the art of the role |
| Spot gaps in the sales process and proactively fix them rather than complain | An excellent TAM doesn't sit on process frustrations — they take ownership and make improvements |
| Identify the customer's key business goals and align PostHog directly to those outcomes | If a customer wants to increase conversions or grow premium plans, show specifically how PostHog helps reach that goal — make it essential, not just a nice-to-have |
| Enable and upskill your customer champion so they look good within their org | Do the hard work for your champion, then let them take all the credit |
| Be a relentless advocate for customer interests internally | TAMs are closest to customers — engineers need to hear from you about how customers are actually feeling |
| Continuously grow your PostHog expertise to keep pace with the product | PostHog is constantly changing — never shy away from selling a new product because of unfamiliarity; learn it |

Team lead responsibilities

Growth | Source: https://posthog.com/handbook/growth/sales/team-leads

General principles

As team lead in a customer-facing role you'll be responsible for making sure that your team is exceeding expectations when it comes to their specific role at PostHog. This generally means that:

  1. They have a solid plan for any managed customers in their book of business or deals in their pipeline
  2. They are proactively building relationships with their customers, even those who are hard to engage with
  3. They are flagging any potential churn as soon as they become aware of it
  4. You are proactively helping them when they are struggling with what to do next on a customer or deal
  5. You are providing continuous feedback to them, especially when their performance is below expectations

Team-specific responsibilities

Product-led Sales

Technical Account Managers (TAMs) own a book of business of nominally around 15 customers with an ARR of $1.5m, and also look to bring new customers into that book via product-led leads.

Customer Success

Customer Success Managers (CSMs) own a book of business of around 30 customers with an ARR of $1.5m and focus mainly on keeping them as customers (retention).

Onboarding

The Onboarding team operates at scale, supporting hundreds of customers whose MRR falls below the TAM/CSM threshold. As a result, the number of customers in the program, as well as the ARR represented, can fluctuate from month to month. The team is currently focused on customers whose first bill is forecasted at $500+ MRR. Its north star metric is maintaining 90% logo retention through customers’ first three months.

Other things we collectively need to stay on top of

Tracking new products

When we know that a new product is being launched, we need to ensure that Vitally tracking is in place for that product before it is launched. This involves:

  1. Update the Postgres integration to ensure that we are tracking the following traits for the product:
  2. Once the traits are in place, create a success metric to capture the product's data usage, if applicable.
  3. If there is a specific engagement to track how people use the product in PostHog, add it to the Vitally engagement events Action.

Incident comms

We need to ensure that teams are able to proactively follow our incident comms process. We're not quite ready for a full on-call rotation yet, but Simon or Dana take the lead as Communications Manager On-Call (CMOC) when an incident is declared in EU working hours, and Landon or Tyler take the lead when it's during the US working day.

Trials

Growth | Source: https://posthog.com/handbook/growth/sales/trials

Prerequisites

Process for giving a customer a free trial:

  1. Log in to Billing with your Google SSO login.
  2. Click the Trials link on the left sidebar.
  3. Click the Add Trial button (top right).
  4. Fill out the trial form.
  5. Click Save.
  6. The next time that customer visits PostHog, their AvailableFeatures will be updated to reflect the standard premium features (they might have to refresh their page to properly sync the new billing information).
  7. Once the trial end date passes, their AvailableFeatures will be reset to the free plan unless they have subscribed within this time.

Additional steps for existing customers with paid subscriptions

For customers with existing paid subscriptions, we need to complete additional steps to make sure they are billed correctly.

Important: Ask Mine to update Stripe and the billing admin so she can make sure revenue numbers are unaffected and the customer isn't billed while on trial.

  1. Follow the steps above to create a trial.
  2. Remove the Stripe Subscription ID in the Billing Admin (keep the Stripe Customer ID).
  3. Set all products in the product map to a free status.
  4. Cancel the subscription in Stripe so the customer is not billed during the trial.
  5. Create a new subscription before the trial ends and update the Billing Admin so the customer's experience isn't affected when transitioning back to a paid plan.

If they need a shared Slack channel as part of the trial, follow these instructions.

Consider framing a collaborative plan for the trial period with timed objectives. For new customers, depending on their level of engagement, we can use a detailed success plan.

Turning knowledge into agent skills

Growth | Source: https://posthog.com/handbook/growth/sales/turning-knowledge-into-agent-skills

Our documentation is a critical piece of PostHog's context flywheel – a system that connects our codebase to our docs, which then feeds into our AI agents, the Wizard, and PostHog Code. When documentation is outdated, the agents that help customers integrate PostHog become outdated too.

This means your knowledge directly powers our AI tools. When you write down what you know, it doesn't just help humans – it helps robots help customers faster.

Why your contributions matter

The team has automated writing documentation from PR merges using InKeep, which indexes our codebase and docs to create first-pass drafts. But there's knowledge that only comes from working directly with customers:

This knowledge lives in your head. When you write it down in the handbook, it can be transformed into skills – portable packages of context that the AI Wizard can use to help customers.

How to contribute

1. Write it in the handbook first

The handbook is the appropriate place to document playbooks, processes, and tribal knowledge. We have Markdown rendering for both the documentation and handbook, so content can flow between them.

Good things to document:

2. Make it actionable

The context mill transforms handbook content into skills for the AI Wizard. To make your content skill-ready:

For example, the Logs skill is just a root prompt with some reference material – it's that simple.

3. Think about automation

When you write something down, ask: "Could an agent do this automatically?"

For example, if you write a playbook for gathering a customer dossier, that could become an automated web search agent task. If you document how to identify cross-sell opportunities, that becomes a skill the Wizard can use.

What gets turned into skills?

Skills are defined using a YAML specification that allows for different variants based on app detection and context. The team currently has over 60 skills the Wizard can use.

Your handbook contributions can become skills that:

The Wizard and how it helps customers

The AI Wizard is a one-line npx command that runs an agent to integrate PostHog:

npx -y @posthog/wizard@latest

Here's what it does automatically:

This dramatically reduces manual integration work – what might take 3-5 hours happens in minutes. And it produces customized code tailored to each customer's setup.

The Wizard is agentic software that runs on our docs. When you write something down, the Wizard can execute it as a skill – it's like turning documentation into executable code.

So ask yourself: if an agent can read, analyze, and understand a user's codebase, what else could it discover or build to help the user get value from PostHog faster?

Those answers can become Wizard skills – and new ways of creating customers.

How this helps you help customers

Close the gap in customer conversations

The Wizard and skills architecture lets you close the gap between customer-facing hypotheticals and technical diagnostics without needing engineering present. If a customer asks "how would I track X?", you can point them to the Wizard or a specific skill.

Portable diagnostics

If customers are hesitant to run an agent on their codebase, they can receive the open-source skills package directly. This gives them 80-95% of the Wizard's functionality to run locally with their own tools.

Better onboarding

For new customers, the Wizard provides an excellent launchpad – 10-15 best-practice events that help them overcome the initial difficulty of deciding what to track. The best approach is to:

  1. Run the Wizard first
  2. Review what it did together with the customer
  3. Come up with a plan and tweaks

The Wizard helps you get past the blank page. It's much easier to iterate from there!

Getting started

  1. Identify something you repeat: What do you explain to customers often?
  2. Write it down: Create a handbook page or add to an existing one
  3. Make it specific: Include actual steps, code, or configurations
  4. Let the team know: Share in Slack that you've added something that might make a good skill

The written-down knowledge also enables automation beyond the Wizard – like having agents gather customer dossiers based on your specifications or analyze competitor implementations before a call.

Your knowledge is valuable. Write it down, and it becomes executable.

User event streams

Growth | Source: https://posthog.com/handbook/growth/sales/user-event-streams

Using PostHog's data pipelines (CDP), you can create a real-time feed of customer activity directly in Slack. This lets you monitor how users in your book of business are engaging with PostHog without constantly checking dashboards or running queries.

This is valuable for getting a pulse on account health and engagement patterns. You'll see who's actively using the product, which features they're exploring, and when they might be hitting friction points. It's not meant to replace proper data analysis, but it gives you the "vibes" and can help you time your outreach more effectively. For example, if you notice someone reading a lot of feature flag docs and then creating several flags, you know they're actively working on something and might appreciate a quick check-in.

A word of caution: don't be a creep about this. Use it to inform when and how you reach out, not to surveil every click. If you notice someone's activity suggests they need help, check in naturally without revealing you're monitoring their every move.

How to set up your event stream

1. Get your account organization IDs

Query PostHog's data warehouse to pull Salesforce data and create a CSV of all PostHog organization IDs for accounts you own. This gives you the list of orgs to monitor.
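A minimal sketch of what that CSV step might look like, assuming the warehouse query has already returned rows joining Salesforce accounts to PostHog organizations (the field names and IDs here are illustrative, not the real schema):

```python
import csv

# Hypothetical rows returned from a warehouse query joining Salesforce
# accounts to PostHog organizations (field names are assumptions).
accounts = [
    {"account_name": "Acme Corp", "posthog_org_id": "org-aaaa-0001"},
    {"account_name": "Hooli", "posthog_org_id": "org-bbbb-0002"},
]

# Write the org IDs to a CSV you can paste into the destination filter.
with open("my_org_ids.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["account_name", "posthog_org_id"])
    writer.writeheader()
    writer.writerows(accounts)
```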

2. Create a CDP destination

Set up a new data pipelines destination using a webhook endpoint. This is where you'll send the filtered events.

3. Filter for your accounts

Configure the destination to filter for all events where the organization ID matches your CSV list of org IDs. This ensures you only see activity from your accounts.
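Conceptually, the filter is just a set-membership check on the event's organization ID. A sketch of the logic, assuming the org ID lives in the event's properties (the exact property name in your destination config may differ):

```python
# Org IDs from the CSV built in step 1 (illustrative values).
my_org_ids = {"org-aaaa-0001", "org-bbbb-0002"}

def is_my_account(event: dict) -> bool:
    """Keep only events from organizations in my book of business."""
    return event.get("properties", {}).get("organization_id") in my_org_ids

events = [
    {"event": "insight created", "properties": {"organization_id": "org-aaaa-0001"}},
    {"event": "insight created", "properties": {"organization_id": "org-not-mine"}},
]
mine = [e for e in events if is_my_account(e)]  # only the first event passes
```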

4. Select relevant events

Add filters for events that represent meaningful user actions across product areas:

Product analytics & insights:

Session replay:

Feature flags & experiments:

Surveys:

AI & Max:

Data pipelines:

Error tracking:

Engagement & product intent:

Billing & account health:

Team growth:

5. Customize the payload

Modify the webhook payload to include:

6. Route to Slack

Send the data to a Slack App endpoint, Zapier, or Relay.app to transform and redirect the events to your personal channel like your-name-alerts or your-name-user-event-stream.
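If you route through your own transform step, the work is just reshaping the webhook payload into a Slack message body. A sketch, assuming a hypothetical payload shape and channel name:

```python
def to_slack_message(event: dict) -> dict:
    """Turn a webhook event payload (shape is an assumption) into a
    Slack chat.postMessage-style body for your personal alerts channel."""
    props = event.get("properties", {})
    text = (
        f"*{props.get('account_name', 'Unknown account')}* — "
        f"{props.get('user_email', 'someone')} triggered `{event['event']}`"
    )
    return {"channel": "#your-name-user-event-stream", "text": text}

msg = to_slack_message({
    "event": "recording analyzed",
    "properties": {"account_name": "Acme Corp", "user_email": "dev@acme.com"},
})
```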

What you'll get

A real-time feed of user activity that helps you:

Remember: this is supplementary context, not a replacement for proper account analysis and data review.

Matching PostHog to a business type

Growth | Source: https://posthog.com/handbook/growth/sales/utilization-by-business-type

This guide provides detailed instructions on how to achieve key business metrics using PostHog. Each business type has specific metrics that matter most, and this guide shows you exactly how to set up PostHog to track and optimize for those metrics.

B2B SaaS

Common business problems & personas

B2B SaaS companies often grapple with a core set of challenges that directly impact growth and sustainability:

Key business problems
Primary personas & their pain points

Product managers

Customer success managers

Sales teams

Marketing teams

Executives

Key metrics & PostHog

MRR/ARR (monthly/annual recurring revenue)

Importance: Measures the predictable revenue a SaaS business generates monthly or annually. It's crucial for forecasting, valuation, and understanding the company's financial health and growth trajectory.

PostHog approach: Track subscription events (subscription_created, subscription_upgraded, etc.) with properties like plan_tier, amount, and currency. PostHog helps analyze conversion funnels (e.g., trial_started to subscription_created), visualize revenue retention with cohort analysis on dashboards, and set up alerts for significant MRR changes. For non-technical users, autocapture on pricing pages and CTAs can power no-code funnels and session recordings to optimize conversion flows and pricing interactions.
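As a sketch of what such an event looks like on the wire, here is an illustrative JSON body in the shape PostHog's capture endpoint accepts (the property names like `plan_tier` are example conventions, not required fields):

```python
import json

# Illustrative capture payload for a subscription event; property names
# are conventions from the text above, not a fixed schema.
payload = {
    "api_key": "<project_api_key>",
    "event": "subscription_created",
    "distinct_id": "user_123",
    "properties": {
        "plan_tier": "scale",
        "amount": 450.0,
        "currency": "USD",
    },
}
body = json.dumps(payload)
```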

CAC (customer acquisition cost)

Importance: The average cost to acquire a new customer. Understanding CAC is vital for marketing efficiency, profitability, and ensuring sustainable growth.

PostHog approach: Track marketing touchpoints (ad_clicked, demo_scheduled) and lead generation form submissions with properties like source, campaign, and UTM parameters. Integrate marketing spend data into PostHog for a unified view. Use funnel analysis to identify efficient acquisition channels and dashboards to visualize CAC trends by channel. Autocapture can track landing page visits and form submissions, enabling non-technical users to analyze lead quality by traffic source and optimize landing page UX with session recordings.

LTV (lifetime value)

Importance: The total revenue a business expects to generate from a single customer relationship over their lifetime. A high LTV indicates strong customer relationships and product value, enabling higher CACs and more aggressive growth strategies.

PostHog approach: Track all revenue-generating activities (subscription_payment, addon_purchase, upgrade) with customer segment and acquisition properties. Conduct cohort analysis for revenue retention and correlation analysis to identify high-value behaviors. PostHog's predictive analytics can forecast LTV. For non-technical users, autocapture can track feature usage and upgrade page visits to understand engagement patterns that correlate with high LTV, allowing for dashboards showing feature adoption by segment and alerts for potential churn signals impacting LTV.

Churn rate

Importance: The rate at which customers cancel their subscriptions or cease to use a service. High churn is detrimental to growth and directly impacts MRR/ARR and LTV, highlighting product-market fit or customer experience issues.

PostHog approach: Monitor engagement and usage patterns (feature_used, login, session_started) with properties like user_activity_level and feature_adoption. Use session recordings to understand behavior of churned users and correlation analysis to pinpoint churn indicators. Set up automated churn prediction models and alerts for at-risk users. Non-technical users can leverage autocapture to track declines in activity, analyze pages churned users stop visiting, and use session recordings to review churned user journeys.

NPS (net promoter score)

Importance: A widely used metric to gauge customer loyalty and satisfaction, indicating a customer's willingness to recommend a product or service. High NPS often correlates with retention and expansion revenue.

PostHog approach: Implement in-app NPS surveys using PostHog's survey feature. Track nps_survey_submitted events with user segment and usage properties. Analyze correlations between NPS and product usage patterns. Non-technical users can easily create surveys, configure triggers, and track completion rates. Dashboards can show NPS trends by segment, and session recordings can analyze user interactions with survey prompts to optimize feedback collection.
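For reference, NPS itself is computed from 0-10 survey scores as the percentage of promoters (9-10) minus the percentage of detractors (0-6):

```python
def nps(scores: list[int]) -> float:
    """NPS = % promoters (9-10) minus % detractors (0-6); passives (7-8) dilute both."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# 3 promoters, 1 passive, 2 detractors out of 6 responses -> NPS of about 16.7
score = nps([10, 9, 9, 7, 6, 2])
```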

Feature adoption

Importance: Measures the extent to which users discover, use, and continue to use specific product features. High feature adoption indicates that users are deriving value, which is crucial for retention, upsell opportunities, and validating product development efforts.

PostHog approach: Track granular feature usage (feature_accessed, feature_completed) with feature name and user segment properties. Use funnel analysis for onboarding flows and session recordings to identify friction. Implement feature flags for controlled rollouts and A/B testing for optimization. Non-technical users can use autocapture for feature page visits and button clicks, analyze user journeys to feature discovery, and create dashboards for adoption rates. Alerts can be set for changes in feature usage.

B2C SaaS

Common business problems & personas

Key business problems
Primary personas & their pain points

Product Managers

Growth Teams

Customer Success Teams

Marketing Teams

Mobile Teams

Key metrics & PostHog

User Activation Rate

Importance: Measures the percentage of new users who complete key onboarding steps and experience the product's core value. High activation is crucial for retention and indicates a successful onboarding experience.

PostHog approach: Track activation events (account_created, onboarding_completed) with properties like activation step and acquisition source. Use funnel analysis to optimize time-to-value, and cohort analysis to track activation rates on dashboards. Session recordings can help identify activation friction points, and alerts can be set for activation rate drops. Non-technical users can use autocapture for onboarding page visits and tutorial interactions to create no-code funnels and analyze user behavior.

Daily/Monthly Active Users (DAU/MAU)

Importance: Measures user engagement and product stickiness by tracking the number of unique users who interact with the product on a daily or monthly basis. A high DAU/MAU ratio indicates strong, consistent user value.

PostHog approach: Track user activity events like session_started and feature_used with properties such as user segment and session duration. Create dashboards for real-time DAU/MAU tracking and trend analysis. Calculate stickiness (DAU/MAU ratio) and use cohort analysis to track engagement over time. Alerts can be configured for significant engagement drops. Autocapture can track page visits and feature interactions, enabling non-technical users to analyze engagement patterns and identify popular features.
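The stickiness calculation mentioned above is just average DAU over the period divided by MAU:

```python
def stickiness(daily_actives: list[int], mau: int) -> float:
    """DAU/MAU ratio: average daily active users divided by monthly active users."""
    return sum(daily_actives) / len(daily_actives) / mau

# An average of 300 daily actives out of 1,000 monthly actives -> 30% stickiness.
ratio = stickiness([280, 300, 320], 1000)
```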

Customer Lifetime Value (CLV)

Importance: Represents the total revenue a business can expect from a single customer account throughout their relationship. CLV is a key indicator of long-term profitability and customer loyalty.

PostHog approach: Track all revenue events (subscription_started, purchase_made) with properties like purchase amount and acquisition source. Use cohort analysis to analyze CLV by acquisition month and correlation analysis to identify high-value behaviors. PostHog's predictive analytics can be used for CLV forecasting. For non-technical users, autocapture on purchase pages and upgrade buttons helps track the user journey to purchase and identify which features drive upgrades, with session recordings providing insights into purchase behavior.

Viral Coefficient

Importance: Measures the number of new users an existing user generates, indicating the effectiveness of viral loops and word-of-mouth growth. A coefficient greater than one signifies exponential growth.

PostHog approach: Track viral events like referral_sent and invitation_accepted with properties such as referral type and conversion rate. Use funnel analysis to optimize referral flows and A/B test referral incentives and messaging. Dashboards can show viral coefficient trends. Non-technical users can use autocapture to track share button clicks and referral page visits, using session recordings to understand and optimize referral behavior.
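The viral coefficient (K) referenced above is the product of invites sent per user and the invite conversion rate, with K greater than one indicating each user generates more than one new user:

```python
def viral_coefficient(invites_per_user: float, conversion_rate: float) -> float:
    """K = invites sent per user x fraction of invites that convert."""
    return invites_per_user * conversion_rate

# Each user sends 4 invites and 30% convert -> K = 1.2, i.e. viral growth.
k = viral_coefficient(4.0, 0.3)
```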

User Retention Rate

Importance: The percentage of users who continue to use the product over a given period. It's a critical metric for sustainable growth, reflecting long-term product value and user satisfaction.

PostHog approach: Track retention events like user_returned and session_started. Create retention dashboards with cohort analysis by acquisition source to track trends over time. Use session recordings to understand the behavior of retained users and correlation analysis to identify key retention-driving features. Set up automated alerts for retention drops. Autocapture allows non-technical users to track user return patterns and feature usage that correlates with retention.

Mobile App Performance

Importance: Measures the responsiveness, stability, and overall user experience of a mobile application. Good performance is essential for user satisfaction and retention on mobile devices.

PostHog approach: Track mobile-specific events like app_opened and app_crashed with properties such as app version and device type. Use PostHog's real user monitoring for performance and Core Web Vitals tracking. Create mobile performance dashboards, set up crash monitoring with alerts, and use session recordings to identify mobile-specific UX issues. Non-technical users can leverage autocapture to track mobile interactions and compare mobile vs. desktop usage patterns.

E-commerce

Common business problems & personas

Key business problems
Primary personas & their pain points

E-commerce Managers

Marketing Teams

Product Teams

Customer Service Teams

Inventory Managers

Key metrics & PostHog

GMV (Gross Merchandise Value)

Importance: Represents the total value of all goods sold over a specific period. GMV is the primary measure of an e-commerce platform's scale and is essential for understanding top-line growth and market share.

PostHog approach: Track all purchase events (product_viewed, add_to_cart, purchase_completed) with properties like product category, price, and quantity. Connect to your e-commerce platform for comprehensive data. Create dashboards for real-time GMV tracking and product performance analysis by category. Use cohort analysis to track customer value over time and set up alerts for unusual GMV patterns. For non-technical users, autocapture on product pages and "add to cart" buttons can track the conversion journey and identify popular products, with session recordings helping to optimize product pages.

AOV (Average Order Value)

Importance: The average amount spent each time a customer places an order. Increasing AOV is a key strategy for maximizing revenue without increasing the number of customers, directly impacting profitability.

PostHog approach: Track cart and purchase events (cart_updated, purchase_completed) with properties like cart value and discount applied. Use funnel analysis to optimize the cart and identify abandonment points. A/B test pricing and product recommendations to find effective upselling strategies. Use correlation analysis to identify behaviors of customers with high AOV. Non-technical users can use autocapture to track interactions on the cart page, analyze abandonment patterns, and use session recordings to optimize the checkout flow.

Conversion Rate

Importance: The percentage of visitors who complete a purchase. This is a critical metric for gauging the effectiveness of the entire customer journey, from landing page to checkout, and is a primary indicator of site performance and user experience.

PostHog approach: Track all steps in the conversion funnel (page_viewed, product_viewed, add_to_cart, checkout_started, purchase_completed) with properties like traffic source and device type. Create comprehensive conversion funnels to identify drop-off points and use session recordings to understand checkout friction. A/B test checkout flows and product pages to optimize the user path. Non-technical users can use autocapture to track all funnel page visits and interactions, creating funnels and using session recordings to optimize conversion paths with no code.

Cart Abandonment

Importance: The rate at which users add items to their cart but leave without completing the purchase. A high cart abandonment rate often indicates friction in the checkout process, unexpected costs, or a poor user experience.

PostHog approach: Track cart interactions like add_to_cart and remove_from_cart. Use session recordings to understand the behavior of users who abandon their carts, and implement exit-intent surveys to gather direct feedback on abandonment reasons. Create funnels that specifically track the checkout process to pinpoint exact drop-off points. This data can inform cart abandonment recovery strategies. Non-technical users can use autocapture to track all cart page interactions and build abandonment funnels to analyze user behavior.

Customer Lifetime Value

Importance: The total revenue a business can expect from a single customer throughout their relationship. CLV is vital for making strategic decisions about marketing spend, customer acquisition, and retention efforts, ensuring long-term profitability.

PostHog approach: Track all customer interactions, including purchase_completed, return_requested, and support_contacted, with properties like purchase history and acquisition source. Create cohort analyses by acquisition month to understand how customer value evolves. Use correlation analysis to identify behaviors of high-value customers and PostHog's predictive analytics for CLV forecasting. Non-technical users can use autocapture on account and order history pages to track engagement patterns and use session recordings to understand high-value customer behavior.

Marketplace

Common business problems & personas

Key business problems
Primary personas & their pain points

Marketplace Operations Managers

Trust & Safety Teams

Product Managers

Growth Teams

Customer Success Teams

Key metrics & PostHog

GMV (Gross Merchandise Value)

Importance: Represents the total value of all transactions between buyers and sellers on the platform over a specific period. It is the primary indicator of a marketplace's scale, liquidity, and overall health, reflecting its ability to facilitate transactions and generate value for its users.

PostHog approach: Track marketplace transaction events like listing_viewed, booking_requested, and transaction_completed with properties such as category, price, seller_id, and buyer_id. Integrate with payment processors for comprehensive data. Use PostHog to create real-time GMV dashboards with breakdowns by category, set up seller and buyer performance tracking, conduct cohort analysis to monitor marketplace growth, and create alerts for unusual transaction patterns. Non-technical users can use autocapture to track listing views and booking requests, creating funnels to analyze the path to a completed transaction and using session recordings to optimize the user journey.

Take Rate

Importance: The percentage of GMV that the marketplace captures as revenue (commission or fees). It is a crucial metric for understanding the marketplace's business model effectiveness and profitability. Optimizing the take rate is key to sustainable growth.

PostHog approach: Track commission events like commission_earned from transaction_completed events, with properties for transaction amount, commission percentage, and category. Analyze revenue and profitability by category on dashboards. This allows for identifying opportunities to optimize the take rate, for example by analyzing its drivers with correlation analysis and setting up alerts for significant changes. Non-technical users can build dashboards to monitor take rate trends across different product categories or seller tiers, helping to inform pricing strategy without writing any code.
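As a worked example of the metric itself, take rate is simply platform revenue as a percentage of GMV:

```python
def take_rate(platform_revenue: float, gmv: float) -> float:
    """Take rate (%) = commission/fee revenue divided by total GMV."""
    return 100 * platform_revenue / gmv

# $150k in fees on $1m of GMV -> a 15% take rate.
rate = take_rate(150_000, 1_000_000)
```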

Supply/Demand Balance

Importance: Measures the equilibrium between the number of sellers (supply) and buyers (demand) on the platform. A balanced marketplace ensures a good user experience for both sides, preventing situations like too few products for buyers or too few customers for sellers, which can lead to churn.

PostHog approach: Track supply-side events (listing_created, service_offered) and demand-side events (search_performed, booking_requested). Use properties like category, location, and search terms to analyze supply-demand gaps on dashboards. Funnel analysis can reveal booking conversion rates, while alerts can notify of imbalances, helping to identify and act on new market opportunities. Non-technical users can create dashboards that visualize searches with no results, providing a simple way to spot unmet demand and guide supply-side growth efforts.

Network Effects

Importance: Measures how the value of the platform increases for users as more people use it. Strong network effects create a powerful competitive advantage (a "moat") and are the engine of sustainable, viral growth for marketplaces. It's what makes a marketplace more valuable as it scales.

PostHog approach: Track network interaction events like user_referred, invitation_accepted, and cross_side_activity (e.g., a user being both a buyer and seller). Use properties to distinguish user types. Dashboards can visualize network growth and viral coefficients. Cohort analysis is key to measuring how network effects develop over time for different user groups, and alerts can highlight opportunities for growth. Non-technical users can use autocapture on referral pages and share buttons to analyze the effectiveness of viral loops and optimize the user flow with session recordings.

Trust & Safety Metrics

Importance: Trust is the currency of a marketplace. These metrics, such as user ratings, review rates, fraud reports, and dispute rates, measure the level of safety and reliability on the platform. High trust is essential for encouraging transactions, retaining users, and building a strong brand reputation.

PostHog approach: Track trust-related events like review_submitted, dispute_filed, and fraud_detected, enriched with properties on user reputation and transaction history. Dashboards can monitor trust scores and fraud rates. Session recordings are invaluable for investigating suspicious user behavior and understanding how trust is built (or broken) in user flows. Set up alerts for fraud signals and use correlation analysis to identify key indicators of trust. Non-technical users can create surveys to collect user feedback on trust and use session recordings to review the user journey for those who file disputes.

Developer Tools

Common business problems & personas

Key business problems
Primary personas & their pain points

Developer Relations Teams

Product Engineers

Technical Documentation Teams

Developer Success Teams

Growth Teams

Key metrics & PostHog

Developer Adoption

Importance: Measures the rate at which developers start using a tool, from initial signup to making their first API call. It's the most critical top-of-funnel metric for developer tools, as it indicates the health of the onboarding process and the tool's initial appeal. High adoption is a leading indicator of future growth and product-market fit.

PostHog approach: Track key developer touchpoints like account_created, sdk_installed, and api_call_made with properties for tech stack and company size. Create adoption funnels to analyze the journey from first contact to active use, identifying drop-off points. Use cohort analysis to track developer retention over time and map the developer journey to understand common paths to success. Alerts can signal developer churn risk. Non-technical users, like DevRel teams, can build these funnels and dashboards without code to monitor adoption trends and measure the impact of their initiatives.
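The adoption funnel described above can be sketched as a small ordered-step count: how many developers reach each stage in sequence. The step names match the events suggested above; the data and function are illustrative, not a PostHog API.

```python
def funnel_counts(steps, user_events):
    """Count how many users reach each funnel step, in order.

    `user_events` maps a distinct_id to that user's ordered event names,
    mirroring PostHog events like `account_created` -> `sdk_installed`
    -> `api_call_made`.
    """
    counts = [0] * len(steps)
    for events in user_events.values():
        i = 0
        for ev in events:
            if i < len(steps) and ev == steps[i]:
                counts[i] += 1
                i += 1
    return counts

steps = ["account_created", "sdk_installed", "api_call_made"]
users = {
    "u1": ["account_created", "sdk_installed", "api_call_made"],
    "u2": ["account_created", "sdk_installed"],
    "u3": ["account_created"],
}
print(funnel_counts(steps, users))  # [3, 2, 1]
```

The biggest step-to-step drop (here, sdk_installed → api_call_made) is where onboarding effort should focus.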

API Usage

Importance: Tracks the frequency, volume, and patterns of API calls made by developers. This metric is vital for understanding which features are most valuable, how developers are integrating the product, and the overall health and performance of the API. It directly reflects product engagement and stickiness for a developer-focused product.

PostHog approach: Instrument all API endpoints to track events like api_request and api_error, with properties for the specific endpoint, response time, and error type. Create API performance dashboards to monitor usage, latency, and error rates in real-time. Set up alerts for performance degradation or spikes in errors. Use correlation analysis to understand which usage patterns are associated with retention or expansion. Non-technical users can use dashboards to see which endpoints are most popular and identify which customers are experiencing the most errors.
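As a concrete sketch of the per-endpoint monitoring above, the snippet below aggregates an error rate per endpoint from (endpoint, is_error) pairs — the kind of rollup an API performance dashboard would show. The data shape is hypothetical.

```python
from collections import defaultdict

def error_rates(api_events):
    """Per-endpoint error rate from (endpoint, is_error) tuples, mirroring
    `api_request` / `api_error` events with an endpoint property."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for endpoint, is_error in api_events:
        totals[endpoint] += 1
        if is_error:
            errors[endpoint] += 1
    return {ep: errors[ep] / totals[ep] for ep in totals}

events = [
    ("/capture", False), ("/capture", False),
    ("/capture", True), ("/decide", False),
]
rates = error_rates(events)
print(rates)
```

An alert would fire when any endpoint's rate crosses a threshold, e.g. 1%.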

Documentation Engagement

Importance: For developer tools, documentation is the product. This metric measures how developers interact with documentation, including page views, search queries, and time spent on pages. High engagement indicates that the documentation is useful and helps developers solve problems, which is critical for adoption and reducing support load.

PostHog approach: Track documentation interactions like docs_page_viewed, code_sample_copied, and tutorial_completed, with properties for the page, search terms, and user segment. Use session recordings to see where developers get stuck or confused. Analyze search patterns to identify content gaps and create dashboards to monitor documentation effectiveness. Non-technical users, like technical writers, can use these insights to prioritize content updates and improve the developer experience without needing to write code.

Community Growth

Importance: Measures the health and vibrancy of the developer community around a product (e.g., on GitHub, Slack, Discord). A growing, active community provides social proof, drives word-of-mouth adoption, offers scalable support, and is a rich source of product feedback. It acts as a moat and a powerful growth engine.

PostHog approach: Track community interactions from various platforms by sending events like forum_post_created, github_issue_opened, or community_event_attended. Use properties to segment by contribution level and topic. Create dashboards to monitor community engagement and growth trends. Use cohort analysis to track member retention and identify "power users" who can become community champions. Non-technical users, like community managers, can easily track these metrics to demonstrate the value of their programs.

Support Ticket Volume

Importance: The number of support tickets created by developers. While some tickets are expected, a high volume, especially on recurring themes, points to friction in the product, confusing documentation, or a poor onboarding experience. Analyzing this data is key to improving the product and reducing operational costs.

PostHog approach: Integrate your support system (e.g., Zendesk, Jira) with PostHog to track support_ticket_created and support_ticket_resolved events. Enrich these events with properties like ticket type, priority, and resolution time. Use correlation analysis to link support tickets to specific in-product behaviors or documentation pages, identifying the root cause of developer friction. Dashboards can help monitor support trends and efficiency. This allows non-technical team members to identify which product areas are generating the most support load.

Fintech

Common business problems & personas

Key business problems
Primary personas & their pain points

Risk & Compliance Teams

Product Managers

Engineering Teams

Customer Success Teams

Growth Teams

Key metrics & PostHog

Transaction Volume

Importance: Measures the total number or value of transactions processed by the platform. This is a fundamental indicator of a fintech product's adoption, usage, and overall scale. It directly impacts revenue and is a key signal of market traction and business health.

PostHog approach: Track all financial transaction events like transaction_initiated, transaction_completed, and transaction_failed with detailed properties such as transaction type, amount, currency, and user segment. Use dashboards for real-time monitoring of transaction volume and success rates. Correlation analysis can help understand what user behaviors lead to more transactions, and alerts can be set for unusual spikes or dips in activity. Non-technical users can build funnels to analyze the transaction flow and identify drop-off points without writing any code.
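To make the volume and success-rate rollup concrete, here is a hedged sketch in Python. The event names follow the ones suggested above; the dict-based event shape and amounts are invented for illustration.

```python
def transaction_summary(events):
    """Roll up transaction events into total volume and success rate.

    `events` is a list of dicts with `event` and `amount` keys, standing in
    for PostHog events like `transaction_completed` / `transaction_failed`.
    """
    completed = [e for e in events if e["event"] == "transaction_completed"]
    failed = [e for e in events if e["event"] == "transaction_failed"]
    attempts = len(completed) + len(failed)
    return {
        "volume": sum(e["amount"] for e in completed),
        "success_rate": len(completed) / attempts if attempts else 0.0,
    }

events = [
    {"event": "transaction_completed", "amount": 100.0},
    {"event": "transaction_completed", "amount": 50.0},
    {"event": "transaction_failed", "amount": 25.0},
]
print(transaction_summary(events))
```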

Fraud Rate

Importance: The percentage of transactions that are fraudulent. In fintech, managing fraud is critical for financial stability, maintaining user trust, and meeting regulatory obligations. A low fraud rate is essential for long-term viability and building a reputable platform.

PostHog approach: Track fraud and risk-related events such as fraud_detected, risk_assessment_failed, or verification_completed. Enrich this data with properties like risk factors, fraud type, and user behavior patterns. Session recordings are invaluable for investigating suspicious user behavior to understand fraud vectors. Create dashboards to monitor fraud rates in real-time and set up alerts for emerging fraud patterns. Non-technical risk teams can use session recordings to review suspicious sessions flagged by alerts.
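The alerting idea above can be sketched as a simple aggregation: flag any day whose fraud rate (`fraud_detected` count over total transactions) exceeds a threshold. The daily counts and the 1% threshold here are illustrative, not defaults from any real system.

```python
def fraud_alert(daily_counts, threshold=0.01):
    """Flag days where the fraud rate exceeds `threshold`.

    `daily_counts` maps a date string to (fraud_count, total_transactions),
    as you might aggregate from `fraud_detected` and transaction events.
    """
    flagged = []
    for day, (fraud, total) in sorted(daily_counts.items()):
        rate = fraud / total if total else 0.0
        if rate > threshold:
            flagged.append((day, round(rate, 4)))
    return flagged

counts = {"2024-01-01": (2, 1000), "2024-01-02": (15, 1000)}
print(fraud_alert(counts))  # [('2024-01-02', 0.015)]
```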

Compliance Metrics

Importance: Measures adherence to financial regulations like KYC (Know Your Customer) and AML (Anti-Money Laundering). For fintech companies, compliance is not optional; it's a license to operate. Tracking these metrics is crucial for avoiding fines, legal penalties, and reputational damage.

PostHog approach: Track all compliance-related events, such as kyc_started, kyc_completed, and aml_check_failed. Use properties to log the compliance type, status, and user segment. This creates a detailed audit trail for regulatory purposes. Dashboards can provide a real-time view of compliance status and help monitor the efficiency of these critical flows. Alerts can be configured to flag compliance failures, allowing teams to act quickly. Non-technical compliance officers can use funnels to analyze and optimize the KYC process.

Customer Acquisition Cost

Importance: The total cost to acquire a new, verified customer. Fintech often has high acquisition costs due to marketing, compliance, and verification expenses. Understanding and optimizing CAC is crucial for ensuring profitability and scaling the business sustainably.

PostHog approach: Track the entire acquisition funnel, from ad_clicked and account_opened to verification_completed and first_transaction. Enrich these events with properties like acquisition source, campaign, and verification costs. Use funnel analysis to identify drop-off points in the onboarding and KYC process. A/B testing can be used to optimize landing pages and onboarding flows to reduce CAC. Non-technical marketers can use dashboards to compare the CAC and LTV across different channels.
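To make the channel comparison concrete, here is a minimal sketch: CAC per channel as spend divided by verified customers, keyed by an acquisition-source property. All figures and channel names are made up for illustration.

```python
def cac_by_channel(spend, verified_customers):
    """Customer acquisition cost per channel: spend / verified signups.

    `spend` and `verified_customers` map channel -> totals, as you might
    aggregate from `ad_clicked` -> `verification_completed` events tagged
    with an acquisition-source property.
    """
    return {
        ch: (spend[ch] / verified_customers[ch])
        if verified_customers.get(ch) else None
        for ch in spend
    }

spend = {"search": 5000.0, "social": 2000.0}
customers = {"search": 100, "social": 25}
print(cac_by_channel(spend, customers))  # {'search': 50.0, 'social': 80.0}
```

Comparing these figures against LTV per channel shows where acquisition spend actually pays back.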

Regulatory Reporting

Importance: Tracks the company's ability to generate accurate and timely reports for regulatory bodies. Efficient and reliable reporting processes are essential for demonstrating compliance and avoiding penalties. While PostHog doesn't generate the reports themselves, it can monitor the internal processes that do.

PostHog approach: Track internal events related to the reporting process, such as report_generated, audit_trail_requested, and compliance_check_completed. Use properties to specify the report type and its status. This provides visibility into the operational health of the reporting systems. Dashboards can be used to monitor the success and timeliness of report generation, and alerts can be set up to flag any failures or delays in the process, ensuring the compliance team is aware of any issues.

Healthcare/Medtech

Common business problems & personas

Key business problems
Primary personas & their pain points

Clinical Teams

Compliance Officers

IT/Engineering Teams

Training Teams

Product Managers

Key metrics & PostHog

Patient Outcomes

Importance: This is the core metric for any healthcare product, measuring the actual health impact on patients. Demonstrating positive patient outcomes is crucial for clinical validation, provider adoption, regulatory approval, and building patient trust. It is the ultimate measure of product value and efficacy.

PostHog approach: Track key events in the patient journey, such as treatment_plan_started, outcome_measured, and follow_up_completed. Use properties to segment by treatment type, patient demographics, and specific outcome metrics. Cohort analysis can track how outcomes trend over time for different patient groups. Dashboards can visualize progress towards clinical goals, and correlation analysis can help identify which product features are linked to better outcomes. Non-technical users, like clinicians, can use dashboards to monitor patient progress without writing code.

Compliance Metrics

Importance: Healthcare is a highly regulated industry (e.g., HIPAA in the US). Compliance metrics track adherence to these regulations, particularly around data privacy and security. Failure to comply can result in severe penalties, loss of trust, and legal action, making it a foundational requirement for any MedTech product.

PostHog approach: Track all compliance-related events, such as hipaa_audit_trail_accessed, data_access_logged, and patient_consent_obtained. Properties should include the user role, type of data accessed, and audit results to create an immutable log. Dashboards can provide a real-time view of compliance activities, and alerts can be set up for any unauthorized access attempts or compliance failures. Non-technical compliance officers can use these dashboards to monitor activity and generate reports.

User Adoption

Importance: Measures how effectively healthcare providers (doctors, nurses, etc.) are integrating a new tool into their daily work. Low adoption by clinicians can undermine the intended benefits of a technology, regardless of its potential. High adoption is key to realizing efficiency gains and improving patient care at scale.

PostHog approach: Track user interactions such as feature_used, workflow_completed, and training_module_completed. Segment by user role (e.g., doctor, nurse) using properties. Adoption funnels can show where users drop off during onboarding. Session recordings are invaluable for understanding how clinicians use the product in a real-world context. Alerts can flag low adoption in specific departments. Non-technical training teams can analyze session recordings to improve their training materials.

Clinical Workflow Efficiency

Importance: Measures the time and effort required for clinicians to complete tasks using the product. In the high-pressure healthcare environment, time is a critical resource. Improving workflow efficiency can reduce clinician burnout, lower operational costs, and allow more time for direct patient care.

PostHog approach: Track workflow events from start to finish: workflow_started, step_completed, workflow_completed. Use properties to capture the duration of each step and the user role. Funnel analysis is perfect for identifying bottlenecks where users get stuck or take too long. Dashboards can monitor average completion times for key workflows. Non-technical managers can use these funnels to identify areas for process improvement without needing technical assistance.
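The bottleneck analysis above can be sketched as an average-duration-per-step rollup: aggregate `step_completed` durations and surface the slowest step. The step names and durations here are hypothetical.

```python
def slowest_step(step_events):
    """Find the workflow step with the highest average duration.

    `step_events` is a list of (step_name, duration_seconds) pairs, as you
    might capture on `step_completed` events with a duration property.
    """
    totals, counts = {}, {}
    for step, duration in step_events:
        totals[step] = totals.get(step, 0.0) + duration
        counts[step] = counts.get(step, 0) + 1
    averages = {s: totals[s] / counts[s] for s in totals}
    bottleneck = max(averages, key=averages.get)
    return bottleneck, averages[bottleneck]

events = [("intake", 30), ("review", 120), ("review", 180), ("sign_off", 20)]
print(slowest_step(events))  # ('review', 150.0)
```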

Data Accuracy

Importance: In healthcare, critical decisions are made based on patient data. Inaccurate or incomplete data can lead to misdiagnosis, incorrect treatment, and serious patient harm. This metric tracks the integrity and reliability of the data within the system, which is fundamental to patient safety.

PostHog approach: Track data entry and validation events like data_entered, data_validated, and error_detected. Use properties to specify the data type, validation method, and error type. Create dashboards to monitor data quality trends and error rates. Correlation analysis can help identify if specific user roles or workflow steps are associated with higher error rates. Alerts can notify teams of spikes in data entry errors, allowing for swift investigation.

Content/Media

Common business problems & personas

Key business problems
Primary personas & their pain points

Content Teams

Product Managers

Marketing Teams

Editorial Teams

Revenue Teams

Key metrics & PostHog

Engagement Rate

Importance: Measures how actively users are interacting with content beyond just viewing it (e.g., likes, shares, comments, time spent). It's a key indicator of content quality and audience resonance. High engagement suggests that the content is valuable, which is crucial for building a loyal audience and driving retention.

PostHog approach: Track engagement events like content_viewed, time_spent_on_page, video_played_to_75%, and article_shared. Use properties to segment by content type and user segment. A custom "engagement score" can be created using formulas in PostHog to weigh different interactions. Cohort analysis can track how engagement evolves for different user groups. Non-technical editors can use dashboards to see which articles are most engaging to inform their content strategy.
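The custom engagement score mentioned above can be sketched as a weighted sum over interaction counts, similar to a formula insight. The weights below are purely illustrative, not a PostHog default.

```python
def engagement_score(events, weights=None):
    """Weighted engagement score over per-user interaction counts.

    `events` maps event name -> count; the weights are an assumption
    chosen for illustration.
    """
    weights = weights or {
        "content_viewed": 1,
        "video_played_to_75%": 3,
        "article_shared": 5,
    }
    return sum(weights.get(name, 0) * count for name, count in events.items())

user_events = {"content_viewed": 10, "video_played_to_75%": 2, "article_shared": 1}
print(engagement_score(user_events))  # 21
```

Ranking users or content by this score surfaces what resonates, and the weights can be tuned as the team learns which interactions predict retention.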

Content Performance

Importance: Provides a holistic view of how individual pieces of content contribute to business goals, from views to conversions. Understanding what content performs well is essential for optimizing content strategy, allocating production resources effectively, and maximizing the ROI of content creation.

PostHog approach: Track the content lifecycle with events like content_published, content_viewed, and content_shared, enriched with properties like category, author, and format. Use correlation analysis to identify the attributes of successful content (e.g., "how-to" articles over 1500 words drive the most shares). Dashboards can rank content by performance, and alerts can notify teams when a piece of content starts trending. Non-technical content teams can use these insights to double down on what works.

User Retention

Importance: Measures the percentage of users who return to the platform over time. For media companies, retention is the lifeblood of the business, as it's far more cost-effective than acquisition. High retention indicates that users find ongoing value in the content, which is key for long-term growth and subscription revenue.

PostHog approach: Track retention by monitoring user_returned or session_started events. Use PostHog's retention cohorts to analyze how retention differs by acquisition source or first content consumed. Correlation analysis can identify behaviors (e.g., subscribing to a newsletter) that are leading indicators of retention. Churn prediction models can help proactively identify at-risk users. Non-technical marketers can use cohorts to understand the long-term value of users from different campaigns.
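As a concrete illustration of the retention-cohort analysis above, the sketch below computes the fraction of a signup cohort that is active in each subsequent week — one row of a retention matrix. The data shape (signup week per user, set of active weeks per user, e.g. derived from `session_started` events) is hypothetical.

```python
def weekly_retention(first_seen, activity):
    """Fraction of a signup cohort active N weeks after signup.

    `first_seen` maps user -> signup week; `activity` maps user -> set of
    weeks in which the user was active.
    """
    cohort = list(first_seen)
    weeks = max(w for ws in activity.values() for w in ws)
    rows = []
    for offset in range(weeks + 1):
        active = sum(
            1 for u in cohort if first_seen[u] + offset in activity.get(u, set())
        )
        rows.append(active / len(cohort))
    return rows

first_seen = {"a": 0, "b": 0}
activity = {"a": {0, 1, 2}, "b": {0, 2}}
print(weekly_retention(first_seen, activity))  # [1.0, 0.5, 1.0]
```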

Ad Revenue

Importance: For ad-supported media companies, this metric directly measures financial performance. Optimizing ad revenue involves balancing user experience with monetization, making it crucial to track metrics like impressions, click-through rates (CTR), and revenue per user.

PostHog approach: Track ad-related events like ad_impression, ad_click, and ad_revenue_generated. Use properties to segment by ad type, placement, and user segment. A/B test different ad placements and formats to see what generates the most revenue without harming engagement. Dashboards can monitor ad performance in real-time, and alerts can flag underperforming ad units. Non-technical revenue teams can use these dashboards to track progress against revenue goals.

Subscription Metrics

Importance: For subscription-based media companies, metrics like conversion rate, subscriber LTV, and churn are the ultimate measure of business health. They track the ability to convert casual readers into paying subscribers and retain them, directly reflecting the perceived value of the premium offering.

PostHog approach: Track the entire subscription funnel with events like paywall_hit, subscription_started, subscription_renewed, and subscription_cancelled. Use funnel analysis to identify drop-off points in the conversion process and properties like plan type to segment subscribers. Cohort analysis is essential for tracking subscriber LTV and churn over time. Non-technical product managers can use funnels to optimize the checkout flow and A/B test different paywall strategies.
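A back-of-the-envelope version of the LTV and churn analysis above: under a constant-churn assumption, subscriber LTV is roughly average revenue per user divided by the monthly churn rate. The figures below are invented for illustration.

```python
def churn_rate(start_count, cancelled):
    """Monthly churn: cancellations / subscribers at month start, as you
    might count from `subscription_cancelled` events."""
    return cancelled / start_count if start_count else 0.0

def subscriber_ltv(arpu, monthly_churn):
    """Rough LTV under a constant-churn assumption: ARPU / churn rate."""
    return arpu / monthly_churn

churn = churn_rate(1000, 40)  # 40 of 1,000 subscribers cancelled -> 4%
print(round(subscriber_ltv(15.0, churn), 2))  # 375.0 -> ~$375 at $15 ARPU
```

Cohort analysis refines this by showing how churn actually varies with subscriber age rather than assuming it is constant.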

Who we do business with

Growth | Source: https://posthog.com/handbook/growth/sales/who-we-do-business-with

We firmly adhere to the laws of the countries where we do business, and welcome anyone abiding by those laws as a customer - paid or free - in all but a few very exceptional circumstances:

In these cases, we may choose not to do business with the customer.

Sanctioned countries and companies

US laws mean we may also be prohibited from working with certain companies, due to ongoing US sanctions. In this case we do not have discretion - we are banned from working with these companies entirely.

If you need to check if a particular company appears on a US sanctions list, you can use the US Treasury's Sanction Search. In particular, you should be mindful of companies that sign up which are based in the following territories:

US sanctions mean that we are not allowed to offer services at all to _any_ companies based in:

Update for June 2024 US sanctions against Russia

In June 2024, the US Treasury's Office of Foreign Assets Control issued updated sanctions against Russia which prohibit the sale or supply of services to individuals or organizations in Russia. The sanctions take effect on September 10, 2024 and continue indefinitely.

We must comply with these sanctions, so in August 2024 we contacted impacted individuals to let them know we would make the following changes on September 9th, 2024:

There are some exemptions to the sanctions including any service to any entity located in the Russian Federation that is owned or controlled, directly or indirectly, by a U.S. person.

If a customer believes they've been incorrectly impacted by our response to these sanctions, or have further questions about them, ask them to contact sales@posthog.com so we can investigate.

Checking whether we can do business with a customer

If you work in Sales, CS & Onboarding, or Support and are not sure if we are able to work with a customer you are dealing with, ask in #legal and one of the team will be able to let you know either way. For the most part, these edge cases are to do with customers attempting to work around sanctions in their country, though other edge cases can also occur.

Customers who track adult or other potentially offensive content aren't automatically excluded - we have content warnings set up in Zendesk for them. If you are working with their account more regularly as part of the Sales or CS & Onboarding teams, we also recommend that you avoid logging in as them, and that you provide any training using demo data.

Why buy PostHog

Growth | Source: https://posthog.com/handbook/growth/sales/why-buy-posthog

AKA our value proposition, these are some of the things we've found useful to share when chatting with customers about why PostHog is different from, and better than, our competitors. As a company, the primary user persona we build for is the product engineer, so we focus on them first. We then provide messaging for the other roles we may encounter in an inbound sales cycle, so that we can still sell successfully to them.

Product Engineers

One-liner

We help you debug and ship your product faster.

Summary

By integrating PostHog into your app, you’ll be able to track and diagnose errors, roll out and test new features, and gain a better understanding of your users' behavior. With that greater understanding, you'll be able to act on it and respond to your users' needs quickly and effectively. Getting all of these capabilities through one SDK reduces the overhead of maintaining your app, so you can focus on shipping your product.

Use cases

Product Managers

One-liner

Self-serve analytics without needing to ask your engineers or data team for help.

Summary

After your engineers integrate the PostHog SDK, you’ll be able to self-serve analytics without asking your data team for insights. We automatically track user interactions with your app and then let you tag key events for use in analytics. You’ll also be able to drill down from the data to individual user interactions to see how users engage with your app and make informed product decisions, and finally use behavioral triggers to send feedback surveys and more - all without engineering effort.

Use cases

Marketing

One-liner

A familiar analytics experience with all of the integrations you need to decide where to focus your marketing efforts.

Summary

By deploying our simple JavaScript snippet on your website you’ll capture all of the data you need to measure channel performance, and then visualize that data in a familiar format without any additional report writing. Optionally hook up Stripe or other revenue sources to measure revenue attribution.

Use cases

Data Engineers

One-liner

A complete developer platform which fits into your existing data stack.

Summary

Using PostHog's CDP lets you aggregate data from multiple technologies and platforms. It takes a few clicks to set up exports of that data to your data warehouse, and your product and engineering teams can self-serve their own analytics from within PostHog.

Use cases

General talking points for all roles

Per-product sales enablement

The product marketing team has created sales enablement materials, covering some product information and general objection handling for specific products. These exist as Google Docs as they are living documents, but are listed below.

If there are additional products you'd like to see this sort of material for, let the team know in the #team-marketing Slack channel.

AI/LLM Observability

Growth | Source: https://posthog.com/handbook/growth/use-case-selling/ai-llm-observability

What is the job to be done?

"Help me understand how my AI features perform, what they cost, and how users interact with them."

This is the fastest growing segment of our customer base. AI-native companies are adopting PostHog at a high rate, but often only for LLM Observability or only for Product Analytics. The cross-sell opportunity is significant because AI products have unique observability needs that span multiple PostHog products.

The buyer persona is distinct: AI engineers care about model-level metrics (latency, cost, token usage, accuracy) first, user-level analytics second. Leading with the AI story opens the door to everything else.

What PostHog products are relevant?

Adoption path and expansion path

Entry point

Usually LLM Observability or Product Analytics. Two common patterns:

  1. Model-first: AI engineer wants to understand model performance: latency, cost, token usage. They start with LLM Observability for tracing and cost attribution, then realize they need to understand how users interact with the output (Product Analytics), whether the output is actually good (AI Evals), and how to test improvements (Experiments).
  2. Product-first: AI product team is building a product with AI features and starts with Product Analytics to track user behavior. They realize they need model-level metrics alongside user metrics, which pulls in LLM Observability. From there, they want to evaluate quality (AI Evals) and test prompt/model changes (Experiments).

Primary expansion path

LLM Observability → + AI Evals → + Product Analytics (user behavior) → + Experiments (prompt/model testing) → + Error Tracking → + Session Replay

The logic of each step:

Alternate expansion paths

Starting from Product Analytics: An AI product team already using PostHog for product analytics. They add LLM Observability to get model-level metrics alongside their user behavior data. From there, AI Evals and Experiments are natural adds.

Starting from Error Tracking: Team catching model failures with Error Tracking. They realize traditional error tracking misses quality regressions (model responds but with worse output). AI Evals fills this gap, pulling in LLM Observability for the full model-level context.

Business impact of solving the problem

AI-native companies are the fastest growing customer segment. Getting in early with LLM Observability means PostHog becomes the default platform as these companies scale. AI-native startups that adopt PostHog at seed stage often grow into significant accounts.

The cross-sell opportunity is uniquely strong. AI products sit at the intersection of multiple PostHog use cases: model observability (AI/LLM Obs), user behavior analytics (Product Intelligence), release management for prompt/model changes (Release Engineering), and error tracking for model failures (Observability). One AI customer can reasonably adopt products from 4+ use cases.

No one else has this combination. Langfuse and Helicone do LLM tracing. Amplitude does product analytics. Sentry does error tracking. No one connects model performance → output quality → user behavior → business outcomes in one platform. That's PostHog's pitch.

AI Evals is the bridge product. For any account building AI features, AI Evals connects AI/LLM Observability to Product Intelligence (are users struggling based on output quality?) and Release Engineering (did a prompt change cause a quality regression?). It's a natural entry point into multiple use cases from a single product.

Personas to target

| Persona | Role Examples | What They Care About | How They Evaluate |
|---|---|---|---|
| AI Engineer | ML Engineer, AI Engineer, Applied AI | Model performance, cost optimization, latency, quality | "Can I see cost per query by model, trace individual calls, and detect quality regressions?" |
| AI Product Manager | AI PM, Product Lead (AI features) | User experience of AI features, adoption rates, business impact | "Can I see how users interact with our AI features and whether they drive retention?" |
| AI Founder | Founder, CTO at AI-native startup | All of the above. Cost control. Speed. Not paying for 5 tools. | "How fast can I set this up and how much does it replace?" |
| AI Product Engineer | Full-stack engineer building AI features | Instrumentation, debugging, prompt iteration cycle time | "How easy is it to instrument? Can I see trace-level detail for debugging?" |

Signals in Vitally & PostHog

Vitally indicators this use case is relevant

| Signal | Where to Find It | What It Means |
|---|---|---|
| LLM Observability is active | Product usage data | AI/LLM Obs use case is live. Full expansion path available. |
| Company tags include "AI" or "LLM" or "ML" | Company info / tags | AI-native or AI-building company. This use case is likely relevant even if they haven't adopted LLM Observability yet. |
| High Product Analytics usage + AI company | Product usage + company type | They're using analytics but haven't connected model-level metrics. LLM Observability is the add. |
| Customer mentions Langfuse, Helicone, or "LLM costs" in notes | Vitally notes / conversations | Direct signal. They're thinking about AI observability and may be using a competitor or building it in-house. |

PostHog usage signals

| Signal | How to Check | What It Means |
|---|---|---|
| LLM-related custom events (e.g., llm_generation, ai_response) | Event property explorer | They're tracking AI events in Product Analytics. LLM Observability would give them model-level detail. |
| High LLM Observability trace volume | Product usage metrics | Active AI instrumentation. Ripe for AI Evals and Experiments. |
| Experiments on AI-related features | Experiments list | They're already A/B testing AI features. Validate they're using LLM Obs for model-level measurement. |
| Error Tracking exceptions from AI/model code | Error tracking events | Model failures are happening. LLM Observability gives context beyond the stack trace. |

Command of the Message

Discovery questions

Negative consequences (of not solving this)

Desired state

Positive outcomes

Success metrics

Customer-facing:

TAM-facing:

Competitive positioning

Our positioning

Competitor quick reference

| Competitor | What They Do | Our Advantage | Their Advantage |
|---|---|---|---|
| Langfuse | Open-source LLM tracing, prompt management, evals | Broader platform (product analytics, experiments, replay, error tracking); user behavior metrics; not just model metrics | More mature LLM-specific features; open-source community; purpose-built prompt management |
| Helicone | LLM request logging, cost tracking, caching | Broader platform; user behavior connection; experiments; not a single-purpose tool | Simpler to set up for basic LLM logging; built-in caching/rate limiting features |
| Braintrust | LLM evals, logging, prompt playground | Broader platform; user behavior metrics; production monitoring not just offline evals | More mature eval framework; better prompt playground and iteration workflow |
| Datadog LLM Monitoring | LLM tracing as part of broader APM | Product analytics integration; user behavior; better pricing for AI-native startups | Full APM stack; enterprise-grade; part of existing Datadog deployment for bigger companies |

Honest assessment: Our strongest position is with AI-native startups and teams building AI features inside existing products. The pitch is "one platform for everything" instead of Langfuse + Amplitude + Sentry + a flag tool. We're weaker against teams that want the deepest possible LLM-specific tooling (Langfuse's prompt management and eval framework are more mature). We're also weaker against enterprise teams already embedded in Datadog. Our sweet spot is AI teams that want model performance connected to user outcomes in one place, without managing 4 vendors.

Pain points & known limitations

| Pain Point | Impact | Workaround / Solution |
|---|---|---|
| LLM Observability feature set is newer than Langfuse | Teams expecting Langfuse-level prompt management and eval detail may find gaps | Be honest about maturity. Position the breadth of the platform (analytics, experiments, replay) as the differentiator. Langfuse is great for pure LLM tracing; PostHog is better when you also need to understand user behavior and business impact. |
| AI Evals may not support all evaluation frameworks | Teams with custom eval pipelines may want more flexibility | Check current eval capabilities. For custom frameworks, PostHog's API and data warehouse can integrate with existing eval pipelines. |
| Session Replay for AI chat interfaces can be noisy | Chat-based AI products generate a lot of replay data per session | Configure sampling rules. Focus replay viewing on sessions with error events or low AI quality scores. |

Getting a customer started

What does an evaluation look like?

Onboarding checklist

Cross-sell pathways from this use case

| If Using... | They Might Need... | Why | Conversation Starter |
|---|---|---|---|
| LLM Observability only | AI Evals | They can see model metrics but don't know if the output is actually good | "You can see your model's latency and cost. But do you know if the quality held up after your last prompt change?" |
| LLM Obs + AI Evals | Product Analytics | They know model performance and quality. They don't know how users interact with the output. | "Your model is fast and the quality is high. But are users actually accepting the suggestions and converting?" |
| LLM Obs + Product Analytics | Experiments | They see model metrics and user behavior. They want to improve. | "You can see GPT-4o costs more but users seem to prefer it. Want to run a proper A/B test to quantify the difference?" |
| Releasing AI feature changes | Release Engineering (Feature Flags) | They're changing prompts/models and want controlled rollout | "When you change your prompt, do you ship to everyone at once? Feature flags let you roll out to 5% first and measure before going wide." |
| AI features in PostHog | Product Intelligence (for the product team) | AI team is in PostHog. The broader product team should be too. | "Your AI team uses PostHog for model metrics. Has the product team seen what they can do with funnels and retention for non-AI features?" |
| Error Tracking for AI errors | Observability (full stack) | They're catching AI errors but not traditional application errors | "You're tracking model failures. Are you also catching the non-AI exceptions? Error Tracking works for your entire stack." |

Internal resources

Appendix: Company archetype considerations

| Archetype + Stage | Framing | Key Products | Buyer |
|---|---|---|---|
| AI Native — Early | "You need to understand your model costs, catch quality regressions, and see how users interact with your AI features, all without hiring a data team or buying 4 tools." Speed and simplicity. One platform. | LLM Observability, AI Evals, Product Analytics, PostHog AI | Founder, AI engineer, founding PM |
| AI Native — Scaled | "You're scaling AI features across your product. You need cost attribution by team/feature, automated quality evaluation, prompt/model experimentation, and the ability to connect model performance to business outcomes." | LLM Observability, AI Evals, Product Analytics, Experiments, Error Tracking, Session Replay | Head of AI/ML, AI PM, VP Eng |
| Cloud Native — Any (building AI features) | "You're adding AI features to an existing product. PostHog already tracks your users. Now connect model performance to user behavior so you can optimize the AI experience alongside everything else." The pitch here is extending their existing PostHog usage, not adopting a new tool. | LLM Observability, AI Evals (added to existing PostHog stack) | Engineering team building the AI feature, PM who owns the AI feature |

Customer Experience

Growth | Source: https://posthog.com/handbook/growth/use-case-selling/customer-experience

What is the job to be done?

"When a customer runs into an issue, we're able to quickly understand exactly what happened, identify the problem, and verify a fix, without bouncing between multiple tools or wasting engineering time trying to reproduce it."

Most companies don't have a customer experience system. They have tickets in one place, errors in another, logs somewhere else, analytics owned by product, and engineers manually trying to reproduce bugs. The goal of this use case is to help a company build a unified debugging workflow where support, product, and engineering share the same context.

What PostHog products are relevant?

Adoption path and expansion path

Entry point

Usually Session Replay or Product Analytics. Common entry scenarios:

  1. "We can't reproduce bugs": Support needs to see what happened instead of relying on screenshots and user descriptions. Session Replay is the direct answer.
  2. "Something is breaking but we don't know why": Product notices drop-offs or support volume spikes and needs visibility into what's causing them. Product Analytics surfaces the pattern, Session Replay provides the detail.

Primary expansion path

Product Analytics → + Session Replay → + Error Tracking → + Logs / LLM Observability → + Surveys

The logic of each step:

This expansion happens naturally because each step removes a layer of uncertainty.

Alternate expansion paths

Starting from Session Replay as a replacement for another session recording tool. They adopt Session Replay to replace Hotjar, FullStory, or LogRocket. Expand by introducing autocapture (Product Analytics), Error Tracking for structured bug data, and Group Analytics for account-level views.

Business impact of solving the problem

Engineering time savings. If bug reproduction drops from 2 hours to 30-60 minutes, teams get fewer context switches, fewer escalations, and more roadmap velocity. Even modest improvements here can easily justify the cost of the entire PostHog contract.

Escalation reduction. When support can view replay, check errors, and inspect logs, they resolve more issues without pulling in engineering. That means the roadmap doesn't stall and customer response times improve.

Revenue protection. When enterprise customers report issues, speed and clarity matter. Being able to say "here's exactly what happened and here's the fix" builds trust. Slow, unclear debugging erodes it.

AI risk mitigation. For AI-powered products, LLM Observability catches the things that would otherwise go unnoticed: hallucinations that are hard to trace, prompt regressions, and latency spikes. Without it, product credibility degrades quietly.

Personas to target

| Persona | Role Examples | What They Care About | How They Evaluate |
| --- | --- | --- | --- |
| Support Leader | Head of Support, Support Ops | Faster resolution, fewer escalations | MTTR, escalation rate |
| Engineering Lead | EM, Staff Eng | Reproducible bugs, fewer interruptions | Debugging time, context switches |
| Product Manager | PM, Product Lead | Understanding friction, user-reported issues | Drop-off rates, issue frequency |
| AI Lead | Head of AI, Applied AI Eng | Model reliability, output quality | Output quality, latency, trace coverage |
| CS Leader | VP CS, Head of CS | Customer trust, proactive issue resolution | NPS trends tied to product issues |

Signals in Vitally & PostHog

Vitally indicators this use case is relevant

| Signal | Where to Find It | What It Means |
| --- | --- | --- |
| Users with a support title | User list in Vitally | They're already bringing support folks into PostHog. CX workflow is emerging organically. |
| High session replay spend / volume | Product spend breakdown, usage metrics | They're investing heavily in replay. This use case helps them get more value from that spend by connecting replay to errors, logs, and surveys. |
| High support ticket volume | vitally.custom.supportTickets | They're dealing with a lot of customer issues. PostHog can help them debug faster. |
| Multiple user roles in PostHog (eng + support + product) | User list, admin emails | Cross-functional usage signals that CX workflows are already forming. |

PostHog usage signals

| Signal | How to Check | What It Means |
| --- | --- | --- |
| Session Replay filtered by error events | Replay usage patterns | They're connecting replay to debugging. The CX workflow is clicking. |
| Person profile lookups increasing | Product Analytics usage | Support or CS is investigating individual users. Group Analytics could formalize this. |
| Error Tracking adoption alongside replay | Product spend data | They're building the debugging stack. Logs and surveys are natural next steps. |
| Console log / network tab usage in replays | Replay engagement metrics | They're using replay for technical debugging, not just UX review. Strong CX signal. |

Health score implications

Command of the Message

Discovery questions

Negative consequences (of not solving this)

Desired state

Positive outcomes

Success metrics

Customer-facing:

TAM-facing:

Competitive positioning

Our positioning

Where we are strongest: We win when teams want behavioral and technical context in one place, engineering and product collaborate closely, AI is part of the product, and speed and simplicity matter more than enterprise ceremony.

Where we are weaker: We're not the right fit when deep distributed tracing or advanced APM is required, enterprise ITSM workflows (ServiceNow, Jira Service Management) dominate the support stack, or security policies prohibit session replay. In those cases, we complement rather than replace.

Competitor quick reference

| Competitor | What They Do | Our Advantage | Their Advantage |
| --- | --- | --- | --- |
| FullStory | Session replay + digital experience analytics | Error tracking, logs, AI observability, experiments all in one platform; developer-first; better pricing | More mature DXP features; enterprise CX tooling; dedicated support workflow integrations |
| LogRocket | Session replay + error tracking + performance monitoring | Broader product suite (analytics, flags, experiments, surveys); AI observability; consolidation story | Purpose-built for debugging workflows; tighter Jira/Zendesk integrations out of the box |
| Hotjar | Session replay + heatmaps + surveys | Full analytics platform; error tracking; feature flags; engineering-grade tooling | Simpler UX for non-technical users; lower barrier to entry for marketing/UX teams |
| Sentry | Error tracking + performance monitoring + session replay | Deeper product analytics; session replay tied to behavior data; AI observability; surveys | More mature error tracking; broader language/framework support; larger install base |
| Datadog | Full observability: APM, logs, metrics, errors, RUM | Product analytics integration; session replay depth; significantly cheaper | Complete observability stack (APM, traces, metrics); enterprise-grade; massive ecosystem |

Honest assessment: Our strongest position is against teams already using PostHog for analytics or feature flags who are paying separately for a replay/debugging tool. The consolidation pitch is concrete and saves money. We're weaker against teams with deeply embedded ITSM workflows (ServiceNow, PagerDuty integrations) or teams that need enterprise-grade distributed tracing. Our sweet spot is product-led companies where engineering, product, and support are closely aligned and want one platform for the full debugging loop.

Pain points & known limitations

| Pain Point | Impact | Workaround / Solution |
| --- | --- | --- |
| No native ticketing system integration | Support teams using Zendesk/Intercom can't auto-link replays to tickets | Share replay URLs manually in tickets. Data Pipelines can push events to external tools. Webhook integrations available for some platforms. |
| Logging is beta | Teams expecting production-grade centralized logging may find gaps | Set expectations on maturity. For teams with existing logging (ELK, Papertrail), PostHog logging complements rather than replaces initially. |
| Session replay privacy controls require configuration | Sensitive data in replays may block adoption for regulated industries | PostHog has extensive privacy controls including masking, blocking, and network payload filtering. Requires upfront configuration. |
| No APM or distributed tracing | Can't replace backend performance monitoring for complex microservice architectures | Be honest about the roadmap. Position PostHog as the user-facing debugging layer. Backend APM stays in their existing tool (Datadog, New Relic) for now. |
| Mobile replay limitations | Mobile session replay is newer and less mature than web | Check mobile replay docs for current platform support. Set expectations on feature parity with web replay. |

Exceptions / edge cases:

Getting a customer started

What does an evaluation look like?

Onboarding checklist

Objection handling

| Objection | Response |
| --- | --- |
| "We already have a session replay tool (Hotjar/FullStory/LogRocket)" | PostHog connects replay to errors, logs, analytics, and surveys in one platform. With separate tools, your support team still has to switch between 3-4 tabs to debug one issue. Consolidating also saves on vendor costs. |
| "Our support team isn't technical enough for PostHog" | The replay viewer is visual and intuitive. Support doesn't need to write queries. They search for a user, watch the session, and share the link. We can do a training session to get them comfortable. |
| "We need this integrated with Zendesk/Intercom" | You can paste replay links directly into tickets today. For automated workflows, Data Pipelines can push events to external tools via webhooks. |
| "Session replay has privacy concerns" | PostHog has extensive privacy controls: input masking, DOM element blocking, network payload filtering, and more. We can configure these during onboarding. HIPAA BAA is available with the Boost package. |
| "We're not sure this justifies adding another tool" | If you're already on PostHog for analytics or flags, this isn't another tool. It's enabling more of the platform you already pay for. If you're not on PostHog yet, the free tiers let you evaluate without financial risk. |

Cross-sell pathways from this use case

| If Using... | They Might Need... | Why | Conversation Starter |
| --- | --- | --- | --- |
| Session Replay only | Error Tracking | They're watching replays to find bugs. Structured error data makes this systematic instead of manual. | "You're watching sessions to find bugs. What if errors were automatically captured and grouped so you could see which ones affect the most users?" |
| Session Replay + Error Tracking | Logging | They have frontend context but need backend visibility when debugging server-side issues. | "You can see the user's session and the error. But what was happening on the server at the same time?" |
| Session Replay + Error Tracking | Product Intelligence (for the product team) | Support and engineering are in PostHog for debugging. The product team would benefit from the same analytics for feature development. | "Your support team is using PostHog to debug issues. Has your product team seen what they can do with funnels and retention in the same platform?" |
| Replay + Errors + Analytics | Surveys (NPS/CSAT) | They're debugging reactively. Surveys let them detect frustration proactively and tie it to specific sessions. | "You're great at debugging reported issues. But how do you find the frustrated users who never file a ticket?" |
| Replay + Errors (debugging AI features) | LLM Observability | Traditional debugging misses AI-specific issues: prompt quality, hallucinations, latency. | "You're catching errors in your AI features. But are you seeing when the model gives a bad answer that isn't technically an error?" |
| Replay + Errors (engineering in PostHog) | Release Engineering (Feature Flags) | Engineering is in PostHog for debugging. Feature flags for safe releases is a natural add. | "You're tracking bugs after releases. What if you could gate features behind flags and roll back without a deploy?" |
| Group Analytics + Person Profiles | Data Infrastructure (Data Warehouse) | They want to combine PostHog user/account data with CRM or billing data for a complete customer view. | "You're looking at users in PostHog. What if you could see their Stripe revenue and HubSpot status alongside their product behavior?" |

Internal resources

Appendix: Company archetype considerations

| Archetype + Stage | Framing | Key Products | Buyer |
| --- | --- | --- | --- |
| AI Native — Early | "Your AI features will break in ways that aren't exceptions. PostHog lets support see the user's session, engineering sees the error, and you can trace the LLM call that caused it. All in one place, free tier included." | Session Replay, Error Tracking, LLM Observability | CTO, founding engineer |
| AI Native — Scaled | "Support escalates AI issues to engineering because they can't see what the model did. PostHog gives support replay + LLM traces so they can triage without pulling engineers off the roadmap." Bridge to AI/LLM Observability and Product Intelligence. | Session Replay, Error Tracking, LLM Observability, Logging, Surveys | VP Eng, Head of Support, AI Lead |
| Cloud Native — Early | "Stop asking users to send screenshots. Session Replay shows you exactly what happened. Error Tracking catches it automatically. Support and engineering share the same context." | Session Replay, Error Tracking, Person Profiles | CTO, Head of Support, founding engineer |
| Cloud Native — Scaled | "Your support team escalates everything because they can't see errors or logs. PostHog gives them replay + errors + backend logs so they can resolve more issues without pulling in engineering." Consolidation pitch: replace FullStory/LogRocket + Sentry with one platform. | Session Replay, Error Tracking, Logging, Group Analytics, Surveys | VP Eng, Head of Support, VP CS |
| Cloud Native — Enterprise | "Multiple teams, multiple products, and debugging context spread across 5 tools. PostHog gives support, engineering, and product a shared view: replay, errors, logs, and satisfaction data tied to the same user and account. Fewer escalations, faster resolution, better customer trust." | Full CX stack + Enterprise package (RBAC, SSO, dedicated support) | VP Eng, VP CS, Director of Support, CTO |

Data Infrastructure

Growth | Source: https://posthog.com/handbook/growth/use-case-selling/data-infrastructure

What is the job to be done?

"Help me unify product data with business data and get it where it needs to go."

This is the "stickiness" use case. Once PostHog is part of a company's data infrastructure, receiving data from Stripe, HubSpot, and databases AND feeding data out to their BI layer, it becomes very hard to rip out. This also makes their product data more valuable as it is enriched with additional business context. Data infrastructure customers also tend to have the highest retention rates.

However, this is also the hardest use case to sell into. Data teams are skeptical of analytics tools playing in the data engineering space. Product maturity matters a lot here.

What PostHog products are relevant?

Adoption path and expansion path

Entry point

Usually Data Warehouse or Batch Exports. Two common patterns:

  1. Data out (Batch Exports first): Data team wants to export PostHog event data to their existing warehouse (Snowflake, BigQuery, Redshift) so it can be queried alongside other business data in their BI tool. This is the "PostHog as a data source" entry point. They're not replacing their warehouse. They're adding PostHog data to it. Ideally we want PostHog to be the hub of their data, but exporting like this is typically a sign that they're beginning to think about their data holistically.
  2. Data in (Data Warehouse first): Data team (or product/other team) wants to bring external data into PostHog to enrich product analytics. "Show me retention by Stripe plan" or "Which HubSpot leads are actually active in the product?" requires combining PostHog events with external data. This is the strongest entry point because it keeps teams inside PostHog for analysis.
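To make the "data in" pattern concrete, here's a rough sketch of the kind of HogQL query it unlocks once a Stripe source is synced. The table and column names (`stripe_customer`, `plan_name`) are illustrative assumptions, not PostHog's actual synced schema, which depends on the source configuration:

```sql
-- Sketch only: weekly active users broken down by Stripe plan.
-- `stripe_customer` and `plan_name` are hypothetical names; check the
-- Data Warehouse source list for the real synced table names.
SELECT
    stripe_customer.plan_name AS plan,
    count(DISTINCT events.person_id) AS weekly_active_users
FROM events
JOIN stripe_customer
    ON events.person.properties.email = stripe_customer.email
WHERE events.event = '$pageview'
  AND events.timestamp > now() - INTERVAL 7 DAY
GROUP BY plan
ORDER BY weekly_active_users DESC
```

The point for the customer is that a query like this runs inside PostHog's insight builder, with no separate warehouse or BI tool in the loop.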

Primary expansion path

Data Warehouse (bring external data IN) → + Product Analytics (query unified data) → + Batch Exports (send PostHog data OUT)

The logic of each step:

Alternate expansion paths

Starting from Realtime Destinations: They want to push PostHog events to downstream tools in real-time. Conversion events to ad platforms (Meta, Google Ads). User activity to CRM (HubSpot, Salesforce). Alerts to Slack. This pulls in Data Pipelines and naturally leads to "if we can push data out, can we pull data in?" which is the Data Warehouse.

Starting from Product Analytics (HogQL power users): Advanced analytics users writing HogQL queries hit the ceiling of PostHog-only data. They want to join against their Stripe data, their CRM data, or their database. Data Warehouse is the answer.

Business impact of solving the problem

This is the highest-stickiness use case. When PostHog is both receiving data from Stripe/HubSpot/databases and feeding data out to Snowflake/BigQuery/BI tools, ripping it out means rebuilding multiple data pipelines. This creates deep infrastructure-level lock-in that goes beyond any single user or team.

Data infrastructure customers have the highest retention rates. Accounts with active batch exports and warehouse connections churn at significantly lower rates than analytics-only accounts. The integration depth creates switching costs that product satisfaction alone doesn't.

However, this is the hardest use case to sell into. Data teams are skeptical. They've built their stack around tools like Fivetran, dbt, Snowflake, and Looker. They see PostHog as an analytics tool, not a data infrastructure tool. Credibility with data engineers requires demonstrating real technical capability, not just talking about consolidation.

The "lightweight warehouse" pitch resonates with early-stage companies. Teams that don't yet have a Snowflake/BigQuery setup find PostHog's Data Warehouse attractive because it gives them warehouse capabilities (join external data, run SQL) without a separate warehouse vendor. For these teams, PostHog isn't replacing their warehouse. It is their warehouse.

Personas to target

| Persona | Role Examples | What They Care About | How They Evaluate |
|---|---|---|---|
| Data Engineer | Data Eng, Analytics Eng, Data Platform | Pipeline reliability, query performance, schema flexibility, not maintaining custom ETL | "Is the sync reliable? Can I run complex joins? What's the query latency on large datasets?" |
| Data Team Lead | Head of Data, Director of Analytics, Data Lead | Tool consolidation, cost, team productivity, data governance | "Does this reduce our pipeline maintenance burden? What's the cost vs. Fivetran?" |
| Product Ops / BizOps | Product Ops, RevOps, BizOps | Unified view of product and business data, self-serve dashboards | "Can I see product usage next to Stripe revenue and HubSpot pipeline without asking the data team?" |
| Founder (early stage) | CTO, technical founder, first data hire | Not building a data warehouse yet. Wants unified analytics without a complex stack. | "Can I query my Stripe data alongside PostHog events without setting up Snowflake?" |

Signals in Vitally & PostHog

Vitally indicators this use case is relevant

| Signal | Where to Find It | What It Means |
|---|---|---|
| Active batch exports | active_batch_exports in Vitally traits | They're already exporting data. The Data Warehouse (bringing data in) is the natural next step, and they're likely not yet thinking of PostHog as their data warehouse. |
| Active external data schemas | active_external_data_schemas in Vitally traits | They've connected external data sources. They're using PostHog as a data platform, not just analytics. |
| High rows synced (30 day) | rowsSyncedLast30DaysIfSendingData | Significant data movement. Data Infrastructure is an active use case. |
| Customer mentions Fivetran, Snowflake, or "data warehouse" in notes | Vitally notes / conversations | Data team is involved. This use case may be relevant. |
| HogQL usage is high | Usage metrics | Power users writing SQL. They're likely to want to query across external data too, or have more complex analytics needs/capabilities. |

PostHog usage signals

| Signal | How to Check | What It Means |
|---|---|---|
| Batch exports configured and running | Pipeline configuration | They're exporting data. Explore whether bringing data in (Data Warehouse) would add value. What are they doing with the PostHog data in the warehouse? |
| External data sources connected (Stripe, HubSpot, etc.) | Data Warehouse source list | Active Data Infrastructure use case. Look for expansion: more sources, more query complexity. |
| HogQL queries joining external data | Saved insights with warehouse tables | They're doing unified analysis. This is the power use case. Encourage more connections. |
| High realtime destination volume | Pipeline metrics | They're pushing events to downstream tools. Explore whether they need more destinations or more complex transformations. They may also be solving problems with point solutions when they could simplify in PostHog. |

Command of the Message

Discovery questions

Negative consequences (of not solving this)

Desired state

Positive outcomes

Success metrics

Customer-facing:

TAM-facing:

Competitive positioning

Our positioning

Competitor quick reference

| Competitor | What They Do | Our Advantage | Their Advantage |
|---|---|---|---|
| Snowflake / BigQuery | Cloud data warehouse | We have analytics built on top; no BI tool needed for product questions; simpler for teams that just need PostHog + business data | Real data warehouse: unlimited scale, advanced SQL, mature ecosystem, governance |
| Fivetran | Managed data pipelines (sources to warehouse) | We're the analytics platform AND the pipe; data stays in PostHog for analytics; simpler for early-stage teams | Far more source connectors; more mature data governance; enterprise-grade reliability |
| Census / Hightouch | Reverse ETL (warehouse to business tools) | We push data from PostHog directly, no warehouse intermediate step needed; simpler architecture | More destination integrations; audience management features; built for marketing/ops teams |
| Segment | CDP (collect events, route to destinations) | We're the analytics platform AND the pipe; no separate CDP needed | More destination integrations; more mature event collection; established in enterprise CDP workflows |

Honest assessment: We are not trying to replace Snowflake or BigQuery. For teams with a mature data stack (Fivetran + Snowflake + dbt + Looker), PostHog's Data Warehouse is a complement, not a replacement. Batch Exports feed PostHog data into their stack; Data Warehouse brings their data into PostHog for product-specific analysis. The full replacement pitch only works for early-stage teams that don't have a warehouse yet and want PostHog to serve double duty. Early-stage teams may also have felt the complexity of layering in data systems, so they tend to be more open to centralizing tooling, and Batch Exports mean they never need to fear vendor lock-in. Be calibrated about which accounts can realistically adopt this as infrastructure vs. a convenience feature.

Pain points & known limitations

| Pain Point | Impact | Workaround / Solution |
|---|---|---|
| Data Warehouse query performance at very large scale | Teams with billions of rows in external sources may hit performance limits | PostHog's Data Warehouse is optimized for product analytics query patterns, not general-purpose warehousing. For very large datasets, batch exports to Snowflake/BigQuery may be more appropriate. |
| Source connector coverage doesn't match Fivetran | Some niche data sources may not be supported | Check available sources. For unsupported sources, the API and S3/GCS import paths can bridge the gap. |
| Data engineering teams may not trust PostHog as a warehouse | Credibility gap: "you're an analytics tool, not a data platform" | Don't oversell. Position as a complement to their existing stack (batch exports out, key sources in) rather than a full replacement. Demonstrate HogQL query capability with their actual data to build credibility. |
| Batch export latency may not meet real-time requirements | Teams needing sub-minute data freshness in their warehouse | Batch exports are periodic (hourly default). For real-time needs, use Realtime Destinations instead. Set expectations on latency during evaluation. |

Getting a customer started

What does an evaluation look like?

Onboarding checklist

Cross-sell pathways from this use case

| If Using... | They Might Need... | Why | Conversation Starter |
|---|---|---|---|
| Batch Exports only | Data Warehouse (bring data in) | They're pushing PostHog data out. Bringing business data in would let them do unified analysis in PostHog directly. | "You're exporting PostHog data to Snowflake. What if you could bring your Stripe data into PostHog and skip the context-switch?" |
| Data Warehouse (Stripe connected) | Revenue Analytics | They've connected Stripe data. Revenue Analytics gives them pre-built MRR, LTV, and churn dashboards. | "You've got Stripe connected. Have you seen Revenue Analytics? It gives you MRR, churn, and LTV dashboards out of the box." |
| Data Pipelines to CRM | Growth & Marketing | They're pushing data to HubSpot/Salesforce. The growth team could use more of the marketing analytics stack. | "You're syncing data to your CRM. Has the marketing team seen Web Analytics and Marketing Analytics for attribution?" |
| Data Warehouse + Product Analytics | Product Intelligence (for the product team) | They're doing unified data analysis. The product team should be using the full analytics suite. | "Your data team is doing advanced queries. Are your PMs using funnels, retention, and session replay for product decisions?" |
| Data team in PostHog | Any use case for other teams | Data team is in PostHog and advocates for it. Expand to product, engineering, or growth. | "Your data team loves PostHog. Which other teams could benefit? Product? Engineering? Growth?" |

Internal resources

Appendix: Company archetype considerations

| Archetype + Stage | Framing | Key Products | Buyer |
|---|---|---|---|
| AI Native — Early | "You don't need a data warehouse yet. PostHog connects to Stripe and your database, so you can query everything in one place without setting up Snowflake." Lightweight warehouse pitch. | Data Warehouse, Product Analytics (HogQL) | CTO, founding engineer, first data hire |
| AI Native — Scaled | "You're scaling and your data team is building a proper stack. PostHog batch exports feed your warehouse, and Data Warehouse brings key business data in for product analytics." Complement, not replace. | Data Warehouse, Batch Exports, Pipelines | Data team lead, analytics engineer |
| Cloud Native — Early | "Same as AI Native early. PostHog as the lightweight warehouse for teams that don't want a separate data stack yet." | Data Warehouse, Product Analytics (HogQL) | CTO, first data hire |
| Cloud Native — Scaled | "Your data stack is mature. PostHog fits in as both a data source (batch exports to Snowflake) and an analytics destination (Data Warehouse pulls in Stripe/HubSpot). No custom ETL needed." | Batch Exports, Data Warehouse, Pipelines | Data engineering team, analytics engineering team |
| Cloud Native — Enterprise | "Multiple teams, multiple data sources, complex pipeline requirements. PostHog integrates bidirectionally with your existing stack and gives product/growth teams self-serve analytics over unified data." Governance, reliability, and scale matter here. | Full Data Infrastructure stack + Enterprise package | Head of Data, Director of Analytics, Data Platform Lead |

Appendix: PostHog data maturity

| Stage | Primary Tool | Data Sources | Who Owns | PostHog Position |
|---|---|---|---|---|
| 1 | Point solutions (GA, prod DB) | Scattered | Nobody | Not yet adopted |
| 2 | PostHog | Product events | Prod/Eng | Primary analytics |
| 3 | PostHog + Data Pipelines | Product + Business | Cross-functional | Hub for analytics |
| 4 | PostHog + Data Pipelines + Warehouse | Everything | Cross-functional | Source of truth |
| 5 | PostHog + Batch Exports + External warehouse | Everything | Data Team | Source + destination |

Growth & Marketing

Growth | Source: https://posthog.com/handbook/growth/use-case-selling/growth-and-marketing

What is the job to be done?

"Help me understand what drives acquisition, conversion, and revenue, and automate actions based on user behavior."

Guidance: This is probably the most underserved use case in our current motion. We have the products — Web Analytics, Marketing Analytics, Workflows, Product Tours, Pipelines, Revenue Analytics, Surveys — but we rarely lead with this story. Marketing teams are spending $10k+/month on Segment, Mixpanel, GA4, and various CDPs to do what PostHog can do in one place. Don't sell individual products here. Sell the consolidation of their marketing data stack.

What PostHog products are relevant?

Adoption path and expansion path

Entry point

Usually Web Analytics, Product Analytics, or Experiments. Three common patterns:

  1. Marketing-first: Marketing team wants to replace GA4 or understand channel attribution. They start with Web Analytics for traffic and referrer data, then quickly want to connect that to downstream conversion events (Product Analytics) and campaign spend (Marketing Analytics).
  2. Growth-first: A growth engineer or product-led growth team is already using PostHog for product analytics — building funnels, tracking activation, measuring retention. They want to connect the top of funnel (how users found us) to the bottom (did they convert and retain). Web Analytics and Marketing Analytics extend their existing setup upstream.
  3. CRO / Experimentation-first: A Growth PM or CRO specialist wants to run A/B tests on signup flows, pricing pages, or onboarding sequences. They come in through Experiments, which requires Feature Flags, and Feature Flags require engineering to implement. This is a natural multithreading play: the growth team defines the experiment, engineering implements the flag, and now both teams are in PostHog.
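
The engineering handoff in pattern 3 is easier to picture with a minimal sketch of how flag-based experiment assignment typically works once the SDK is in place: the user's ID is hashed deterministically, so the same user always lands in the same variant. This is an illustrative assumption about the general technique, not PostHog's actual bucketing algorithm, and `assign_variant` is a hypothetical helper:

```python
import hashlib

def assign_variant(distinct_id: str, experiment_key: str,
                   variants=("control", "test")) -> str:
    """Deterministically bucket a user into an experiment variant.

    Hashing (experiment_key, distinct_id) means the same user always
    sees the same variant, with a roughly even traffic split.
    Illustrative only -- not PostHog's actual bucketing algorithm.
    """
    digest = hashlib.sha1(f"{experiment_key}.{distinct_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# The same user is always assigned the same variant across calls:
assert assign_variant("user_42", "signup-flow-test") == \
       assign_variant("user_42", "signup-flow-test")
```

Because assignment is stateless and deterministic, the growth team can define experiments without engineering storing any per-user state, which is part of why the first flag integration is the only expensive one.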

Primary expansion path

Web Analytics → Marketing Analytics → Product Analytics (funnels/retention) → Experiments + Feature Flags → Data Pipelines (to CRM/ad platforms) → Workflows / Product Tours → Revenue Analytics → Surveys

The logic of each step:

Alternate expansion paths

Starting from Product Analytics (growth engineering): A growth team already deep in PostHog funnels and experiments. They expand upstream into Web Analytics and Marketing Analytics for channel attribution, and downstream into Workflows and Product Tours for activation automation.

Starting from Surveys: A product or CX team is running NPS or CSAT surveys. They want to connect low scores to actual behavior (what happened right before someone gave a 3/10?), which pulls in Product Analytics and Session Replay. The growth team then sees the survey infrastructure and wants to use it for exit-intent and post-signup feedback.

Starting from Experiments (CRO / Growth PM entry — the engineering bridge): A CRO specialist or Growth PM wants to A/B test their signup flow. They come in through Experiments, which creates a Feature Flag under the hood. The flag needs to be implemented in code, so engineering gets pulled into PostHog. This is high-value for three reasons: (1) it makes the account sticky — once feature flags are in the codebase, they're not easy to rip out; (2) it creates a multithreading opportunity — you now have both the growth team and engineering as active users; and (3) it's a bridge to Release Engineering — once engineering is using flags for experiments, they often realize they can use the same infrastructure for progressive rollouts and kill switches.

Business impact of solving the problem

The buyer is different from other use cases. Growth and Marketing targets growth engineers, marketing leads, demand gen managers, CRO specialists, and GTM engineers. In most organizations, these are separate from the product analytics buyer (PM) and the engineering buyer (EM/platform). They often have their own budget and their own stack. Winning this buyer opens a parallel revenue stream within the same account.

Marketing stack consolidation is a real, quantifiable cost savings. Companies routinely spend $10k+/month across GA4, Segment, Mixpanel, Amplitude, CDPs, and various point solutions. The consolidation argument is concrete: fewer vendor contracts, fewer integrations to maintain, one source of truth for conversion data.

This use case gives newer products a reason to exist. Workflows, Product Tours, Marketing Analytics, and Revenue Analytics are all relatively new PostHog products with lower attach rates. Without a use case frame, they're standalone features looking for a buyer. Within Growth and Marketing, each one has a clear role and a natural "next step" in the conversation.

Growth and Marketing creates demand for other use cases. Once a marketing team is in PostHog and sees the depth of product analytics, they pull in the product team (Product Intelligence). Once the growth team is running experiments, engineering gets involved (Release Engineering). This use case is a wedge into broader platform adoption.

Experiments and Feature Flags are the stickiness and multithreading lever. When a CRO or Growth PM starts running A/B tests, feature flags get embedded in the codebase. That's a fundamentally different level of integration than a marketing team viewing dashboards. Flags are in production code, maintained by engineers, and not easy to remove. More importantly, it gives TAMs a natural path to multithread: you now have a growth/marketing champion and an engineering champion using the same platform.

Personas to target

| Persona | Role Examples | What They Care About | How They Evaluate |
|---|---|---|---|
| Growth Engineer | Growth Eng, PLG Engineer, GTM Engineer | Conversion funnels, activation metrics, experiment velocity, pipeline reliability | "Can I build a full-funnel view from ad click to paid conversion in one tool?" |
| Marketing Lead | Head of Marketing, VP Demand Gen, Marketing Ops | Channel attribution, ROAS, campaign performance, cost per acquisition | "Can I see which campaigns actually drive revenue, not just clicks?" |
| CRO / Growth PM | Growth PM, CRO Specialist, Head of Growth | Conversion rate optimization, experiment velocity, activation rates. Needs engineering to implement experiments, making this persona the key multithreading catalyst. | "Can I run experiments on our signup flow and measure revenue impact? How fast can engineering implement a test?" |
| Founding Growth | Founder, first growth hire at early-stage startup | All of the above. Wearing all hats. Speed, simplicity, not paying for 5 tools | "How fast can I set this up and how many tools does it replace?" |
| Marketing Analyst | Marketing Analyst, Data Analyst (Marketing) | Data accuracy, attribution modeling, cohort analysis, reporting | "Can I trust this data? Can I build reports without engineering help?" |

Signals in Vitally & PostHog

Vitally indicators this use case is relevant

| Signal | Where to Find It | What It Means |
|---|---|---|
| Web Analytics is active but no other products adopted | Product usage data | They came in through the marketing door — there's a full expansion path waiting |
| Customer mentions GA4, Segment, or CDP in notes | Vitally notes / conversations | They have marketing stack pain and may be open to consolidation |
| Multiple marketing/growth team members invited | User list in Vitally | The growth team is in PostHog, not just engineering — this use case is live |
| Low Pipelines / Workflows usage despite high analytics usage | Product spend breakdown | They're analyzing but not acting — Workflows and Pipelines are natural next steps |
| Experiments or Feature Flags usage initiated by growth/marketing team (not engineering) | Product usage data + user roles | The CRO/Growth PM persona is active — this is the engineering bridge moment |

PostHog usage signals

| Signal | How to Check | What It Means |
|---|---|---|
| UTM parameters appearing in event properties | Event property explorer | They're tracking acquisition sources — Marketing Analytics is a natural add |
| Funnels built around signup/checkout/activation | Saved insights | Growth team is active and measuring conversion — ripe for Experiments and Workflows |
| Experiments created but low flag evaluation volume | Experiments list + flag usage | Growth team is trying to experiment but engineering hasn't implemented the flags yet — TAM opportunity to facilitate the handoff |
| Feature flags being used primarily for experiments (not releases) | Flag list + experiment linkage | Growth-driven flag usage — explore whether they'd also use flags for progressive rollouts (Release Engineering cross-sell) |
| Web Analytics pageview volume growing | Product usage metrics | Marketing is driving more traffic — they'll want attribution and ROAS soon |
| Batch exports configured to ad platforms or CRM | Pipeline configuration | They're already trying to close the data loop — deeper Pipelines usage is the play |
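
The first signal in the table, UTM parameters in event properties, is just standard query-string data captured from the landing URL. A minimal sketch of what that extraction looks like (`extract_utm` is an illustrative helper, not a PostHog API):

```python
from urllib.parse import urlparse, parse_qs

def extract_utm(url: str) -> dict:
    """Pull utm_* query parameters out of a landing-page URL.

    This mirrors the acquisition data that typically surfaces as
    event properties; the helper name is illustrative only.
    """
    query = parse_qs(urlparse(url).query)
    return {k: v[0] for k, v in query.items() if k.startswith("utm_")}

props = extract_utm(
    "https://example.com/signup?utm_source=google&utm_medium=cpc"
    "&utm_campaign=spring&ref=x"
)
# props -> {"utm_source": "google", "utm_medium": "cpc", "utm_campaign": "spring"}
```

If a customer's events carry properties like these, channel-level attribution is already possible, which is why this signal makes Marketing Analytics the natural next conversation.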

Health score implications

Command of the Message

Discovery questions (current state)

Negative consequences (of not solving this)

Desired state

Positive outcomes

Success metrics

Customer-facing:

TAM-facing:

Competitive positioning

Our positioning

Competitor quick reference

| Competitor | What They Do | Our Advantage | Their Advantage |
|---|---|---|---|
| GA4 | Web analytics, basic attribution, Google Ads integration | Full-funnel beyond the website; first-party data; product analytics depth | Deepest Google Ads integration; free tier is very generous; universal adoption |
| Segment | CDP — collects events and routes them to destinations | We're the analytics platform and the pipe; no need for a separate CDP layer | More destination integrations; more mature data governance |
| Amplitude | Product analytics with some marketing analytics features | Broader product coverage (flags, replay, surveys, workflows); better pricing | More mature marketing-specific features (audiences, campaign impact) |
| Mixpanel | Product analytics focused on funnels and retention | Broader platform (web analytics, flags, replay, workflows); no sampling | Deeper mobile analytics; some marketing teams prefer the UX |
| HubSpot Marketing Hub | Marketing automation, email, CRM, basic analytics | Engineering-grade analytics; deeper funnel analysis; experiments | Native CRM integration; better email deliverability; non-technical UX |
| Heap | Auto-capture product analytics | We also auto-capture, plus flags, experiments, replay, surveys, workflows | Retroactive analytics (virtual events) is a strong pitch for non-technical teams |

Honest assessment: Our strongest position is against teams using 3+ tools to do what PostHog does in one. The consolidation pitch is genuine. We're weaker against teams deeply embedded in the Google ecosystem (GA4 + Google Ads + Looker) where switching cost is high. We're also weaker against HubSpot where marketing automation is the primary need. Our sweet spot is technical growth teams and PLG companies where the growth engineer is the buyer.

Pain points & known limitations

| Pain Point | Impact | Workaround / Solution |
|---|---|---|
| Marketing Analytics is beta — feature set is still maturing | Some customers may expect parity with GA4 or dedicated attribution tools | Set expectations during onboarding. Position as "growing fast" and highlight the advantage of attribution data living alongside product analytics. |
| Workflows is new — not as feature-rich as mature marketing automation | Teams expecting advanced email sequencing, lead scoring, or complex branching may find gaps | Position as behavior-driven automation, not a full HubSpot replacement. For heavy email automation, PostHog complements an existing tool via Data Pipelines. |
| Product Tours is alpha — limited customization | Teams with complex onboarding needs may hit walls | Position as the integrated option. For advanced tooltip/modal UX, keep a dedicated tool and use PostHog for analytics + experimentation. |
| Pipeline destination coverage may not match Segment's breadth | Some niche destinations may not be supported | Check available destinations before promising. Data Warehouse + Batch Exports covers the most common needs. Webhook destination can bridge gaps. |
| Non-technical marketing users may find the UI intimidating | Adoption risk: marketing team tries PostHog, finds it too "engineering-y," and reverts to GA4 | Lead with PostHog AI for querying. Build pre-configured dashboards during onboarding. Web Analytics UI is intentionally simpler — start them there. |

Exceptions / edge cases:

Getting a customer started

What does an evaluation look like?

Onboarding checklist

Objection handling

| Objection | Response |
|---|---|
| "We already use GA4 and it's free." | GA4 is great for basic web traffic. But can it show you which channels drive users who activate and pay, not just visit? Can it send real conversion events back to your ad platforms? PostHog starts free too, and it goes all the way to revenue. (Web Analytics · Funnels) |
| "We need Segment for our data pipelines." | What destinations are you sending to? PostHog has built-in Data Pipelines for the most common ones. You may not need a separate CDP layer if PostHog is already collecting the events. Let's look at your current destinations and see what's covered. |
| "Our marketing team isn't technical enough for PostHog." | That's exactly why we built PostHog AI — your marketing team can ask questions in plain English. Web Analytics is also designed to be simple and familiar. We'll set up dashboards during onboarding so they have value from day one. |
| "Marketing Analytics is beta — can we trust it?" | Fair concern. The core data infrastructure is built on the same battle-tested PostHog platform that handles billions of events. The beta label means we're still adding features, not that the data is unreliable. And your feedback directly shapes the roadmap. |
| "We'd need to rip out our whole marketing stack to use PostHog." | You don't have to rip out anything on day one. Start by adding PostHog alongside your existing tools. Once you see the value of having attribution, funnels, and automation in one place, the consolidation happens naturally. Data Pipelines keeps your existing tools fed. |
| "Workflows seems basic compared to HubSpot/Braze." | It is newer. The trade-off is that PostHog Workflows is triggered by real product behavior data, not just email opens and form fills. If you need complex email nurture sequences, keep your email tool and use PostHog for behavior-driven automation. They complement each other via Data Pipelines. |
| "Our growth team wants to experiment but engineering is too busy to implement flags." | That's actually a common starting point. The first experiment is the hardest because engineering needs to set up the Feature Flag SDK. But once the SDK is in place, subsequent experiments are much faster. Most teams find that after the first 2 to 3 experiments, the loop is smooth. And engineering now has flag infrastructure they can use for their own releases too. |

Cross-sell pathways from this use case

| If Using... | They Might Need... | Why | Conversation Starter |
|---|---|---|---|
| Web Analytics + Marketing Analytics | Product Analytics (funnels, retention) | They can see traffic and channels but need to connect it to actual user behavior and conversion | "You know which channels bring traffic — but do you know which channels bring users who retain?" |
| Product Analytics (funnels) | Experiments + Feature Flags | They've identified drop-off points and want to test fixes | "You've found the drop-off. Want to test whether a new flow actually improves conversion?" |
| Product Analytics + Experiments | Workflows + Product Tours | They know what works from experiments and want to operationalize it | "You proved the new onboarding works in an experiment. Now let's roll it out as a Product Tour for everyone." |
| Experiments + Feature Flags (growth-driven) | Release Engineering (for the eng team) | Engineering is already implementing flags for experiments — they can use those same flags for progressive rollouts | "Your engineering team is already using feature flags for growth experiments. Have they considered using the same infrastructure for all their releases?" |
| Web Analytics + Product Analytics | Data Pipelines | They're analyzing conversion but not feeding it back to ad platforms or CRM | "You're measuring real conversions — are you sending those back to Meta and Google so their algorithms can optimize?" |
| Funnels + Workflows | Revenue Analytics | They're driving and automating conversion but need to measure the revenue impact | "You've automated re-engagement. Now let's see which cohorts and channels drive the most LTV." |
| Any Growth & Marketing products | Session Replay | They see a funnel drop-off but don't know why | "Your checkout funnel drops 40% at step 3. Want to watch what users are actually doing at that step?" |
| Growth & Marketing stack established | Product Intelligence (for the product team) | Marketing/growth is in PostHog — the product team should be too | "Your growth team already uses PostHog for funnels and experiments. Has the product team seen what they can do with cohorts and retention analysis?" |

Internal resources

Appendix: Company archetype considerations

| Archetype + Stage | Framing | Key Products | Buyer |
|---|---|---|---|
| AI Native — Early | "You need to get users to your AI product, get them activated, and understand what channels work, all without hiring a data team." Speed matters. Experiments are high-value early. | Web Analytics, Product Analytics (funnels), Experiments, Feature Flags, PostHog AI | Founder, first growth hire, GTM engineer |
| AI Native — Scaled | "You're scaling acquisition and need to optimize spend, automate onboarding, and connect marketing data to product engagement." | Web Analytics, Marketing Analytics, Product Analytics, Experiments, Feature Flags, Pipelines, Workflows, Revenue Analytics | Head of Growth, Growth Engineering Lead |
| Cloud Native — Early | "You're investing in growth for the first time and want to build it right. One tool for attribution, funnels, experiments, and engagement." | Web Analytics, Product Analytics, Experiments, Feature Flags, Surveys | Founder, first PM, growth engineer |
| Cloud Native — Scaled | "Your marketing stack is fragmented and expensive. Consolidate attribution, conversion analytics, engagement automation, and experimentation into one platform." Experiments + Feature Flags are the multithreading lever. | Web Analytics, Marketing Analytics, Product Analytics, Experiments, Feature Flags, Pipelines, Workflows, Product Tours, Revenue Analytics | VP Growth, Head of Growth, CRO, Marketing Ops |
| Cloud Native — Enterprise | "Multiple teams, multiple products, multiple markets, and none of them agree on the numbers. PostHog gives you a single source of truth for acquisition, conversion, and revenue across all properties." | Full stack. Pipelines and Revenue Analytics are especially important. | VP Marketing, CMO, Head of Growth, Marketing Ops |

Observability

Growth | Source: https://posthog.com/handbook/growth/use-case-selling/observability

What is the job to be done?

"Help me know when things break, understand why, and fix them fast."

This is where our roadmap is heading and where significant market opportunity exists. The long-term vision is a full observability stack that competes with Datadog and Sentry on their home turf, but with the massive advantage that our observability data is connected to product analytics data. No other vendor can tell you "this API endpoint is slow AND here's the business impact in terms of user drop-off and revenue loss."

Separating this from Release Engineering is important because the buyer is often different (SRE/platform team vs. product engineering), the competitive landscape is different (Datadog/Sentry vs. LaunchDarkly), and the expansion path is different.

What PostHog products are relevant?

Adoption path and expansion path

Entry point

Usually Error Tracking. Team wants to catch exceptions and regressions. Common entry scenarios:

  1. Sentry replacement: They're paying for Sentry and want to consolidate into PostHog (which they're already using for analytics or flags). Error Tracking is the direct replacement.
  2. First observability tool: Early-stage company that hasn't invested in error tracking yet. PostHog's free tier (100K exceptions/month) lets them start without a new vendor relationship.
  3. Session Replay → Error Tracking: They're already using Session Replay for debugging and discover that errors surfaced in replays could be tracked systematically with Error Tracking.

Primary expansion path

Error Tracking → + Session Replay (error context) → + Logging → + Product Analytics (impact analysis)
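
The reason each step compounds is that everything hangs off shared identifiers. As a sketch of the idea: an exception event that carries the user's distinct ID and session ID can later be joined to that session's replay and to analytics queries. Field names here are illustrative assumptions, not the actual PostHog event schema:

```python
import time
import traceback

def capture_exception(exc: Exception, distinct_id: str, session_id: str) -> dict:
    """Package an exception as a structured event with user/session context.

    Carrying distinct_id and session_id on the event is what lets an error
    be joined to the session replay and to analytics later. Illustrative
    schema only, not PostHog's real event format.
    """
    return {
        "event": "$exception",
        "distinct_id": distinct_id,
        "properties": {
            "exception_type": type(exc).__name__,
            "exception_message": str(exc),
            "stack_trace": "".join(
                traceback.format_exception(type(exc), exc, exc.__traceback__)
            ),
            "session_id": session_id,
            "timestamp": time.time(),
        },
    }

try:
    1 / 0  # simulate a production bug
except ZeroDivisionError as e:
    event = capture_exception(e, distinct_id="user_42", session_id="sess_abc")
```

With `session_id` on the event, "show me the replay where this error fired" is a lookup rather than a cross-tool investigation, which is the core of the Error Tracking → Session Replay step.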

The logic of each step:

Future expansion (roadmap dependent)

As APM and tracing ship, the path extends: Logging → APM → Tracing, completing the full observability stack. Position this honestly: name the vision, be transparent about what's available today vs. what's coming.

Business impact of solving the problem

Observability data connected to product analytics is a moat. Every other observability tool (Datadog, Sentry, New Relic) can tell you "this endpoint threw an error." Only PostHog can tell you "this error affected 500 users, 30 of whom were in the middle of checkout, resulting in an estimated $15k in lost revenue this week." That's a fundamentally different conversation with engineering leadership.
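
The "$15k in lost revenue" figure above is the kind of back-of-envelope calculation this framing enables. A minimal sketch, where the inputs (average order value, abandonment assumption) are illustrative assumptions rather than PostHog-reported metrics:

```python
def estimated_revenue_impact(users_mid_checkout: int, avg_order_value: float,
                             abandon_rate: float = 1.0) -> float:
    """Rough revenue impact of an error that interrupts checkout.

    All inputs are illustrative assumptions for a sizing conversation,
    not metrics PostHog reports directly.
    """
    return users_mid_checkout * avg_order_value * abandon_rate

# 30 users hit the error mid-checkout; assume a $500 average order value
# and, pessimistically, that all of them abandon:
loss = estimated_revenue_impact(30, 500.0)
# loss -> 15000.0
```

The arithmetic is trivial; the point is that only a tool holding both the error event and the conversion funnel can supply both inputs.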

Session Replay as error context is a killer feature. Sentry shows you a stack trace. PostHog shows you the user's actual experience. For frontend and full-stack debugging, this is dramatically faster for reproduction and resolution.

Consolidation play for accounts already using PostHog. If they're already on PostHog for analytics or flags, adding Error Tracking and Logging means one fewer vendor (Sentry, Datadog) to manage. The consolidation saves money and reduces context-switching.

This use case has the highest growth ceiling. The observability market is enormous (Datadog alone is $25B+). Our story gets stronger with every product we ship in this space.

Personas to target

| Persona | Role Examples | What They Care About | How They Evaluate |
|---|---|---|---|
| SRE / Platform Engineer | SRE, Platform Eng, Infrastructure Eng | Reliability, alerting, mean time to resolution, not getting paged at 3am | "Will this catch issues before users report them? How fast can I triage?" |
| Backend Engineer | Backend Eng, API Engineer, Server-side Eng | Stack traces, log correlation, reproducing bugs efficiently | "Can I see what happened on the server when this error fired?" |
| Product Engineer | Full-stack Eng, Frontend Eng | User-facing bugs, reproduction, understanding the user impact of errors | "Can I see the user's session when this error happened?" |
| Engineering Manager | EM, VP Eng, Director of Eng | Team velocity, incident metrics (MTTR, error rates), cost of observability tooling | "How does this reduce our incident response time? What does it cost vs. Sentry/Datadog?" |
| Founder (early stage) | CTO, first engineer | Catching bugs before users complain, not paying Datadog prices | "Does this work out of the box and is it affordable?" |

Signals in Vitally & PostHog

Vitally indicators this use case is relevant

| Signal | Where to Find It | What It Means |
|---|---|---|
| Error Tracking is active but low product count | Product spend breakdown | They've started with errors. Full Observability expansion path available. |
| Customer mentions Sentry or Datadog in notes | Vitally notes / conversations | Competitive displacement opportunity. Consolidation pitch. |
| High Session Replay usage with error-related viewing patterns | Product usage data | They're using replay for debugging already. Error Tracking formalizes this. |
| Engineering-heavy user base, no PM users | User list in Vitally | Engineering-first account. Observability and Release Engineering are the primary use cases. |

PostHog usage signals

| Signal | How to Check | What It Means |
|---|---|---|
| Error tracking exceptions growing week over week | Product usage metrics | They're instrumenting more of their stack. Good adoption signal. |
| Session Replay filtered by error events | Replay usage patterns | They're connecting replay to error debugging. The integration is clicking. |
| High error volume but no alerting configured | Error tracking settings | They're collecting errors but not acting on them. Help them set up alerts. |
| Product Analytics queries referencing error events | Saved insights | They're starting to connect errors to business impact. Encourage this. |

Command of the Message

Discovery questions

Negative consequences (of not solving this)

Desired state

Positive outcomes

Success metrics

Customer-facing:

TAM-facing:

Competitive positioning

Our positioning

Competitor quick reference

| Competitor | What They Do | Our Advantage | Their Advantage |
|---|---|---|---|
| Sentry | Error tracking, performance monitoring, session replay | Deeper product analytics integration; business impact context; flag/experiment connection; better pricing | More mature error tracking features; broader language support; larger install base; dedicated performance monitoring |
| Datadog | Full observability: APM, logs, metrics, errors | Product analytics integration; session replay depth; much cheaper | Complete observability stack (APM, traces, metrics); enterprise-grade; massive ecosystem |
| New Relic | Full observability: APM, logs, errors, distributed tracing | Product analytics integration; session replay; simpler pricing | Complete observability stack; mature enterprise features |

Honest assessment: Our Observability story is credible but incomplete. Error Tracking + Session Replay + Logging is a meaningful starting point, and the connection to product analytics is genuinely differentiated. But we don't have APM or tracing yet. We can't position PostHog as a full Datadog replacement today. The honest pitch is: "For error tracking, we're better than Sentry because of the user context. For full observability, we're building toward it, and in the meantime, the product analytics connection gives you something no other observability tool offers." Be transparent about what's available today vs. what's on the roadmap.

Pain points & known limitations

| Pain Point | Impact | Workaround / Solution |
|---|---|---|
| No APM or tracing yet | Can't replace Datadog for teams that need full backend observability | Be honest about the roadmap. Position PostHog as complementary for now: errors + replay + analytics in PostHog, APM in their existing tool. The consolidation play gets stronger as we ship more. |
| Logging is beta | Teams expecting production-grade centralized logging may find gaps | Set expectations on maturity. For teams with existing logging (ELK, Papertrail), PostHog logging complements rather than replaces initially. |
| Error Tracking language/framework support may lag Sentry | Sentry supports a very wide range of languages and frameworks | Check Error Tracking docs for current support. For unsupported frameworks, generic exception capture via the API may work. |
| No built-in on-call/incident management | Teams wanting PagerDuty-style incident workflows won't find it here | PostHog alerts can trigger webhooks to PagerDuty, Slack, etc. Error Tracking is about detection and context, not incident management workflows. |

Getting a customer started

What does an evaluation look like?

Onboarding checklist

Cross-sell pathways from this use case

| If Using... | They Might Need... | Why | Conversation Starter |
|---|---|---|---|
| Error Tracking only | Session Replay | They see stack traces but can't reproduce the user experience | "You can see the error. Want to see exactly what the user was doing when it happened?" |
| Error Tracking + Session Replay | Logging | They have frontend error context but need backend logs | "You can see the user's session. But what was happening on the server at the same time?" |
| Error Tracking + analytics correlation | Product Intelligence (for the product team) | They're connecting errors to user impact. The product team would benefit from the same analytics. | "You're measuring error impact on users. Has your product team seen what they can do with funnels and retention in the same platform?" |
| Error Tracking (engineering in PostHog) | Release Engineering (same engineering team) | Engineering is in PostHog for errors. Feature flags for safe releases is a natural add. | "You're tracking errors after releases. What if you could gate features behind flags and roll back without a deploy?" |
| Error Tracking for AI features | AI/LLM Observability | Traditional error tracking misses AI quality regressions | "You're catching exceptions, but are you catching when your model starts giving worse answers? That's a different kind of 'error.'" |

Internal resources

Appendix: Company archetype considerations

| Archetype + Stage | Framing | Key Products | Buyer |
|---|---|---|---|
| AI Native — Early | "You're shipping fast and breaking things. PostHog catches errors and shows you the user's experience when they hit a bug. No Sentry bill required." Error Tracking + Session Replay is the sweet spot. | Error Tracking, Session Replay | CTO, founding engineer |
| AI Native — Scaled | "Your AI features have failure modes that traditional error tracking misses: hallucinations, slow responses, quality regressions. PostHog catches the technical errors AND lets you evaluate output quality." Bridge to AI/LLM Observability. | Error Tracking, Session Replay, Logging, AI Evals | VP Eng, Platform Lead, SRE |
| Cloud Native — Early | "Stop finding bugs from user complaints. Error Tracking catches exceptions automatically, and Session Replay lets you see exactly what happened. 100K exceptions/month free." | Error Tracking, Session Replay | CTO, founding engineer |
| Cloud Native — Scaled | "Your team is juggling Sentry, Papertrail, and Datadog. PostHog consolidates error tracking, logging, and user context into the platform you already use for analytics." Consolidation pitch. | Error Tracking, Session Replay, Logging, Product Analytics | VP Eng, SRE Lead, Platform team |
| Cloud Native — Enterprise | "Multiple teams, multiple services, and incident context spread across 5 tools. PostHog gives you errors + logs + user sessions + business impact in one platform. No more switching between Sentry, Datadog, and Amplitude during an incident." | Full Observability stack + Enterprise package | VP Eng, Director of SRE, Platform leadership |

Product Intelligence

Growth | Source: https://posthog.com/handbook/growth/use-case-selling/product-intelligence

What is the job to be done?

"Help me understand what users do, why they do it, and what to build next."

This is our bread and butter. Most accounts start here. The risk is they stay here as a single product analytics customer and never expand. The opportunity is that Product Intelligence naturally creates demand for the other use cases once teams start acting on what they learn.

What PostHog products are relevant?

Adoption path and expansion path

Entry point

Usually Product Analytics. Customer starts tracking events, builds dashboards, creates their first funnel. Then they hit the ceiling of quantitative data alone: "I can see that users drop off, but not why."

Primary expansion path

Product Analytics → + Session Replay → + Surveys → + Experiments → + Revenue Analytics → + Workflows / Product Tours

The logic of each step:

Alternate expansion paths

B2B accounts with Group Analytics: B2B SaaS companies almost always need company-level analytics alongside user-level. If they're B2B and not using Group Analytics, that's a significant upsell opportunity. Group Analytics lets them answer "which companies are most engaged" not just "which users."

Starting from Session Replay: Some accounts come in through Session Replay first (debugging, QA, customer support use cases). They realize they need Product Analytics to quantify what they're seeing qualitatively. The expansion path reverses: Replay → Analytics → Surveys → Experiments.

Product teams that ship AI features: If the product has AI components, AI Evals can proactively surface where users are struggling based on output quality. This bridges Product Intelligence into AI/LLM Observability.

Business impact of solving the problem

This is the use case with the largest existing install base. Most PostHog accounts start with Product Analytics. The expansion opportunity isn't convincing them to adopt PostHog. It's convincing them to go beyond a single product and use the full Product Intelligence stack.

The Workflows and Product Tours close-the-loop story is powerful. You identify a drop-off point (analytics), you understand why users leave (session replay, surveys), and now you can actually fix it by guiding users through the right path (product tours) or re-engaging them when they disengage (workflows). That's a complete insight-to-action cycle that no competitor offers in one platform.

Product Intelligence creates demand for other use cases. Once the product team is deep in PostHog, they pull in the growth team (Growth & Marketing use case) for acquisition and activation. Once they're running experiments, engineering gets involved in rollouts (Release Engineering). This is the gateway use case.

Personas to target

| Persona | Role Examples | What They Care About | How They Evaluate |
|---|---|---|---|
| Product Manager | PM, Senior PM, Head of Product | Feature adoption, retention, user journeys, proving impact to leadership | "Can I see which features drive retention and prove ROI to my VP?" |
| Product Engineer | Full-stack eng on a product team | Fast instrumentation, reliable data, not maintaining a data pipeline | "How fast can I instrument this and how reliable is the data?" |
| UX Researcher | UX Researcher, Design Lead | User behavior patterns, qualitative + quantitative, session-level detail | "Can I watch real user sessions filtered by the cohort I'm studying?" |
| Designer | Product Designer, UX Designer | How users interact with new designs, A/B testing UI changes | "Can I see the before/after impact of my design changes?" |
| Founder (early stage) | Founder, CTO at seed/Series A | All of the above. Finding product-market fit. Speed. | "Does this help me figure out what to build next?" |

Signals in Vitally & PostHog

Vitally indicators this use case is relevant

| Signal | Where to Find It | What It Means |
|---|---|---|
| Product Analytics is the only paid product | Product spend breakdown | Classic single-product account. Full expansion path available. |
| High insight/dashboard creation per active user | Engagement metrics | Product team is actively using PostHog for analysis. They're ready for deeper tools. |
| Session Replay is free-tier only or not used | Product usage data | They're doing quantitative analysis without qualitative context. Session Replay is the obvious next step. |
| B2B company without Group Analytics | Company type + product spend | Major upsell opportunity. B2B companies need company-level analytics. |
| Multiple PM or design roles in the user list | User list in Vitally | Product team is in PostHog, not just engineering. This use case is live. |

PostHog usage signals

| Signal | How to Check | What It Means |
|---|---|---|
| Funnels and retention insights being created regularly | Saved insights | Product team is actively measuring conversion and retention. Ripe for Experiments. |
| Session Replay enabled but low viewing rate | Replay settings vs. replay views | They've turned it on but aren't using it. Needs onboarding or a nudge to connect it to their analytics workflow. |
| No experiments running despite active analytics | Experiments list | They're identifying problems but not testing solutions. Experiments is the next conversation. |
| Dashboards shared across multiple users | Dashboard sharing settings | They're collaborating on insights. Good health signal and potential for team expansion. |
| High event volume, low survey usage | Product usage metrics | They have the traffic to run surveys but haven't started. Low-hanging cross-sell. |

Command of the Message

Discovery questions

Negative consequences (of not solving this)

Desired state

Positive outcomes

Success metrics

Customer-facing:

TAM-facing:

Competitive positioning

Our positioning

Competitor quick reference

| Competitor | What They Do | Our Advantage | Their Advantage |
|---|---|---|---|
| Amplitude | Product analytics, cohorts, experiments | Broader platform (replay, flags, surveys, workflows); better pricing; open source | More mature ML features (predictions, audiences); larger enterprise install base |
| Mixpanel | Product analytics, funnels, retention | Broader platform; no sampling; replay + surveys + flags included | Some teams prefer the UX; strong mobile analytics |
| Hotjar | Session replay + basic surveys | Engineering-grade analytics alongside replay; experiments; flags | Simpler UX for non-technical users; purpose-built for UX research |
| Heap | Auto-capture product analytics, session replay | Also auto-capture, plus flags, experiments, surveys, workflows | Retroactive analytics (virtual events) is a strong pitch |
| Pendo | Product analytics + in-app guides | Deeper analytics; experiments; open source; better pricing | More mature in-app guides; stronger enterprise PM workflow features |

Honest assessment: Our strongest position is the breadth of the platform. No competitor offers analytics + replay + surveys + experiments + workflows + product tours in one tool. We're weaker against Amplitude in very large enterprises where their ML features and enterprise sales motion are more mature. We're weaker against Hotjar/Pendo for non-technical product teams who want a simpler, more opinionated UX. Our sweet spot is technical product teams at companies with engineers who value depth, flexibility, and not paying for 5 separate tools.

Pain points & known limitations

| Pain Point | Impact | Workaround / Solution |
|---|---|---|
| Product Tours is alpha, limited customization | Teams with complex in-app onboarding needs may hit walls | Position as the integrated option. For advanced tooltip/modal UX, keep a dedicated tool (Appcues, Pendo) and use PostHog for analytics + experimentation. |
| Workflows is new, less mature than dedicated engagement tools | Teams expecting Braze-level email sequencing will find gaps | Position as behavior-driven automation, not a full lifecycle marketing replacement. Complement with existing tools via Data Pipelines. |
| No built-in heatmaps | Some UX teams expect heatmaps as part of the qualitative toolkit | Session Replay provides more context than heatmaps (full session vs. aggregated click positions). Toolbar provides some click-map functionality. |
| Learning curve for non-technical PMs | PMs used to Amplitude's guided UX may find PostHog's flexibility overwhelming initially | Lead with PostHog AI for querying. Build pre-configured dashboards during onboarding. Start with simple funnels and retention, not HogQL. |

Getting a customer started

What does an evaluation look like?

Onboarding checklist

Cross-sell pathways from this use case

| If Using... | They Might Need... | Why | Conversation Starter |
|---|---|---|---|
| Product Analytics only | Session Replay | They see the numbers but not the why | "You can see 40% drop off at step 3. Want to watch what's actually happening?" |
| Product Analytics + Session Replay | Surveys | They're forming hypotheses from replays and want direct user input | "You're watching sessions and seeing confusion. Want to ask users directly what's tripping them up?" |
| Product Analytics + Surveys | Experiments | They've identified problems and want to validate fixes | "You know the problem. Let's test whether your proposed fix actually works before building it fully." |
| Experiments running | Revenue Analytics | They're testing changes but measuring proxy metrics, not revenue | "Your experiment improved conversion by 15%. But did it actually increase MRR?" |
| Analytics + Experiments mature | Workflows + Product Tours | They know what works and want to operationalize it | "You proved the new onboarding flow works. Now let's guide every new user through it automatically." |
| Product team in PostHog | Growth & Marketing (for the growth team) | Product team is in PostHog. Growth team should be too. | "Your PMs are using PostHog for product decisions. Has the growth team seen what they can do with funnels and experiments for conversion optimization?" |
| B2B account, no Group Analytics | Group Analytics add-on | B2B companies need company-level analytics | "You're tracking individual users. But do you know which companies are most engaged and which are at risk?" |
| Product team using flags for experiments | Release Engineering (for the eng team) | Engineering is implementing flags for experiments. They can use them for releases too. | "Your engineers are already deploying feature flags for experiments. Have they considered using the same infrastructure for all their releases?" |

Internal resources

Appendix: Company archetype considerations

| Archetype + Stage | Framing | Key Products | Buyer |
|---|---|---|---|
| AI Native — Early | Product Intelligence looks different here. There's no UX researcher. A GTM engineer or founding PM is looking at funnels, activation rates, and conversion. Frame it as "understand what makes users stick" not "deep behavioral research." | Product Analytics (funnels, retention), Session Replay, Experiments, PostHog AI | Founder, founding PM, GTM engineer |
| AI Native — Scaled | Starting to formalize the product function. May have a dedicated PM. AI Evals becomes relevant as a bridge: evaluating AI output quality is product intelligence for AI products. | Product Analytics, Session Replay, Surveys, Experiments, AI Evals, Revenue Analytics | PM, Head of Product, AI Product Lead |
| Cloud Native — Early | First real analytics investment. They need to find product-market fit. Speed matters. Don't overwhelm with features. Start with funnels and retention, add replay and surveys as they mature. | Product Analytics, Session Replay, PostHog AI | Founder, first PM, product engineer |
| Cloud Native — Scaled | Dedicated product team with PMs, designers, maybe UX researchers. They want depth: cohort analysis, retention by feature, experiment velocity. Workflows and Product Tours become relevant for operationalizing insights. | Full Product Intelligence stack. Group Analytics if B2B. | Head of Product, VP Product, UX Research Lead |
| Cloud Native — Enterprise | Multiple product teams, multiple workloads. The play is expanding PostHog from one team to many. Standardization and governance matter. RBAC (Enterprise package) becomes relevant. | Full stack + Group Analytics + Enterprise package | VP Product, CPO, product ops |

Release Engineering

Growth | Source: https://posthog.com/handbook/growth/use-case-selling/release-engineering

What is the job to be done?

"Help me ship faster without breaking things, control who sees what, and validate that changes actually work."

What PostHog products are relevant?

Adoption path and expansion path

Entry point

Usually Feature Flags. Engineering team wants controlled rollouts. Common entry scenarios:

  1. Progressive rollout: Team wants to ship a risky change to 5% of users, monitor, then expand gradually. Feature flags give them the gate; they quickly want metrics to know when it's safe to expand (Experiments).
  2. Kill switch: After a bad deploy that took hours to roll back, engineering wants instant off-switches for new features. Feature flags are the answer.
  3. Growth team bridge: The growth team wants to run an A/B test on the signup flow. Experiments requires Feature Flags, which requires engineering to implement. Engineering gets pulled into PostHog through the growth team's request. (See the Growth & Marketing playbook for this entry path.)
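
The progressive rollout in scenario 1 works because flag evaluation is deterministic per user: the same user always lands in the same bucket, and widening the rollout only adds users. As a rough sketch of that bucketing idea (a simplified illustration, not PostHog's actual flag-evaluation algorithm; the flag key and user IDs are made up):

```python
import hashlib

def in_rollout(flag_key: str, distinct_id: str, percentage: float) -> bool:
    """Deterministically bucket a user into a percentage rollout.

    Hashing flag_key + distinct_id maps each user to a stable point in
    [0, 1), so the same user always gets the same answer for a given
    flag, and raising `percentage` only ever adds users to the cohort,
    never flips existing users off.
    """
    digest = hashlib.sha1(f"{flag_key}.{distinct_id}".encode()).hexdigest()
    bucket = int(digest[:15], 16) / 0xFFFFFFFFFFFFFFF  # first 60 bits -> [0, 1]
    return bucket < (percentage / 100)

# Ship a hypothetical "new-checkout" flag to 5% of users.
enabled = [u for u in (f"user_{i}" for i in range(10_000))
           if in_rollout("new-checkout", u, 5)]
print(f"{len(enabled) / 10_000:.1%} of users enabled")
```

Expanding from 5% to 25% is then just editing the percentage on the flag; no redeploy, and the original 5% cohort stays enabled throughout.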

Primary expansion path

Feature Flags → + Experiments → + Session Replay (for debugging and rollout validation)

The logic of each step:

Alternate expansion paths

Starting from Experiments (growth-driven): The growth team wants to A/B test, which requires engineering to implement flags. Engineering discovers they can use the same flag infrastructure for all their releases. This is the reverse entry: growth team is the catalyst, engineering becomes the power user. The growth team stays in Growth & Marketing; engineering lands in Release Engineering.

AI product teams: After a prompt or model change, engineering wants to verify quality hasn't regressed. AI Evals catches regressions that traditional error tracking misses. This bridges into AI/LLM Observability.

Business impact of solving the problem

This is a different buyer than Product Intelligence. Release Engineering targets engineering managers, platform teams, and individual developers. In most organizations, these are separate from the product analytics buyer (PM). Selling to engineering unlocks a parallel revenue stream from the same account. Two budget holders, two champions, much stickier account.

Feature Flags in the codebase are sticky. Once feature flags are integrated into the release workflow and embedded in production code, they're very hard to rip out. This isn't a dashboard someone stops logging into. It's infrastructure that engineering depends on for every deploy. This makes Release Engineering accounts among the most defensible in our book.

The tight integration between flags and experiments is genuinely differentiated. LaunchDarkly has flags but weak experimentation. Standalone experimentation tools (Statsig, Eppo) have experiments but aren't integrated with the broader analytics platform. PostHog connects flags → experiments → product analytics → session replay in one tool.

Experiments + Feature Flags create the multithreading bridge. When growth wants to experiment and engineering implements the flags, both teams are in PostHog. This is one of the best ways to get multithreaded in an account if you aren't already.

Personas to target

| Persona | Role Examples | What They Care About | How They Evaluate |
|---|---|---|---|
| Engineering Manager | EM, VP Eng, Director of Eng | Release velocity, incident rate, rollback time, team productivity | "Will this make my team ship faster with fewer incidents?" |
| Platform Engineer | Platform Eng, DevEx, Infrastructure | Developer experience, flag management at scale, API reliability | "How does this scale to thousands of flags? What's the API latency?" |
| Individual Developer | Senior Eng, Staff Eng, Product Engineer | Fast to implement, doesn't slow down CI/CD, good SDK quality | "How many lines of code to add a flag? Does the SDK suck?" |
| Founding Engineer | CTO, first engineers at early-stage startup | Speed, simplicity, not paying for LaunchDarkly's enterprise pricing | "How fast can I set this up and how much does it cost?" |

Signals in Vitally & PostHog

Vitally indicators this use case is relevant

| Signal | Where to Find It | What It Means |
|---|---|---|
| Feature Flags is the primary or only paid product | Product spend breakdown | Engineering-first account. Full Release Engineering expansion path available. |
| High flag evaluation volume, low experiment count | Product usage data | They're using flags for rollouts but not measuring impact. Experiments is the next conversation. |
| Customer mentions LaunchDarkly in notes | Vitally notes / conversations | Competitive displacement opportunity. They may be paying LaunchDarkly prices for flags alone. |
| Engineering-only users (no PMs or marketing) | User list in Vitally | Engineering-first adoption. Release Engineering is the primary use case. Product Intelligence is the cross-sell. |

PostHog usage signals

| Signal | How to Check | What It Means |
|---|---|---|
| Feature flags created frequently but no experiments | Flag list vs. experiments list | They're using flags for rollouts but not measuring impact. Low-hanging Experiments adoption. |
| Flags with high evaluation volume | Flag usage metrics | Flags are in production, integrated into the codebase. High stickiness. |
| Session Replay enabled but not filtered by flag variant | Replay usage | They're recording sessions but not connecting them to rollout debugging. Onboarding opportunity. |
| Multiple flags per user/team | Flag list + creators | Multiple engineers are using flags. Good health signal and potential for team-wide adoption. |

Command of the Message

Discovery questions

Negative consequences (of not solving this)

Desired state

Positive outcomes

Success metrics

Customer-facing:

TAM-facing:

Competitive positioning

Our positioning

Competitor quick reference

| Competitor | What They Do | Our Advantage | Their Advantage |
|---|---|---|---|
| LaunchDarkly | Feature flags, targeting, enterprise flag management | Experiments included; analytics integration; session replay; far better pricing | More mature enterprise flag management; larger feature set for complex targeting rules; bigger enterprise install base |
| Statsig | Feature flags + experimentation + analytics | Broader platform (replay, surveys, workflows); open source | Purpose-built for experimentation; strong warehouse-native story; more advanced statistical methods |
| Eppo | Warehouse-native experimentation | Broader platform; doesn't require a data warehouse; integrated replay | Warehouse-native means they use your existing data; more advanced statistical methodology |
| Split.io | Feature flags + experimentation | Broader platform; better pricing; integrated analytics | More mature enterprise integrations |

Honest assessment: Our strongest position is against teams paying LaunchDarkly prices for flags alone and not getting experiments included. The "flags + experiments + analytics in one platform" pitch is genuine and saves money. We're weaker against teams that need very complex flag management at enterprise scale (LaunchDarkly's core strength) or teams that want warehouse-native experimentation (Eppo's pitch). Our sweet spot is engineering teams that want the full loop: flag a feature, measure its impact, debug issues with replay, all in one tool.

Pain points & known limitations

| Pain Point | Impact | Workaround / Solution |
|---|---|---|
| Flag management UX is simpler than LaunchDarkly's | Enterprise teams with hundreds of flags may want more organizational features | PostHog flags work well at scale. For very complex targeting, review the multivariate flags and payloads documentation. |
| No built-in flag approval workflows | Some enterprise teams want PR-style review before a flag goes live | Use existing code review processes (flags are in code). PostHog audit logs track changes. |
| Statistical methodology is Bayesian | Teams preferring frequentist methods may push back | Bayesian is faster to reach conclusions and easier to interpret. For teams that insist on frequentist, this is a real limitation. |
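
To make the Bayesian point concrete for customers: instead of a p-value, a Bayesian readout answers "what's the probability variant B is actually better?" directly. A minimal Beta-Binomial sketch of that calculation (illustrative only, with made-up conversion counts; this is not PostHog's implementation):

```python
import random

def prob_b_beats_a(conv_a, n_a, conv_b, n_b, draws=20_000, seed=42):
    """Estimate P(variant B's true conversion rate > variant A's).

    Uses uniform Beta(1, 1) priors updated with the observed
    conversions, then compares posterior samples. The result is the
    kind of "probability B is best" figure a Bayesian experiment
    readout reports, in contrast to a frequentist p-value.
    """
    rng = random.Random(seed)
    wins = 0
    for _ in range(draws):
        a = rng.betavariate(1 + conv_a, 1 + n_a - conv_a)
        b = rng.betavariate(1 + conv_b, 1 + n_b - conv_b)
        wins += b > a
    return wins / draws

# Hypothetical results: A converts 120/1000, B converts 150/1000.
print(f"P(B beats A) = {prob_b_beats_a(120, 1000, 150, 1000):.2f}")
```

The practical upside, as the table notes, is interpretability: "there's a ~97% chance B is better" is a statement stakeholders can act on without a statistics lesson.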

Getting a customer started

What does an evaluation look like?

Onboarding checklist

Cross-sell pathways from this use case

| If Using... | They Might Need... | Why | Conversation Starter |
|---|---|---|---|
| Feature Flags only | Experiments | They're gating features but not measuring impact | "You're rolling out features safely. But do you know if they're actually working? Experiments are included with your flags." |
| Feature Flags + Experiments | Session Replay | They're measuring impact but can't debug qualitative issues | "Your experiment shows the control winning. Want to watch what users in the losing variant are actually experiencing?" |
| Feature Flags (engineering-driven) | Product Intelligence (for the product team) | Engineering is in PostHog. Product team should be too. | "Your engineers use PostHog for releases. Has your product team seen the analytics? They could track feature adoption and retention without a separate tool." |
| Feature Flags (for growth experiments) | Growth & Marketing (for the growth team) | Growth team initiated the experiments, engineering implemented the flags. Expand the growth side. | "Your growth team started the experiments. Have they explored Web Analytics and Marketing Analytics for attribution?" |
| Feature Flags + Experiments | Error Tracking / Observability | They're catching issues via experiments but want proactive error detection | "You're catching regressions through experiments. Error Tracking would catch exceptions before they show up in your metrics." |
| AI product releasing prompt/model changes | AI/LLM Observability | They need to detect quality regressions that error tracking won't catch | "After your last prompt change, did output quality hold up? AI Evals would tell you automatically." |

Internal resources

Appendix: Company archetype considerations

| Archetype + Stage | Framing | Key Products | Buyer |
|---|---|---|---|
| AI Native — Early | "Ship fast, break nothing. Feature flags let you deploy AI features to a subset of users and measure quality before going wide." AI Evals is especially relevant here. | Feature Flags, Experiments, AI Evals | CTO, founding engineer |
| AI Native — Scaled | "Your engineering team is growing and releases are getting riskier. Feature flags give everyone a safety net, and experiments make sure every change is measured." | Feature Flags, Experiments, Session Replay | VP Eng, Platform Lead |
| Cloud Native — Early | "Stop doing all-or-nothing deploys. Ship behind a flag, measure the impact, roll back in one click if something breaks." Speed and simplicity matter. | Feature Flags, Experiments | CTO, founding engineer |
| Cloud Native — Scaled | "Multiple teams shipping to the same product. Feature flags give each team independent release control. Experiments ensure changes are measured, not just shipped." | Feature Flags, Experiments, Session Replay | VP Eng, EM, Platform team |
| Cloud Native — Enterprise | "Standardize your release process across teams and BUs. Feature flags + experiments give you a consistent framework for safe, measured releases at scale." Governance (audit logs, RBAC) matters here. | Feature Flags, Experiments, Session Replay + Enterprise package | VP Eng, Director of Platform, DevEx Lead |

Use-case selling

Growth | Source: https://posthog.com/handbook/growth/use-case-selling/use-case-selling

We sell products. Customers buy solutions.

When we pitch "add Surveys," it sounds like we're trying to increase their bill. When we pitch "here's how to close the loop on why users drop off," it sounds like we're solving their problem. Same product. Different framing. Very different conversion rate.

Use cases are how we sell. Products are how we bill. A use case is a discrete problem a team is trying to solve, supported by a combination of PostHog products. Billing, metering, and packaging don't change. What changes is how we talk about it, how we organize around it, and how we measure adoption.

Each use case has a full playbook with discovery questions, competitive positioning, expansion paths, objection handling, and onboarding checklists.

The seven use cases

| Use case | Job to be done | Core buyer |
|---|---|---|
| Product Intelligence | "Help me understand what users do, why they do it, and what to build next." | PMs, designers, product engineers, founders |
| Release Engineering | "Help me ship faster without breaking things." | Engineering managers, platform teams, developers |
| Observability | "Help me know when things break, understand why, and fix them fast." | SREs, platform engineers, DevOps |
| Growth & Marketing | "Help me understand what drives acquisition, conversion, and revenue." | Growth engineers, marketing leads, CRO, GTM engineers |
| AI/LLM Observability | "Help me understand how my AI features perform, what they cost, and how users interact with them." | AI/ML engineers, AI PMs, AI founders |
| Data Infrastructure | "Help me unify product data with business data and get it where it needs to go." | Data engineers, analytics engineers, product ops |
| Customer Experience | "Help me quickly understand what happened, identify the problem, and verify a fix." | Support leaders, engineering leads, CS leaders |

Product coverage matrix

| Product | Primary use case | Secondary use cases |
|---|---|---|
| Product Analytics | Product Intelligence | Growth & Marketing, AI/LLM Obs, Customer Experience |
| Session Replay | Product Intelligence | Release Engineering, Observability, AI/LLM Obs, Customer Experience |
| Feature Flags | Release Engineering | |
| Experiments | Release Engineering | Product Intelligence, AI/LLM Obs, Growth & Marketing, Customer Experience |
| Error Tracking | Observability | AI/LLM Obs, Customer Experience |
| Surveys | Product Intelligence | Growth & Marketing, Customer Experience |
| Web Analytics | Growth & Marketing | |
| Marketing Analytics beta | Growth & Marketing | |
| Revenue Analytics | Growth & Marketing | Product Intelligence |
| Workflows | Growth & Marketing | Product Intelligence |
| Product Tours beta | Growth & Marketing | Product Intelligence |
| LLM Observability | AI/LLM Obs | Customer Experience |
| AI Evals | AI/LLM Obs | Product Intelligence, Release Engineering |
| Data Warehouse | Data Infrastructure | |
| Data Pipelines / Batch Exports | Data Infrastructure | Growth & Marketing |
| PostHog AI | Horizontal (all) | |
| Logging beta | Observability | Customer Experience |

Playbook structure

Every use case playbook follows the same sections, so TAMs know where to find what they need:

  1. Job to be done
  2. Relevant PostHog products (with doc links)
  3. Adoption and expansion paths
  4. Business impact
  5. Personas to target
  6. Signals in Vitally & PostHog
  7. Command of the Message (discovery, negative consequences, desired state, outcomes, metrics)
  8. Competitive positioning
  9. Pain points & known limitations
  10. Getting a customer started (evaluation scope, onboarding checklist)
  11. Objection handling
  12. Cross-sell pathways to other use cases
  13. Internal resources
  14. Company archetype considerations

Not running out of money

Handbook Front Door | Source: https://posthog.com/handbook/finance

Stay calm and default alive

We don't optimize for short-run revenue growth, but we do make sure we have enough money to never feel dependent on future fundraising.

If we average 5% MoM growth, we are default alive (i.e. we'll become profitable before we run out of capital). If we average 7.5%, we'll hit $100m by the end of 2026.
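
The difference between those two rates compounds quickly. A quick sketch of the arithmetic (the $18m starting ARR below is a hypothetical figure for illustration, not PostHog's actual revenue):

```python
import math

def growth_multiple(monthly_rate: float, months: int) -> float:
    """Revenue multiple after compounding `monthly_rate` for `months`."""
    return (1 + monthly_rate) ** months

def months_to_multiple(monthly_rate: float, target_multiple: float) -> int:
    """Months of compounding needed to reach `target_multiple`."""
    return math.ceil(math.log(target_multiple) / math.log(1 + monthly_rate))

# 7.5% MoM compounds to ~2.4x per year, ~5.7x over two years.
print(f"1 year at 7.5% MoM:  {growth_multiple(0.075, 12):.2f}x")
print(f"2 years at 7.5% MoM: {growth_multiple(0.075, 24):.2f}x")

# So a hypothetical company at $18m ARR would need about two years
# of 7.5% MoM to reach $100m:
print(months_to_multiple(0.075, 100 / 18), "months")
```

The same helpers make the gap visible: 5% MoM gives roughly 1.8x per year versus 2.4x at 7.5%, which is why a couple of percentage points of monthly growth dominate the multi-year outcome.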

Maintaining a strong financial position helps us optimize for long-term revenue growth. For example, we've removed products and revenue for long-term gains.

Fundraising principles

Rule #1: Never have to fundraise – and only fundraise if all the following are true:

How do we spend it

PostHog grows by shipping, whereas most software companies grow linearly with the number of salespeople they hire.

The advantage of our approach is that it's more efficient – $1 spent on product will _forever_ improve things, unlike investing $1 in cold calls. We can easily choose to be profitable if we simply stay default alive and let revenue grow "automatically" based on the product we have already shipped.

The disadvantage is that scaling an engineering team is, in our opinion, harder than scaling a sales team. Because engineers' work overlaps heavily, there is more complexity in getting this right. We may not be able to grow beyond a certain rate, no matter how much we spend.

The final disadvantage is that it's harder to predict how fast we'll grow compared to a company that grows by hiring salespeople with targets, so it takes more thought and often requires more faith!

Future

Handbook Front Door | Source: https://posthog.com/handbook/future

TL;DR: Mid term, it's $100 million ARR by 2026, working backwards from there. Longer term, outcompete top-down competitors worth $50 billion. If we get that far, we'll have helped _tens of millions_ of engineers build better products.

Will PostHog sell?

What motivates us is building an epic product and company.

We're excited by:

We're not excited by:

$100M by 2026

We want to hit $100 million in annual revenue by the end of 2026.

We've set this goal because it's ambitious and keeps us accountable for some kind of financial output, and working backwards, we can do it. We need around 7% monthly revenue growth to hit this.

Secondaries over selling

Everyone at PostHog has options in the company – that means they can be a shareholder. This keeps everyone focused on our long-term interests.

However, at different stages of the company's life, the value of each person's stock may be hundreds of thousands, millions, or even tens of millions of dollars. If someone doesn't have much capital but has $10 million in stock options, they'll start wanting us to sell.

As a result, we aim to let people sell some of their stock when we fundraise and once their stock has very significantly increased in value. This helps keep everyone focused on building something bigger and longer term. What we can offer will depend on what we can negotiate every time we fundraise, but this is our general philosophy.

How you can help

Handbook Front Door | Source: https://posthog.com/handbook/help

People who work at PostHog have come from all sorts of different backgrounds – large companies, small companies, agencies, single-founder startups, and so on.

Their ways of working necessarily differ from each other's and from PostHog's, but we expect you to work in some fairly specific ways.

Being the transparent company that we are, we want to let you know about those expectations, because you can only meet expectations when you are aware of them!

Generally, our values are a great place to start, as is the handbook page on culture, but here are a few specific ways to apply those values and reinforce our culture as we grow:

Getting yourself up and running quickly

If you're new, the default goal is to be able to get work done autonomously.

This will require:

Everything else is a means to this end! We often do onboarding in person to accelerate all the above. This usually takes around a week.

Ask for help, but only after you've tried first

If you've just joined us, there's a lot you probably don't know. That's okay! However, we _do_ expect that you try to help yourself. Here's a framework to use as a guide:

You can also try self-serving an answer in our #ask-max Slack channel. It's trained on our handbook and documentation, so it can answer questions about internal processes and procedures as well as product-related questions.

If you don't get the context you're looking for, try #ask-posthog-anything, where team members are happy to point you in the right direction. Take a moment to explain how you've tried to help yourself, and link to the resources you checked. That saves others valuable time searching the docs again, or typing up a suggestion to do just that.

Don't expect perfection

PostHog is a startup. As solid as our stack / product / CI / dev experience is for a company of our size (super solid, tbh), it might not be the extremely-well-oiled machine you had at BigCo. If something doesn't Just Work, follow the framework above to get help.

We're all human - you shouldn't expect perfect adherence to our culture, either. But you should help others learn how to stick to our culture, especially new joiners. We're all prone to occasional lapses, and it takes everyone on the team nudging each other in the right direction to keep us all on track. If you notice something happening all the time, take it upon yourself to make it better - see the next section!

Make it better

If you run into something that is confusing or needs fixing, we expect you to update the docs or handbook at minimum, and if you're keen, definitely improve the experience yourself. For example, CI is _everyone's_ job. If it sucks, fix it.

That being said, there is often a _reason_ why things are the way they are. That reason might be "because no one wanted to fix it," but it also might be "because it broke yesterday and we're on it" or "we've carefully considered this before and decided to make it this way."

We encourage you to step on toes, but don't be a bull in a china shop. Context is oftentimes your best friend – gather it up and keep it close.

Don't wait for someone else

We expect you to be proactive about answering questions in your domain, even very early on after you are hired – e.g. after the first week. Look in the code. Read the docs. Find the answer.

Being wrong is way better than being silent – if you are wrong, someone will correct you. If you are silent, you're not doing your job.

Similarly, if you need something to get done, you are responsible for making _sure_ it gets done. This is not your team lead's job or some other team's job - if you need it, you own it. _Most_ of the time this means doing it yourself (see section on helping yourself above); other times it means getting the right people together to understand the urgency and do it with you. But at the end of the day, the responsibility rests on you.

Have an opinion

You definitely don't need to have opinions on everything, but you should absolutely have opinions on your area of expertise.

If you don't have opinions on your area, you are realistically then just waiting for someone to tell you what to do, which is very much at odds with our autonomous way of working.

Opinions can take a bit to form, and that's okay – you don't need to have them on day one. But we expect you to start forming them rather early on, even if it's just on little things.

Look around corners

We expect you to be thinking through not only the one change you're making right now, but also how that change plays out down the road. What might happen with this code / process / thing in 6 months? Where will that leave my change today?

We do have more senior people on the team (both in industry experience and in their tenure at PostHog), but they shouldn't be the only ones looking ahead – you should be the primary one looking ahead for _your_ changes.

Don't assign issues to people

You can list and categorize issues. If you want someone to see an issue, @mention them and/or Slack them the link.

Don't yolo merge

Do not "yolo merge" – i.e. force a change to our website or platform without someone else checking it. This should _only_ happen in emergencies, _even_ for simple changes – we find issues _so_ frequently. If you have _any_ doubt, get someone else to look at it first.

PRs > issues > Slack

Bias for action. If you can just pick up the work, do so. We want a culture of individual contribution, _not_ of delegation.

It is fine (and encouraged) to pick up side quests, or to deviate from your goals if you think you should. Especially if something is a quick fix, do it yourself as part of our value that _You're the driver_.

If you aren't able to make a change yourself, create an issue in GitHub. Avoid simply relaying to-dos in Slack as a means of getting someone to pick up a task. It's hard to track and easy to forget.

Do things as publicly as possible by default

For discussions, public repos are the best place. Then private ones, then Slack public channels, then Slack private channels or DMs. This is part of our _"Make it public"_ value, and helps with general context setting for the wider team, which means everyone can work more autonomously.

There are only a few exceptions to what we share publicly – for example, if you are discussing security concerns, specific customers (for privacy reasons), or revenue and growth numbers (since these can cause signalling issues with investors or competitors).

Internally, _everything_ can be shared apart from people issues – such as HR or personal matters (e.g. recruitment or health data).

Be proactive with community questions

Don't _only_ help the community when you're the person on support hero in your small team. No matter what your goals may be, if you can quickly ship fixes to real-life user problems, you'll build goodwill, word-of-mouth growth, and a better product all in one swoop.

You can find these questions at posthog.com/questions.

And if you don't work here...

Apply for a job at PostHog!

How we get users

Handbook Front Door | Source: https://posthog.com/handbook/how-we-get-users

Over 100,000 users have signed up to PostHog.

Most companies build their product with a particular user in mind. We build _everything_ around our ideal customer profile.

So when it comes to marketing and sales, we are optimizing for developer experience.

Why we're like this

We've met a lot of successful founders in our space who are full of regret, despite leading companies with well over $100m in annual revenue! The one regret they _all_ had in common was letting go of their growth engine (people recommending their product to each other) and getting focused on sales. They all wound up exiting. That's why they told us this stuff.

The way this pans out? As they got bigger, they gradually shifted from building for users toward building for buyers, hoping to optimize for revenue growth. Over time, that killed their word-of-mouth growth, which caused them to have to work harder for each sale. So they got more salesy, and so on. They became companies that focused on making a bunch of money _by_ building a product, instead of being companies focused on building a great product.

We won't make this mistake.

For us, marketing is creating useful content

Our marketing team is small. Way, _way_ smaller than our competitors'. Winning on volume of content is out of the question, so we'd better win on quality.

This constraint has worked out pretty well for us. Distribution is pretty easy when the thing you're working on is good enough to generate word-of-mouth growth – and this helps build an enduring developer brand.

We even hire full-stack developers into our marketing team to make sure we can cover the full depth needed in a lot of our tutorials, docs, and posts.

Things you won't find our marketing team doing: removing information from our website to increase conversion, focusing on paid ads, or letting colleagues ship content they aren't proud of.

We happily spend lots of money on our website

Most companies call it their "marketing website". You already know it's going to be crappy.

We treat our website as a product. With real investment. When we were just a couple of people, we realized that our website _is_ our sales team – since our users would want to self-serve as much as possible.

When we started out, we also realized that all our competitors had crappy marketing websites.

And, as with so much that we do, we get an _increasing return_ on quality. If we do things noticeably better than everyone else, then we're remarkable. That results in word-of-mouth growth.

We make it extremely easy for you to buy PostHog

Most sales teams do a bunch of low-quality cold outbound that harms their company's reputation and is ignored 99.999% of the time. And then once they've got you on a call, they pepper you with MEDDPICC questions before actually letting you see the product. Who knows, maybe three meetings later they'll share some pricing too!

We do things a little bit differently. Customers _buy_ from us, we don't _sell_ to them. It means we can instead invest our money in shipping more (and better) products, at lower prices than our competitors, to provide a sustainable advantage.

Fun fact: the total spend we have on marketing and sales per customer we acquire pays itself off within 3 months of them signing up for a paid plan. "Best in class" is considered to be one year...!

How we make money

Handbook Front Door | Source: https://posthog.com/handbook/how-we-make-money

We make money from those that have it and like our products. We don't make money from those that don't.

How we do sales is based on the best experience for our Ideal Customer Profile

I cannot think of any group harder than developers to convince, via a cold call or email, to buy software. We should focus on inbound.

All the other rules here are based on what we felt would be the best experience for an engineering customer, whilst allowing us to grow revenue in the long run.

Don't let pricing get in the way

Before a user has decided to buy the product, we should let them try it for free. Not only does this mean they can immediately self-serve without having to get budget internally, it also reduces the need for a large sales team to convince them otherwise. When someone is looking for a solution, they are ready to install it – but only if we can get out of the way commercially.

Once a user likes the product, we don't want to create a big decision around continuing to expand their usage with us. (For example, if we suddenly charged a large recurring price per month.) Instead, we charge a tiny fraction of a cent for each extra event they send.
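To make the shape of this per-event model concrete, here is a minimal sketch of usage-based billing with a free allowance. All numbers (the allowance and the per-event rate) are hypothetical, chosen purely for illustration – they are not PostHog's actual pricing:

```python
# Hypothetical usage-based pricing sketch -- not PostHog's real rates.
FREE_EVENTS = 1_000_000       # hypothetical monthly free allowance
PRICE_PER_EVENT = 0.00005     # hypothetical: a tiny fraction of a cent per event

def monthly_bill(events_sent: int) -> float:
    """Charge only for events beyond the free allowance."""
    billable = max(0, events_sent - FREE_EVENTS)
    return round(billable * PRICE_PER_EVENT, 2)

print(monthly_bill(800_000))     # entirely within the free allowance
print(monthly_bill(3_000_000))   # 2M billable events at the per-event rate
```

Because the marginal cost of each extra event is tiny, expanding usage never forces a big budget decision – the bill grows smoothly with usage rather than jumping at a plan boundary.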

Charge based on what people use, and give users control

Some users want to start with just a little usage of one product. Others replace five products with us. We should price to reflect this. We believe it's better to have a little extra pricing complexity to provide a much better value option, than an "all-in-one" price.

We charge by product _and_ by usage of those products that people need.

Beyond which products they use, we look for other ways to give users control, such as spending limits on session recordings.

These principles mean customers will spend less than they otherwise would have, _but_ it means they'll stick around. We don't want users to churn because they're unhappy with what they're spending; we want them to better manage how they use the platform.

Match the cheapest for each individual product

We can make it up by selling other products to the customer over time. This way, it's always a no-brainer to pick PostHog, we get as much word-of-mouth growth as possible, _and_ our single product competitors can't compete since they have nowhere to go.

Principles for dealing with big customers

The most important thing here is to remain focused on building the best product, not on what a single big customer needs.

Enduringly low prices

Handbook Front Door | Source: https://posthog.com/handbook/low-prices

We want our customers to spend their money on their engineering team, not on buying ten software products.

Here is the list of advantages we have and why they matter.

We can sell multiple products to the same people

So, do you want to buy ten products for $1k each or all ten for $5k? Or, better yet, each one separately?

We can pull this off because we're focused on getting in first – we don't follow the whims of whatever an enterprise may have.

No sales needed

Our competitors spend more on sales and marketing than product development. Nearly all our sales are self-serve _and_ 70% of our customers find us through word-of-mouth growth.

Multiple products, one dataset

We aim to be the source of truth for customer and product data. The products we build all work from this same dataset, instead of ten different vendors all paying to store the same data as each other – each with their own platform teams.

A technical audience who need _docs_, not technical support

Many of our products are traditionally sold to non-technical people, who need more help setting up SDKs or snippets. We work with engineers who simply need good docs. Writing and maintaining those (and an open-source codebase that people can inspect, or even fix bugs in themselves) saves thousands of support questions each year.

Using open-source technology

We've often had second-mover advantages.

One of these is that we could use the latest open-source technologies, like ClickHouse. Many of our competitors have had to build their own databases. You can guess which is more efficient for storing tens of billions of events and serving millions of analytics queries. Better them than us!

How we make users happy

Handbook Front Door | Source: https://posthog.com/handbook/making-users-happy

User happiness is fundamentally important. How do we achieve this?

Building products that people want

First, someone internally will suggest an idea. Sometimes this will come from James and Tim, but it has, just as frequently, come from anyone else on the team.

If it requires a new team to build it – which it usually will – we'll start by hiring an ex-founder who is technical. We'll onboard them into the existing team that has the most overlap. This helps get them used to working with our codebase, as well as with the culture we look for from each team.

That person builds the MVP, and the only goal is to figure out if anyone will use it. With some products, the MVP may have more scope if we feel especially confident. Once the new product is in a non-embarrassing state (that won't harm our brand), we add pricing to it and put it on our website. This drives more demand. At this stage, the goal is to get the product to product-market fit in PostHog's platform, which means working with customers until we have five delighted, paying customers.

Once all this is done – which we'd expect to take a few months – we can start to innovate. This usually means some kind of platform play, such as extending the product to enhance everything else we're working on, or shipping another new product that would work well with it.

Engineers talk to users and provide support

You should be _as close as possible_ to your users, feeling whatever they feel, so you have as much information as possible to make the product great.

For established products with a lot of usage questions (how do I create an insight that does X, for example), Customer Success helps with support.

Before a new product is even made, we'll add it to our public roadmap. Once it ships, we'll use our own tools to get customer interviews, feedback, and data, and we'll always aim to "close the loop" with users - coming back with: a pull request, a GitHub issue they can follow in the open, or an explanation of why we can't make a feature they've asked for.

This means the product improves, users are impressed and recommend us to others, _and_ we show users that we listen, encouraging them to keep going through this loop with us, faster and faster.

How we got here

Handbook Front Door | Source: https://posthog.com/handbook/story

Things that influenced us

Books

Other companies

Handbook

Like many things at PostHog, this handbook has scrappy origins.

Tim and James were planning on launching on Hacker News, and wanted to look as mature as possible. We felt that few people would want to use a flaky new startup's product seriously. So we asked ourselves: how do we signal that we're mature?

We looked around at some big, boring companies and realized they all had huge footer sections on their websites with lots of links! How do we produce a lot of content to add to our footer when the product, at that time, was so simple? The answer: we should write up how we want to work.

Once we started writing the handbook, we realized it would transform our company. Every team member, and even strangers on the internet, could suggest changes. If you're doing something in public, you're going to think it through better. Ultimately, it made us treat the company as our product.

It's a classic example of getting information by doing, rather than by planning too carefully.

Timeline

January 2020: The start

PostHog was founded by James and Tim on January 23, 2020.

We started working together on a startup in August 2019. Our first idea was to help engineers manage technical debt. It didn't work out, but we realized the power of treating growth as an engineering problem. We also knew that many engineers struggle to understand the impact they have on the people who use what they build.

There are plenty of product analytics tools out there, but all the alternatives are SaaS-based. While they're powerful, they can be frustrating for developers. From our perspective, these tools can be problematic because:

February 2020: Launch

We got into Y Combinator's W20 batch, and, just a couple of weeks after starting, realized that we needed to build PostHog.

We launched on Hacker News with our MVP, just four weeks after we started writing code. PostHog was our sixth idea – we had been pivoting almost once a month for half a year. Boy were we relieved!

The response was overwhelmingly positive. We had over 300 deployments in a couple of days. Two weeks later, we'd gone past 1,500 stars on GitHub.

Since then, we've realized we weren't just onto a cool side project, we were onto what could be a huge company. It turned out there were a lot of developers who liked us who wanted a better choice, built for them.

April 2020: $3m seed round

After we finished Y Combinator, we raised a $3.025m seed round from Y Combinator's Continuity Fund and 1984 Ventures.

As we started raising, we started hiring. We brought on board Marius, Eric and James G.

May 2020: First 1,000 users

We kept shipping, people kept coming!

October 2020: Billions of events supported

This was a major update – PostHog started providing ClickHouse support. Whilst we launched based on PostgreSQL, as it was the easiest option to ship quickly, this enabled us to scale to billions of events.

November 2020: Building a platform

We realized that our users, whether startups, scale-ups, or enterprises, have simple needs across a broad range of use cases in understanding user behavior.

PostHog now supported product analytics, feature flags, and session replays.

December 2020: $9m Series A

We kept growing organically and took the opportunity to raise a $9M Series A, topping our funding up to $12M led by GV (formerly Google Ventures).

Our focus remained firmly product, engineering and design oriented, so we increased our team in those areas.

We now had employees in ten countries, and it still felt like day one.

Everyone takes a mandatory two weeks off over Christmas to relax.

June 2021: $15m Series B

We raised a $15m Series B a little ahead of schedule, led by existing investor Y Combinator.

We're now focused on achieving strong product-market fit with our target segment in 2021.

Our team had grown to 25 people in 10 countries.

September 2021: Product-market fit achieved for PostHog Scale

We achieved product-market fit for our open-source product and PostHog Scale.

Our revenue quickly rose as a result. Now we needed to optimize it.

We were 30 people in 12 countries.

January 2022: Sales comes from our team, not our founders

We hired two Customer Success experts to deal with all inbound requests. We also hired two more engineers, since most questions customers have are technical.

December 2022: 6x revenue growth

We had a fantastic year. While the tech market crashed, we grew 6x and reached millions in revenue, with a sub-two-month CAC payback period. We set $10m ARR as our next goal, with a gross margin of 70% – both of which should mean we've got all the metrics needed for the next fundraise.

We optimized revenue growth by implementing a product-led CRM for our customer success team, adding to our content team size, and creating a two-person growth engineering team. These teams all make a big difference!

We deepened all of our product areas significantly – we frequently win deals as a standalone session recording, feature flagging or experimentation tool. Session recording usage started to match product analytics usage.

Our infrastructure is far more stable and scalable – much more of it runs as code. We can now offer EU- or US-based hosting for our customers' data.

We're now 38 people in lots of countries. We're not adding lots of headcount over the next 12 months, though. We're staying lean and letting revenue continue to rise rapidly.

February 2023: Focus on mass adoption

We're doing well at monetizing high-growth startups due to our optimization work, averaging over 15% MoM revenue growth for the last six months.

We've decided to double down on mass adoption of the platform in high potential startups instead of focusing on enterprise. Simply, this will better help us increase the number of successful products in the world. As a result, we've removed support for paid self-hosted deployment and are doubling down on our open source and cloud projects. We have released a free tier of PostHog.

We went from "product analytics with some extra stuff thrown in" to "Product OS" and started charging for session replay separately.

In the product, we're working on making the experience slicker, and we have plans for a standalone quality CDP in Q2.

March 2023: Decided to ship a warehouse

For a long time, we were happy competing with lots of $1-2 billion companies, each providing point solutions. We felt our market was just the sum of all of theirs.

But we kept seeing companies streaming their PostHog data to a warehouse - such as BigQuery. We even lost our then-largest customer for this reason - where their source of truth became their warehouse instead of PostHog.

So we decided we would ship our own warehouse, enabling us to remain the source of truth for customer and product data. This would let us offer a better integrated service to our customers, and meant we could work on a bigger challenge.

August 2023: Growth continues

We've doubled revenue so far this year without any increase in headcount. We've hit 15.7% MoM for the last 12 months. Our CAC payback is now just five days. Our numbers are exceptional. We even discounted several of our products. We've added ten extra roles and will be profitable in around a year.

We have user surveys and the data warehouse in private beta. Other products are being positioned as first-class products of their own (AKA "The Great Unbundling"). This means we can make it clearer for new folk to get what we do, give more ownership (which means more speed) to our own teams, and we can compete on commercials more effectively.

Our infrastructure has become pleasingly stable. The biggest challenge is scaling our data pipeline, and making sure we give as much responsibility as we can to each small team owning each product for their own pipelines, where rational to do so.

October 2023: Winning the internet

We are often mentioned as an alternative to product analytics tools.

We are winning the internet when we get more of this for our _other_ products. We don’t have to win everything, but we need to get into the comparison each time. This is _starting_ to happen, but to win the Internet, we need to see this happening daily.

Multiple products are still early, like the warehouse, ETL, surveys (used a lot, but not yet paid), feature flags and experimentation (first revenue), CDP (pipelines being rebuilt, webhooks next), notebooks, and web analytics.

There is a lot of supporting work that needs to be done. This includes:

January 2024: Well, that was good

That was quite the year.

We wound up quadrupling our revenue, but only increasing our net headcount by three people in 2023. Last year, we validated that we could get multiple products to product-market fit (like feature flags and experimentation).

We built more integrations between our products like HogQL, notebooks, CDP, and the data warehouse.

Now, we are doubling down. We shipped a lot in Q4 2023, but every product could be improved a lot. We're caring about the craft of our products:

Products aren't limited to what engineers are working on in the app; they include what customer success, marketing, and ops are working on too. Everything can be considered a product.

Each team should be aiming to feel proud of what they've built by the end of the quarter.

April 2024: We're now the default for startups

54% of the first Y Combinator batch this year adopted PostHog.

Tim and James turned up to talk at batch events and we were surprised at the number of groupies wearing PostHog merch – our merch is really cool now, we've gone way beyond the logo-on-a-black-t-shirt standard.

As far as we can tell, we're in the top three products used by YC companies.

July 2024: Price cuts ftw

We cut pricing by up to 80% for our two most popular products, including for our existing customer base. This was popular with users and led to faster growth.

We've started doing growth reviews for almost every product we have. We run through each product's metrics (revenue/usage/support/performance) and feedback / reasons for any churn that has happened, so we can truly treat each small team like a startup. This session is designed so the engineering team leads may choose to reprioritize work, or not.

October 2024: 100,000 customers, and speeding up – more products and more people

We hit 100,000 customers, either paying or free, and over a quarter of a million users. We've started hiring a lot faster as growth has continued this year. We're now 65-ish people with ~9 products.

We've added some people in sales, but it is strictly (i) sales assist, talking to people that have asked to speak to us, and (ii) cross sell to existing customers.

We do _not_ do outbound, so we can remain efficient and either hire more engineers or cut our pricing for our customers so more of them recommend us!

We've hired a sales engineer super early (Mine, she's awesome) and we're really working on the culture in this team proactively.

Strategy-wise, we're just leaning into our basic three principles, which we're seeing more and more evidence are working well:

  1. All the tools in one – We want to go wider still. We think we can provide _every_ piece of SaaS that startups use, starting with those closest to customer data. We want to expand to a customer support product, and to the marketing and sales stacks of tools too.
  2. Get in first – Don't go upmarket. We're closing enterprises regularly, but we're not trying that hard here. We're trying to stay away from complex migrations for users who use many products already.
  3. Be the source of truth – Our own data warehouse is now available and very popular.

Revenue is in the low $10s of millions of ARR. We're very strongly default alive and will struggle to not end up profitable next year. Every time we get close to being profitable, we start speeding up hiring.

Revenue growth is fast enough and we're getting so many unprompted offers for investment (that we aren't taking) that money isn't really a meaningful constraint anymore. Whilst we have a great grip on each product's individual performance, our understanding of cross sell is a little weak, so we're working on that now.

Our marketing is getting weirder. It's more and more fun. We've commissioned a puppet, coming in January. Watch this space. Our newsletter, Product for Engineers, now has 20,000 subscribers and it's growing fast.

We're realizing that the more ambitious we are, the easier it gets – customers get excited, investors get excited, employees get excited. We can now see a real path to being a $100bn+ company and changing how software teams work industry-wide.

Strong team

Handbook Front Door | Source: https://posthog.com/handbook/strong-team

_You're the driver_ is one of our values. 90% of a startup's problems are solved by just having the right group of people in ~~the building~~ Slack.

Personality traits that cause people to be successful here

Genuine builders. Some people do jobs for the money. Those that have truly found their passion are _far_ stronger.

Easy to work with. People who are low ego, flexible, energetic, and upbeat will raise those around them. We often, but don't exclusively, hire those with more experience since it's easier for them to contribute meaningfully. Things can and do get _very_ hard here – whether it's scaling, shipping complex products, handling a stream of support requests, or trying to ship something that touches multiple teams. We need those who won't get disheartened, and will collaborate, iterate, and ship their way out of anything. We proactively reward those that do these things, not those that self-promote.

Will join us on the journey. Some people are inspirational to work with – they lift others up. We have a _huge_ opportunity at PostHog, and it often feels like we've caught lightning in a bottle. Anyone joining the company at this stage could make this the last job we all ever need. We want people that will push to get this done, for each other's sake. We don't hire mercenaries. We need to feel people here are producing the best work of their lives.

Drivers not passengers. Proactive people that can fully own projects and get them done (or make sure they get help) are what we need. For many of our roles, while it isn't a common job title, internally we have the concept of product engineers – people who can take high-level requirements, decide what to build, do so with customers, and keep iterating.

Great (and terrible) reasons to join us

Let's start with why you _should_ join:

Why you _should not_ join:

A small group of stronger people and compensation

When we raised our Series A, one of the first things we did was to make sure we didn't lose our existing team (at least for pay reasons!) _before_ we added more people to it. This is still true today – we proactively review everyone's pay three to four times a year and increase it if people have leveled up.

When it comes to churn due to pay, fairness is just as important as the absolute level. We do this in line with a transparent pay system that we even make public. We aim to pay generously and fairly between people.

For options, we offer the most generous terms possible as it feels like the right thing to do. We think this makes it as likely as possible people can see huge upside if we are successful (making it easier to raise _and_ more realistic that people will actually get money from them). That motivates everybody.

One of the hardest parts about building a high-performance team is letting people go when they aren't performing. We are decisive and do this faster than many others would. We offer four months' severance when we let people go for performance reasons, to give people more time to move on – and so it's easier for us to make a change if we need to.

While we will give direct feedback, if we don't see this being responded to quickly ahead of letting someone go, we will part ways, so people can find a job they are better suited to, and so we can find a team member better suited to the job. The end result is that everyone on the team is contributing meaningfully.

Growing the team beyond hiring

We hire insanely talented people to build products ourselves, but sometimes acquihires or acquisitions help us move faster by adding engineering capacity and expertise. These situations are rare so we’re often reactive with these - but we’ve set clear principles to make sure we handle them consistently.

Acquihiring

This is an efficient way to onboard great engineers without all the complexity of an acquisition. For us, an acquihire means closing down your old company as you and your team join PostHog purely as talent, which we will match up with our product teams where it makes sense.

Acquiring products

IP inside PostHog

By default, we are not interested in acquiring IP. If we can build something ourselves, we will. The only exception is IP with deep technical value we don’t believe we can replicate internally.

If we want a product inside PostHog but lack the domain expertise, we might acquire the team behind it - with the expectation your team rebuilds the product natively into our platform and migrates users. This would be the case where, without domain knowledge, projects might take us an unreasonable amount of time to ship or get deprioritized. Even then, we will only do this if the price is right. We generally won’t pay a premium as the value comes from the team’s expertise, not the legacy product.

IP outside PostHog

We do not acquire products that sit completely outside our platform (e.g. an IDE). Our strategy changes often, and owning something disconnected would create pressure to keep it alive.

The exception would be if the product technically lives outside the platform but directly enhances PostHog’s value (e.g. a new way to use PostHog data inside a terminal), where we may consider an acquihire or paying a premium.

Existing customers

We generally do not want to convert any existing customers you have into PostHog customers directly. They may be different from our ICP and put pressure on the team to build something different than what we would otherwise plan to offer.

Your customers are of course welcome to sign up for PostHog and use our existing products and new ones once they are launched, but we don't make promises to these customers about features or support for their existing workflows.

Acquisitions for marketing

We are not interested in acquiring companies just for their audience or marketing. That’s a distraction, and we’re confident in the strength of our own marketing team. If we want more marketing, we’ll invest in it directly.

Values

Handbook Front Door | Source: https://posthog.com/handbook/values

These are the principles for the behavior we care about.

You're the driver

We hire people that are really great at their jobs, and get out of their way. There are no deadlines, very minimal coordination and you won't have us breathing down your neck.

In return, we ask for extraordinarily high ownership. To succeed you need to be intrinsically motivated.

Great people at PostHog can take very high-level direction and ship fast, to find out as quickly as possible whether our plans survive contact with customers!

Being the driver means getting stuff done _yourself_. We've had non-technical people create hardware products, coding in C++; we've got designers who will write Tailwind and React rather than just create the file in Figma. Our salespeople answer technical questions without engineers as backup – and if they don't know the answer, they educate themselves more deeply for next time. We like people to go full stack to reduce the number of dependencies.

Building a company isn't a solo sport. We're Ted Lasso (although I've not watched to the end, I hope they win), not Wolf of Wall Street. We expect you to take high ownership of the _company_ and _your team_ being successful. This means when you see something wrong, you fix it or give direct feedback – it's not ok to watch your colleagues fail.

Make it public

We default to transparency with everything we work on. That means we make a lot of things public: our code, our handbook, our roadmap, how we pay (or even let go of) people, what our strategy is, and who we have raised money from.

Internally, a culture of transparency looks like managers telling you to raise feedback directly with the person it concerns instead of solving problems for you, it means changing teams around in public Slack channels, it means detailed financial information, live updates on fundraising and board slide access.

Being transparent externally helps us achieve our mission - we write about what we're working on so the world can take advantage of the lessons we're learning, and so they know how to work with us better. Knowing that thousands of people will read our handbook pages forces clearer thinking. And, for free, we can build trust in a way other vendors just choose not to.

There are a few things that we are internally transparent about, but that should not be shared publicly. Anything related to our company financials is strictly confidential and should not be shared externally, including our current revenue numbers, ACV, burn rate, etc. Anything in a public press release is fine to share!

Do more weird

So much about how we work is different.

Weirdness can just be the absurd lengths we are willing to go to. It can mean redesigning an already world-class website, for the 5th time. It can mean shipping _literally_ every product that relates to customer data, with teams of just one to five people competing with $200bn+ companies, successfully.

We aren't weird for the sake of it. We want the company perfectly optimized for our strategy. We have small teams when very few others do, because we are going to build 50+ products. We post billboards of our founders' faces because no one else is brave enough, so it stands out. Even the little things – like having _pricing_ on our pricing page!

We've even written a guide on how you can do more weird.

Why not now?

_Why not now?_ means getting things done _proactively_, _today_. You do not need consensus to do things – focus your energy on shipping what's most valuable for our customers and the company, then take ownership of making it happen, not on getting buy in from others. You certainly shouldn't wait until next quarter if your new idea makes more sense to work on than your previous goal.

We have learned the clearest lessons at PostHog by doing things, not by hypothesizing about them. If we're debating doing something, just trying it is the best way to learn. More planning is rarely the right way to figure out if something will work; doing the thing is the default answer here.

Sometimes this approach might mean you ship something that others don't agree with. You will need to be willing to throw away work sometimes, because the upside – not needing to get lots of approval to do stuff and being able to take more bets – means we all move so much faster that mistakes are a lot less costly.

_Why not now?_ doesn't just mean shipping huge product features. It may mean diving into a small customer support issue quickly to delight them – this is one of the main reasons people recommend us to others.

Optimistic by default

We have a lot of control over our direction, and we've been very well served by shooting for the best case scenario every time we make a decision. You'll hear us say things like "play offense, not defense", "how do we 10x this", "how do we win in 10 years' time". Aiming for the best possible upside and sometimes missing is much better than never trying.

This is especially true when we think of new ideas - any big new thing can sound pretty silly at first, almost by definition. You'll hear PostHog war cries like "we haven't built our defining feature yet, maybe this". It never is, but that's _exactly_ the point. What we've already done is less important than what we do next. If we make new ideas painful to share with others, they'll eventually stop coming.

At a simple level, we want to be surrounded by people that are enthusiastic, passionate and happy. PostHog is a group of people working together with a shared goal. A positive, encouraging atmosphere simply means everyone is going to have a lot more fun and will be able to stick around for the full adventure here.

Put more grandiosely, PostHog is wildly ambitious, and with that, a level of optimism is _required_. You cannot change the world without first _believing_ you can change the world. People not believing is probably a bigger deal than people not being able to.

Deciding which products we build

Handbook Front Door | Source: https://posthog.com/handbook/which-products

Providing all the tools in one is a core part of our strategy.

Shipping them in the right order is key to a fast return on investment from every new product.

How we pick new products

Until products are built and launched, it's hard to predict which ones will do well. Because of this, we want to be working on a mix of new products at any given time. Some we're very sure will do well, others might be more of a bet with a potentially big outcome. This guidance is therefore less prescriptive than it could otherwise be.

Products we know will work well if we ship them:

Products we're less excited about building:

How new products get built

Sometimes the Blitzscale team will decide a new product needs to be built. They'll find someone internally to run it, ideally someone who's been at PostHog for at least 6 months (we tried getting new people to ship new products, but they often struggled to ship quickly).

Other times you might have an idea for a great product we should build. In that case, use the New Product RFC template. You might choose to hack together a prototype of the product to demo and show off, which you should do! Blitzscale only needs to get involved if you want to start working on this product full time. At that point, we are choosing whether to invest a pretty serious amount of money into launching it, so we want to get that right.

For a complete walkthrough of the product lifecycle, see releasing new products and features.

Next products on deck

From our roadmap, here's what we're currently working on:

And these are the products we think we'll focus on next:

How to pick which feature within an existing product to build

In the early days, you'll be shipping the main few features that your category of product has as standard. In product analytics, this would be something like (1) capturing events, (2) trends, (3) funnels, (4) retention, and (5) person views.

Once this is done, you'll get a stream of feature requests and bug reports from users. You can't go too wrong if you listen to these and, by default, prioritize the ones that help us _get in first_. For example, with our data warehouse, we picked a multi-tenant architecture because we wanted startups to be able to get started free or at very little initial cost – even though a single-tenant approach would have given us an MVP faster. Sometimes, if sales are asking, you may choose to prioritize a feature for a big customer earlier, but you should never do this unless you would have shipped it at some stage anyway. Be cognizant of how often you do this, though, and whether now is the right time to be shifting your persona focus.

Later on, you can then _innovate_ several ways:

Who we build for

Handbook Front Door | Source: https://posthog.com/handbook/who-we-build-for

We define who we build for as the ICP (i.e. the company) and the Persona (i.e. the actual person using the product).

Our current ICP

AKA our ideal customer profile.

We build for the people building products at high-growth startups.

Marketing and customer success should primarily focus on this ICP, but should also develop **high-potential customers** – customers that are likely to later become high-growth customers (e.g. PostHog itself during YC). We should be in **maintenance mode for hobbyists**, such as engineers building side projects. We want to be the first tool that technical founders add to their product.

| &nbsp; | High-growth startup |
| --- | --- |
| Description | Startups that have product-market fit and are quickly scaling up with new customers, hiring, and adding more revenue. |
| Criteria | - 15-500 employees<br />- $100k+/month in revenue _or_ very large number of consumer users<br />- Raised from leading investors<br />- Not yet IPO'ed |
| Why they matter | - Able to efficiently monetize them<br />- Very quick sales cycle<br />- Act as key opinion leaders for earlier-stage startups/slower-moving companies<br />- Strong opinions on what they need – helping us build a better product |
| Examples | PostHog anytime from their Series B to IPO, Supabase, ElevenLabs |

Our current Persona

Persona is the job title or role of the person actually using a product in PostHog. Each team will focus more or less on different members of the product team. This is detailed on their team pages.

As companies get bigger, the type of person that uses a product changes. As an example:

Each product should start with a single persona, usually an early person (preferably engineer) at a startup. Teams should make sure to build a really good product with PMF for that single persona. As the product and user-base matures, new personas will emerge as users. You only serve that new persona if you've found PMF and satisfied requirements for the initial persona.

You still need to keep your initial personas happy too, which is tricky, but important as that initial persona is how we get in first.

How do you know if you have PMF and satisfied requirements? Look at churn. If the initial persona is churning from your product, you still have work to do to retain that persona before moving onto others. If instead the product has been handed off to another persona in the org, and _they_ are churning, that's an indication that you may need to start supporting the needs of this next persona.

We've not always been successful at building products for personas other than engineers. We're now at a stage where we need to be in order to continue growing.

Churn?

If a team does not currently support a persona, and that persona churns off of that product, we are okay with that, as long as it doesn't cause the customer to churn off of PostHog entirely. We should help those personas gracefully move off of PostHog. For example: we are okay with salespeople churning off to a CRM, and we'll provide ways to export PostHog data to those systems.

Why does PostHog exist? Our mission and strategy

Handbook Front Door | Source: https://posthog.com/handbook/why-does-posthog-exist

Our mission

Equip every developer to build successful products.

Why is that our mission?

Since the beginning, we've believed that engineers should be way more involved in making product decisions than they've been historically. In order to help them do that, we've built a collection of tools for engineers.

Similar tools to the ones we've built have existed for a long time, but they were always built with other users in mind. By building things like product analytics, session replays, feature flags and a data warehouse for engineers first, we give engineers the ability to make product decisions themselves. This massively increases the speed at which engineers can make good decisions.

The other way PostHog helps engineers is by combining _all_ the tools they need into one product. This avoids a ton of work integrating and linking up various products, both during initial setup and on an ongoing basis.

We try to help engineers from the very beginning, when their product is just being built. We do that by having generous free tiers, and no need to talk to sales to get started.

Our strategy

1. Be the source of truth for all product context

Building a successful product is hard; doing so when you don't understand your customers is even harder. It's wild that no one has already provided a complete record of everything engineers need to ship products. This has happened because the entire industry has focused on integration instead of consolidation.

Traditionally, as companies scale, their data warehouse becomes the source of truth, and non-warehouse native tools (like product analytics) become less relevant as engineers lose trust in the data they collect, simply because they are misused and divorced from the source of truth. Every company winds up with a huge mess of data spaghetti, with their business logic still spread across dozens or hundreds of tools.

We provide developer infrastructure - by providing _every_ tool engineers need in one place, we can:

2. Provide every tool engineers need to build successful products

We aim to offer every tool engineering teams need to debug, understand, and improve their products. From session replay for debugging to feature flags for safe deployments, we help engineers ship better code faster.

We can then get our AI to work across all of them together, whilst making every individual tool cheaper than the rest of the market - since we provide so many we can charge less. This means engineers get better tools at a fraction of the cost of piecing together solutions from multiple vendors.

3. Get in first

Since developers exist first in a startup, by getting in with them early, we are naturally upstream of every other tool they might have considered using. Although anyone can pick up our products (and lots of mature companies certainly do), this means we can best deliver developer infrastructure to early stage companies, and so should focus there by default.

_Once_ we land a customer, we then let them pull _us_ upmarket as they grow. But not before. We don't want to hire a big enterprise sales team and go upmarket before our existing customers are there. This keeps us efficient and able to stay focused on building tools that engineers actually want to use.

4. Automate the iteration process

Because we have all the context on both users and the product, we can automate large chunks of the cycle of shipping -> observing -> iterating.

Secret master plan

We're a wide company with small teams

Handbook Front Door | Source: https://posthog.com/handbook/wide-company

Part of our strategy is to provide all the tools in one for evaluating feature success.

Speed

This means we need to ship a _lot_ of products into one platform. We can see a need for at least 20. That's a lot of engineering work.

After we'd started hiring, we asked ourselves a question – how could we structure the company to optimize for speed above everything else?

I happened to go to an excellent talk by Jeff Lawson, the CEO of Twilio. It made me realize I should be asking, "Who ships more per person, a startup or an enterprise?" Clearly the former. So we structured PostHog like a series of startups.

Small teams

We decided that we should split PostHog into a series of small teams, each working like its own startup, fully owning at least one of our products.

As with any startup, the principles that govern these small teams are:

Minimal hierarchy

We deliberately keep the number of levels and people managers at PostHog to the absolute minimum we can get away with. This maximizes team member autonomy _and_ increases shipping speed, as you don't need to run things past a manager or wait to get something signed off the vast majority of the time.

This means that, if you need something or need to flag an issue, you are strongly encouraged to communicate _directly_ with the person or team working on the thing you care about. We want to avoid people going up and down the org chart via managers as much as possible. 90% of the time, this approach means you'll get what you need faster. 10% of the time, this might cause a tiny bit of confusion if what you are asking for doesn't beautifully align with that team's objectives. We believe that trade-off is ok - we'll figure it out.

We have a tiny exec team - this is what they are responsible for:

For _everything else_, you and/or your small team should be able to decide this or talk directly to the teams involved. This includes deciding which feature to build next within a particular product. We trust you to bring in the right people as you feel appropriate, relative to the scale of what you're doing.

PostHog is _not_ a good place for managers who are territorial and prefer for all communication to go through them for 'efficiency'. Over time, doing this would undermine autonomy and cause our best people to quit!

Titles based on what you do

Companies give out titles to people that primarily show how senior they are.

This means titles, as adopted by the wider world, imply that _seniority_ is more important than what people do. We do not believe that seniority should determine how decisions get made - people should own decisions in their area of the business. We trust every employee to fully own their area of the business.

When you are prompted to put your title somewhere like LinkedIn, please just say as clearly as you can what you are focused on. Please do not focus on how senior you are. Feel free to be weird with it.

In other words, instead of your title being "Senior Engineer at PostHog" (which is not a title that exists at PostHog anyway), it's actually "Product Analytics Engineer at PostHog."

Goal setting

When you build a startup from scratch, you are in an existential crisis. One day you might be building a gym, the next day a software product for accountants. The problem changes. At PostHog, we give each small team a product to build. (James and Tim focus on _which_ products we should build, as they often need sequencing.)

Once we had product-market fit, and we had reached 15 people or so, we realized we needed to set some kind of goals. We started by using OKRs as they're pretty standard.

However, one of our engineers one day told me, "I realized I needed to change my objective. Then I started rewriting my OKRs into the handbook. I realized I was spending time stressing about the wording of it, which was going to have zero impact on what I knew I had to build." That seemed silly, so instead we make a point of calling them just "goals". We intentionally don't sweat the wording.

Another best practice we choose to ignore is "goals should be output-driven". It sounds great in principle, but what happens after a product team (which is nearly every team here) sets an output-driven goal like "improve activation by 20%"?

Either the team will decide on some things it should build, or they won't manage to figure out what to build to do this. In either case, if a team knows what it should achieve, it should then figure out which things it needs to ship, and write those things down instead. It's clearer, and clearer is faster.

And if that list turns out not to be helping our metrics? Switch the goal to a new thing.

Building a world-class engineering environment

Handbook Front Door | Source: https://posthog.com/handbook/world-class-engineering

We know we've got to be quick to build all the tools in one. So we better have a world-class engineering environment that lets us build everything. How do we do that?

No product management by default

Engineers decide what to build. If you need help, our product managers (we have four today) will give you coaching.

If an engineer at PostHog believes they should work on X, they can build X. We'd prefer you ship ten things quickly (and make a couple of mistakes) than plan too much. You will tend to gather more information by _doing_ rather than _planning_.

There are _some_ exceptions - for example, where we need to work on architecture, but we leave it down to you to decide when you should plan more or just get started.

Transparency is fuel for autonomy

In nearly any company, having each engineer decide what to work on would fail. Why? They simply would lack enough context over what the company is aiming for, or what everyone else is up to.

PostHog is exceptionally transparent. You're reading our public handbook after all.

It starts with hiring

Finally, we hire people we think will flourish in an autonomous environment. We often hire people with broader rather than narrower skill sets, who are more flexible. They've often started (and often failed) their own startups. They're low-ego builders at heart who love innovating and working like this.

One of the things we've learned is that the very strongest engineers are usually those who want autonomy the most, so freedom is a great way to attract and retain world-class talent. Now that we're lucky enough to have people like this already here, people see PostHog as a destination company, further accelerating our access to some of the best people in the world at what they do.

A high percentage of our employees are engineers

If we want to ship a lot, we need to figure out how to have most of our capital go into engineering.

We have zero outbound sales, and a hyperefficient go-to-market, largely driven by self-serve. Since we focus on engineers, we have less customer support and set-up handholding than all our competitors.

80% of the company are shipping product.

Deep work

When you're doing engineering, you're in the business of building up large, abstracted models in your head of how the code works. That takes time and requires focus. Doing a ton of meetings is a great way to screw this up.

We therefore have meeting-free days every Tuesday and Thursday. We encourage you to call it out if things go into your calendar on these days. Since we are also all remote, this usually gives you long stretches of uninterrupted time to get your work done.

The only exceptions to this rule are for customer success and recruitment, who may need to have external meetings with users or candidates on these days in order to do their jobs.

Campaigns and coupons

Marketing | Source: https://posthog.com/handbook/marketing/campaigns-and-coupons

We run promotional campaigns with partners (e.g., newsletters, influencers) that offer exclusive benefits to their audiences via coupon codes.

How it works

  1. Campaign setup: Campaigns are created in Billing Admin with a strategy defining what benefits are granted (e.g., free addons, increased limits, credits)
  2. Code distribution: Coupon codes are exported as CSV and shared with the partner for distribution to their audience
  3. Redemption: Users visit /coupons/{campaign-slug} to redeem their code (requires paid PostHog subscription)
  4. Expiration: Benefits can automatically expire after the campaign period (e.g., 12 months)
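The four steps above can be sketched in a few lines of Python. Everything here is an illustrative stand-in – the `campaigns`/`codes` stores and the `redeem` helper are hypothetical names, not the actual Billing Admin schema:

```python
from datetime import datetime, timedelta

# Hypothetical in-memory stand-ins for the Billing Admin campaign store
campaigns = {
    "lenny": {"benefit": "free_addons", "duration_months": 12},
}
codes = {"ABC-123": {"campaign": "lenny", "redeemed": False}}


def redeem(code: str, has_paid_subscription: bool) -> dict:
    """Validate a coupon code and compute when its benefit expires."""
    entry = codes.get(code)
    if entry is None or entry["redeemed"]:
        raise ValueError("invalid or already-redeemed code")
    if not has_paid_subscription:
        # Redemption requires a paid PostHog subscription (step 3)
        raise ValueError("a paid PostHog subscription is required")
    campaign = campaigns[entry["campaign"]]
    entry["redeemed"] = True
    # Benefits can expire automatically after the campaign period (step 4)
    expires = datetime.utcnow() + timedelta(days=30 * campaign["duration_months"])
    return {"benefit": campaign["benefit"], "expires": expires}
```

A code can only be redeemed once, and the expiry is derived from the campaign's configured duration rather than stored per-code.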

Onboarding flow integration

When new users sign up via a campaign link (e.g., posthog.com/signup?next=/coupons/lenny), they're shown the coupon redemption page early in onboarding:

  1. User signs up with ?next=/coupons/lenny query param
  2. After signup, they're redirected to /onboarding/coupons/lenny instead of directly to /coupons/lenny
  3. They can claim the coupon or skip and continue to product setup
  4. After claiming/skipping, they proceed to the normal onboarding flow (use-case selection or products page)

This ensures new users see the coupon offer before diving into product configuration.

Note: Existing (already onboarded) users bypass this and go directly to /coupons/:campaign.
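The routing rules above can be sketched as a single decision function. `coupon_redirect` is a hypothetical helper for illustration, not PostHog's actual routing code; only the path shapes come from the handbook:

```python
from typing import Optional


def coupon_redirect(next_param: Optional[str], is_onboarded: bool) -> Optional[str]:
    """Decide where to send a user after signup (hypothetical sketch)."""
    if not next_param or not next_param.startswith("/coupons/"):
        return next_param  # no campaign involved – normal signup flow
    slug = next_param[len("/coupons/"):]
    if is_onboarded:
        # Existing users bypass onboarding and go straight to redemption
        return f"/coupons/{slug}"
    # New users see the coupon page early in onboarding, with a skip option
    return f"/onboarding/coupons/{slug}"
```

For example, a brand-new signup with `?next=/coupons/lenny` lands on `/onboarding/coupons/lenny`, while an already-onboarded user goes directly to `/coupons/lenny`.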

Example: Lenny's Newsletter

When launched, our partnership with Lenny's Newsletter offered their annual subscribers:

Redemption page: /coupons/lenny

Creating a new campaign

For technical implementation details, see the internal billing docs.

Co-marketing

Marketing | Source: https://posthog.com/handbook/marketing/co-marketing

PostHog complements a lot of other software companies. Since we’re active in the startup ecosystem and built around integrations, co-marketing opportunities come up naturally.

Who takes the lead with co-marketing?

Sales, engineering, or support will sometimes tag marketing into customer Slack channels where someone mentions co-marketing. There’s no obligation to say yes just because a partner is enthusiastic. If you’re unsure whether something is worth pursuing, ask in the #team-marketing Slack channel.

The list of partners we are currently doing or planning co-marketing partnerships with is maintained in this canvas.

If it does seem promising, a product marketer will take the lead and loop in events or other teams as needed.

What this article doesn’t cover:

Integrations and CDP destinations

We have a CDP with 50+ destinations and a data warehouse that connects to tools like databases, Stripe, and HubSpot. Any time we ship an integration, there’s a baseline level of co-marketing we should do:

If you have a new integration that deserves marketing support, the best way to get it is to ask in the #team-marketing Slack channel. The team will discuss it, and a specific product marketer will take responsibility for running co-marketing.

When to level up: Most integrations stop here. But if there's genuine opportunity, then it's worth doing more:

Typically whenever we are pursuing work like this with a partner, we work with them to reciprocate to their audience through their channels.

In some cases where there's big partnership potential, the partnership is of strategic significance, and the ICP is the same, feel free to explore a joint in-person event that gathers the communities of both partners and delivers value from both sides.

Like all PostHog marketing, co-marketing should be useful to the reader. A super simple way to signal compatibility without being promotional is to casually reference partner companies in docs and editorial. However, we avoid case studies where there isn't an interesting story, guest posts, and other marketing 'fluff' content.

For example, PostHog docs might say "If you're routing LLMs (e.g. via OpenRouter)..." while OpenRouter docs say "Track downstream behavior in PostHog..."

We do this out of goodwill anyway in blog posts like this. It helps readers and costs us nothing.

Enterprise integrations

We haven't done much co-marketing with enterprises like Slack or HubSpot because big companies typically move slowly, and we haven't prioritized it.

If you find yourself working with an enterprise integration partner that's actually responsive and interested in co-marketing, go for it. Just don’t let their timeline block other work.

PostHog customers

If a customer is a logo we’d proudly show on the site, represents who we build for, and is getting real value from PostHog, then a case study usually makes sense.

Examples:

Social media co-marketing for case studies naturally follows since most companies are excited to have their story featured. It's usually worth raising an art request for these opportunities.

We also will typically thank customers who participate in case studies and collab content by sending them a merch voucher. We're nice like that.

Startup and ecosystem partnerships

We already run a strong startup program. Accepted companies get $50K in PostHog credits plus access to partner benefits. This is one of the best types of co-marketing because it’s a simple value exchange: we help their users, they help ours.

However, we are very selective about which teams we partner with here because these partnerships usually offer outsized benefits to them. As a rule, we want to have no more than three such partners at once - and it's one-in, one-out.

Examples:

If we're signing anything with legal commitments, that needs to go via #legal. If it's an informal exchange of perks, you can usually just coordinate directly with the partner company.

If you’re giving PostHog credits, check with the to explore the options. See the campaigns and coupons handbook entry for more detail.

When a new startup perk goes live:

As a rule we don't commit to reporting sign-up performance to startup partners, as it just adds overhead and they should have their own methods of tracking. We also don't typically agree to rev share deals as part of this program, as it's a long tail activity.

More questions about startup program partnerships? Head to the project Slack channel.

Other ecosystem partners

Co-marketing goes both ways! PostHog’s startup program is promoted through partner channels like Stripe Atlas perks and the Fin Startup Pack.

We maintain a spreadsheet with most of our current partnerships. This also includes partnerships with VCs and PE firms. For the most part we do no co-marketing with these partners, though this may change. This spreadsheet doesn't list all VC partnerships via GetProven, as these are best tracked directly through that tool.

Co-sponsored events

Events are a great place to co-market and vary from intimate gatherings to large scale meetups. These are higher effort and don’t usually sit under product marketing alone. Tag Daniel early – he’s the best judge of what events and co-sponsorships will actually land.

Examples:

Virtual events can also work for co-marketing; we just try to avoid boring ones. For example, we participated in ElevenLabs' Worldwide Hackathon, which was rad.

Co-branded merch

Merch collaborations are cool, but should be rare. They require real work from the design team and need a clear purpose beyond “we’re partners now.”

A good example is the limited edition t-shirts we did with Supabase, which was way more fun than a press release.

If you think a merch collab makes sense for a co-sponsored event, use the art request template. Please give thought to distribution and lead-times and add this to the request.

What if the partner wants more?

Not every co-marketing play warrants maximum effort. If someone's pushing for more but the product overlap is thin, it's okay to suggest starting smaller. A changelog mention and social post can always expand into more later if there's real traction.

Customer case studies

Marketing | Source: https://posthog.com/handbook/marketing/customer-case-studies

What makes a good case study?

Case studies should make our users look smart, our products look useful, and PostHog look like a company people actually want to talk to.

Things we don't care about:

Things we do care about:

Case studies are typically owned by the . They live in /contents/customers/ and appear on posthog.com/customers. If you have a suggestion for who we should interview, let us know in the marketing channel.

Creating a case study

1. Identify the right customer

Start by asking the PM for that product. PMs do lots of user interviews and can suggest warm leads. You can also post in company Slack channels, but give some context for what you're looking for.

2. Make contact

Got a lead? Before reaching out, search for the company in Vitally. If they already have an assigned Account Executive or CSM, give that person a heads-up — they might already be working on something with the customer or have extra context on what to ask them about.

If there’s no one assigned in Vitally, you’re clear to go ahead and reach out directly.

Some customers have a dedicated Slack channel. If they do, that’s usually the fastest way to coordinate. Otherwise, send an email.

3. Lay the groundwork

Someone agreed to chat? Hooray! Make a GitHub issue to draft some questions, tag any relevant sales/CS people, and note if you’ll need artwork later.

4. Schedule the interview

Who you talk to for interviews doesn’t really matter. Speak to engineers, founders, PMs, or anyone who seems keen to chat. If you’re unsure who to interview, email a few people at the company and see who bites.

We use Calendly for scheduling external meetings, such as demos or product feedback calls. If you need an account, ask Charles to invite you to the PostHog team account.

How to be a good interviewer:

  1. Do some preliminary fact-finding (don't waste time asking general info about the interviewee's company and role)
  2. Come prepared with good, open-ended questions
  3. Relax and have a nice chat (30 minutes is plenty)

5. After the call

Trust your gut — if it feels like a good story, it probably is. Worst case, it’s still user feedback to pass on to other small teams.

If it is worth turning into a case study, draft a PR right away while it's still fresh. Ask at least one teammate to review it to catch any grammar mistakes (or really bad jokes).

Best practices:

6. Review and approval

PR looking good? Tag the customer in GitHub for review. You're not asking for copy edits – just a quick fact check. The legal and PR teams will sometimes want to be looped in for approval as well. They might also request using Google Docs instead of GitHub.

Do what you need to do. The goal is to get the rubber stamp.

If your draft might include anything private such as screenshots of customer dashboards, keep it in an internal repo like requests-for-comments-internal just to be safe.

7. Publish!

Most people are excited to be featured and will sign off quickly. If you need artwork to go with the case study, use the art or brand request template.

Once the case study is merged and live on the website, the last step is to send a merch credit to the participants as a thank you.

That's it - you did it!

Marketing Events

Marketing | Source: https://posthog.com/handbook/marketing/events

Want PostHog to be involved in your event? See how we do Community events. Want to start a co-working group for builders? Check out our Community incubator program and submit the form. If you'd like to add an IRL event to the events page, contact Daniel Zaltsman or Kliment Minchev.

We did 45 events in real life (IRL) in 2025, and we're just getting started. While we're 100% remote and set up to work asynchronously, we've found real benefits in getting together with users in real life. All our public events are showcased on the events page.

Events have to be focused on and valuable to our ICP. We prefer not to be a small fish in a big pond, hence we mostly pass on big conferences. And we prefer pull over push, so we gravitate towards content and formats that educate and activate while avoiding booths, badge-scanning, buying attendee lists, paying to speak, and webinars.

The event formats we prefer (and organize ourselves) fall into one of these:

All plans come together – from conception through to final delivery – in our event management tool, which centralizes owners, logistics, and feedback in one place.

Community incubator

We connect builders around the world by helping them start IRL micro-communities that gather for recurring co-working sessions. As we know from our own sprints, offsites, and hackathons, we can build a whole lot when we gather in person with other people who have a bias for action.

We have already seen how this format makes a higher impact on communities because of the velocity built over weeks and months of communal work, collaboration, and creativity.

Image: Philadelphia incubator

Taj leading his builder group in Philadelphia

Geographies

The pilot program started in tech hubs mostly in North America, the UK, and the EU. We now have communities in Austin, Philadelphia, Singapore, New York, Barcelona, and Lahore. At this time, we're open to groups starting in any city with a population of more than half a million people.

Co-working structure

The focus is on weekly, bi-weekly, or monthly gatherings with small groups of ~10 people. Gatherings typically take place during weekday evenings or weekends and go for about 3 hours.

Discuss → Build → Demo

We suggest starting with a roundtable discussion of the latest dev news and trends, then each builder can set their own goals for the session. Allow at least 2 hours for building, and then close out the session with demos to show off what you built. Outside of co-working, the group is encouraged to get together to connect for an AFK activity such as a walk, bike ride, hike, or local sightseeing.

Venues

The ideal venues for the community incubator are free-to-use spaces where a group can work comfortably (accessible, quiet, Wi-Fi-enabled). Typical venues include tech or VC offices with an available room, libraries, and co-working spaces. If you have a venue and want to host a builder group, reach out to Daniel directly.

Community events

Community events are in-real-life (IRL) manifestations of our mission, organized by enthusiastic partners and customers. They usually originate when someone has identified an interesting topic or problem set for an event and wants to help people move faster, smarter, and more together.

These are some of the event formats we're most actively pursuing:

What community events are not for:

Formulating a purpose and structure

All impactful events follow the principles of user-driven development, which stem from user problems or requests. Who is the ideal attendee profile for your event? They might be your customers, fellow founders, local engineers, or any other collection(s) of people. Talk to them first to validate whether the event is worth your time.

Put real effort into this first step. Defining the "what, why and how" of an event beforehand will pay off on event day. Let our shared values guide you. Don’t submit your event for support until your answer to “Would I attend this?” is a clear “YES!”

Getting support

Financial support: We are happy to support the growing ecosystem of PostHog users and product engineers more broadly through financial sponsorship. We do this often for events that align with everything outlined on this page. Budgetary support typically ranges from $500 to $3,500. When we support monetarily, it almost always involves some added level of engagement.

Speakers: Want a speaker in our ecosystem (team PostHog, customers, partners)? We’ll try our best. When considering speakers for your events, try to avoid:

Content: If your speaker(s) are unsure of what to talk about, consider going back to the purpose of the event. Otherwise, we have plenty of material for your inspiration.

Merch: We use the store merch processes to handle distribution of PostHog-branded merch. We tend to be generous with merch for community events. Outline what you had in mind in the issue.

Co-promotion: Most of the time the help requested is in the form of promotion. As a general rule, we don't promote events we aren't supporting or co-hosting ourselves. We decide when to repost community events on our social media channels and email on a case by case basis.

Venue and catering: Identify the vendors and costs and include them in the GitHub issue. If the event will not be possible without monetary support, make that clear. We may support the cost of venue, food, or beverages but require the paper napkin math.

Feedback: You’ll learn more by doing than planning so don’t worry about having every detail complete before submitting for feedback from our team.

Branding it

Our brand is a reflection of us and how we’re experienced by others, including at events.

Words: Naming products is hard. Same goes for naming events and writing their descriptions. As a prerequisite, read our primer on writing for developers. Try your best to come up with event names that communicate the 'what?' and will attract the 'who?' And then again ask yourself, "would I attend this?"

Pictures: Every event is improved with a flyer or poster that showcases the essence of the experience. We keep a comprehensive list of brand assets and guidelines on the brand assets page. Share your assets and we’ll give feedback. Depending on the scale and timing of the event, our team may be able to help with branding as well.

Event recaps

Community events are better when organizers share what happened, what they learned, and any follow-up actions. We value feedback and expect the same from event organizers. In addition to what you learned and feedback from attendees, we ask that you share any photos, videos, quotes, and data points with our team.

Sponsoring external events

We often get invited to sponsor events – these range in size, location, and audience. We rarely say yes. For a sponsorship to be worthwhile, it should be a win-win primarily for the end user and secondarily for us. Hence, it's important that the audience, content, format, and ethos all align. Even if we don't sponsor financially, we encourage team members to speak at events and we can support with merch. Ask in the #team-irl-events channel.

Speaking at events

If you're interested in attending or speaking at a developer conference, consider submitting a CFP (Call for Papers) to one of these events taking place in 2026. If you don't see an event you're interested in, please add it directly in the reference sheet.

For first-time yappers, reference the speaker's guide. If you need inspiration for a talk, pretty much any practice we use for actual production code is fair game. This includes integrations and implementations with other products. And at this point, people are interested not just in what we build but in how we built it.

Sponsoring student organizations

Sometimes students at various universities ask us if we are interested in sponsoring their career fairs, hackathons, or other student-led initiatives. We don't currently participate in these. Although we don't use specific years of experience as a qualifier for hiring, we rarely hire students straight out of school. If there is a custom partnership you have in mind, or it involves an existing employee's alma mater, ask in the #team-irl-events channel.

Exporting a blog post image from Figma

Marketing | Source: https://posthog.com/handbook/marketing/exporting-blog-post-image

Overview

Blog post images are created in Figma. The image appears at the top of each blog post, above the headline. It's also used as the Open Graph image.

Dimensions

Open Graph images are 1200x630, so we stick with those dimensions to keep this simple. (This is approximately double the size they'll be displayed at, making them look nice and crisp on HiDPI screens.)

Export an image from Figma

  1. Custom blog art lives in Figma: Art board → Blog
  2. Make sure artwork fills the entire frame.
  3. Ensure frame doesn't have a border.
  4. Rename the frame of the image to closely match the blog post title in a slug format. (Ex: writing-for-developers, where we remove capital letters and punctuation, and replace spaces with hyphens. This will become the filename that is uploaded to the server.) It's best to omit articles (a, an, the).
  5. Export the image as a PNG (at 1x).
  6. Save the image and add it to the issue.
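The slug rules in step 4 (lowercase, strip punctuation, hyphens for spaces, drop articles) can be sketched as a small Python helper. This is purely illustrative – `slugify` is a hypothetical function, not a tool we actually use:

```python
import re

def slugify(title: str) -> str:
    """Turn a blog post title into a filename slug: lowercase,
    strip punctuation, spaces -> hyphens, drop articles (a, an, the)."""
    words = re.sub(r"[^a-z0-9\s-]", "", title.lower()).split()
    return "-".join(w for w in words if w not in {"a", "an", "the"})

print(slugify("Writing for Developers"))  # writing-for-developers
```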

The image should be uploaded by the person creating the blog post.

Adding to a blog post

  1. Upload the file to /contents/images/blog.
  2. Make sure the filename matches the reference to the image in the .md file.

Incident comms

Marketing | Source: https://posthog.com/handbook/marketing/incident-comms

These guidelines are for marketers who support engineering during incidents.

For engineers, we have additional guidance on how to declare and handle an incident.

For GTM workflows and templates, see the communication templates for incidents.

Incidents happen. Each one is different and not all incidents require comms, but when they do we need to have clear processes in mind.

For this reason we've kept our guidelines as flexible as possible and focused on providing high-level guidance and responsibilities. In the event that an incident occurs, we trust each other's judgement on when to adhere to or deviate from these guidelines.

Appointing a Comms Lead

During and following an incident, Product Marketing Managers (PMMs) generally assume responsibility for handling customer communication at a broad level.

The role of the Comms Lead typically involves planning how we will respond at a high level by:

Oh no – all the PMMs are on holiday or asleep!

If this happens, the incident lead may appoint a Comms Lead from the Content Team or another team. If the incident lead fails to appoint a Comms Lead, Team Blitzscale should appoint someone to lead Comms.

Guidelines for Comms Leads

These are principles to keep in mind during any incident:

This helps scope the customer impact. Always clarify the impact on feature flags, experiments, and workflows especially. It's always worth asking how it impacts each product and if any data is lost, or merely delayed.

It's better to be slower and correct than fast and wrong. The status page and support tickets usually cover the early phase while details are changing quickly.

We shouldn't send comms unless there's a definite impact and a clear story to tell. If we do send external comms, target owners and admins in impacted orgs where possible, rather than being too noisy.

The status page should be the main place we direct users to during an incident. Extra channels (emails, social posts) are the exception, not the rule. If a post-mortem is created, this supersedes the status page.

Major or critical incidents will often have a public post-mortem – this should usually be the backbone of any wider comms. Don't communicate before resolution unless there is a strong need.

When handling a security incident: align with the incident lead in the incident Slack channel before communicating publicly about security issues. For example, it could make sense to hold back public communication about an attack, as this could alert the attacker that we are already investigating and make it harder for us to stop the attack for good. However, for some data breaches and security incidents, such as the download of malicious packages, it is better to notify users immediately if the incident lead has identified actions users can take to prevent the malicious packages from spreading further.

What does the Comms Lead do?

At a high level, the Comms Lead is responsible for how we talk about the incident, not for fixing it. In practice, that usually means:

Read the summaries and updates, follow the thread, and avoid asking for updates just to "check in". Only jump in when you need information for comms, or have a specific ask.

Check in periodically to ensure the status is updated at least once every six hours, that the current impact is accurately described, and that the incident is closed when appropriate. The Incident Lead is responsible for these updates.

If we do, you should put together a plan for doing so (below).

When we do decide to communicate:

When do we need to notify users immediately? For security incidents, such as the download of malicious packages, where the incident lead has identified that users can take action to reduce their risk, we should notify users immediately with clear steps they can take on their side. Product downtime that doesn't involve security breaches or attacks should be addressed after the incident is closed and we have the context needed to inform users.

For major/critical incidents you may need to help shape and review the post-mortem with the incident lead and approvers (Tim and/or Ben, and Charles). Once published, use the post-mortem as the primary reference for any follow-up comms (emails, service messages, etc.), rather than rewriting multiple different explanations.

After a data breach/security incident the comms lead should contribute to the post-mortem by transparently addressing the impact, what went well, and what could have gone better.

Often these teams are dealing with the brunt of the customer response, and your goal should be to support them by giving them the information they need to respond effectively.

Most comms can be handled quickly, but in the event of a long-running issue you should develop a plan to handover or continue monitoring the incident status.

These steps are a starting point, not a script. In practice, the Comms Lead's job is to keep communication accurate, calm, and useful – and to reduce noise, not add to it.

What does the Comms Lead not do?

The Comms Lead is typically not responsible for:

Overview

Marketing | Source: https://posthog.com/handbook/marketing

How marketing works

Marketing at PostHog is a collaborative effort across several teams. There are six distinct teams that handle different aspects of marketing:

If you're not sure who to talk to, check Who can help me?.

Marketing values

  1. Be opinionated
  2. Pull, don't push
  3. No sneaky shit

1. Be opinionated

PostHog was created because we believed that product analytics was broken, and we had a vision of how it could be much better. We're more than just product analytics now, but the principles are the same.

We need to reflect this vision in our marketing and content, and not dilute it with boring corporate-speak. When we write content, we take a firm stance on what we believe is right. We would rather have 50% of people love us and 50% hate us than 80% mildly agree with us.

We communicate clearly, directly, and honestly.

It's ok to have a sense of humor. We are more likely to die because we are forgettable, not because we made a lame joke once. We have a very distinctive and weird company culture, and we should share that with customers instead of putting on a fake corporate persona when we talk to them. PostHog should not look like a generic software company.

(Sometimes we use terminology like 'value propositions' because that is the standard marketing term for a well-understood concept. That's allowed.)

2. Pull, don't push

_We focus on word of mouth by default._

We believe customers will judge us first and foremost on our product (i.e. our app, our website, and our docs). We won't set ourselves up for long-term success if we push customers into using us.

If a customer doesn't choose PostHog, that means either:

  1. The product isn't good enough
  2. The product isn't the right solution for them
  3. We didn't communicate the product and its benefits well enough

We don't believe companies will be long-term customers of a competitor because they did a better job of spamming them with generic marketing. We know this because we frequently have customers switching from a competitor to us – they are not afraid to do this.

Tackling (1) is the responsibility of everyone at PostHog. The job of marketing teams is to avoid spending time advertising to people in group (2), and to make sure we do a great job of avoiding (3).

This means:

3. No sneaky shit

Our ideal customers are technical and acutely aware of the tedious, clickbaity, hyperbolic marketing tactics that software companies use to try and entice them. Stop. It's patronizing to them and the marketing people creating the content.

For these reasons, we:

Marketing vision

Beyond PostHog's company mission and strategy, we have some marketing-specific areas we want to focus on.

Things we want to be brilliant at

Things we want to be good at

Things we might want to be good at but haven't tested yet

Things we don't want to spend time on

Influencers

Marketing | Source: https://posthog.com/handbook/marketing/influencers

We work with creators and influencers to make content about PostHog and sponsor placements to drive awareness and signups.

We're open to inbound proposals. If you're interested in collaborating with us, send an email directly to Adlet Smykov – we're always open to seeing thoughtful proposals.

Some of the influencers we sponsor include:

Sourcing and evaluating influencers

Negotiating with influencers

What should the placement actually look like?

Measuring impact

Some metrics we look at for individual videos include:

We track these on our marketing budget and spending spreadsheet. We also have an influencer marketing performance dashboard in PostHog that can help you get an overall view of different influencers' performance.

Sponsorship

Marketing | Source: https://posthog.com/handbook/marketing/open-source-sponsorship

We do three types of sponsorships - commercial, charitable, and open source.

Measuring attribution directly is basically impossible with sponsorship activities, so we try hard to target the right channels by validating opportunities properly first. We make sure the target audience is in our ICP and test with smaller amounts when possible.

Commercial sponsorship

Although we've done a variety of commercial sponsorships, including newsletter ads, podcasts, and billboards, we're mostly focused on sponsoring influencers to drive awareness and signups to PostHog.

We track these sponsorships in our marketing budget and spending spreadsheet.

Ian Vanagas has the contacts for these people if you want them.

Influencers

See influencers.

Podcasts

We've sponsored podcasts one-off in the past, but have no plans to do them again at the moment. They include:

Newsletter ads

We sponsored a variety of newsletters to drive subscriptions for our newsletter, Product for Engineers, but have put that on pause as we re-evaluate the quality of the subscriptions we're getting.

Charitable sponsorship

We are looking to partner with charities who are aligned with our mission of increasing the number of successful products in the world. These partners are likely to focus on giving greater access to under-represented groups in tech.

We currently sponsor:

Open source sponsorship

PostHog is an open-source developer platform built on top of many other amazing open-source projects. We believe in open-source and the open-core model. However, many open-source projects go underfunded.

We are investing in open-source, not just as a business, but directly via sponsorship in key projects we benefit from every day. We're doing this for three reasons:

  1. We want valuable open-source projects to continue to be maintained and enhanced
  2. We fundamentally rely on some open-source projects, and it's essential they continue to be maintained and enhanced
  3. We believe the PostHog brand will benefit from the sponsorship

In addition to sponsoring key projects, we also provide a $100/month budget for every team member to sponsor projects that have helped them.

Projects we sponsor regularly

| Project | Author | Why does PostHog sponsor | Sponsored via | Amount/month |
| ------- | ------ | ------------------------ | ------------- | ------------ |
| [tailwindcss] | [tailwindlabs] | A utility-first CSS framework we use to style our website and app | Directly | $2,500 |
| [rrweb] | [yz-yu] | Powers our session recording functionality | [Open Collective][open-collective] | $1,000 |
| [Tiptap] | [ueberdosis] | The headless editor framework that powers our Notebooks feature | [GitHub Sponsors][github-sponsors] | $149 |
| [Next.js Boilerplate] | [ixartz] | Boilerplate and starter for Next.js 14+, Tailwind CSS 3.3, and TypeScript | [GitHub Sponsors][github-sponsors] | $100 |
| [Refined GitHub] | [fregante] | Browser extension that simplifies the GitHub interface and adds useful features | [GitHub Sponsors][github-sponsors] | $100 |
| [ESLint] | [nzakas] | Find and fix problems in your JavaScript code | [GitHub Sponsors][github-sponsors] | $10 |
| [Prettier] | [jlongster] | Prettier is an opinionated code formatter | [GitHub Sponsors][github-sponsors] | $10 |
| [Jest] | [cpojer] | Delightful JavaScript testing | [Open Collective][open-collective] | $10 |
| [SwiftFormat] | [nicklockwood] | A command-line tool and Xcode extension for formatting Swift code | Directly | $10 |
| [detekt] | [arturbosch] | Static code analysis for Kotlin | [GitHub Sponsors][github-sponsors] | $10 |
| [Periphery] | [ileitch] | A tool to identify unused code in Swift projects | [GitHub Sponsors][github-sponsors] | $5 |
| [Rollup] | [Rollup] | Next-generation ES module bundler | [Open Collective][open-collective] | $5 |
| [SeaweedFS] | [Chris Lu] | SeaweedFS is a fast distributed storage system for blobs, objects, files, and data lakes | [Patreon][patreon-seaweedfs] | $100 |

<!-- projects -->
[tailwindcss]: https://github.com/tailwindlabs/tailwindcss
[rrweb]: https://github.com/rrweb-io/rrweb
[Tiptap]: https://github.com/ueberdosis/tiptap
[Next.js Boilerplate]: https://github.com/ixartz/Next-js-Boilerplate
[Refined GitHub]: https://github.com/refined-github/refined-github
[ESLint]: https://github.com/eslint/eslint
[Prettier]: https://github.com/prettier/prettier
[Jest]: https://github.com/jestjs/jest
[SwiftFormat]: https://github.com/nicklockwood/SwiftFormat
[detekt]: https://github.com/detekt/detekt
[Periphery]: https://github.com/peripheryapp/periphery
[Rollup]: https://github.com/rollup/rollup
[SeaweedFS]: https://github.com/seaweedfs/seaweedfs

<!-- authors -->
[tailwindlabs]: https://github.com/tailwindlabs
[yz-yu]: https://github.com/yz-yu
[ueberdosis]: https://github.com/ueberdosis
[ixartz]: https://github.com/ixartz
[fregante]: https://github.com/fregante
[nzakas]: https://github.com/nzakas
[jlongster]: https://github.com/jlongster
[cpojer]: https://github.com/cpojer
[nicklockwood]: https://github.com/nicklockwood
[arturbosch]: https://github.com/arturbosch
[ileitch]: https://github.com/ileitch
[Chris Lu]: https://github.com/chrislusf

<!-- other links -->
[github-sponsors]: https://github.com/orgs/PostHog/sponsoring
[open-collective]: https://opencollective.com/posthog
[plugin-server]: https://github.com/PostHog/plugin-server
[patreon-seaweedfs]: https://www.patreon.com/c/seaweedfs

Request sponsorship

If you know of a project that is fundamentally important to PostHog, add the project to this page via a PR and tag Charles. If we decide to sponsor, we can set up the sponsorship via either Open Collective or GitHub. To get an invite to Open Collective, create an account first with your posthog.com email address and then ask Charles to invite you.

Anyone on the PostHog team can do this!

Who can help me?

Marketing | Source: https://posthog.com/handbook/marketing/ownership

If you have a general marketing question, go to #group-marketing-and-content in Slack.

If you need help with the website, go to #posthogdotcom.

Here's a quick guide to who to ask if you want help with a specific marketing activity.

<summary>I need a product marketer, but I don't know which</summary>

Product marketing is part of . You can see which PMM is focused on which team on the team page. If it's a team which doesn't currently have an assigned marketer, just ask in #group-marketing-and-content in Slack and tag the team lead.

<summary>I'm interested in running, attending, or speaking at an event</summary>

You should speak to , our resident party planners. Read the events strategy handbook for more.

<summary>I want to launch my product out of beta</summary>

Speak to and read about product launches.

<summary>I need help with documentation</summary>

Your main contact is the , but please read the docs ownership handbook to understand how best to work with them.

If you just need someone to review something, tag Team Docs & Wizard in GitHub.

<summary>Someone wants to write a guest post / is requesting a backlink etc.</summary>

Unless it's someone huge and important with a real audience, "Mark as spam" and "Move to bin".

<summary>Someone wants to partner with us</summary>

Refer them to our partnerships waitlist and let know.

<summary>Someone wants us to sponsor them</summary>

If it's an influencer, newsletter or podcast, refer them to Adlet Smykov.

If it's an event, speak to .

<summary>I want to create a video, or have a video idea we should try...</summary>

To start with, post ideas in the #content-and-video-ideas Slack channel. Alex van Leeuwen and Jordo Dibb on the content team are your main points of contact here.

Please also read How we do video at PostHog. We're still figuring things out, though, so very interested in suggestions.

If your idea is for PostHog Stories (HogTok), hit up Edwin Lim as well.

<summary>A customer is interested in doing a case study with us</summary>

Speak to Joe Martin, Cleo Lant, or Sara Miteva.

<summary>A customer has an issue with merch</summary>

Please share in the #merch channel. Kendal Hall owns fulfillment issues. Lottie Coxon owns merch design and creation. Cory Watilo and Eli Kinsey own the storefront.

<summary>I have a question / problem / suggestion for the website</summary>

The website is owned by Cory Watilo and Eli Kinsey. Generally, the best place to ask is the #posthogdotcom Slack channel.

For larger pieces of work — a new product page, a significant copy overhaul — read Working with the website team for the process to follow.

<summary>Hey, can we run some paid ads for my product?</summary>

We probably are already, but if you have something specific in mind, speak to Brian Young, who is a Growth Marketing Manager embedded in the sales team.

<summary>I need a new hedgehog design, illustration, or other art asset.</summary>

Speak to Lottie Coxon or Daniel Hawkins, but please read Art and branding requests first.

<summary>I need a font, logo, etc.</summary>

See Logos, brand, hedgehogs

<summary>A journalist has contacted me</summary>

Direct them to press@posthog.com, where one of Joe, James, Charles, or Tim can respond. They're the only people who should speak to press. See: Press & PR

Paid ads

Marketing | Source: https://posthog.com/handbook/marketing/paid

The paid ads team exists to do two things only:

We don't do paid ads for general awareness of the PostHog brand - our website, content, and word of mouth are much better (and cheaper) ways to do this.

For now paid ads sits within the , but will become its own thing once we have more people. This page is for paid ads for PostHog in general. If you're looking for paid ads for our newsletter, see the newsletter ads guide.

Channels

We currently run ads on:

We have previously tried and no longer use X, Product Hunt, Carbon Ads, and Google Display, as they did not drive high-quality user signups or leads. We usually focus campaigns on users in the US, Canada, UK, Germany, and France, as these tend to produce the highest-quality signups and leads.

We work with Hey to manage these channels - they set up the campaigns and ensure that spend is paced properly. We have a shared internal Slack channel, and Brian Young has 2 check-in calls with them each month.

In addition to Hey, we also have a monthly call with Google Partners who provide feedback on performance and competitive analysis on a per product basis as requested.

Mission 1 - Generating new business leads

We have four ways we target new leads:

We use a variety of creative campaigns here which we don't list in the Handbook, as they keep changing over time. Some principles though:

The full flow of how this works can be found here.

Tracking conversion & conversion optimization

Using 3rd party trackers or pixels like Google Tag Manager is against our brand and values, so we use a combination of PostHog, BigQuery, Clay, Clearbit, & Census.

PostHog sends back anonymized (click ID) conversion data to each ad platform with conversion values based on ICP score to improve lead quality via target ROAS bidding. Our goal is to use our ads program as a powerhouse for the sales team and a key tool for onboarding users that will improve both MRR and CAC:LTV ratio.

In order to keep our privacy policy front of mind we've built a bespoke conversion tracking system that uses the following flow:

PostHog > Clearbit > BigQuery > Census > Ad Platforms.

You can learn more about this flow here.
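As a rough illustration of the first hop in that flow, a landing page can pull the ad platform's click ID out of the URL and attach it to a PostHog event, so the downstream BigQuery/Census steps can join it to a conversion later. This is a hedged sketch only: the event name, property names, and the list of click-ID query parameters are assumptions, not our production schema.

```javascript
// Hedged sketch of the PostHog hop in the conversion flow.
// Parameter names below are common ad-platform click IDs
// (gclid = Google Ads, msclkid = Microsoft/Bing Ads); the
// helper and event names are illustrative assumptions.
function extractClickId(search) {
  const params = new URLSearchParams(search)
  for (const key of ['gclid', 'msclkid']) {
    const value = params.get(key)
    if (value) return { platform: key, value }
  }
  return null
}

// On the landing page, something like:
//   const clickId = extractClickId(window.location.search)
//   if (clickId) posthog.capture('ad_click_landed', clickId)
```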

We take privacy seriously, and follow these principles:

Mission 2 - Converting people to signup

We do this through search ads on Google and Bing, and you can find the master sheet of ad copy here.

We change up campaigns frequently, but generally run campaigns for:

We generally turn these on and off depending on performance and spend, and review copy every 4 weeks. The flow is Brian Young writes copy, Charles Cook reviews. We try both fun and straightforward copy. Even if the fun stuff doesn't convert super well, we keep it if it's doing ok as it helps with our brand - we know people screenshot and share it sometimes.

We aim for as much product coverage as possible unless there are compelling reasons not to run ads for a product (e.g. it's just very expensive). We prioritize ads for those products closest to our ICP.

It is typically only worth running paid ads for individual products once they are generally available, with pricing, and where we feel the feature set is broadly at parity with the main competitors.

Landing pages

We use custom presentations that match the style of our website as our landing pages. We have an internal guide on creating presentations.

In addition, PostHog now allows us to copy a URL to share a set of open windows in a specified layout allowing further customization for ad landing pages.

How we work

Brand guidelines and creative

By default, all paid ads visual creative should be based on stuff that already exists in some form on one of:

We take anything we've ever created there, and then repurpose/reformat/reconfigure it as an ad. This minimizes approvals - because these assets have previously been through a round of approval with design, we can use them knowing we don't need to get approval again. The only check required is then between Charles Cook and Brian Young on the concept and/or copy. This means we are doing less creative work, but the upside is that we can move faster (and have a lot of Lego bricks to play with). Brian Young works with Daniel Hawkins on this.

For the copy itself, we also use the search ads copy where we can as a starting point, so we're not repeating work.

If we have a particular campaign in mind that _really does_ require a new, one-off asset, then we request it from Lottie in the usual way.

Budget

Brian Young maintains the media plan, which can be found here.

Growth review

Brian Young runs a monthly growth review with Charles where we look at the main performance metrics for the month prior. Here are the main sheet and commentary. For completeness, this also covers the organic funnel, though the main focus is still paid.

Product Positioning

Marketing | Source: https://posthog.com/handbook/marketing/positioning

_How_ do we name things?

Here's a typical flow:

So, how do we name things:

This has a downside - it's messier from a user perspective, but the upside is that design / "execs" aren't a blocker to getting work out the door. In practice, we rarely push hard on marketing a new thing to users anyway (usually we soft launch stuff) so we think the downside is pretty minimal.

Picking a good name

By default, everything should be positioned as something a _user_ is familiar with, not what is necessarily the most technically accurate description.

For example, when we build new products, we often name them based on what the major competitors are calling themselves.

This means users get it way faster, so we grow more quickly, and it encourages us to build the basic features that a given product needs versus trying to innovate _before_ we hit product market fit with a new product in our platform.

What positioning actually means

Positioning is more than just picking a name. It's about understanding how users will encounter, understand, and use what we're building.

It also means being clear about what problem we're solving and who it's for. Are we building this for someone debugging an issue right now, or for someone planning next quarter's roadmap? The same feature might be positioned differently depending on the context.

We also think about how new capabilities fit into the broader PostHog story. Every new product should reinforce our core positioning: one platform that gives engineers everything they need to build successful products.

Positioning is dynamic

The reality is that positioning changes as products mature. Early on, we might position something narrowly to get feedback from a specific user segment. As it grows and we understand usage patterns, we can broaden or refine that positioning.

We're comfortable with this iterative approach because it means we're not overthinking positioning before we know what users actually want, and how the product fits into the broader market.

Product announcements

Marketing | Source: https://posthog.com/handbook/marketing/product-announcements

Have something you want to announce? Let the Marketing team know! If it's an iterative update, you can also demo it in the all-hands, or post in the #tell-posthog-anything Slack channel.

Product marketers take responsibility for coordinating and publicizing news about PostHog, including product announcements. We also help with incident and maintenance announcements, if needed.

Types of announcement

We classify announcements using the general guidelines below, with full discretion for doing something different.

Minor announcements

Minor announcements involve changes which have no noticeable impact on the experience of most users. They can involve small visual changes, such as UI tweaks, but are more often small bug fixes or back-end changes. They do not require action from users and pose no known risk.

We may typically support minor announcements by:

An example of a minor announcement is the UUID format change.

Medium announcements

Medium announcements involve changes which have a noticeable impact on the experience of some users, but not the majority. They are likely to involve visual or functional changes, such as adding a chart type, but do not introduce wholly new features. They do not require action from users and pose no known risk.

We may typically support medium announcements by:

An example of a medium announcement includes the launch of the NPS app.

Major announcements

Major announcements involve changes which have a noticeable impact on the experience of most users, or require specific action from affected users. They may introduce new features, require product downtime, or include opt-in betas for upcoming work.

We might do anything and everything for a major announcement.

Examples of major announcements include the surveys beta or the analytics pricing change.

New product announcements

New product launches are major announcements. They have their own GitHub template: Launch Plan. Product marketers should always create a launch plan for new product announcements.

For new product announcements we generally apply the following best practices:

Comms should also be aware of the engineering best practices for product launches, so we can be sure that features launch well.

If the product is moving from free beta to paid general availability (GA) you might also want to choose a reward for beta users. Examples of this include giving PostHog AI beta users 30 extra days of unlimited free usage, or giving Workflows beta users a discount code for merch.

PR announcements

We do not typically do public relations for anything other than company-level news. We have separate processes and guides for managing press announcements.

Maintenance communications

Occasionally, we have to conduct scheduled maintenance. When this happens, it's important that we tell users about it in advance if they would experience any disruption.

If you're aware of any upcoming maintenance which would cause disruption, please inform the Support, Marketing, and Customer Success teams as soon as possible. Marketing will ensure that users are notified as the work is planned and completed. Customer Success may wish to inform specific users at the time.

Typically, Product Marketers take responsibility for informing users about maintenance work beforehand by telling users who will be impacted through email and other channels.

When informing users about maintenance, it is important to answer all of the following points:

We typically notify users of upcoming maintenance by email, so the Marketing team will need a way to target the correct users before they can update them. For smaller maintenance updates which will not cause any disruption for users, engineering teams can also update our status page.

Incidents communications

When an incident is declared, the Brand team should join the incident channel as observers and monitor to make sure that customer comms are handled correctly.

Guide for doing PostHog talks and demos IRL

Marketing | Source: https://posthog.com/handbook/marketing/speaker-guide

You volunteered or have been asked to speak at a dev meetup, give a demo at a conference, or present PostHog to a virtual or in-person audience. Maybe you said yes before you thought too hard about it. That's fine — good talks happen this way. This guide is for preparing and delivering your talk.

For examples from other speakers, reference slides from previous talks. Have any questions? Ask in #team-irl-events or ping whoever put you up to this.

1. Know your room before you write a word

Before you build anything, answer three questions:

Who's in the room? A meetup for early-stage founders is different from a ClickHouse conference. Find out their persona: are they product engineers or founders? What size of team do they work on? What stack? What company stage? Also, how many people will be in the room? Ask the organizer — they want your talk to land too.

What format are you filling? Confirm the exact setup:

What else is on the agenda? If you're one of four speakers, you don't want to cover the same ground as the person before you. Get the full lineup.

2. How we talk about PostHog (and how we don't)

No talk should ever be a blatant product or company pitch. Whatever your audience, they didn't come to this event to receive a pitch (anyone can visit PostHog.com themselves).

The PostHog voice in talks:

If you finish writing your talk and the word "PostHog" only comes up three times, that's probably good.

3. Build the talk around one true thing

Kelsey Hightower — a best-in-class technical speaker — doesn't use slides as a crutch. He treats his talk like a live demonstration of a belief. Every word moves toward a single point.

Pick your one true thing → build evidence for it → show it working live

That's the whole structure. You don't need five points. One claim and the proof. Good examples of what a "one true thing" sounds like:

Notice what these have in common: they're useful to the audience whether or not PostHog exists.

4. Your demo is the talk

Software demos should tell a story, not show features. The biggest mistake we can make demoing PostHog at events is simply narrating the UI instead of showing a problem being solved.

Bad: "So here's the dashboard. You can see we have charts. This one is a trend. This one is a funnel..."

Good: "We shipped a new onboarding flow last Tuesday. By Wednesday I was looking at this drop-off and thinking something was wrong. Here's what I found." Then show that.

Pick one real scenario — something that happened at PostHog related to your work, or something a real user told you. Build the entire demo around it.

Demo setup checklist:

If you're pre-recording your demo, #team-youtube has created this helpful guide.

5. Building your slides

A few principles for building out slides:

For feedback on design or help with navigating the PostHog brand assets (Hoggies included), stop by #team-marketing.

6. Practice out loud. Twice minimum.

Reading your talk in your head doesn't count. Your mouth is slower than your brain. The VM Brasseur public speaking guide has a useful rule: practice until the words feel boring to you. If they still feel fresh and interesting when you say them, you haven't done it enough.

Two run-throughs, out loud, at speaking pace, with your actual demo running.

The “cut” rule: If you stumble on a section more than twice in practice, that section is probably bad. Rehearsal reveals structural problems — stumbling usually means the logic isn't clear, not that you need to practice more. Stop, figure out why it's hard to say, and fix the content.

7. The first 60 seconds are everything

Open with something that makes the room lean in:

Don't introduce yourself first. The host does that. You start with the thing. Then you can re-introduce yourself to set the context of why you’re the person qualified to speak on this subject.

8. Prepare to not know something

We always want to encourage Q&A after our talks as it builds conversation and connection. Someone will ask a question you can't answer. Don't bullshit. The right response: "I don't know — but here's how I'd find out, and I'll follow up with you." Then actually follow up.

If you receive a question that you believe is off-topic or a poor fit for the setting, you can let the asker know and express an interest in moving on to the next one.

9. After the talk

---

Examples from previous talks:

Dashboard templates

Marketing | Source: https://posthog.com/handbook/marketing/templates

Dashboard templates simultaneously showcase the use cases of PostHog and make it easier for users to get started. You can find a full list of them on the templates page.

This is "internal" documentation to show PostHog staff how to add new global templates.

Let us know on this GitHub issue if you'd like to see templates that are private for your team.

Creating a new dashboard template

  1. Create your dashboard with all the insights you want on it. Be sure to add descriptions to both.
  2. Open the dashboard dropdown, click "Save as template."
  3. Add variables as objects with the format below. Reference them in your template by adding the ID in curly brackets, like {SIGNUPS}, to replace the placeholder event.
"variables": [
  {
    "id": "SIGNUPS",
    "name": "Signups",
    "type": "event",
    "default": {},
    "required": true,
    "description": "The event you use to define a user signing up"
  }
],
  4. Once done, click "Create new template." Test that it works in the team project.
  5. Create a dashboard image in Figma in the Hoggies file. Make the image small (like 396x208). Export and upload to Cloudinary.
  6. With the URL, go to the templates tab under dashboards, click the three dots to the far right of your template, and click "Edit." Add the URL to the image_url field and press "Update template."
  7. For the website, copy the same hedgehog as a small square thumbnail image (400x400) with a transparent background. Export and upload to Cloudinary.
  8. While you are in Figma, create a 1920x1080 feature image with a couple of the insights. Export and upload to Cloudinary.
  9. In the posthog.com/contents/templates folder, copy another .mdx file from another template, and modify it for your new template. Add the thumbnail and feature images you uploaded to Cloudinary.
  10. Open a pull request.
  11. Once merged, click the three dots on the far right again, and click "Make visible to everyone."
  12. To add to EU Cloud, click the three dots to edit the template and copy the JSON. Go to the PostHog EU Cloud instance, create a new blank dashboard, click "Save as template", paste the JSON (minus deleted, created_at, created_by, team_id, and scope), and click "Create new template." Add image_url, edit, and test if needed. Finally, make it visible to everyone.
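The EU Cloud step involves removing a handful of instance-specific fields from the copied JSON. If you'd rather not delete them by hand, a quick console helper like the sketch below works; the field list comes from the step above, while the function name is made up for illustration.

```javascript
// Strip instance-specific fields from a copied dashboard template
// before pasting it into the EU Cloud instance. The field list matches
// the EU Cloud step above; the helper itself is a convenience sketch.
function stripInstanceFields(template) {
  const omit = ['deleted', 'created_at', 'created_by', 'team_id', 'scope']
  return Object.fromEntries(
    Object.entries(template).filter(([key]) => !omit.includes(key))
  )
}
```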

Removing a dashboard template

If you ever need to remove a dashboard template, you need to:

  1. Open the templates list
  2. Click on the three dots to the right of the template you want to remove and then click Make visible to this team only. This is a required step before you can delete it.
  3. Click on the three dots again and then click Delete Dashboard.

Be sure of what you're doing, as this is an irreversible action.

Overview

Marketing | Source: https://posthog.com/handbook/marketing/video

The YouTube team's mission is to:

Our model is that of content creators, not a marketing department, just as it is for the rest of our content. This means we start from a place of producing great video first that stands alone, irrespective of its connection to PostHog and our products. We have found over the years that if we get this right, the marketing benefits naturally follow, but if we start from a marketing-first perspective, people are never as interested.

As a result, our focus is on awareness, not converting people to sign up or revenue. Other parts of our marketing and product are a much better fit for that.

Who is our audience?

It should be the same as who we are building for, but specifically:

Who is our competition?

Our competition is _not_ our competitors' video content (the bar is too low), it is the other stuff that our audience is generally watching and enjoying. Think popular YouTube creators, podcasts, even viral content on TikTok and Instagram.

Entertaining vs. informative

Videos exist on a spectrum from 100% entertaining to 100% informative. Some are in the middle and do a bit of both. Generally, when we are making a video, we should try to aggressively pursue one path only, as doing both is _extremely difficult_ - the most likely outcome is that we fail at both and end up with something that is ok.

What does success look like in 2026?

What we're working on

Going from most informative to most entertaining:

How to work with the video team

Working with the website team

Marketing | Source: https://posthog.com/handbook/marketing/working-with-website

The website is owned by Cory Watilo and Eli Kinsey. For general questions or quick updates, the best place to start is the #posthogdotcom Slack channel.

For most pieces of work, like blog posts and copy updates, you can ship without needing a review from the website team. However, for larger pieces of work — a new product page, a significant copy overhaul, a new landing page — there's a more structured process to follow.

Why can't I vibecode? You can, but vibecoded stuff tends to be harder for the website team to review and has a tendency to not work well with some existing systems.

Requesting large website changes

1. Draft the content in a Google Doc

Start with words, not designs. Write out the full copy, structure, and any specific requirements. This gives the website team something concrete to work from, and keeps early-stage feedback focused on what matters: the message.

2. Submit it to the website team as a GitHub issue

Open an issue in the posthog.com repo using the Website Request template and link your Google Doc. Include:

3. The website team builds from that and opens a PR

Once the issue is picked up, the website team will build the page. They'll open a PR and tag you when it's ready for review so you can give feedback on changes.

4. Review and give feedback from the PR

Review the PR, leave comments, and iterate from there. This is the right moment to give design and layout feedback — not before, when things are still just ideas.

Why this process

This approach is designed to stop time being spent on designs that don't get used. It also keeps the dynamic clear: the PMM team hands off, the website team builds, and everyone can give feedback.

We don't want to overbake or complicate this process. This is as simple as it can be and as complex as it needs to be.

Chrome extension billing case study: Wildfire Systems

Onboarding | Source: https://posthog.com/handbook/onboarding/chrome-extension-billing-case-study-wildfire

Summary

Wildfire Systems implemented PostHog in a Chrome Extension environment. Due to how extensions handle session and identity persistence, they experienced unusually high event volume and feature flag calls, which led to inflated billing.

This document explains the technical causes, the customer's solution, and how to identify similar cases using Metabase.

Technical root cause

| Issue | Explanation |
|------|-------------|
| PostHog re-initialized on every extension wake | Chrome extensions create a new runtime context when switching from idle to active. Each context re-initialized PostHog without access to prior storage. |
| A new distinct_id was created each time | Since local storage is isolated per context, the PostHog SDK could not persist the ID. This triggered a new anonymous ID on each wake cycle. |
| identify() was called repeatedly | Each new ID triggered a comparison to the persisted UUID. Since they always differed, identify() was called each time. |
| identify() triggered reloadFeatureFlags() | Every call to identify() refreshed feature flags. |
| /flags requests were billed, even when quota-limited | PostHog counted these requests toward the usage quota, even if the response returned no flags. |
| Added budget mid-cycle had no effect | When the team increased their billing limit, it did not retroactively unlock flags. Only new requests after the monthly reset were allowed. |

Fix implemented by the customer

The Wildfire team applied the correct approach:

  1. Persisted a shared UUID via chrome.storage.local

This ID was generated once, then reused across all extension contexts.

  2. Bootstrapped PostHog with the UUID

On every initialization, the distinct_id was passed via bootstrap.

  3. Avoided calling identify() unnecessarily

The team checked if the existing distinct_id matched the UUID before calling identify().

  4. Minimized /flags requests

Bootstrapped feature flag values were passed during init, reducing the need for real-time flag fetches.

  5. Used PostHog dashboard to monitor

The "My PostHog Billable Usage" dashboard showed real-time data to verify that fixes worked.
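The identity-persistence part of this fix can be sketched as follows. This is illustrative only: `storage` stands in for `chrome.storage.local`, `client` for the posthog-js instance, and the key and function names are our own. In the real extension, the shared ID would be passed to `posthog.init` via `bootstrap.distinctID` (optionally with `bootstrap.featureFlags`) so no `/flags` request is needed on each wake.

```javascript
// Illustrative sketch of the identity-persistence fix. `storage` stands in
// for chrome.storage.local and `client` for the PostHog SDK; the storage
// key and function names are assumptions, not Wildfire's actual code.
const ID_KEY = 'ph_shared_distinct_id'

// Generate the shared UUID once, then reuse it in every extension context.
async function ensureSharedId(storage, makeUuid) {
  const existing = await storage.get(ID_KEY)
  if (existing) return existing
  const id = makeUuid()
  await storage.set(ID_KEY, id)
  return id
}

// Only call identify() when the SDK's current ID differs from the shared
// UUID, avoiding the repeated identify() -> reloadFeatureFlags() cycle.
function identifyIfChanged(client, sharedId) {
  if (client.get_distinct_id() === sharedId) return false
  client.identify(sharedId)
  return true
}
```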

How to spot this in Metabase

If a customer is using a Chrome Extension without proper initialization, you will often see the following patterns in the usage dashboard:

1. Extremely high identify event counts
2. /flags usage is abnormally high
3. Total event volume appears inflated without a matching frontend footprint
4. Usage patterns appear to "pulse" or reset regularly
5. No batch exports, minimal standard library usage
6. High $set, $identify, $groupidentify volume with few custom events

When you see these signs together, it is a good idea to ask: "Are you using PostHog in a browser extension product or other ephemeral context?"

If confirmed, you can share bootstrapping and identity persistence best practices.

---

Recommendations for extension developers

| Task | Details |
|------|---------|
| Persist ID manually | Use chrome.storage.local to persist a UUID |
| Bootstrap identity | Pass the UUID during posthog.init() with bootstrap.distinctID |
| Avoid repeated identify() | Only call identify() if the ID has changed |
| Reduce /flags usage | Use bootstrap.featureFlags and disable polling if needed |
| Monitor proactively | Use the "My PostHog Billable Usage" dashboard |
| Educate early | Customers should be aware that extensions require manual handling |

Metabase account analysis playbook

Onboarding | Source: https://posthog.com/handbook/onboarding/metabase-account-analysis

Summary

Metabase dashboards mirror a customer’s PostHog usage so we can diagnose billing, implementation quality, and quick-win optimizations. For audio and video learners, check out these:

While checking the account in Vitally, you can access a dedicated Metabase dashboard for this account directly from the sidebar:

Image: Vitally Dashboard Usage Link

(Note that you may need to configure your properties first to see that by clicking on the "+" button next to properties)

There are two Metabase instances, US and EU, that correspond with the PostHog instance that the customer is on. There might be some visual differences, but accessible data should be roughly the same:

Image: Metabase Customer Usage Dashboard Example

What to pay attention to

1. Billing history

Here you can see an overview of the billing: past bills that went through, bills that were covered with the usage of credits, refunds, and future forecast:

Image: Billing history and credits as seen in Metabase

Looking at credits can be especially relevant for Startup or YC Plan customers, who usually use credits, might run out of them, and also display a forecasted MRR. If we ever issued one-off credits (e.g. for spikes), their usage will be displayed here in the same way.

Refunds are also visible with a red bar:

Image: Refunds as they would appear in Metabase

The forecast corresponds with the forecasted MRR that you can see in Vitally’s sidebar, while if you’re interested in how much has been incurred so far, it’s something you can check directly in Stripe’s invoice:

Image: Vitally forecasted MRR and Stripe links

2. Forecasted bill breakdown by product (all projects)

This is where you can quickly see what constitutes the majority of the bill and what can be a lever to reduce customers' spending. It's the best place to start the conversation around the value they take from PostHog.

Image: Metabase billing circle

Check the highest % to see if we can share some recommendations on how to reduce the spend, check that it isn't caused by improper implementation, and make sure the user is not being billed for add-ons that they don't use in practice (e.g., Groups, data pipeline).

3. Billing limits

The default limit for the data warehouse is $500, and $150 for PostHog AI, so that’s something you may see quite often. Seeing other billing limits added might be an indication that someone could benefit from a more long-term solution and a cost-cutting strategy, as once the limit is hit, the data is not ingested anymore and is lost forever. Billing limits are just a temporary patch, but not a solid solution.

Image: Metabase billing limits

4. Projects for the organization

This is where you can see whether any Session Replay controls have been implemented (minimum duration, sampling, feature flags). Any URL/event triggers won't be visible here.

If controls are missing but usage is high, recommend applying them before scaling replay usage. Most users should at least have a minimum session duration enabled, as most recordings under 2 seconds are not valuable but still rack up usage and billing.

Image: Metabase session replay controls

Session replay controls must be added for each project separately.

5. Org membership permission level

We use all_admins_owners to send our outreach emails on the onboarding team. You can copy all of them for your first email, and if the list gets too long, you can compare it with the list of active users in Vitally to see who might see your email. After a while, if you haven’t heard back, next time you can experiment with emailing recently active users from Vitally as well.

Image: Metabase admin owner member emails

6. Key event volume (all projects)

The heart of our analysis. You can see the % share of the most used event types: whether they're using Autocapture or custom events, or when they have an unusual spike in $pageleave events. If you see a high ratio of autocapture events, but 0 Actions in the "Actions (by type)" graph, you can assume that they may not be getting enough value from it.

Image: Metabase key event volume for all projects

7. Total event counts (all projects)

A supplementary chart to the “Key event volume”. Both should be reviewed together. The area chart shows the most used events, and potential unexpected spikes. Events marked by the $ sign are the default PostHog events.

Image: Metabase total event counts for all projects

This graph corresponds with the "Billable usage" insight within the "My PostHog billable usage" dashboard template. It's a good idea to show it to users, so that they can keep an eye on their usage and decide if everything they see there is needed for their tracking.

8. Event ratios per person or session (implementation error check)

If you see an unnatural spike in $set or $identify events in the two previous charts, here you can check whether their implementation is correct. Usually, 1-3 calls per session are alright, and things may get tricky if it's more than 4. If they are using group analytics, pay attention to the groupidentify calls per session as well. A high number of feature flag calls per session can also indicate that further troubleshooting is needed:

Image: Metabase event ratio checks for implementation errors
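The rules of thumb above can be expressed as a quick triage helper. This is a sketch only: the thresholds come straight from this section, while the function name and labels are made up for illustration.

```javascript
// Triage helper for identify-call rates, using the rules of thumb above:
// 1-3 calls per session is alright, more than 4 gets tricky.
// Function name and labels are illustrative assumptions.
function identifyRateHealth(identifyCalls, sessions) {
  if (sessions === 0) return 'no-data'
  const perSession = identifyCalls / sessions
  if (perSession <= 3) return 'ok'
  if (perSession <= 4) return 'borderline'
  return 'investigate'
}
```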

9. Hog Destinations

If they pay for data pipelines but have no active destinations, flag the mismatch and suggest enabling or removing the add-on. Newer accounts pay by usage (rather than the add-on), so keep an eye out for that as well.

Image: Hog destinations, batch exports, data warehouse syncs

You should also check whether they are using batch exports or data warehouse syncs.

Billing Deep Dive dashboard

The link is already available in our Daily view in Vitally, but make sure you also have it handy in the account's sidebar. Go to Properties > PostgreSQL > Billing Deep Dive Dash, and click the pin icon:

Image: Metabase billing deep dive dashboard from within vitally

This dashboard is extremely helpful for diving deeper into the usage of Feature Flags and Session Replay, which is not as clear or easily accessible in the default Metabase dashboard. It gives you more insight into mobile vs. standard web replay, or decide vs. local evaluation requests for feature flags.

It’s really handy when you want to investigate a spike in usage (e.g., due to an error in the implementation) and for how long it lasted. Some users struggling with their config may ask you about it specifically.

Pick the appropriate feature from the Category dropdown, update the filter, and adjust the period.

Here, for example, you can see an error in the implementation of the Feature Flags local evaluation, how it compares to Feature Flags in the front-end, and when the problem has been resolved:

Image: Metabase billing deep dive dashboard example

The breakdown by “product x team_id” helps you understand the usage per project (by project ID). It's a very popular feature request that the Billing team is working on, so that it will also be accessible within PostHog.

Image: Metabase billing deep dive usage per project

Operational billing actions

Re-authenticate the billing admin

Log in at billing.posthog.com with your PostHog Google account; after logging in, billing.posthog.com/admin should load again.

Adding credits

Watch this video: How to add credits (Loom)

Image: Adding credits in the billing admin

In the billing admin portal, click “Add” next to Credits, search by Organization ID, set amount and reason, and leave an internal note.

Credits now fund the Stripe balance; legacy “credits expire” fields may still appear. Notify Billing if credits do not stick.

After adding the credits, return to Stripe to ensure the changes were applied correctly.

Rectifying a failed invoice

Watch this video: Issuing a credit note (Loom)

Sometimes, you may notice that a customer has deliberately not paid their invoice due to an unexpected spike in product usage or because the bill exceeded their planned budget. In some cases, they haven’t reached out to us for help before the bill was renewed.

In these situations, we're here to help them sort out their usage and billing. However, it's no longer possible to simply add credits—we now need to adjust the already-issued invoice directly in Stripe.

This can be done using credit notes, which allow you to either compensate the full amount or offer a pro-rated relief. Please ensure you have the appropriate Stripe permissions; otherwise, you may not be able to access this option:

After saving the changes, you should see on the main invoice page that the invoice is marked as “Canceled”, and that the credit note has been sent to the user.

Image: Stripe failed invoices and issuing credit notes in lieu of a refund
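The arithmetic behind a pro-rated credit note is simple; here is a minimal sketch. The function name and relief percentage are hypothetical - the actual amount is whatever you agree with the customer and enter in Stripe:

```python
def credit_note_amount(invoice_total: float, relief_pct: float) -> float:
    """Amount to credit back: 100% for full compensation, less for pro-rated relief."""
    if not 0 <= relief_pct <= 100:
        raise ValueError("relief percentage must be between 0 and 100")
    return round(invoice_total * relief_pct / 100, 2)

# Full compensation vs. a 40% pro-rated relief on a $1,250 invoice:
print(credit_note_amount(1250.00, 100))  # 1250.0
print(credit_note_amount(1250.00, 40))   # 500.0
```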

As a best practice, always engage with the user first to understand their specific situation. This allows them to confirm whether an unexpected event occurred on their end, rather than us proactively rectifying failed invoices. The reason for this is that the customer may churn, and in that case, the invoice might be considered uncollectible rather than refunded.

Lastly, always add a note in Stripe to explain why credits are being issued.

New hire onboarding

Onboarding | Source: https://posthog.com/handbook/onboarding/new-hire-onboarding

Your first few weeks

Welcome to PostHog's Onboarding team! We only hire about 1 in 400 applicants, so you've done well to make it here! Unlike a lot of companies, we don't have a super-long onboarding process and would prefer you to be up and running with your customer base as quickly as possible.

Here are the things you should focus on in your first few weeks at PostHog to help you achieve that.

Ramping up is mostly self-serve - we won't sit you down in a room for training for 2 weeks. If you're not sure who is supposed to make something below happen, the person responsible is almost certainly you!

Also look at the sales team's onboarding page for guidance on what _not_ to do when you start. In general, there are a lot of good resources within Sales to reference (as we were previously one team!).

Day 1

Rest of week 1

Week 2

In-person onboarding

Ideally, this will happen in weeks 2-3, in the company of a few colleagues (depending on where we do it and who's around). It will be 3-4 days covering (among other things):

A detailed training plan is available in the new hire onboarding checklist. This is a checklist for the manager; you don't have to read it beforehand.

Weeks 3-4

How do I know if I'm on track?

By the end of month 1:

By the end of month 2:

By the end of month 3:

General expectations

Our customers are always central to our work. There’s time and space to work on fun projects, but that should never happen at the expense of our customers. Below are some non-exhaustive tips to help you stay on track during and after the probation period.

Core responsibilities

Communication and ownership

Staying up to date

Everything at PostHog changes really fast. Here's how to keep up, as a start:

Onboarding conversations playbook

Onboarding | Source: https://posthog.com/handbook/onboarding/onboarding-conversations-playbook

Our customers are busy, self-serve by default, and allergic to anything that feels like a time sink. We deliver the most value when we can talk directly, so it’s worth being intentional and trying creative ways to earn that conversation.

That said, we’ve repeatedly seen customers implement our recommendations even when they never reply. That’s why we don’t gate value behind a meeting - we provide it regardless.

Check out the Getting people to talk to you page in the Sales Handbook and our learnings below. As you experiment, add more and share what worked!

Our guiding principles

Outreach

Your first message is your best chance to earn attention. It should feel like practical help from a real person - not a pitch. Lead with a specific observation, a clear benefit, and an easy next step.

Captivating subject lines

Avoid generic subjects (“Checking in”, “Following up”). Instead, experiment with short, specific lines and anchor them to a specific outcome.

Use the following product signals:

- “Are you trying to do [goal]? (I can help)”

Content

Keep it short. Don’t overwhelm the reader. It’s tempting to include every tip and best practice, but concise emails get read and replied to. Share the headline observation and the next step; save the deep dive for the call (or a follow-up).

Set expectations early. If you want consistent engagement throughout onboarding, be explicit about what the program includes and why it’s worth their time. When customers know what to expect and how to use our time, they’re more likely to participate. Setting clear boundaries also helps - what you can help with, and for how long we’re around.

Use prior context to be proactive. Before you hit send, take a minute to scan prior threads. If a customer spoke with Sales during an evaluation, check what came up and reference it (e.g., “I saw you covered X with [Name]”) so your email feels connected. And look for other loose ends too, e.g., an old support ticket, or a question from months ago. Following up with a real solution feels personal, and proactive delight gets noticed.

Checking in

This is where we can have a real impact on product adoption and usage expansion. Think of it as a value-driven "soft cross-sell".

Don’t just repeat yourself. Avoid rehashing the same observations from your first message. If your earlier advice still hasn’t been implemented, send a small, friendly nudge. Otherwise, bring something new:

Mainly, help them get to an “aha” moment, and/or suggest one or two features they’d benefit from, but may not have discovered or had time to try. PostHog features become more powerful when used together (e.g., funnels/error tracking + session replay + PostHog AI). Share a specific guide, an example, or a Loom video, so the customer doesn’t have to poke around to figure it out. You can take some inspiration from Use Case Selling handbook pages.

Lastly, if the customer is trending toward growth (usage, team expansion, increasing volume), it’s okay to mention pre-paid credits and the option of dedicated human support early. Framing it as “when you’re ready” gives them time to consider it and makes a future Sales handoff smoother.

No response?

Review the list of users on the account: who’s active in PostHog, what roles they have, and who is most likely to own outcomes (implementation, analytics, product, engineering) vs. commercial topics (billing/procurement). Choose a small set of the most relevant people (3-4 total) and avoid repeatedly emailing everyone.

Tailor the email to their likely concerns:

A small, human touch can help here! Use what’s publicly obvious or clearly relevant (their product category, their website messaging, their goals). If you genuinely relate (e.g., you’re learning a language and they build a language app), one sentence can be enough to build rapport. That’s also a great tip for the first outreach.

Preparing for the call

Start from a health check

Use Vitally and Metabase to understand the customer’s current setup. For easier access, you can pin the "Engagement Metric Dashboard" custom trait in Vitally, where you can take a closer look at power users in the organization, the usage of AI or error tracking, and more.

You can supplement Metabase analysis with the HogSpy extension to audit the implementation of identify, flags, and experiments.

Then zoom out to learn about their business, their product, and the rest of their stack. The better your context, the faster you’ll get to relevant recommendations.

Lead with their KPIs

Use the customer’s KPIs (usually captured in the booking form) to drive your prep. Ask yourself: what would “success” look like for them? Come prepared with 2-3 concrete use cases tied to those KPIs (e.g., a specific insight type, dashboard, funnel, experiment, etc.). This Handbook page can be a good source of inspiration.

Map the stack and spot opportunities

Check Wappalyzer (login details in 1Password). It's not always perfectly accurate, but it's usually good enough to understand the tools they rely on. Use it to identify integrations and suggest Sources/Destinations where it makes sense (e.g., HubSpot).

It might be a great moment to position PostHog as the place where multiple tools can connect under one hood.

Customers respond well when we’re proactive, especially when we show them a path they hadn’t considered. PostHog is most powerful when features compound, so part of prep is identifying the next adoption step that unlocks more value. You can take some inspiration from Use Case Selling handbook pages as well.

Use AI to broaden your angles

AI can help you sanity-check assumptions and surface ideas you might miss. Customer-facing teams at PostHog use PostHog AI, Claude (with PostHog + Vitally MCPs), Cursor, or Antigravity. Use it to generate questions, identify likely “aha” moments, and draft call checklists, then apply human judgment to keep it relevant.

You can also run PostHog AI on the customer instance (visible only to us, no cost incurred) to do the account audit. Prompt below.

<details><summary>PostHog AI prompt</summary> Analyze the organization across the following dimensions using the last 30 days of data.

  1. Instrumentation health
  2. Feature flag usage
  3. Product usage patterns
  4. Session replay
  5. Underutilized PostHog features
  6. Cost optimization

Summarize findings with a prioritized list of recommendations:

Follow-up with: Now go look at their business and domain. What should they be doing to get more use and value out of PostHog?

</details>

On the call

- Connect features. Show how features compound and check this Handbook page for inspiration:

Email Follow-up

Onboarding Data

Onboarding | Source: https://posthog.com/handbook/onboarding/onboarding-data

Data architecture overview

Data used by Onboarding Specialists comes from three main sources:

Billing Postgres _admin panel view_

Production Postgres _admin panel view (US)_

ClickHouse

Query capabilities

Vitally integration

We sync customer data between Vitally and PostHog bi-directionally.

Data sync pipeline

To Vitally:

From Vitally:

Known limitations

Onboarding pipeline tracking

Pipeline stages

We track customers through defined onboarding stages with automated timestamp capture:

  1. Onboarding segment entry - Customer enters onboarding criteria
  2. Outreach sent - Initial contact via email (manual update)
  3. Customer engagement - Response received (manual update)
  4. Nurture phase - Post-intro call follow-up (manual update)
  5. Completion/churn - Final outcome tracking

Each stage transition is managed through Vitally playbooks with automatic timestamp updates.
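A minimal model of that timestamp capture might look like the sketch below. Stage names mirror the list above; in reality, Vitally playbooks handle this, not code:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical stage list, mirroring the onboarding pipeline above.
STAGES = [
    "segment_entry", "outreach_sent", "customer_engagement",
    "nurture_phase", "completion_or_churn",
]

@dataclass
class OnboardingRecord:
    timestamps: dict = field(default_factory=dict)

    def advance(self, stage: str) -> None:
        if stage not in STAGES:
            raise ValueError(f"unknown stage: {stage}")
        # Capture the transition time once; repeat calls don't overwrite it.
        self.timestamps.setdefault(stage, datetime.now(timezone.utc))

record = OnboardingRecord()
record.advance("segment_entry")
record.advance("outreach_sent")
print(list(record.timestamps))  # ['segment_entry', 'outreach_sent']
```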

Key data tables

For onboarding analysis, these tables provide essential data:

| Table | Purpose | Key fields |
|-------|---------|------------|
| invoice_with_annual | Billing data with revenue amortization | Revenue (mrr), billing period, type (annual, completed, upcoming, etc.) |
| vitally_accounts | Customer properties and traits | All Vitally custom traits, health scores, usage |
| posthog_organization | Org-level configurations | Settings, feature access, creation date |
| posthog_project | Project/team settings | Project configuration, team members |
| billing_spike | Usage anomaly detection | Spike timestamps, magnitude, affected metrics |

Onboarding program

Onboarding | Source: https://posthog.com/handbook/onboarding/onboarding-program

Getting started with any new tool can be overwhelming, and PostHog is no exception. We want to make sure you're configured correctly, using the right features for your needs, and seeing real value.

That's why we offer personalized onboarding. Whether you need help with initial setup, want to optimize your billing, or are looking to align PostHog with your business goals, we're here to help.

What to expect

Our onboarding program spans 8 weeks and includes:

Timeline

Here's a typical timeline, though we're flexible and can adjust to your schedule:

Your success is our success

Here's the thing: most teams struggle with knowing what to track, not just how to track it. The first call gets you set up correctly. The second call is where the magic happens. We'll help you decide on the metrics that actually drive your business decisions and create a roadmap for using PostHog strategically.

This is where customers see the biggest ROI, and it's completely free. We highly encourage you to take advantage of it.

Onboarding team

Onboarding | Source: https://posthog.com/handbook/onboarding/onboarding-team

How we work

First and foremost, we’re account-agnostic, which makes us different from other GTM teams. We don’t have our own books of customers; our focus is on being fast, responsive, and available to a huge number of customers. This is precisely why we use, e.g., a Team Link for customers to book a call - they can choose the person closest to their time zone, and both the experience and the value provided remain the same across the team.

Day-to-day, we collaborate closely with Account Executives and Account Managers, especially when a customer would benefit from a dedicated PostHog human, and the Support team on solving issues.

Onboarding sessions are a mine of information about our users and their needs, which makes us a fantastic liaison for the Product teams. We share product feedback whenever it surfaces.

Since the Onboarding team is still a relatively new addition to a wider GTM team, we're a highly collaborative and creative bunch who are not afraid to try new ideas, iterate, and build the foundation for the future Onboarding endeavors.

What does this team do?

The core job of an Onboarding Specialist (OS) is to ensure a successful start of the user journey with PostHog. That means making sure that our customers get the most value out of using PostHog, they are aware of best practices, their setup is solid, and they don’t pay for something they don’t need. Ultimately, we serve as the customer's sparring partner in achieving their goals, so we need to understand their needs, their business, and where they’re coming from.

The north star metric for the Onboarding team is 3-month logo retention at 90% from the first $100+ forecasted bill, which can be tracked in the onboarding team retention dashboard.

We also care about net dollar retention for this segment, but we treat it as an auxiliary metric.

Which customers get onboarding?

The segment consists of customers who self-serve PostHog and generate a forecasted bill of over $500. In practice, because billing is metered and in arrears, we don't know what people will pay when they sign up (or when they first exceed a $100 forecast), so _most_ accounts with a > $500 forecast are routed to us. We also handle a couple of other segments:

Which customers are out of scope

Since we primarily focus on customers who've signed up and have a forecasted bill, in most circumstances, we're not the right choice to talk to customers who've:

Merch store consultation

Customers who normally fall outside our scope still have a chance to get help! They can buy an Onboarding consultation via our merch store.

After making the purchase, the customer gets a link to book a meeting; they can contact our Billing team if they can't find an appropriate time slot (the Billing team also handles issuing credits/refunds accordingly). Since it's a paid service, we should prioritize these calls and try to make space in our calendars if possible.

A few things to keep in mind:

Internally, when someone purchases a call, we get notified in our Slack channel. Check who completed the purchase, look them up in Vitally for more context, and check whether they booked a call. Change the status in Vitally to Paid Call purchased for tracking purposes, and add a note if needed.

Tooling

Check out the list of shared tools.

The team-specific tools for this team are:

How to succeed

How to deal with complex technical issues

Our role is pretty hybrid and lives at the intersection of other teams. As much as we love solving our own problems, escalations may happen. Here’s a brief guide on how to handle them:

How to deepen your knowledge

Onboarding process and tracking

Onboarding | Source: https://posthog.com/handbook/onboarding/onboarding-tracking

The onboarding team operates a high-volume, high-velocity sales pipeline covering all pay-as-you-go (or YC) accounts that are forecasted to spend > $500 and are not otherwise engaged by Sales/CSM. As such, Onboarding is a linear flow moving from initial outreach to confirming the product is configured properly, ending with customers who are happy paying multiple bills. We aim to keep engagements to ~8 weeks, or 2 full billing periods, but in practice there is some spillover depending on responsiveness.

Principles

Our onboarding program was created to offer necessary help, increase the value our customers get from PostHog, and assist them in achieving their business goals.

The program is guided by a few key principles:

Internal process

Vitally views

Daily view (link)

Sort your view by the “Next Renewal Date” column to reach out to users in a timely manner. Since our role is focused on proactively providing users with value and setting them up for success, we’ve found it’s best to contact them ~14 days before their bill renews. This gives them enough time to see our email, schedule a call, and implement potential improvements in their setup.
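The ~14-day lead time is simple date arithmetic; here's a sketch (the function name and default are ours, based on the guidance above):

```python
from datetime import date, timedelta

def outreach_date(next_renewal: date, lead_days: int = 14) -> date:
    """When to contact the customer: ~14 days before their bill renews."""
    return next_renewal - timedelta(days=lead_days)

print(outreach_date(date(2026, 6, 15)))  # 2026-06-01
```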

Keep an eye on “Onboarding Pipeline,” which indicates whether the account is New or Onboarding has been initiated.

In the view, you have other useful columns like OS Priority, OS Last Messaged, forecasted MRR, or who’s assigned to the account. All these help in prioritizing your work.

Maintaining good hygiene and attention to detail is key here. Keep labels up to date and make sure not to miss accounts that were recently added to the segment—they might appear at the top of the list among accounts you’ve already worked through.

Remember to add a short summary from meetings in a Note, and if you need to follow up at some point, create a Task with a due date.

Kanban view (link)

A supplementary view that’s great for getting a general overview of progress.

Onboarding program - logic and sequence

There are two paths for customers to progress through the onboarding process: those who engage with us in some way, and those who show little or no engagement.

User engagement is tracked behind the scenes with timestamps in Vitally, which lets us query the relevant data.

For day-to-day operations, these are the statuses we use to track users in the Onboarding Pipeline property:

The last two are not numbered, as they happen "outside" of the regular pipeline.

Note: You may need to add this property to your views in Vitally. It's found under Custom Traits.

For higher-spend accounts ($500+), we have a Check-in Onboarding Status property that's triggered between days 15 and 21 of the Onboarding Journey. It serves as a visual helper and a reminder to circle back to the account, see if our advice was followed, record a Loom video, or share some extra resources. It's a great opportunity to re-engage customers and show them other PostHog capabilities they may not know about.
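As a sketch, the trigger condition reads as follows. The day window and the $500 threshold come from the text; the function itself is hypothetical:

```python
def checkin_due(days_in_journey: int, forecasted_mrr: float) -> bool:
    """The check-in fires between days 15 and 21 of the Onboarding Journey,
    for $500+ accounts only."""
    return forecasted_mrr >= 500 and 15 <= days_in_journey <= 21

print(checkin_due(18, 750))  # True: $500+ account inside the window
print(checkin_due(18, 200))  # False: below the spend threshold
print(checkin_due(30, 750))  # False: past the window
```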

The complete Onboarding Journey looks as follows:

| Weeks | Actions |
| -------- | -------- |
| Week 1 | First outreach to a New Account |
| Week 2 | |
| Week 3 | Extra check-in for $500+ accounts |
| Week 4 | |
| Week 5 | 2nd outreach (automated) - all accounts |
| Week 6 | |
| Week 7 | |
| Week 8 | Graduation (automated) - all accounts |

The last two stages of the Onboarding Journey are automated with Vitally playbooks. Second outreach prompts users to surface unanswered questions and book a session with us, and the Graduation email is a nice way to conclude the journey and point out other avenues where users can get help. It's also where we ask for feedback about our Onboarding.

Account analysis for outreach and meetings

How this is organized in Vitally via Playbooks

General playbooks
Setting timestamps for each stage
Automations
Other

Segments

Going forward, we only have one main segment: Onboarding Lead. We'll be retiring Onboarding - engaged as soon as we have worked through all the legacy accounts.

We also use the Onboarding Lead 100-500MRR auxiliary segment to provide us with more information about the account and help us prioritize the work.

The Onboarding Completed segment corresponds to the 3.Onboarded trait and serves as a visual indicator for other teams that onboarding has been completed.

Alerts and revenue tracking

We occasionally shift our attention to help customers who may need more urgent assistance. For these, we have a few types of alerts (tasks) in Vitally, where Magda is a failsafe if the account doesn't have an assigned OS.

To help our Revenue team get the forecasting right, we now have a Payment Risk Assessment field in the Vitally dashboard, where we can manually mark when we see that the customer is unlikely to pay their invoice.

Specific cases

How to stop automation

There might be some specific situations where you're actively engaged with the customer and don't want the email automation to fire (e.g., the second outreach). You can easily spot when the email automation is scheduled by checking the "Onboarding Pipeline Stage Times" widget in the Vitally dashboard.

To stop the automation, you can flip the account to the Onboarding Completed status. This change will block the second outreach, but will still fire the graduation email.

Failed payments

We may get a Failed Payment alert on a customer we still haven't engaged with. In this case, since we're unsure whether the customer is going to stay with us, we don't have to do a deep account analysis. It's enough to remind them to settle the invoice, offer help, and briefly point out obvious spikes and main drivers of the bill.

It's enough to reach out just once, as the finance team monitors the payments and handles the account deactivation.

Currently, we exclude some subject line keywords, like "payment", "outstanding", and "fail" in the "Set onboarding initiated" playbook in order to avoid the status change from New Account to Onboarding initiated. In other words, when you reach out regarding a failed payment, the account should stay as New Account and resurface in Vitally's queue before the next renewal, if the customer settles the payment. When that happens, do our regular outreach with a deep account analysis and enroll them in the program.
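The keyword exclusion amounts to a simple substring check. Here's a sketch, under the assumption that matching is case-insensitive; the keyword list comes from the playbook description above, but the playbook itself lives in Vitally, not in code:

```python
EXCLUDED_KEYWORDS = ("payment", "outstanding", "fail")

def should_set_onboarding_initiated(subject: str) -> bool:
    """Return False for failed-payment style outreach, so the account
    stays in the New Account status instead of Onboarding initiated."""
    lowered = subject.lower()
    return not any(kw in lowered for kw in EXCLUDED_KEYWORDS)

print(should_set_onboarding_initiated("Your PostHog setup review"))      # True
print(should_set_onboarding_initiated("Action needed: failed payment"))  # False
```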

Sales handover

Onboarding | Source: https://posthog.com/handbook/onboarding/sales-handover

Initial qualification

Direct handover - skipping onboarding

If you see that a customer is spending more than $1,000 monthly, evaluate whether their usage looks stable and legitimate, and make sure that MRR doesn't come from an unwanted event spike or misconfiguration issue.

If usage looks healthy, you can pass the account to Sales even without speaking with the customer first, as long as you've confirmed that the high spend is intentional. The goal is to react quickly to healthy, high-spend accounts - but avoid passing through problematic ones.

Before you hand off, also consider month-over-month growth. A flat $1.2k account is a very different lead from a $1k account that doubled organically last month. Growth rate matters to the Sales person deciding whether to prioritise the lead, so call it out.
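Month-over-month growth is worth quantifying before the handover; a minimal sketch (the function name is ours, and the example figures echo the comparison above):

```python
def mom_growth(current_mrr: float, previous_mrr: float) -> float:
    """Month-over-month growth as a fraction: 0.0 is flat, 1.0 means doubled."""
    if previous_mrr <= 0:
        raise ValueError("previous MRR must be positive")
    return (current_mrr - previous_mrr) / previous_mrr

# A flat $1.2k account vs. a $1k account that doubled last month:
print(mom_growth(1200, 1200))  # 0.0
print(mom_growth(1000, 500))   # 1.0
```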

Be courteous and leave a note in Vitally with context on the account before handing off. Include what you spotted in Metabase, any relevant billing patterns, and your read on why the spend is legitimate. The Sales person receiving the lead should be able to pick it up without having to dig.

Handover during onboarding - engaged customers

While talking with customers or analyzing the account, do some discovery to understand the reason behind their high spend and assess whether there's potential for stable spend or usage moving forward. If they’re happy continuing with PostHog, you can mention our discounted pre-paid plan, which helps them save ~20%. However, if they prefer paying monthly, they are more than welcome to do so!

We typically hand the account over to Sales when a customer is interested in the annual plan, requires additional contractual or legal support, or we notice potential ourselves.

When handing the account over to Sales, make sure to record our point of contact in Vitally, i.e., the person you've been in touch with.

Unresponsive customers during onboarding

Historically, there's been a good chance that unresponsive customers will still talk with Sales after we pass them on! AEs have been successful in reaching out and securing long-term commitments. You don't have to wait for the customer to complete the onboarding program - you can pass them on earlier, even if they didn't respond to our initial message, as long as the account matches the criteria and the handover makes sense.

If there are any pending config issues that you raised before but the customer didn't respond to, just provide relevant context to the fellow AE/TAM in Vitally - sometimes it might be a good conversation starter!

Lead creation

If you come across an account with growth potential or stable, high-level spend (especially if that spend has held for the past two to three months and there are no pending issues to resolve) that might benefit from an annual plan or general Sales engagement, you can add it to the Onboarding referral segment in Vitally. Within a few minutes, this automatically creates a Salesforce lead and assigns it using round-robin logic.

After a few minutes, your lead will appear in the #sales-leads Slack channel, tagged as "Onboarding referral". As a good practice, leave a note in Vitally for the Account Executive with some relevant context on the customer. You can also ping the assigned AE on Slack if any further follow-up is needed.

The automation flips the Onboarding pipeline trait in Vitally to Sales Handoff, so we have not only data on the leads passed but also a visual indication.

Proactively looking out for opportunities

Confusion about previous Sales engagement

Some pointers on what to pay attention to in Vitally while checking for prior Sales engagement:

If Sales are already engaged, there's no need to create an Onboarding Referral.

If the account has engaged with the Sales team at some point and it's unclear where the conversation stands, ping your fellow AE to make sure you’re not overlapping efforts.

If it’s clear there’s a duplication issue and we shouldn’t be involved, ping Mine to double-check the logic.

What to do when Sales is involved?

If an account is in the Onboarding Lead segment, but there are recent Active Conversations in Vitally from a TAE/TAM (or scheduled meetings), and TAE/TAM confirms they are already actively engaged with the account, add a Vitally note saying: “Removing from Onboarding Lead segment — Sales already engaged.” Then remove the account from the segment and delete both the pipeline trait and the timestamp.

Benefits

People | Source: https://posthog.com/handbook/people/benefits

Outside of our generous pay and equity, we also offer several other exceptional benefits to our team. We want to provide exceptional benefits when it comes to things that help you do your job better, and in line with the market for well-funded startups for everything else.

If you have any ideas for how we can improve our benefits offering, then please let us know!

Time off

Everyone in the team has unlimited, permissionless time off.

We also offer parental leave for new parents.

Equipment and co-working

As we are fully remote, we provide all equipment you need to have an ergonomic setup at home to be as productive as possible. We provide all team members with a company card for this purpose.

If you ever need a change of scenery, co-working spaces, cafes, or WeWork All Access are available - just follow our expense policy, i.e., we trust you to do the right thing.

Please message Kendal to get added to our company WeWork account.

Meeting up

We do regular team offsites - recent trips have included Mexico, Aruba, Iceland, and Portugal! Small Teams also have their own offsites at least once a year.

We also encourage people and teams to meet up in person _in addition_ to the offsites. If you are working on a problem that is better worked on in person, then you should do this. Our expense policy is about trusting you to make the best decisions. Travelling can be distracting so we expect you to exercise judgement when doing this.

For any work-related travel, we also use Project Wren for carbon offsetting.

Free merch

People like our merch. If you want more, here's how to get it!

As always, we expect you to use this with restraint and with your own good judgement. The merch store should not become your sole source of clothing for your wardrobe, nor where you go any time a friend has a birthday. But sure, go ahead and buy your mom (or yourself) a hat or a hoodie!

Support open-source projects

Everyone gets a monthly open-source sponsorship budget to spend as they see fit to support open source projects of their choice.

We'll be your first investor

If you spend two years at PostHog and leave to start a new company, we'll be your first investor and biggest cheerleader. We're looking for entrepreneurs and a strong "Why not now?!"

Learning and development

We currently offer a Training budget and free books - you can find more on the relevant pages.

Country specific benefits

With everyone being distributed across the world, we do our best to provide the same benefits to everyone, but they vary slightly by country depending on the services that are available and local regulations.

US

401k contribution

In the US, our 401k plan is managed by Vestwell and we match up to 4%.

Health care

In the US, you'll enroll in benefits through BambooHR and manage your coverage through UnitedHealthcare for medical and Guardian for dental and vision. PostHog pays 100% of the premium of the Platinum plan for team members, and 75% for dependents.

We offer the option to opt in to a Flexible Spending Account (FSA), which is a tax-advantaged account that allows you to contribute pre-tax dollars, up to $3,400 per year, to be used on out-of-pocket medical expenses. The FSA is a "use it or lose it" benefit, so any dollars that are not spent by the end of the year return to the company.

There is also the option to choose a lower tier, high deductible health plan (HDHP), which will qualify you for a Health Savings Account (HSA) that has further tax benefits beyond what the FSA provides. At the end of the year, any unused money rolls over and the contribution limit resets.

UK

Pension

In the UK, we use Royal London. Team members contribute 5% and PostHog contributes 4%, but you can opt out if you like. You can also transfer out of the plan as frequently as you want, in case you would rather manage your own private pension.

Private health insurance

In the UK, we use Aviva for private healthcare (£100 excess per policy year) and Medicash as our cash plan for dental and vision. Children are included for free. Both of these are taxable benefits which will affect your Personal Allowance each tax year, and you can opt out at any time with 1 month notice.

Nursery

In the UK, we offer the workplace nursery scheme. This enables you to pay for your children's nursery using your pre-tax salary, saving you up to 45% in nursery fees.

If you are interested in this, first check with your nursery that they are part of the scheme, then message Kendal to get this set up.

Cyclescheme

In the UK we offer Cyclescheme to save money on new cycling gear. To get started, activate your Cyclescheme account via the Workplace Extras registration form.

Other countries

Pensions

In countries where you are employed under Deel's EOR service, we make pension contributions in line with legal requirements.

Unfortunately, we are currently legally unable to provide pensions to contractors.

Private health insurance

We offer private health insurance in countries where it is considered market standard to do so. For Ireland, Spain, Netherlands, Portugal & Canada, the health insurer varies depending on market and offering via the Deel platform and can be subject to change. Please log in to Deel to find the policy relevant to your market, or reach out to the Ops team if you have any questions.

BookHog

People | Source: https://posthog.com/handbook/people/bookhog

BookHog is PostHog's official book club. We meet once a month to discuss a particular book. Radical.

Michael is the organizer and picks the next book to read through a pseudo-democratic process. Previous themes have included 'unorthodox manoeuvres', 'short stories', and 'super mainstream beach reads'. All discussion and voting for the next book to read happens in the #books-and-films Slack channel. Previous books we have read:

  1. The Panama Papers
  2. Exhalation by Ted Chiang
  3. The Spy and the Traitor
  4. Pride: The Story of the LGBTQ Equality Movement
  5. Soon I Will Be Invincible
  6. Six Easy Pieces by Richard Feynman
  7. Stories of Your Life and Others
  8. The Order of Time
  9. His Master's Voice
  10. When Breath Becomes Air
  11. Arnold: The Education of a Bodybuilder
  12. A Billion Years: My Escape From a Life in the Highest Ranks of Scientology
  13. Dune
  14. Zen and the Art of Motorcycle Maintenance
  15. Team of Rivals
  16. The Richest Man in Babylon
  17. Surely You're Joking, Mr. Feynman!
  18. A Brief History of Intelligence
  19. Meditations by Marcus Aurelius
  20. The Chemistry of Death
  21. Countdown to Zero Day
  22. Drive Your Plow Over the Bones of the Dead
  23. No Rules Rules

Books can be purchased using your monthly books budget.

Career progression

People | Source: https://posthog.com/handbook/people/career-progression

Helping the company win helps you win

The best way to progress your career at PostHog is to understand your team’s _and_ PostHog's objectives, then:

You are what you do. Getting promoted in a company that is struggling is very hard. However, if the company is succeeding, it'll be easy to justify, _and_ to afford, pay rises where people are performing very well.

Give a shit about your work, your team, and our users

These three are the inputs that lead to the output of career progression. If you focus only on yourself, no value is provided and you won't progress. If you only focus on your team, you won't build the right thing for our users. Having a consistently caring attitude will, in the long term, lead to progression - if you do this, PostHog will help you progress.

When we IPO, you will literally walk into any job anywhere

While being able to talk about all the cool stuff you built will help you in your future career, being an early employee who helped take us from very early stage to public is a huge and _exceptionally_ rare career achievement. That's how you leap multiple positions into an exec role, or whatever else you want to do.

Ways we help you progress

Hire and maintain a team of excellent people, all working transparently, that you can learn from

We are disciplined with maintaining a high bar. And since everyone works so transparently, you can learn from watching what everyone is doing - from how board meetings work, to why we picked a company strategy, down to why our frontend is the way it is.

Give you loads of autonomy

We don't limit you, and will push for much more than you may think is possible. It will feel hard, but rewarding. You will get used to not asking for approval.

Give you lots of interesting problems to work on

PostHog has a wide variety of challenges - from data, to entire new products and features, to design and UX tradeoffs. On the go-to-market side, we're wildly different - you'll learn about self serve, bottom up adoption, handling a community, and how giving things away for free leads to us making money.

We have small teams - we can move people around as we grow to provide variety and to let people switch up their focus if things get stale.

Lightweight management

You have someone to talk to, but without being micromanaged. Their priority is to support you, and we give them resources to make them a better manager. They will also do a regular career check-in with you as part of your 1-1s to ensure you're on the right track.

Build a huge open source portfolio

Better than a fancy title - you can show future employers or investors what you built and the problems you solved.

Your team around you sees your everyday work more than a manager - get direct feedback from them

Great people + direct feedback = learning.

Ways we do not help you progress

A checklist of things / a formal career progression framework

This is self-interested by its nature, so creates the wrong incentives. The benefits of frameworks only start to outweigh their drawbacks when you need to start coordinating 100s of people.

Fancy titles

We don't have a wide range of titles - we want people to be as equal as possible in order to enable autonomy versus micromanagement. Your open source work speaks for itself.

Getting a manager to progress you

This gives too much power to managers. No one else can really do this for you - your motivation to progress has to be intrinsic to be sustainable.

Compensation

People | Source: https://posthog.com/handbook/people/compensation

How it works

We have a set system for compensation as part of being transparent.

You can use our compensation calculator below to see what your compensation might look like when you're joining PostHog, and to see how it might develop over time:

We think the fastest possible shipping comes from a leaner and stronger team. We pay generously, so you'll work with the best people in the world.

Important:

Level

Level does not correlate with increased importance, but with impact within PostHog. Your level is _not_ a title - we don't believe in having a huge hierarchy of roles, as everyone needs to feel like the owner of the company that they are.

Very broadly, we think of the various levels as:

It's important to note that this is not a checklist. These descriptions are indicative, and there will always be a degree of judgement involved in deciding which level you're at, based also on other people within PostHog.

Step

Within each level, we believe there's a place to have incremental steps to allow for more flexibility. We define these as follows:

With the exception of team members at the very beginning of their career, or where it is their first time in this type of role, we hire into the Established step by default. This will give everyone the opportunity to be set up for success and leave enough room for salary increases, without the need to move up in seniority.

Here's how to move from one step or level to another.

Benchmark

In line with our compensation philosophy, the benchmark for each role we are hiring for is based on the market rate in San Francisco.

We use Pave as our main source for our salary benchmark and build a target range based on that data.

Because the engineering market is very competitive, and we think there is a 10x difference between an average and a top engineer, we pay near the top of market, which we define as being the 90th percentile, at the time of review. For other roles we still try to pay towards the top of market, which we define as 50th percentile + 20%.

Location factor

Most of our location factors are based on GitLab's location factors. Location factors are based on _cost of market_, not cost of living. This means that we look at how much it typically costs to hire a person in that role in that location, not how much it costs to live there. This is why, for example, our location factor for San Francisco is the highest, even though there are several other places that are more expensive to live.

We set a floor of 0.8 in the US, and 0.6 everywhere else, to avoid creating huge disparities in pay if someone happens to live in an exceptionally low cost of living country/state.

GitLab uses a combination of data from Economic Research Institute (ERI), Numbeo, Comptryx, Radford, Robert Half, and Dice to calculate what a fair market rate is for each location. Read more on how GitLab calculates this location factor.

The location factor takes your local exchange rate into account, so we don't have to keep updating exchange rates when they fluctuate. The floor also helps mitigate this. You will always be paid in your local currency, unless there is a very good reason not to (e.g. it is normal in your country to transact in USD).
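Putting the benchmark and location factor rules above together, the calculation can be sketched roughly as follows. All the dollar figures here are hypothetical; the real compensation calculator on posthog.com is the source of truth.

```python
# Rough sketch of the compensation formula described above.
# Benchmark figures are invented for illustration only.

def target_pay(sf_benchmark_p50, sf_benchmark_p90, location_factor,
               is_engineering, in_us):
    """Combine the San Francisco benchmark with the location factor and floor."""
    # Engineering pays at the 90th percentile; other roles at p50 + 20%.
    if is_engineering:
        benchmark = sf_benchmark_p90
    else:
        benchmark = sf_benchmark_p50 * 1.20

    # Location factor floor: 0.8 in the US, 0.6 everywhere else.
    floor = 0.8 if in_us else 0.6
    factor = max(location_factor, floor)
    return benchmark * factor

# Hypothetical engineer: $200k p90 SF benchmark, living somewhere with
# a raw location factor of 0.55 (below the 0.6 floor, so it is lifted).
print(target_pay(150_000, 200_000, 0.55, is_engineering=True, in_us=False))
# 200,000 * 0.6 = 120,000
```

The `max(location_factor, floor)` step is what prevents pay disparities in exceptionally low cost-of-market locations, as described above.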

Executive compensation

For hiring into executive roles, we use a separate database of compensation benchmarks rather than this calculator. The terms of access to this (paid) database means that we're not able to share it publicly.

The benchmark data is all we use for executives. As a rule, executives are paid above average but not top of market.

The reason for less sophistication here is that we have very few executives, and only one for each role by definition. It's irrational to create a system so that, within a given benchmark, people are paid equally when there is just one person to consider!

Pay reviews

We review pay proactively and currently run pay reviews for the whole team 3x per year - usually around March, July, and November. You do not need to do anything - our goal is to keep your compensation at an appropriate level without you having to ask.

As we do these much more frequently than regular companies, team members should definitely not expect these to result in a change to their Step or Level each time - mostly they will stay the same. Additionally, team members will find that their Step, or place within a Step's range, will change more frequently than their Level.

Finally, we may change pay without editing Step or Level if the market rates for the underlying benchmark have gone up. When we review pay we don't take inflation into account, as this is already accounted for by market data. Thus, you won't get a yearly "inflation raise" as is typical in many localities, though our review process and benchmarking ensures your salary will remain in-line with the market.

We do also regularly increase benchmark levels when we are making a deliberate attempt to raise the bar in terms of hiring. This means that, when a benchmark is increased, it's not unusual for your level and step to come down. You will still get a pay increase, just not always at the same % increase as if the benchmark alone went up.

How the review process works

To make sure everyone has an equal chance at getting a pay rise, we do not factor in how frequently someone requests one. When increasing pay we only look at our calculator and performance. This helps us to be as inclusive as possible, as underrepresented groups are statistically less likely to request a pay rise.

We want to increase pay as frequently as we can in a proactive way, rather than putting the onus on the team member to negotiate every time. We don't make any changes outside of these reviews - if you change role for example, any changes will usually happen at the next closest review.

Any increases will be communicated by the relevant Blitzscale team member, as compensation is not a manager's responsibility at PostHog. If you need to talk to someone about your compensation or how the calculator works, you should ask the relevant member of the exec team in the first instance.

You will only hear from them if there has been a change in your pay, not if it is staying the same.

If you recently accepted an offer at PostHog and the benchmark changes between you accepting and joining PostHog, your pay will be re-assessed during the next pay review. Often this just means adjusting benchmark, step, and level - it is unlikely that your actual pay will go up.

Relocating

If you're planning on relocating, your salary may be adjusted (up or down) to your new location. This will be done at the next compensation review. If this represents an increase in pay, _we need to approve this change in advance_ - we cannot guarantee it is always possible, as our budgets may not allow it.

If you are nomading, we will set your location factor for the place that you are spending the most time in over the next 6 months. Our frequent compensation reviews mean that we can make adjustments reasonably frequently, but again any increase needs approval in advance.

Please note that there are a few countries that we don't employ people in.

Equity

It’s important to us that all PostHog employees can feel invested in the company’s success. Every one of us plays a critical role in the business and deserves a share in the company's success as we grow. When employees perform well, they contribute to the business doing well, and therefore should share a part of the increased financial value of the business.

As part of your compensation, you will receive share options in the company. We do not have a strict calculator here, but broadly you receive equity based on your role, level and location. Our general philosophy here is average equity, with extremely employee-friendly terms and options for liquidity through secondary.

Whilst the terms of options for _any company_ could vary if we were ever acquired, we have set them up with the following key terms which we believe are industry-leading in their friendliness to employees:

It can take time to approve options, as it requires a board meeting and company valuation. We can clarify the likely time frame at the time we're hiring you. Vesting will always start from when you joined PostHog, not from when you receive your option agreement. While we can commit to a particular number, we cannot commit to a particular strike price when offering share options, as the valuations are done by a third party and can vary depending on where we are in our funding cycle.

Check out our share options FAQs to learn more.

Equity refreshes

Every employee will be eligible for equity refreshes each year you are working at PostHog. These grants are between 18%-25% of the value of a new grant for your current role. The percentage is based on your performance and can vary year by year.
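As a quick illustration of the 18%-25% range above (grant values are hypothetical; the actual percentage depends on performance):

```python
# Sketch of the equity refresh range described above.
# The new-grant value is made up for illustration.

def refresh_range(new_grant_value):
    """Yearly refresh is 18%-25% of the value of a new grant for the role."""
    return (new_grant_value * 0.18, new_grant_value * 0.25)

low, high = refresh_range(100_000)  # hypothetical $100k new-hire grant value
print(low, high)  # 18,000 to 25,000 depending on performance
```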

These equity refreshes will be decided at our pay review cycles that happen 3 times a year, in roughly March, July & November, and are communicated by the relevant #team-blitzscale member at that time. Grants are approved quarterly by the board, though vesting is back-dated to begin on the actual anniversary date. You'll know your grant has been approved when you get an email from Carta regarding the grant. These refresher grants will be on the same terms as your original grant, with a 12 month cliff, though they will likely be subject to a different strike price due to changes in valuations.

Funding rounds disrupt when we are able to issue new grants, so approvals may be delayed if we are actively fundraising.

Probation period

We are fully committed to ensuring that you are set up for success, but also understand that it may take some time to determine whether or not there is a long term fit between you and PostHog.

Subject to certain exceptions for sales roles and German employees mentioned below, the first 3 months of your employment with PostHog is a probation period. During this time, you can choose to end your contract with 1 week's notice. If we choose to end your contract, PostHog will pay you 4 weeks' base salary, but will usually ask you to finish on the same day.

People in sales roles, such as Account Executives, have a 6 month probation period - this is to account for the fact that it can be difficult to establish whether or not someone is able to close contracts within their first 3 months, given sales cycles.

German employees also have a 6 month probation period - this is to align with market standard best practices and expectations for hiring in Germany, as it can be operationally difficult to part ways with German employees so we ask for as much information as possible to establish whether the hire is a good, mutual, long-term fit. During probation, either PostHog or the German employee may choose to end the employment contract with 1 month notice.

Your manager is responsible for monitoring and specifically reviewing your performance throughout this initial period. If under-performance is a concern, or if there is any hesitation regarding the future at PostHog, this should be discussed immediately with you and your manager.

At the end of your probation period, you won’t usually receive formal confirmation that you’ve passed probation; the default is no communication. By that point, you should already have a clear understanding of your performance and progress through your 30/60/90-day check-ins with your manager.

Severance

At PostHog, average performance gets a generous severance.

If PostHog decides to end your contract after the first 3 months (6 months for sales roles), we will offer you a total of 4 months of base salary (which includes any time we need to give you under the law). To receive these benefits, we will ask that you sign a standard post-termination certificate or release. For our German teammates who have completed their 6 month probation, we will follow the local legal requirements for notice and severance, in line with what is typical in Germany. In some cases, we might ask you to stop working right away and pay you instead of having you work through your notice period, or set up a "garden leave" depending on what is most appropriate for your location and contract. If the decision to leave is yours, then we generally just require 1 month of notice, though this can vary depending on your country's laws or the specifics of your contract.

If you are in a role with a commission/bonus component, you will be paid the amount you are owed as of your last day at PostHog.

We have structured notice in this way as we believe it is in neither PostHog's nor your interest to lock you into a role that is no longer right for you due to financial considerations. This extended notice period only applies in the case of under-performance or a change in business needs - if your contract is terminated due to gross misconduct then you may be dismissed without notice. If this policy conflicts with the requirements of your local jurisdiction, then those local laws will take priority.

Contracts

We currently operate our employment contracts in the three geographic regions where we have business entities:

This means, if you live in one of those countries, you will be directly employed by PostHog or the applicable subsidiary as an employee in one of our entities.

If you live outside the US, the UK or Germany, we use Deel as our international employer of record. This means you are technically employed by Deel on our behalf. This doesn't affect your rights or benefits.

In some cases, you may be an independent contractor, in which case you will invoice us monthly via Deel. Deel offers pretty much all countries and currencies. As a contractor, you will be responsible for your own taxes.

Payroll

In the UK and for international contractors, we run payroll monthly, on or before the last working day of the month.

In the US, we run payroll twice a month, on the 15th and on the last day of the month.

Deel runs payroll on the last working day of the month.

Feedback

People | Source: https://posthog.com/handbook/people/feedback

Feedback at PostHog

Sharing and receiving feedback openly is _really_ important to us at PostHog. Part of creating a highly autonomous culture where people feel empowered is maintaining the most transparent and open flow of information that we can.

This includes giving feedback to each other, so we know we are working on the right things, in the right way. While giving feedback to a team member can feel awkward, especially if it is not positive or if you are talking to someone with more experience than you, we believe that it is an important part of not letting others fail.

'Open and honest' doesn't mean 'being an asshole' – we expect feedback to be direct, but shared with good intentions and in the spirit of genuinely helping that person and PostHog as a whole to improve. Please make sure your feedback is constructive and based on observations, not _emotions_. If possible, share examples to help the feedback receiver understand the context of the feedback.

Full team feedback sessions

We run a full team 360-degree feedback session as part of every offsite. Some teams will do them during their own small team offsite, while others choose to do them as part of the whole company offsite. The session gives everyone the opportunity to give and receive feedback to everyone else. If your team works closely with another or is very small, you may combine with another team (but keep attendees to <8 if you can).

Ground rules

How to give good feedback

We know that giving feedback can sometimes be difficult, so here are a few tips on how to give good feedback:

We expect everyone to support each other by giving lots of feedback – it's not ok to stay quiet if you have something constructive to share.

How to receive feedback well

If someone is making the effort to give you feedback, you should reciprocate by receiving that feedback well. Being a good feedback receiver means that people will be more inclined to give you feedback in the future, which will help you to grow!

Here are a few tips to help you do this:

README sessions

At small team offsites we may also run README sessions in addition to 360 feedback sessions. Typically we find it useful to run these README sessions as early as possible during the offsite and before 360 feedback, as they are a great way to get to know your team.

README sessions are an opportunity for you to help others understand more about your background, communication style, and interests. You can share as much or as little as you feel is appropriate. Some things which you may wish to consider include:

It's OK to ask short, clarifying questions when someone has finished, but sessions shouldn't become Q&As.

Team surveys

We run team surveys every 6 months using the _Pulse Surveys by Deel_ Slack app. These are set up to run automatically, including reminder messages in Slack, so you don't need to chase people manually. Charles and Coua have admin access to the surveys in Slack.

The questions are based on the ones used by Culture Amp and cover categories such as Company Confidence, Culture, Growth etc. on a 1 ('strongly disagree') to 5 ('strongly agree') scale. The benchmark used is Culture Amp’s ‘new tech’ companies with fewer than 200 people. We then take the average score out of 5 and multiply it by 20 to get a % number. A bit rough, but close enough that we can compare with the benchmark.
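The rough score conversion works like this (responses invented for illustration):

```python
# Convert survey responses on a 1-5 scale into a rough percentage,
# as described above: take the average and multiply by 20.

def survey_percent(responses):
    """Average a list of 1-5 ratings and scale to a percentage."""
    avg = sum(responses) / len(responses)
    return avg * 20

print(survey_percent([4, 5, 3, 4]))  # average 4.0 -> 80.0
```

A perfect score of 5 across the board maps to 100%, which is what makes the number comparable with the percentage-based benchmark.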

Only the People & Ops and Exec teams have access to the full list of responses, which are not anonymous.

We follow a template to report a summary of the results in an Issue. You can view the latest survey results here - just copy the formatting over.

Current list of questions

Finance

People | Source: https://posthog.com/handbook/people/finance

Finance mission

We exist to make your life easier. You should spend time shipping great products, instead of wrestling with restrictive financial controls. We want to keep things distraction-free for you, and remove admin obstacles from your path. If you are spending your afternoon arguing with an expense system, we have failed.

We aim to build a system that works for you, and not the other way around. The deal is that we take on the messy, admin heavy-lifting behind the scenes. But we ask that you don’t skip the small steps (like uploading a receipt) because when you bypass the easy steps today, it snowballs into a painful cleanup job later.

When we design processes, our first question is “how can we make this disappear for the team?” or “will this ensure fewer Slack pings for us?” We don’t use restrictive approval flows. We operate on high trust and give you the context to make good spending decisions rather than blocking you with red tape. We consider your time very carefully: 15 minutes of distraction per person adds up to days of productivity lost across the company. Sometimes we do have to choose the lesser of two evils, e.g. asking everybody to do a small task now to unlock fewer distractions later.

We also give you financial insights to make PostHog even better - we don’t gatekeep the numbers. We want you to have visibility into monthly, quarterly, annual financial performance and SaaS metrics. We benchmark ourselves against public companies and peers in the industry so we know where we’re headed!

Finance principles

This is how we think about financing PostHog as a business:

Finance runbooks

These can be found in the company-internal repo within the finance directory.

Grievances and Disciplinary Process

People | Source: https://posthog.com/handbook/people/grievances

While these issues are hopefully an extremely rare occurrence, it’s important for us to have a clear process around how we do this stuff in order to ensure everyone is treated fairly and transparently.

A couple of notes before we get started:

These policies are deliberately short and simple, and use the Acas template as a model. If you have any detailed questions about how they work in practice, please ask Charles.

Disciplinary process

In cases of minor misconduct which cannot be resolved informally, we may issue a verbal warning.

In cases of serious misconduct, or multiple instances of minor misconduct, we may issue a written warning, and then a final written warning. If these do not resolve the issue, we may move to dismissal with or without severance, depending on the circumstances.

We may omit any of the stages of procedure listed above as circumstances require - for example, if the misconduct is exceptionally serious.

Serious misconduct includes things such as:

If you are a person being accused of misconduct, you will be advised in writing prior to any relevant meeting with you of your alleged misconduct, and will be given a reasonable opportunity to respond prior to a formal meeting. Meetings are usually held with Fraser. If you are in the UK (or other jurisdictions where the right to bring other people with you is a legal requirement), you are entitled to bring a colleague or trade union representative to these meetings. If this is the case, please let us know who you are bringing in advance.

We will send round written notes afterwards, which will be kept confidential.

Grievance process

All proceedings are confidential, and you will never be punished for bringing a grievance (unless it’s obviously malicious), even if no action is taken.

Victims of harassment or bullying should disengage from the situation immediately and seek support. You can speak to Fraser about your grievance and he can help you. If he is not available, talk to Carol (US timezones) or Tara (Europe timezones).

Most grievances otherwise can usually be resolved informally between you and the person involved - if it is informal and you're unsure what to do, talk to your manager. If it is about your manager, talk to _their_ manager or ask Fraser. If the matter cannot be resolved informally, you should put the details of your grievance in writing and send it to Fraser (or if the matter concerns him, please send it to James or Tim). There is no particular format to follow, and you can start at this step if needed.

To make sure we can investigate your grievance properly:

Fraser will hold one or more meetings to discuss further. If you are in the UK (or certain other jurisdictions where the right to bring other people with you is a legal requirement - in which case we require you to confirm the other attendees in advance of the meeting), you are entitled to bring a colleague or trade union representative to these meetings, and we will send round written notes afterwards, which will be kept confidential to those in the meeting and those the complaint is being made about. The number and type of meetings held is flexible depending on the nature of the grievance. You are not obliged to attend a meeting with the person you have a grievance against if you don’t want to.

If, following investigation, your grievance is not upheld, then we will support everyone in rebuilding their working relationship to the extent it is possible. We may consider making arrangements to avoid the affected parties working together closely.

Whistleblowing

Whistleblowing is where you observe illegal or dangerous behavior, and is different from raising a grievance as it may not affect you directly. In this case, please email Fraser and Hector. This includes things like criminal offences, someone's health and safety being in danger, or damage to the environment. You can also whistleblow about someone trying to cover up information about any of these issues. We will broadly follow the same process outlined above for grievances.

If your concern is a personal one, it will usually not be covered by whistleblowing. In these cases, you should raise a grievance.

Appeals

If you disagree with the outcome of the above processes, you have the right to appeal if you can demonstrate why you believe a particular aspect of the investigation has materially affected the outcome. Appeals must be submitted within 2 weeks of receiving the outcome.

If an appeal is submitted, we’ll arrange a final meeting within a reasonable time period. Any decision made here will be final and there is no further right of appeal. We will aim for the meeting to be held by a member of the Exec team who wasn’t involved in the process previously.

Design Hiring

People | Source: https://posthog.com/handbook/people/hiring-process/design-hiring

Design hiring at PostHog

Our design team is small and we don't hire into this team very often. Please check our careers page for our open roles.

What we are looking for in Design hires

Beyond the specific skills listed in the job description, we always generally look for:

Design hiring process

1. Culture interview

This is our standard first round culture interview with the People & Ops team.

2. Technical interview and portfolio review

The technical interview round is a 2-part interview, lasting up to 90 minutes in total.

The first half of the interview will be with Cory and 1 or 2 team members, and it will focus on your Product and Design thinking. You can expect questions around your typical design process and how you prioritize.

The second part of the interview will be a portfolio interview, where you will meet a few other members of the team. You will present a deep dive into your portfolio, covering the end-to-end process from strategy to design to impact.

3. Design SuperDay

The final stage of our interview process is the PostHog SuperDay. This is a paid full day of work with us, which we can flexibly arrange around your schedule.

A Design SuperDay usually looks like this (_there is a degree of flexibility due to time zone differences_):

In line with our values and culture, you might get short replies like "step on toes" or "bias for action".

Developer Relations Hiring

People | Source: https://posthog.com/handbook/people/hiring-process/devrel-hiring

Developer Relations hiring at PostHog

Developer Relations is relatively new at PostHog. However, the team will be growing as the company grows and as we increase our engagement with developer communities.

Currently, we are hiring for the following Engineering roles:

What we are looking for in DevRel hires

Beyond the specific skills listed in the job description, we generally look for:

DevRel hiring process

1. Culture interview

The culture interview usually lasts around 30 minutes and will be with someone from our People & Ops team. This round is loosely structured into 4 different sections:

  1. PostHog - mission, vision, team, way of working etc. If it was cold outreach, we provide a little more context up front.
  2. Candidate background and mindset.
  3. Talk about the hiring process and check if the candidate has seen our compensation calculator so we know we're roughly aligned.
  4. Answer any open questions.

We are looking for proactivity, directness, good communication skills, an awareness of the impact of the candidate's work, and evidence of iteration or a growth mindset.

2. Technical Interview

The technical interview for most developer relations roles will go into detail about the candidate's experience and thoughts on:

DevRel SuperDay

The final stage of our interview process is the PostHog SuperDay. This is a paid full day of work with us, which we can flexibly arrange around your schedule.

We will share the task with you at the start of the day. The task is representative of the work someone in this role at PostHog is doing, and it is always the same for each candidate, so we can make clear comparisons.

This gives you the chance to learn how we work, and for us to see your quality and speed of work, as well as the way you communicate. It is a very demanding day of work, but we all want you to succeed!

A DevRel SuperDay usually looks like this (_there is a degree of flexibility due to time zone differences_):

You can expect to hear back from us within two working days of your SuperDay. We will also make the payment for your SuperDay as soon as possible.

If we decide to make you an offer, we will most likely arrange a call to discuss feedback and next steps.

If we don't make an offer, we will give you as much constructive feedback as possible.

Engineering Hiring

People | Source: https://posthog.com/handbook/people/hiring-process/engineering-hiring

Engineering hiring at PostHog

Engineers make up around 60% of our team, and we are almost always hiring for Engineering roles. This page provides internal documentation on our engineering hiring process, including roles and best practices for interviewers. You can find all open roles at PostHog on our careers page, in case you want to refer someone.

What we are looking for in engineering hires

Beyond the specific skills listed in the job description, we generally look for:

Engineering hiring process

Hiring is a team effort, and we need everyone to contribute to make the best new hires. Talent Partners handle all scheduling throughout the interview process, and support interviewers, candidates, and hiring leads.

You can find more information regarding the hiring process in the handbook, or reach out to @talent-folks in Slack.

Culture screen

The culture screen is handled by the Talent team. Normally this is a 20-30 min call where they make an initial assessment of the candidate's fit for the job, culture, and communication style, and sort out all logistics.

Technical screen

The technical screen is an hour-long interview with one of our engineers. This might be architecture design or diving into past technical experiences in more of a workshop style. No whiteboarding or brain teasers. We share our guide to preparing for the technical screen with candidates so they know what mindset to bring.

Sometimes when you get part of the way into a technical interview it becomes clear that the person is not a fit. Because rejecting candidates needs to be done in a specific way, please continue the interview as usual and do not reject on the call. It's okay to end the interview a bit early - interviews often don't take the entire time, and it's okay to give this caveat ahead of every technical interview.

You should use the technical exercise guide when evaluating candidates at this stage.

You may be shadowed by another PostHog team member – a shadow is someone who listens in but doesn't participate. We do this regularly among technical interviewers as a way of improving the hiring process. During high season, we may ask some of you to record these interviews for training purposes, to help us onboard and train new interviewers faster. Always ask the candidate for permission before recording – they can opt out either by letting their recruiter know in advance or by telling you at the start of the interview. Opting out will never affect the outcome of the interview; there are many reasons why someone may prefer not to be recorded.

Culture & motivation chat

One of our co-founders or execs – Tim or James, depending on scheduling – will meet with the candidate for a short 15 min chat to dive deeper into culture and motivation.

Engineering SuperDay

The final stage of our interview process is the PostHog SuperDay. This is a paid full day of work, which we can flexibly arrange around the candidate's schedule. We share our guide to preparing for the engineering SuperDay with candidates so they know what to expect.

For full-stack roles, the task involves building a small web service (both backend and frontend) over a full day. The task is designed to be _too much_ work for one person to complete in a day, in order to get a sense of their ability to prioritize (and ship!).

Each engineering SuperDay has a SuperDay buddy. This person will conduct the interview halfway through the day, will be available in Slack throughout the day to answer any questions, and will give feedback on the SuperDay output.

An engineering SuperDay usually looks like this (_there is a degree of flexibility due to time zone differences_):

Usually the SuperDay buddy will review the output, but they can ask other engineers for input when needed, and we'll get back to the candidate with our final decision ASAP (always within a few days).

Overall, candidates should spend at least 80% of their time and energy on the task and less than 20% on meeting people, as we base our decision on their output of the day. However, we encourage everyone to use the Slack channel as much as needed for any questions or problems.

How to become an interviewer at PostHog

As PostHog grows and our hiring goals get bigger, we need more people conducting interviews and assessing candidates. As we scale, it's important to maintain calibration across interviewers by onboarding each new interviewer to the interviewing process carefully.

If you wish to get involved in interviewing, you can do so by contacting the talent team using the @talent-folks handle in Slack in the #team-talent channel. Please note that if you are in your first 90 days at PostHog, you should not be focusing on interviewing – focus on ramping up and onboarding successfully. Even shadowing interviews can be distracting, so consider leaving these until after your first 90 days.

Once you have let the talent team know, you'll work closely with a Talent Partner to get up to speed. This will involve them scheduling you to shadow at least two live technical screens with two of our most experienced interviewers. They will also share the relevant watching and reading materials to consume before conducting your first interview. Your very first interview will be shadowed by one of our experienced interviewers, who will give you feedback on both their assessment of the candidate and on how you conducted the interview. Based on how this goes, either another engineer will shadow one more of your interviews for further feedback, or you can go out on your own.

Your first interview alone

Your first interview alone might feel daunting, so it is best to prepare as much as possible. Read all the available materials on the candidate ahead of schedule, and plan how you will manage your time based on the interviews you've shadowed and the feedback from your own shadowed interview. Block out time at the end of the interview so you have time to write up your notes, reflect on the candidate, and provide feedback. Eventually you will get into the habit of relying on AI notetaker notes (or your own notes) to come back and leave feedback later, but on your first go it's important to have all the information fresh and to give yourself plenty of time to work through this process. You want to feel excited about a candidate once you've finished with them – and because this is a new experience, it's important to give yourself room to gauge that excitement and go into detail on how you rate them. You want to feel beyond reasonable doubt that you are making the right decision.

Keep your notes as extensive as possible – this is good hygiene, as it lets interviewers at later stages dig into areas of uncertainty. Always make any flags or doubts you have very clear.

Your first 5-10 interviews

Once you have conducted your first 5-10 interviews, reflect on how you are getting on. The best way to do this is to review how your candidates have fared. If you've rejected more than 6 or 7, you might be being too harsh. You can ask other interviewers or a Talent Partner to assess your notes and see if they would have been more lenient. The technical screen has about a 50% pass rate, so keep that in mind.

For the candidates you put through, it's worth keeping a close eye on how they perform at stages 3 and 4. Did the flags you raised ultimately lead to them failing – should you have just said no? Were the flags you raised actually not a problem? If so, how can you dig into them next time to understand them better?

You should try and keep your approach relatively consistent for the first 5-10 interviews so you can then introduce changes afterwards and see if they yield better or worse results. If anything is very obviously not working, change it immediately.

How to keep improving as an interviewer

Preparing for the engineering SuperDay

People | Source: https://posthog.com/handbook/people/hiring-process/engineering-superday

If you've been invited to a PostHog SuperDay, here's what to expect and how to set yourself up for success.

What the SuperDay looks like

The SuperDay is a paid full day of work ($1,000 USD). You'll receive a project task at the start of your day and submit your work at the end. The project is the main focus -- you'll build something from scratch, tailored to the role you're applying for. Expect to spend the majority of your day on it.

Scheduled throughout the day, you'll also have:

You'll also have access to a dedicated Slack channel with the team throughout the day. Use it -- share progress, ask questions, surface blockers.

The project

What we're evaluating

Shipping and execution

The scope of the task is deliberately broad, so you have room to make prioritization choices. You won't finish everything -- that's expected. We want to see how you decide what matters most.

Strong candidates ship a working core feature early, then layer on improvements. They make deliberate choices about what to build and what to skip, and they can explain why. A functional product that solves the core problem well beats a half-finished product that tries to do everything.

Technical depth

We care about the quality of what you build. This means thoughtful architecture decisions, clean code, sensible error handling, and attention to edge cases in the data you're working with.

The best candidates go beyond surface-level implementation. They notice patterns and anomalies in the data. They think critically about whether their solution is actually correct.

Product sense

PostHog engineers are product engineers. We want to see you think about the person using what you're building. Is the interface intuitive? Does the output actually help someone make a decision? Would you be proud to demo this to a customer?

Think about the utility of what you're building.

Problem-solving and creativity

The strongest SuperDay submissions show candidates who thought deeply about the problem. They adapted when something wasn't working and found ways to make the tool more useful beyond the basic requirements. For example, if the core task asks you to visualize data, a strong candidate might notice something interesting in the data itself -- an unexpected pattern, a segment that behaves differently -- and surface that insight in the product. That kind of curiosity matters more than adding extra UI polish.

We notice when someone asks "what would actually help a user here?" and lets that guide what they build next.

How to prepare for the project

During the day

The debugging session

What to expect

You'll join a live coding environment with an interviewer and work through a series of problems in an unfamiliar codebase. The problems range from fixing bugs to improving performance to implementing a small feature. You won't know the codebase in advance -- that's the point.

You're allowed to use Google for reference, but we ask that you don't use AI tools – we want to see how _you_ think about debugging. Treat the interviewer like a colleague -- you can ask them questions, think out loud, and discuss approaches.

How to prepare

During the session

What not to worry about

A note on AI tools

You can use AI tools during the project portion of the day -- we know this is how many engineers work. But you need to understand what you've built. During the check-in, we'll ask you to walk through your architecture, explain your decisions, and reason about your code. If you can't defend and explain your solution, it won't matter how polished it looks.

For the debugging session, we ask that you don't use AI tools beyond basic autocomplete. We want to see how you reason about code.

What comes next

After the SuperDay, everyone involved will leave their feedback. We aim to get back to you with a decision within 48 hours. You can read more about the full interview process here.

If you've made it this far, good luck -- we're rooting for you.

Preparing for the technical screen

People | Source: https://posthog.com/handbook/people/hiring-process/engineering-tech-screen

If you've been invited to a technical screen at PostHog, here's what to expect and how to show up prepared.

What the technical screen looks like

The technical screen is a 60 minute architecture and design discussion with one of our engineers. You'll work through an open-ended problem together, and there are many reasonable approaches. We're primarily interested in how you think.

What we're evaluating

The session is intended to discover where your knowledge is wide and where it is deep. The best candidates tell us when they're speaking from direct experience, when they're talking about work they were close to but not part of, and when they haven't done something themselves but know how that type of problem is solved.

System design instincts

We want to see well-developed intuition for how systems work in practice – choosing the right tool for the job, understanding where complexity is warranted, and reasoning about what happens as requirements change. This is about technical depth and breadth, not just scale.

For example (not from what we'll discuss): how would you design a notification system that needs to reach millions of users without overwhelming downstream services? If you're building a deployment pipeline, where do you put the guardrails so a bad deploy doesn't take down production?

Strong candidates reach for these concepts naturally as part of their design.

The "why" behind your decisions

We want to hear why you'd choose a given technology. Saying "I'd use Postgres" is fine. Saying "I'd use Postgres here because the access patterns are relational and consistency matters more than write throughput for this part of the system" is much better.

Every design decision involves tradeoffs. We want to hear you articulate them – even when there isn't a clear winner. Knowing when not to use a technology is just as valuable as knowing when to reach for it. Showing that you understand the costs of your choices matters a lot.

Problem-solving approach

The strongest candidates slow down before they speed up. They ask clarifying questions. They scope the problem. They decompose it into pieces they can reason about individually.

We're looking at your process: do you clarify requirements before committing to an approach? Can you break a big problem into smaller ones? When you hit a fork in the road, how do you decide which way to go?

If you find yourself wanting to immediately start listing technologies, pause. Take a breath. Ask a question instead.

Product sense

PostHog engineers ship product, work directly with customers, make product decisions, and own outcomes end to end. In the technical screen, this shows up as thinking about the user of whatever you're designing.

Who is using this? What do they actually need? If you're designing an alerting system, do you think about what happens when someone gets paged at 3am for a non-critical issue? If a design decision trades off developer convenience for a better user experience, which do you lean toward and why?

You should be someone who thinks about your users, not just your systems.

Autonomy and independent thinking

PostHog is a company of small teams with high autonomy. We need people who can identify problems, figure out solutions, and drive them forward on their own.

In the interview, this shows up as taking ownership of the problem. Drive the conversation. Propose ideas. Change direction when something doesn't work. Treat the interviewer as a collaborator.

How to prepare

The best preparation is reflection. More concretely, here's what we recommend:

What not to worry about

A note on the format

We want this to feel like a working session. The interviewer is there to collaborate with you, ask follow-up questions, and sometimes push back on your ideas. If something isn't clear, ask. If you want to change direction, say so. The best interviews feel like a conversation between two engineers solving a problem together.

If you pass the technical screen, you'll meet one of our co-founders or execs for a short culture and motivation chat, followed by a PostHog SuperDay. You can read more about the full interview process.

Good luck – we're rooting for you.

Leadership Hiring

People | Source: https://posthog.com/handbook/people/hiring-process/exec-hiring

Leadership hiring at PostHog

We deliberately keep our structure flat and we don’t believe in having a lot of fancy titles early on. However, as we grow, we will hire people into more senior-type positions.

With our senior leadership hiring, more so than normal, we are aiming for speed, and as always, quality. If a candidate is amazing but doesn't fit a specific role need we have _right now_, we still aim to treat the hiring process with the same urgency as if posthog.com had gone down.

Hiring process

Preparation

Before we kick-off the hiring process for a role, we make sure to have everything we need for the role prepared:

Interview process

In order to ensure speed, we aim to finish the process within 5 working days (assuming the candidate has availability). This is a rough guide that can be adapted.

Day 1: Candidate meets Coua - _30-45 minutes_

Day 2: Candidate meets James and/or Tim - _45-60 minutes_

Day 3: Technical Interview with James/Tim + respective team - _60 minutes_

Day 3: Meet rest of the team - Charles - _30-45 minutes_

Day 4: SuperDay (_optional_) or meet the team (standup or informal lunch)

Day 4: Wrap up call with James and/or Tim - _30 minutes_

Day 5: Offer out

Depending on the role, we might also schedule a call with one of our investors.

We take exceptional people when they come along - and we really mean that! Don’t see a specific role listed? That doesn't mean we won't have a spot for you.

In cases where a candidate reaches out without us having a role posted, we follow the same process as above, and work through all open tasks we would usually prepare for on day 1 and 2.

Interview technique - principles to follow

People | Source: https://posthog.com/handbook/people/hiring-process/how-to-interview

Reminder: hiring the right team is the most leveraged activity we can do. Whatever you do, focus on getting the strongest signal from a candidate in an interview. Do _not_ focus on scalability / efficiency.

Focus on themes

Many well-intentioned interviewers will create a long list of questions that they'll follow rigorously. This is likely to lead to shallow answers.

You're trying to understand how a human being operates, so go deep. It'll be more interesting for both of you, and will give a stronger signal.

Prepare the themes in advance you'll want to ask about. For a cultural interview, it might be something like:

For each of the above themes, consider some good questions in advance. Use these as starting off points.

Ask permission to get what you need

Candidates' expectations of interviews vary wildly.

Less experienced candidates, or those in less competitive markets, often expect intense questioning.

The reverse is true for more experienced candidates, who will want to understand whether the company is performing well and aligned, among many other things that help them pick the right place to work.

At the start of the interview, "name it". Say to the candidate something like "hey, I need to go deep on how you work to do the best assessment of a good fit here, is it ok if I focus this interview primarily on that for the first 20 minutes? Then I'll leave 10 minutes at the end for questions. If we overrun, I can book more time with you." Other times, you'll need to explain the opportunity more - for example, if the candidate came through cold outreach. Have a clear idea before you start, and explain it to the candidate up front.

Work as a team

Focus your questions on the areas you're stronger at. If you're great at scrappiness, you're probably best suited to spotting it in others.

It's more important to validate a few things well, and to get others to dig deeper in other areas, than it is for you to do a shallow interview across everything. If you miss something, just create a clear ask of your next colleague to cover the area you missed, or to dig deeper on an area you felt uncomfortable with.

Don't bias yourself

When you do your final write up on a candidate, do this ahead of reading the feedback from others.

Humans are evolved to stick to their tribes - if you know that your colleagues believe X, you're much more likely to believe X. Reading others' feedback means you are less likely to say no to someone because of minor concerns or to push for a candidate with hidden talent. Both things we need to do.

Bonus points: writing your own independent decision in front of your peers forces you to clarify your feedback properly.

Figure out why you're not excited

You will be asked to give a score out of 4 for each interview (where 4 is the highest). If you don't give a 4/4, please articulate as clearly as you can why, even if it's a minor concern. This helps (i) subsequent interviewers to dig further into a concern to validate/invalidate it and (ii) it may cause other people to spot/mention the same issue - which can stop us moving forward with someone that won't be a good fit.

Some of the hardest decisions are when lots of people are _fairly_ lukewarm on a candidate. This is particularly likely when a candidate has relevant experience but is a poor cultural fit.

Beware of how you're feeling through the interview, and adapt as you go.

If a technical interview makes you feel worried someone isn't fast, energetic, or intelligent enough, or whatever else - do some digging on those themes.

Write out mild concerns

Imagine any perceived issue being magnified 10x when the candidate starts. Mention in your feedback to others if you had a mild concern about something. Sometimes you'll find that everyone shares this concern, which means we shouldn't hire.

Get specific

Going into detail helps you figure out the difference between someone that _sounds_ good and someone that _is_ good at their job.

How did they solve the impressive sounding technical problem? Why did they solve it like that? Did they drive the project or were they a passenger? Who actually wrote the code?

In one interview, assessing organization skills, I even found out how a candidate used to organize her fridge: "What's something you've done that is so organized that it was weird?"

Keep it on track

Some candidates, due to nerves, will go down rabbit holes. The ability to sum up information concisely, under pressure, usually isn't something that appears in our job descriptions.

Therefore, if a candidate goes way off track, it's in their interest for you to politely interrupt "hey I think I've got what I need here on this question, I'm going to move on so we cover everything - is that ok?"

Focus on slope

... and be very wary of getting seduced by companies rather than people. Candidates who've worked at places with strong product market fit will have had an easier time achieving results. Some of our best people have come from a string of very average startups.

As the interviewer, you should feel a little nervous

A short interview has a _huge_ impact on our company - either hiring the right or wrong person. There are lots of hard-to-reverse consequences of not getting it right.

Bring energy into the interview. Be engaged. You are part of our brand.

Hiring process

People | Source: https://posthog.com/handbook/people/hiring-process

Our approach to hiring

Our goal is to build a diverse, world-class team that allows us to act and iterate fast, with a high level of autonomy and innovation.

Our recruitment strategy is to run:

This has resulted in the highest number of qualified and motivated candidates reaching final stages with us compared to other methods, such as more generic sourcing. As a result, we invest most of our energy in:

Countries where we employ people

<CountriesWeHireIn excludedCountries={[ 'Afghanistan', 'Armenia', 'Australia', 'Azerbaijan', 'Bahrain', 'Bangladesh', 'Belarus', 'Belgium', 'Bhutan', 'Bolivia', 'Brazil', 'Brunei', 'Cambodia', 'China', 'Comoros', 'Cuba', 'Denmark', 'Djibouti', 'Eritrea', 'Ethiopia', 'Fiji', 'France', 'French Polynesia', 'Georgia', 'Hong Kong', 'Iceland', 'India', 'Indonesia', 'Iran', 'Iraq', 'Italy', 'Japan', 'Jordan', 'Kazakhstan', 'Kenya', 'Kiribati', 'Kuwait', 'Kyrgyzstan', 'Laos', 'Luxembourg', 'Madagascar', 'Malaysia', 'Maldives', 'Mauritius', 'Mongolia', 'Myanmar', 'Nauru', 'Nepal', 'New Caledonia', 'New Zealand', 'North Korea', 'Oman', 'Pakistan', 'Papua New Guinea', 'Philippines', 'Qatar', 'Russia', 'Samoa', 'Saudi Arabia', 'Seychelles', 'Singapore', 'Solomon Islands', 'Somalia', 'South Korea', 'Sri Lanka', 'Sweden', 'Switzerland', 'Syria', 'Taiwan', 'Tajikistan', 'Tanzania', 'Thailand', 'Timor-Leste', 'Tonga', 'Turkey', 'Turkmenistan', 'Tuvalu', 'Uganda', 'United Arab Emirates', 'Uruguay', 'Uzbekistan', 'Vanuatu', 'Vietnam', 'Yemen', ]} />

We are all-remote, but we have a few limitations on the countries we are able to employ people in:

Hiring Process

External recruiters

All of our recruiting is done in-house, and we do not work with external agencies for any of our roles. We frequently receive unsolicited messages from agencies – sometimes 20 in a week – who want to work with us. The best response is to simply ignore the message. If they attach any candidate profiles or résumés to their email, please _do not_ open the attachment. If you are ever unsure what to do, feel free to forward any unsolicited messages to careers@posthog.com.

Deciding to hire

‘You're the driver’ is one of our values here at PostHog. We think carefully about each new role and the complexity it introduces to the organization. We also have an extremely high bar for the people we do hire!

We use Pry to plan our hiring. We use the hiring forecast as a guide, but iterate on this pretty much every month, so we can stay super responsive to changes in PostHog's needs. Typically we know:

For each new role, please open a new issue on the Ops & People project board and add all the requested information from the new hire form. Everyone will have the opportunity to give their feedback on the proposed role before we publish it.

The role of the Hiring Manager

The hiring manager is a role assigned to the person who will work most closely with the People & Ops team to make a hire. Usually this is the person who will manage the new hire or is a Small Team lead.

If you are a hiring manager for a role, you will usually:

How to write a great job description

The People & Ops team will then write up the full job description in Ashby.

We frequently iterate on our specs, but we have a template for a product engineer role that you can use as a starting point. Generally, the "About PostHog" and "Things we care about" sections should be used in all ads, and you can adapt the other sections to your specific requirements.

We find the following approaches work well:

Once the hiring manager has signed off on the spec, we will publish it on Ashby – instructions on how to do this are here.

Job boards

Ashby will automatically add the role on our careers page. It will also 'helpfully' publish it on a bunch of other free but irrelevant job boards - you should manually remove all of those except for Ashby and LinkedIn. Wellfound will need to be posted manually.

As a Y Combinator company, we can post job ads on the HackerNews front page for free at https://news.ycombinator.com/submitjob. This requires a founder's HackerNews account.

Ashby also has a partnership with YC's job board, so all roles will push out automatically to YC's Work at a Startup. For certain roles, we also publish on other job boards:

Design

Engineering

Product

Referrals

Every time we open a new role, we will share the details and ideal profile with the team during All Hands.

What qualifies as a referral?

A referral must meet these criteria to be eligible for a bonus:

Personal referral

If you know someone who would be a great addition to the team, please submit them as a personal referral. If they're successfully hired, you'll receive a $2,500 referral bonus! The bonus can be either paid to you directly, or towards a charity of your choice where we will match the amount! You can also split the amount between you and the charity.

What makes a strong personal referral:

Especially when referring cross-team, feel free to reach out to #team-talent and gather context before doing the work of referring (it will save us all some work, and the candidate a rejection email!). Referring someone means we'll review them carefully. It doesn't guarantee they'll get an interview. We hold referred candidates to the same high bar as everyone else, and we'll let you know if they don't progress and why.

Examples of insufficient referrals:

Please make sure the candidate has given their consent before putting them forward.

We occasionally open up short-term contracts, and you'll receive a $1,000 referral bonus if you recommend someone here too! The contract just needs to be on a full-time basis and at least 3 months long.

Unfortunately people who actively work on recruitment in the People & Ops team at PostHog are not eligible for referral bonuses, to mitigate the risk that they influence the process unfairly. If you would like to refer someone and are not sure if this applies to you, speak to Tim.

Help with your network

We recognize everyone is busy with limited bandwidth. If you'd like help identifying potential referrals from your network:

What's the process?

If there is an ongoing conversation, please cc careers@ into the email thread with the referred candidate, and we will take it over from there. Otherwise, please upload the profile to the Ashby referral page.

Important: If they have applied themselves already, you cannot claim them as a referral - this includes candidates who applied weeks or months ago.

Social referral

You will sometimes get people emailing or messaging you on LinkedIn asking to chat about a role or get referred in. If you have a chat with them and think they are worth referring, but you don't know them well enough to provide the talent team with valuable context, you can submit them as a social referral. If you don't know them, you can point them back to our careers page, or just ignore them. We get dozens of these kinds of messages every day, so don't feel bad about not engaging! If they are asking for advice, you can point them to this article.

The referral bonus for social referrals is $500, and again we will match any amount you choose to donate to charity.

If you are consistently posting about jobs to your networks, please note that Ashby does not currently support referral links in a way that lets us reliably track those applications as social referrals. If someone reaches out after seeing your post and you want them to count as a social referral, ask them not to apply directly yet. Instead, submit them through the Ashby referral page.

Family referral

We welcome referrals of family members as long as they will not work on the same team or within the same reporting chain as the referring team member. To maintain a fair and balanced team environment, we do not hire spouses, as this can create interpersonal dynamics that are difficult to manage in a professional setting. This approach helps ensure that all hiring decisions remain objective and that team interactions stay healthy and unbiased.

Referral payouts

You'll get paid the bonus 3 months from the new team member's start date, and it will be processed as part of payroll. If this date falls close to the payroll cutoff for that month, it may be included in the following month's payroll instead. Bear in mind that you might be liable for income tax on the bonus.

Non-team referrals

We also welcome external referrals, e.g. from:

As a thank you, we will give you $50 credit for our merch shop.

Managing candidates

All of our candidates are managed in Ashby – all team members have access to the platform and Ashby will automate your specific level of access based on the role you play during the hiring process (i.e. hiring manager, team member, etc.). If you need additional access, please reach out to Coua or Charles.

We record all candidate-related comms in Ashby, so we can ensure we provide all candidates with the best experience we possibly can - even if they are unsuccessful, they should come away feeling like they had a great interaction with PostHog.

Ashby is a pretty intuitive platform to use, but here are a few helpful tips to get you going:

If you receive an application via email or some other non-Ashby channel like Slack from a candidate you think we should definitely interview, tell them to apply directly via the website and forward it to careers@posthog.com. You will get people reaching out to you over LinkedIn regularly - only forward the high priority candidates to the talent team.

Managing sourced candidates

For roles we're actively sourcing for, please make sure that an extra step is added to the interview process as "Sourced Screen" after "Replied". All sourced candidates need to be added to Ashby from the "New Lead" stage and should be moved through each stage until the end of the process. If a sourced screen goes well, the candidate can be moved to "Technical Screen" directly.

Booking interviews through Ashby

Schedule interviews through Ashby itself. Do not use Google Calendar, otherwise the event won't be populated with useful candidate info, and we won't have a record of the meeting anywhere.

When we book a meeting, we have the option of selecting a Google Meet or Zoom call; Meet should be the default.

If you are involved in interviewing it is important to keep your calendar up to date. Candidates can book directly into your calendar so having your calendar blocked when you are not available to interview is important. This includes things like personal appointments, travelling, attending off-sites etc.

If you have an interview booked that you cannot make, do not just respond "no" to the calendar invite. Please let the Ops team know ASAP - or even better, find a replacement for your interview and let Ops know - and we can update the interview. We aim to provide a great candidate experience, and moving interviews is one way to reduce the quality of that experience.

Hiring stage overview

Application

The Talent team reviews applications and resumes/portfolios carefully and leaves their feedback as a comment on the candidate's record in Ashby if relevant.

Blocked application for multiple open roles

Our Talent team reviews candidates across all relevant open roles company-wide. If our system shows that you've applied to multiple roles at the same time, your original application will be retained while the others may be temporarily blocked within your candidate profile. This helps us review candidates fairly and thoroughly. No action is needed on your end - your information is already in our system, and the Talent team will ensure you're properly considered for other similar opportunities.

If a candidate hasn't customized the application or resume to the role, it is a flag they aren't that excited about working at PostHog. Cover letters are definitely _not_ mandatory, but at an interview stage, it's important to note how passionate they seem about the company. Did they try out the software already? Did they read the handbook? Are they in our community forum?

Candidates who are unsuccessful at this stage will receive an automated rejection email. Due to the volume of applications we receive, we usually don't provide personalized feedback.

Interviews

As a rule, all interviews at PostHog are conducted in English. While this might seem obvious to some, we are lucky to have people from multiple countries who speak multiple languages. We are hiring people to be successful at PostHog, and at PostHog we conduct our business in English, so it is important that the hiring process is also conducted in English.

If you are paired with an interviewee who speaks your native language, just politely acknowledge this and let them know all interviews are conducted in English. We also require these calls to be conducted as a video call, so a working webcam is necessary.

How interviews are scored

Scoring Scale (1–4)

A good rule of thumb when deciding whether or not to progress at any stage: if the candidate is between a 2 and a 3, then it's a 2. It's almost never worth putting through someone who is a 'maybe'! We provide lots of information about PostHog to enable candidates to put their best application forward.

When you have conducted an interview, you should leave feedback no later than the end of the day after the interview. Moving candidates through the process quickly is critical to us being able to hit our hiring goals; waiting more than one day for feedback can kill the momentum and leave the candidate with a bad experience. If for some reason you cannot give feedback before then, alert the talent team ASAP.

1. Culture interview with Talent

We start with an interview which is designed to get the overall picture on what a candidate is looking for, and to explain who we are. A template scorecard has been created for this stage in Ashby.

This is to allow both PostHog and the candidate to assess whether the candidate is a great cultural addition to the team (not culture fit), and to dig into any areas of potential misalignment based on the application. We are looking for proactivity, directness, good communication, an awareness of the impact of the candidate's work, and evidence of iteration / a growth mindset.

This round is loosely structured into 4 different sections:

  1. (If we sourced them) PostHog – quick intro about the company and role
  2. Candidate background and mindset
  3. Talk about the hiring process and check if the candidate has seen our compensation calculator, so we know we're roughly aligned.
  4. Answer any open questions

This stage is usually a 20-minute video chat.

Candidates who are unsuccessful at this stage should receive a short personalized email with feedback.

2. Technical interview with the hiring manager

In this round, the candidate will meet a future team member. This round is usually 45-60 minutes and will focus on a mix of experience and technical skills. Please check the specific hiring process for each team for more details.

As a rule of thumb, everyone interviewing must feel a genuine sense of excitement about working with the candidate. Again - if it is not a _definite yes_, then it's a _no_. Ask yourself - does this candidate raise the bar?

For engineering roles only: during high-volume seasons, this round might be recorded for training purposes to help us onboard and train new interviewers faster. The candidate will, of course, have the chance to opt out by either letting their recruiter know in advance or letting the interviewer know at the start of the interview.

3. Small Team interview with an Exec Team member

This is a call with either James, Tim, Raquel, Paul, Ben, or Charles depending on which Small Team they are being hired into. They will probe further on the candidate's motivation, as well as checking for alignment with PostHog's values.

Candidates who are unsuccessful at this stage should receive a short email with feedback.

4. PostHog SuperDay

The final stage of our interview process is what we call a PostHog SuperDay. This is a paid full day of work with us, which we can flexibly arrange around the candidate's schedule. We are not able to bypass this stage so if the candidate is not interested in conducting this final round, unfortunately we will have to part ways and the candidate will no longer be considered for the role.

If it is difficult for a candidate to commit to a whole day in one go - they may not be able to get the time off, or have childcare commitments that make this difficult - we can be _very_ flexible. For example, we can split the SuperDay across two or more sessions, and can align timezones to suit the candidate, given we have a team that's globally distributed. A candidate will never lose out because they are not available to do a SuperDay right away.

The candidate will be working on a task that is similar to the day-to-day work someone in this role does at PostHog. They will also have the chance to meet a few of their potential direct team members, and if they haven’t already, our founders. This gives the candidate a chance to show off their skills, and for us to see the quality, speed and communication of the candidate.

As we grow, we find the need to hire engineers who are comfortable working with existing codebases to be increasingly fundamental. During SuperDay, Product Engineering candidates will also have a 45-minute debugging session. There is nothing to prepare in advance for this; you'll work with your interviewer in a pairing session to get through a bunch of bugs! It is a demanding day of work.

We pay all candidates a flat rate of $1,000 USD for their efforts on the SuperDay. On rare occasions, if we have to cancel a scheduled SuperDay because we have filled the role with another candidate who was further along in the process, we will pay a $500 USD term fee to the candidate for their efforts up until this point.

If the candidate is unable to accept payment for the SuperDay, we will donate by default to Django Girls Foundation. Payments and donations for SuperDays are processed every Wednesday, so timing will depend on which day your SuperDay falls on. Either way, please feel free to flag it to your talent partner if you don't see the payment deposit within one week of your SuperDay.

This day will be _the same_ task each time for a given role, to be shared with the candidate at the start of the day. The task is generally designed to be _too much_ work for one person to complete in a day, in order to get a sense of the person's ability to prioritize and get things done.

Overall, the candidate should aim to spend at least 80% of their time and energy on the task and less than 20% on meeting people, as we will base our decision on the output of the day.

For everyone on the PostHog team meeting a candidate, ask yourself – will this person raise the bar at PostHog? The answer should be yes if we want to hire them.

In advance of the SuperDay, we will need to do some additional prep to ensure that the candidate has a great experience:

For some roles, we may occasionally set a task that goes over multiple days. For example, we have set Content Marketer tasks that last 3 days in order to create a piece of content.

Decide if we will hire

We aim to make a decision within 48 hours of SuperDay - being decisive is important at this stage, as great candidates will probably be fielding multiple job offers.

After a SuperDay, everyone involved in the day leaves their feedback in Ashby. This is hugely important to us in making a final decision, so team members should make an effort to complete their feedback as soon as possible. If there are wildly different opinions, you should open an issue in company-internal to discuss.

If a decision is made to hire, the People & Ops team will open an onboarding issue once the candidate has accepted and James/Tim will share in our Monday All Hands Meeting a brief overview of the following:

If we don't make an offer, it's important to clearly outline to the candidate why that decision was made. Highlight what went well, but also mention specific points of improvement. Offer to schedule a call if they would like to discuss further. Make sure to leave the door open for the future so they can apply again in 12-18 months' time, as circumstances and people change.

Making the offer

Hooray!

The People & Ops team will prepare the offer details. James and Tim give final sign-off. We then schedule an offer call with the candidate - this might be Charles, Fraser, or a member of the People & Ops team.

During the offer call, we'll share feedback from the interview process, and sell the opportunity here at PostHog. We will also briefly cover the offer details (salary, equity, benefits), and answer any open questions. Afterwards the person who made the offer will follow up with an offer email, outlining all the details. If a candidate is proving tricky to close, the team may escalate to James or Tim to help.

Once the candidate accepts, the People & Ops team will kick off the onboarding process and take the role offline, after rejecting all remaining candidates.

How Ashby works for interviewers

We pay Ashby per seat, so as an interviewer your access is limited to those candidates that you will interview, to save us some money. You will be able to see their application (including cover letter), their resume, and all previous feedback left about the candidate.

You will not be able to see every other candidate in the pipeline because of the per-seat pricing. However, we keep a couple of seats aside so you can log in to see other candidates in the pipeline and do a bit of profile/assessment calibration. If you would like to do this, please contact the talent team on Slack and they can provision this for you. You won't keep this access forever, but you can have it for a few days or a week to get an overview of how other interviewers are doing things.

Visa sponsorship

Building a diverse team is at the heart of our culture at PostHog, and we are proud to be hiring internationally. In some cases, this includes the need for visa sponsorship. We are currently only able to provide visas in the UK.

For employees where PostHog covers the costs related to obtaining a visa, the employee agrees to reimburse PostHog if they voluntarily terminate their employment prior to completing 12 months of service. The costs are calculated on a monthly basis, so if an employee decides to leave after 10 months, they will have to repay 2/12 of the visa costs.
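As an illustrative sketch of that proration (the function name and figures here are hypothetical; actual amounts and policy details are set by the People & Ops team), the repayment scales linearly with the months remaining in the 12-month commitment:

```python
def visa_repayment(total_visa_cost: float, months_served: int,
                   commitment_months: int = 12) -> float:
    """Prorated visa-cost repayment if someone voluntarily leaves before
    completing the 12-month commitment. Illustrative sketch only."""
    if months_served >= commitment_months:
        return 0.0  # commitment completed: nothing to repay
    remaining = commitment_months - months_served
    return total_visa_cost * remaining / commitment_months

# Handbook example: leaving after 10 months means repaying 2/12 of the cost.
print(visa_repayment(6000, 10))  # 1000.0
```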

If a candidate needs visa sponsorship, including sponsoring or transfer of H1B visa in the US, at this time, we cannot hire them.

E-Verify

We participate in E-Verify for all US new hires, which allows us to verify employment eligibility remotely and continue hiring in multiple states. E-Verify is not used as a tool to pre-screen candidates.

Location

For some teams, it's important to have a wide range of timezones covered by the small team. This allows us to have closer to 24 hour coverage in case of incidents, and is particularly relevant for infrastructure or pipeline teams.

For teams working on a pre-product market fit product with no users, it is preferable to hire people within a few timezones of each other, so it's easier to get together in person and to do synchronous meetings if people wish to work that way.

Currently, we are hiring a lot - aiming to go from ~96 people to ~185 by the end of 2025. Our pace of hiring is the biggest blocker to shipping all the tools in one and driving our growth, so we need to go fast while keeping the bar high. Therefore we should _not_ restrict hires to certain timezones, even if in the short run a small team would prefer to have everyone closer together. Over the next six months we'll add enough new people that, if needed, we can later re-org teams to group people back together by timezone, since we'll have a higher density of talent across all the timezones we cover.

Internships

We regularly receive enthusiastic requests from students about internships, which we're always flattered by. Currently, we don't offer internships, placements or work experience - we’re a bit too scrappy to do them well right now. Once you ~~escape college/university~~ graduate, you're welcome to apply to full time roles via our careers page. Your details will then go straight through to our hiring team (who _are_ real humans, not AI) and you'll hear back from us shortly after.

Post-mortems

We won't get every hiring decision right. So when we do let somebody go during their probation period, or shortly after, we need to try to figure out what went wrong. This is why we hold post-mortems. They are not massive inquests into who is to blame; they are about figuring out one or two high-leverage changes we can introduce to the hiring process to improve it going forward.

The Process

Pre-work

The process will be owned by the talent partner who was responsible for the hire. It will also include the Blitzscale team member and the team lead involved; where the team lead was not involved in the hiring process, we will include the other main person who was.

The talent partner should create a private Slack channel (mainly out of respect to the colleague who has left; the main results will be shared publicly) with everybody involved and share all the feedback from the hiring process. The Blitzscale member will share all the relevant feedback the team member received that led to them failing their probation. Once this is shared, the following work should be prepped before the post-mortem call. The talent partner can share a Google Doc so everybody has access.

The pre-work here is the most important part; the call shouldn't happen without it being done.

The Post-mortem call

The talent partner should remind everybody that we are here to fix the process, not re-litigate the decision or apportion any blame.

The first 10-12 minutes are about discussing the pre-work and trying to answer two questions:

Once agreed, the second half should be focused on agreeing one or two fixes to the process that can be shipped. Try to avoid creating long lists, as these are harder to implement.

Post post-mortem call

The talent partner should write up the post-mortem and share it in the #team-talent channel cross-posting to #tell-posthog-anything. They should then update any handbook pages about the process and be sure to share any findings with the relevant interviewing channels like #technical-interviewers.

Marketing Hiring

People | Source: https://posthog.com/handbook/people/hiring-process/marketing-hiring

Marketing hiring at PostHog

Our marketing team is small and we don't hire into it very often. Please check our careers page for our open roles.

What we are looking for in marketing hires

Beyond the specific skills listed in the job description, we always generally look for:

Marketing hiring process

1. Culture interview

This is our standard culture interview with the People & Ops team. We will at this stage also ask for work samples or portfolios, to get a better feeling for the work a candidate has done in the past.

2. Technical interview and portfolio review

The technical interview round usually lasts 45-60 minutes and involves two of our team members. They will ask questions about your background and previous experience, as well as some scenario-based questions. At the end, they will leave time to answer any open questions.

If relevant, we'll go through a candidate's portfolio.

3. Marketing SuperDay

The final stage of our interview process is the PostHog SuperDay. This is a paid full day of work, which we can flexibly arrange around your schedule.

The task will usually be actual marketing work, involving creating a piece of content or talking to customers, though we don't actually publish the work. We usually give a fairly open-ended task, where it is up to you to decide how you want to prioritize and tackle it.

A Marketing SuperDay usually looks like this (_there is a degree of flexibility due to time zone differences_):

Overall, you should spend at least 80% of your time and energy on the task and less than 20% on meeting people, as we will base our decision on your output of the day. However, we encourage everyone to use the Slack channel as much as needed for any questions or problems.

In line with our values and culture, you might get short replies like "step on toes" or "bias for action".

Operations Hiring

People | Source: https://posthog.com/handbook/people/hiring-process/operations-hiring

People & Ops hiring at PostHog

People & Ops at PostHog covers legal, finance, people and culture.

This is our smallest team, and we don’t hire very often. That means that each new hire has a disproportionately high impact compared with other, larger teams. Please check our careers page for our open roles.

What we are looking for in Operations hires

Outside of the skills listed in the job description, we are generally looking for:

Operations hiring process

Culture interview

This is our usual first round interview with a member of the People & Ops team.

Technical Interview

The technical interview usually lasts 45-60 minutes, and you will probably meet a member of our team as well as Charles. For this round, you can expect questions about your background, together with scenario-based questions.

Operations SuperDay

The final stage of our interview process is what we call a PostHog SuperDay. This is a paid full day of work, which we can flexibly arrange around your schedule.

We will share the task with you at the start of the day. The task is representative of the work someone in this role at PostHog is doing, and it is always the same for each candidate, so we can make clear comparisons. It will typically involve doing actual PostHog work, e.g. sourcing candidates or planning an offsite.

An Operations SuperDay usually looks like this (_there is a degree of flexibility due to time zone differences_):

Overall, you should spend at least 80% of your time and energy on the task and less than 20% on meeting people, as we will base our decision on your output of the day. However, we encourage everyone to use the Slack channel as much as needed for any questions or problems.

In line with our values and culture, you might get short replies like "step on toes" or "bias for action".

Sales and Customer Success Hiring

People | Source: https://posthog.com/handbook/people/hiring-process/sales-cs-hiring

Sales and Customer Success hiring at PostHog

Our Sales and Customer Success teams look after customers paying $20k a year or more for PostHog, as well as new customers who _may_ end up in that bucket. The job of the teams is to land and expand usage of PostHog in these customers. We have three roles on the Sales and CS team:

We've proven that the way we do sales works at a small scale; we are now growing the team in line with increased top-of-funnel growth for PostHog. Please check our careers page for our open roles.

What we are looking for in Sales and CS hires

Outside of the skills listed in the job description, we are generally looking for:

How we evaluate candidates

We need to be particularly sensitive to culture at this stage. We can handle someone underperforming much better than someone who is a poor culture fit due to the impact on the broader team. We don't want to end up taking cold leads from BDRs so we can run MEDPICC from our car phone while promising 50% discounts if they sign before the next full moon. We want someone who is comfortable carrying a sales conversation while also possessing the technical chops to talk to engineers and get their hands dirty.

We want someone who can own technical problems and even if they don't have the answer, understand enough of the context to provide that to engineers. We want someone who sees themselves as the first line of defense for our engineers, because engineering time is valuable; it's a win when they can solve a problem without additional engineering lean in.

A great litmus test for a candidate is if they are comfortable instrumenting PostHog and can speak to how they actually implement it on a site. That's typically a good indicator that they've got the right technical prowess.

We want someone who is in it to develop customers for the long run, we don't want someone who is here for quick churn and burn to pump up quota attainment. Building a relationship with a product engineer requires actually knowing PostHog, not just knowing about PostHog.

Ultimately, we want someone who we'd want to buy from.

Sales hiring process

Culture interview

This is our usual first-round interview with a member of the Talent team.

Technical interview and demo

The technical interview with the relevant team lead usually lasts ~45 minutes. Part of this session will be a demo role-play, so we can assess how you talk about your current product to a prospective customer. You can assume the customer is a prospective buyer who otherwise knows nothing about your product, and you should approach the demo as if they were a real prospective customer. What we care about here is not the content of the demo, but how you interact with a prospective customer. After a short introduction, we will jump into questions for 25-30 minutes, then move on to the role-play, where you should aim to present and demo your product for 15-20 minutes, leaving around 10 minutes for discussion. After this, we will allow you to ask anything that's on your mind.

Culture and motivation interview

In this 30-minute interview, you'll meet with Simon, who will be trying to answer "Are they a good cultural fit for the Sales team at PostHog?".

Sales SuperDay

The final stage of our interview process is what we call a PostHog SuperDay. This is a paid full day of work, which we can flexibly arrange around your schedule.

We will share the task with you at the start of the day. The task is representative of the work someone in this role at PostHog is doing, and it is always the same for each candidate, so we can make clear comparisons. It will typically involve doing actual PostHog work, e.g. prioritizing customers, doing a demo, etc.

A Sales and CS SuperDay usually looks like this (_there is a degree of flexibility due to time zone differences_):

Overall, you should spend at least 80% of your time and energy on the task and less than 20% on meeting people, as we will base our decision on your output of the day. However, we encourage everyone to use the Slack channel as much as needed for any questions or problems.

In line with our values and culture, you might get short replies like "step on toes" or "bias for action".

CS and Onboarding hiring process

Culture interview

This is our usual first-round interview with a member of the People & Ops team.

Small Team interview

The small team interview with the relevant team lead usually lasts 45 minutes. For this round, we will use scenario-based questions to assess your technical and customer skills, as well as your knowledge of PostHog. As part of this, we will ask you to give a quick pitch of PostHog (not a full demo).

Culture and motivation interview

In this 30-minute interview, you'll meet with Simon, who will be trying to answer "Are they a good cultural fit for the Sales team at PostHog?".

CS and Onboarding SuperDay

The final stage of our interview process is what we call a PostHog SuperDay. This is a paid full day of work, which we can flexibly arrange around your schedule.

We will share the task with you at the start of the day. The task is representative of the work someone in this role at PostHog is doing, and it is always the same for each candidate, so we can make clear comparisons. It will typically involve doing actual PostHog work, e.g. prioritizing customers, doing a demo, etc.

A CS and Onboarding SuperDay usually looks like this (_there is a degree of flexibility due to time zone differences_):

Overall, you should spend at least 80% of your time and energy on the task and less than 20% on meeting people, as we will base our decision on your output of the day. However, we encourage everyone to use the Slack channel as much as needed for any questions or problems.

In line with our values and culture, you might get short replies like "step on toes" or "bias for action".

Hogpatch operations

People | Source: https://posthog.com/handbook/people/hogpatch-operations

Hogpatch is our San Francisco coworking space, shared with a handpicked group of YC founders who are active PostHog users. Teammates in the Bay Area (or visiting) are welcome to drop in regularly to work, meet users, gather feedback, or join events we host in the space. Here’s a quick guide to how it all operates.

Judy Opperwall, our Office Manager, is on site Mondays to Thursdays from 9am–5pm. For any issues while she is not in, please reach out to the #project-hogpatch channel or DM her directly.

---

Building access

PostHog teammates don’t need an invite to use Hogpatch — the space is open for you whenever you’re in town. If you know your SF travel dates in advance, it’s helpful to post in the #sf-bay-area channel so we can make sure the space is ready for you.

---

Parking

---

Housekeeping

---

FYIs

---

Hogpatch - Our SF Home for YC Founders

People | Source: https://posthog.com/handbook/people/hogpatch

Hogpatch is our dedicated coworking space in San Francisco for YC founders in the current batch - a lofty, airy, light-filled warehouse just around the corner from YC. It's invite-only and open 24/7, giving you a reliable place to work whenever you need it. We've set it up because we know how valuable a quiet space is when you're bouncing between office hours, customer calls, and late-night sprints.

Hogpatch is a free space to help make your batch experience a little easier. There's no catch! We're not here to sell you PostHog. We hope that this becomes a dependable home base in the Dogpatch neighborhood for when you need one.

What's Inside

Getting Started

Access

Hogpatch is invite-only, and isn't open to the whole batch. We keep numbers low so it stays calm, focused, and actually useful. When space opens up, we handpick a few YC companies to join. If we want to tap you on the shoulder with an invite, you'll hear from us directly.

Passes

Once you've been invited, Judy (our office manager) will send you and your cofounders a digital wallet pass. The pass gives you 24/7 self-service access to the space, ideal for late-night sprints or weekend hacking sessions. You can come in anytime, day or night. There's no check-in or reservations needed.

Location

Hogpatch is just 100 yards from YC's office, with the nearest Muni stop at 20th Street. You'll find us behind a discreet door off 3rd Street - the exact location is listed on your digital wallet pass. Scan your pass on the front door QR reader to get in, or buzz the intercom for help.

Support

You'll bump into our product engineers from time to time. They're in the space because they enjoy chatting with founders, and they're happy to give product feedback or help set up your dashboards ahead of demo day.

Judy Opperwall, our office manager, keeps things running smoothly 9am–5pm, Mon–Fri. Outside those hours, treat the space like it's your own. If anything urgent comes up, Judy's your go-to. Her details are in your welcome email.

Workspots & hangouts

Are you trying to sell me PostHog?

Not at all. Hogpatch is a perk for select YC founders who already know us through the Bookface deal. We know you're focused on building your company, not listening to pitches - so think of this space as a convenient home base whilst you're hopping around the Dogpatch area, not a sales funnel. We went through YC too, and wanted to create a space that takes off some of the stress whilst you're going through a batch.

Offboarding

People | Source: https://posthog.com/handbook/people/offboarding

Offboarding team members is a sensitive time. The aim of this policy is to create transparency around how this process works. This offboarding policy _does not_ apply to regular contractors who are doing short-term work for us.

Voluntary departure

In this case, the team member chooses to leave PostHog.

We ask for 30 days of notice by default (unless locally a different maximum or minimum limit applies), and for team members to work during that notice period. This is so we have some time to find someone to hire and to enable a handover. Please assume by default we will expect team members to work all of this period.

If you are a current team member and you are thinking about resigning from PostHog, we encourage you to speak with your manager to discuss your reasons for wanting to leave. While we don't want to persuade anyone who is unhappy to stay, you may find that the best solution involves changing things here at PostHog, rather than going somewhere else. If resignation is the only solution after you have discussed your concerns, please send an email communicating your intention to resign to people@posthog.com. We will then start a discussion around what is needed for the handover.

Involuntary departure

In this case, we are letting the team member go.

This is generally for performance reasons or because the company's needs have changed and the role can no longer be justified. If the decision is down to performance issues, we will have already communicated feedback to the individual and given them time to take the feedback on board. However, performance issues sadly can't always be resolved, which means we might ultimately need to end someone's employment.

Tim and James are responsible for making any final decision to let someone go.

We use the following general process for managing people whose performance isn't up to the right standard. We modify this slightly depending on the specific nature of the role and how long they have been at PostHog, so the process isn't identical in every case:

In cases where a team member's role can no longer be justified, we usually make a decision as an exec team and then let the team member know straight away - unfortunately, it is not feasible to give someone advance warning that we are thinking of eliminating their role.

In either case, we will usually ask the team member to stop working immediately. Final pay and severance are calculated as below.

If a team member wants to resign but is deliberately trying to get let go so that they receive 4 months' severance, we may treat this as a material breach of their employment contract, which is gross misconduct. In such cases, team members are not eligible for any severance beyond the statutory minimum where they live.

Communicating departures

In the case of voluntary departure, we will ask the team member if they wish to share what they're up to next with the team. If you have resigned, please speak to the relevant Blitzscale team member to agree on who will communicate that you are leaving. Please don't announce your resignation until the relevant member of the Blitzscale team has given the go-ahead, as they may need to prepare accordingly for the impact of your resignation.

In the case of involuntary departure, we will aim to be as transparent as possible about the reasons behind the departure, while respecting the individual's privacy.

Please be aware that PostHog cannot always provide context around why people are leaving when they do.

References and employment verification

At PostHog, we keep things simple: we don’t give personal or professional references for former team members. When someone asks about a past or current employee, the only information we share is their dates of employment.

Expectations for current employees: If someone contacts you directly about a current or former employee:

This helps us keep things consistent and protects everyone’s privacy.

The offboarding process

For team leads

If a team lead has resigned, the Blitzscale team should figure out who will take on the team lead responsibilities and have that prepared to let the team know just before the resignation is announced or as part of the announcement.

For involuntary leavers, we will schedule a call. During the call, someone on the ops team needs to complete the offboarding checklist.

We will then send over an email covering the following points with the team member:

  1. Final pay
  2. Share options vested
  3. Company property
  4. Business expenses
  5. Personal email to the company (optional)

Final pay

Final pay will be determined based on length of service and the reasons for leaving:

We ask departing team members to sign a post-termination certificate, separation agreement or release in order to receive payments beyond their final day of work. If we do not receive this, then we will only pay in line with statutory and contractual requirements.

Please note that if there are local laws which are applicable, we will pay the greater of the above or the legally required minimum.

Share options vested

If a team member has been allocated share options, we will confirm how many have vested and the process by which they may wish to exercise them. We have a team-friendly post-departure exercise window of 10 years, and most team members who leave will be deemed a 'good leaver' unless they have been terminated due to gross misconduct.

Offboarding checklist

This is maintained as an issue template in GitHub. The People team will create a new offboarding Issue for each leaver.

Onboarding

People | Source: https://posthog.com/handbook/people/onboarding

Welcome to PostHog!

Giving a new joiner a great onboarding experience is super important to us. We want new joiners to feel they’ve made the right decision to join us, and that they are excited and committed to what we’re doing as a company.

Want to introduce a new joiner to the People team for onboarding, but don't know who on the team does what? Just introduce them to people@posthog.com and a member of the team will jump in and take it from there!

Our team is spread across the world, and so are our new joiners. In order to ensure the best possible onboarding experience, we aim for the new joiner to meet up with someone from their team in their first week. Depending on the new joiner's location, they might fly out to one of our team members, or the other way around. So the onboarding experience will look a little bit different, depending on where the new joiner is based and which team they will be joining.

Onboarding checklist

This is maintained as an issue template in GitHub. The People team will create a new onboarding issue for each new joiner.

Onboarding email

We send an introductory email to all new hires to welcome them to the team and ease them into some of the essential actions we need them to take. This needs communicating openly, as new joiners may not be able to access the company-internal repo yet. So, we send them an email.

Once you've joined PostHog, we will not use email for communicating with each other. For example, James or Tim will never ask you to do something critical over email only – they'll always confirm it over Slack, and so will everyone else. Be extremely cautious of direct emails claiming to be from James, Tim, or anyone else at PostHog.

The onboarding email is sent by the People team directly. We want to strike a balance between sending attractive, personalized emails and avoiding creating process or using overpowered tools, such as Customer.io or Mailchimp. So, we landed on a simple email with the necessary links.

This doc is a suggested template with important actions specified, though we recommend personalizing it to the individual. We've linked to these as docs and direct images to make the formatting easier for you, but here is an accompanying image for use in emails.

Image: onboarding image

Onboarding buddy

Every new joiner at PostHog has an onboarding buddy. If possible, a new joiner will meet their onboarding buddy in person during their first week. In case in-person onboarding isn't an option, we will make alternative arrangements. The onboarding buddy is usually a member of the team a new joiner is joining - ideally the team lead - and they can help with any questions that pop up and with socializing during the first couple of weeks at PostHog. Of course, everyone is available to help, but it’s nice to have a dedicated person to help.

Guidance for onboarding buddies

If any travel is needed for the in-person onboarding, please check our Spending Money page and book your travel accordingly. _You don't need to let the team know, just use your Brex/Revolut card._

In-person onboarding

Except under special circumstances, new joiners meet with members of their team in-person to go through the onboarding process. Upon acceptance of an offer, your Team Lead will notify the People & Operations team who will help you coordinate travel if necessary. We encourage team leads to consider the Hedgehouse as a location for in-person onboarding. Regardless of location, everyone should have their own bedroom.

In these cases, the process is:

While there is no fixed budget for onboardings, they should be less expensive than a small team offsite, which is budgeted at $2,000 per person. Some considerations to reduce the cost:

Aim to keep things sensible and cheap. As always, use your best judgement when spending money. Request a budget in Brex in USD for any onboardings you are doing. There will of course be some exceptions to this - please just include the reasoning in your Brex budget request, and make sure to list who the budget request is for.

You should by default avoid combining in-person onboarding with small team offsites as they serve different purposes. The focus of onboarding is generally on making the new team member successful, but offsites feature things like hackathons and 360 feedback which aren't usually helpful for this and detract from useful onboarding time. However, it may occasionally make sense to combine the two - just use your judgement.

It is important that you make the most out of the sync time with the new joiner on your team. You should not spend the whole week sitting next to them doing your usual work. Having something planned each day is sufficient; some ideas include:

Your first week

Your first week can definitely be a bit overwhelming at any new company, so here's what you can (roughly) expect!

If your laptop is delayed: In rare cases your PostHog-issued laptop may not arrive until several days after your start date. If that happens, you can begin non-sensitive onboarding tasks (reading the handbook, intro calls, etc.) on your personal laptop in the meantime. Treat a personal laptop as less trusted:

- Do not access production cloud environments (AWS, GCP, etc.) from it.

- Do not store or handle any secrets on it, including secrets used for local development.

Move anything sensitive to your company laptop as soon as it arrives.

Engineering

We hire engineers on a regular basis, running in-person onboarding practically every time. Over the years, we've learned a lot about doing this efficiently and there's much to gain from sharing the knowledge between teams.

Based on this ongoing learning process, here are our five rules for onboarding an engineer:

  1. Ship something together on day one – even if tiny! It feels great to hit the ground running, with a development environment all ready to go.
  2. Run 1:1 learning sessions with the new teammate every day. Give them all the context they need to succeed. By the end of the onboarding, each team member present should've run at least one such session.

Looking for learning session ideas? Here's a non-exhaustive list:

- The lifecycle of an event, from a client library all the way to query results
- How we turn all our TSX and SCSS files into a fast frontend served from S3
- The architecture of PostHog Cloud
- Trunk-based development - how we make use of feature flags
- Query nodes and how they're used throughout the app
- What the dead letter queue is for
- How PostHog experiment results are calculated
- What engineering planning looks like at PostHog

Any of these chats can take as little as 15 minutes or as long as 1 hour, depending on the level of detail. You'll also find that some topics apply perfectly in some teams, but not so much in others. This is all up to you!

  3. Do at least one brainstorming session on a topic important for the team, writing down actionable conclusions. Use the time together to discuss issues and involve the new joiner in decisions.
  4. Pair whenever possible. You're all sitting next to each other, so pick work that can benefit from in-person collaboration.
  5. Have fun, because life isn't all work! Do some sightseeing, go out for dinner, or find a fun activity – just hang out together any way you like.

Tools we use

We use a number of different tools to organise our work and communicate at PostHog. Below is a summary of the most important ones - this list is not intended to be exhaustive.

Everyone

Engineering

Design

Ops, People & CS

Signatories

At this time, Charles, James, and Tim are the only people able to sign legal paperwork on behalf of the company.

How we work

Now it's time to dive into some of the more practical stuff - these are the most important pages:

  1. Communication - we have a distinctive style. If PostHog is your first all-remote company, this page is especially helpful.
  2. Team structure - we are structured in Small Teams. These pages will help you get the lay of the land, and who does what.
  3. Management - we have a relatively unusual approach to management, and it is possible that you will not be familiar with our approach.

Working in GitHub

We use GitHub for _everything_, including non-engineering task management. This might take some getting used to if you are non-technical. If that is the case, we have a detailed guide on how to set up a local version of posthog.com so that you can make changes to the docs, handbook, and website, plus a blog post about why we use GitHub as our CMS to help you out.

Our most active repositories (aka 'repos') are:

When you have a new Issue or want to submit a Pull Request, you do that in the relevant repo.

We use GitHub Projects to track the status of Issues in an easily viewable way. When you create an Issue, you can assign it to a Project - think of a Project as a way of organising and filtering Issues in a particular view. This makes it easy for Small Teams to track what they are working on without having to jump between different repos. Some Issues may be assigned to multiple Projects if they involve the work of more than one team.

You can also assign an Issue to a specific person, and tag it with a relevant label - use these to help people filter more easily.

Each Small Team has its own Project for tracking their Issues - full list here. Most teams run two week sprints - as part of onboarding, you will be invited to the relevant planning meetings.

Support hero training

Employees are occasionally called upon to act as support heroes, or need to deal with support tickets that are escalated to them. This most often applies to engineers, but can include any employee regardless of their team. For this reason, we need everyone to have a broad idea of our support processes and know how we deal with customers.

All new hires should schedule a 30 minute session with the support engineer closest to their timezone within their first three weeks at PostHog.

In this call the support engineer will be able to answer any questions, as well as demonstrate how we deal with support at PostHog. In particular, the support engineer should cover:

It can be especially helpful for new hires if support engineers demonstrate how to solve a few simple tickets from start to finish, through shadowing.

30/60/90 day check-ins

Managers are responsible for helping their new team members navigate the first 3 months' probationary period. We place strong importance on 1) providing feedback to the new team member, and 2) communicating with execs about unresolved performance issues, so that there is enough time for action. Managers are, again, not responsible for hiring or firing, nor for communicating these possibilities directly to teammates - this is handled by the exec team, and is frankly a very rare situation - the vast majority of people we hire do pass their probationary period!

As part of the onboarding checklist, the Ops team will schedule reminders for a new team member's manager at the 30, 60 & 90 day mark to serve as a reminder that these checkpoints have arrived and to make sure everything is on track for the probationary period to be passed.

Feedback is a really important part of the onboarding process and as a manager it's a good idea to ensure the new team member receives feedback from their peers - either from you collecting it or them receiving it directly from their peers. It won't always be possible or necessary to do a 360 feedback session within the first 3 months, so it's up to you as a manager how best to approach that. As a manager you can also have blind spots on performance, so checking in with their peers can be helpful and can be done during your normal 1-1s.

These check-ins are designed to ensure every new starter is set up for success. Every manager will deal with these slightly differently, but it will hopefully be clear to everybody by around the 60 day mark how things are going and what needs to be worked on, if anything. It is important for a manager to ensure that they do not wait for one of these check-ins to communicate with an exec team member that there could be issues with the team member passing probation. They should let them know immediately, so that a fair and reasonable plan can be put in action ASAP.

If you have any issues or any feedback on how to improve a specific intro, just post in the #team-people-and-ops Slack channel and tag the relevant people.

Finding answers at PostHog

Need help finding something? We have a strong culture of self-sourcing answers - it helps you get unblocked faster and builds your intuition for where things live. Start with #ask-max, our AI that's read every handbook and documentation page. It'll point you to relevant docs instantly and is available 24/7.

Of course, if you're stuck or need context beyond what's documented, just ask in the relevant channel - people are always happy to help. The goal isn't to make you figure everything out alone, it's to give you the fastest path to answers.

Slack Channels

Below is a list of Slack channels you may find helpful:

Social channels

We encourage you to join and create channels focused around different types of hobbies and interests. We explicitly don't allow channels based on categories that we legally (and rightly!) can't discriminate against in the hiring process, such as gender, sex, political affiliation, religion, and age.

Location specific channels

etc.

Overview

People | Source: https://posthog.com/handbook/people/overview

The Ops team's primary goal is to make PostHog an incredible place to work by removing distractions from other small teams. We keep PostHog running smoothly without implementing lots of unnecessary processes.

We are also responsible for growing the team by adding in world class talent to new and existing small teams. We want to do this while retaining our world-class team by making PostHog the most transparent company in the world, and the best place for people to work in general.

Ops provides all the tools, literally and metaphorically, needed for our team to come in and do their best work. Practically, this looks like:

These are some things that Ops is _not_ responsible for, which you might see at other companies:

Ops team values

Take ownership

When something falls on the Ops team, we make it very clear we are the owners of that specific thing. We communicate clearly with other teams when we require their input and we make it as easy as possible for them to help us achieve the desired outcome. We are quick to triage things that don’t have a clear owner and we get them into great shape before we expect others to have to interact with it. This could be anything from a compliance matter to how our merch process works. We say 'here's how this could get done' rather than 'that's not my job'.

Be supremely reliable

If we say we will take care of something, we take care of it - no exceptions. We are often trusted with big and small things, and we take all of them seriously. This also means we will keep you in the loop if something can't happen as we originally intended. We are trusted with a lot of sensitive information, from our team members' personal details to specific company info that we need to protect. When people trust the Ops team with something, they need to know it’s getting done properly.

Act with care & compassion

The Ops team has got your back. We treat everybody with respect - caring deeply about the success of the business means caring deeply about the success of every team member, irrespective of things like seniority. We want to be sure we will be proud of how we handled any situation. This doesn’t just apply to our team members but to anybody interviewing, the customers we deal with, and anybody else we interact with externally such as suppliers.

Philosophy club

People | Source: https://posthog.com/handbook/people/philosophy-club

Philosophy club runs once a month, and Charles Cook organizes it. We spend each 30min session discussing one philosophical question in the Socratic tradition. A short text is shared in advance as light pre-reading from the Stanford Encyclopedia of Philosophy. These can be a bit dry, so feel free to use alternatives - the Crash Course videos are quite good.

If you're interested in joining, ask Charles Cook to add you to the recurring event in #team-people-and-ops.

Structure

Each session is 30min:

The only rule is no observers - if you join a session, you should expect to take part.

Topics

See below for the full roadmap, with a link to each pre-read. This is a 1 year commitment if you want to do the whole thing, but each question stands alone, and you can attend as many sessions as you want!

  1. What counts as knowledge?
  2. Can we trust our own reasoning?
  3. If behavior is shaped by causes, what does accountability mean?
  4. Is success about happiness, achievement, or meaning?
  5. Are habits more important than intentions?
  6. Should decisions prioritize results even when methods feel wrong?
  7. Is fairness equality, merit, or need?
  8. When is challenging authority ethically required?
  9. Is trust a rational calculation or a leap without evidence?
  10. Does technology shape our behaviour or simply reflect it?
  11. Should work be intrinsically meaningful or primarily instrumental?
  12. Do we discover purpose, or create it through choices?

Product Manager ramp up plan

People | Source: https://posthog.com/handbook/people/ramp-up/product-manager

This is a rough guide to ramping up as a product manager in PostHog.

Timeline

Day 1

Outcome: Get started

Week 1

Outcome: Get stuck into execution

Month 1

Outcome: Your teams are hitting their goals faster

Quarter 1

Outcome: Hit your goals and set the strategy for the next quarter

Specialism

As well as your day job with specific teams, it's important that PMs have company-level impact across the following specialisms too.

Analytics

Customer Research

Growth

Stock options

People | Source: https://posthog.com/handbook/people/share-options

Overview

It’s important to us that all PostHog employees feel invested in the company’s success. Everyone plays a critical role in the business and deserves a share of that success as we grow. When employees perform well, they contribute to the business doing well, and therefore should share in the increased financial value of the business.

As part of your compensation, you will receive an option to purchase stock in the company, subject to a standard 4-year vesting schedule with monthly vesting and a 1-year cliff. Broadly, the number of shares subject to your option depends on your Level. We may adjust this policy over time depending on our hiring pace – for example, if there is an extended gap in hiring, we may revise the allocation.
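The cliff-and-monthly-vesting mechanics above can be sketched in a few lines of Python. This is an illustration only: the 4,800-option grant and the month counts below are hypothetical numbers, not actual PostHog figures, and your option agreement is authoritative.

```python
def vested_options(total_options: int, months_since_start: int) -> int:
    """Vested option count under a 4-year monthly schedule with a 1-year cliff.

    Hypothetical illustration only -- your option agreement is authoritative.
    """
    if months_since_start < 12:           # before the 1-year cliff, nothing vests
        return 0
    months = min(months_since_start, 48)  # fully vested after 48 months
    return total_options * months // 48   # 1/48 of the grant per month

# Example with a hypothetical 4,800-option grant
assert vested_options(4800, 11) == 0     # month 11: still before the cliff
assert vested_options(4800, 12) == 1200  # at the cliff, 12/48 vest at once
assert vested_options(4800, 24) == 2400  # halfway through
assert vested_options(4800, 60) == 4800  # capped at the full grant
```

Note how the cliff simply withholds the first 12 months of vesting until month 12, at which point a quarter of the grant vests at once.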

While the governing terms of the options may vary if PostHog is ever acquired, we have set them up with the following key terms:

It can take time to formally approve and issue options, as it requires a board meeting and updated company valuations. We can provide estimates of the likely issuance timeframe at the time of hiring, but generally speaking, we try to get formal board approval a few times a year. In any case, you will not be disadvantaged by any delay in the approval process, as vesting always starts from your PostHog start date.

Frequently asked questions

We have written out a few of the most commonly asked questions about stock options below. Some of these questions are useful if this is your first time receiving options, while others provide more detail. If you have specific questions, please reach out to Hector. However, note that these questions are often highly individualized, and as such, we may suggest that you consult with your own personal tax advisor for tailored guidance.

What is a stock option?

A stock option gives you the right to purchase shares of PostHog's stock at a predetermined price set on the date of the grant, regardless of what the market value of the stock is in the future.

Stock options can be financially very lucrative, because PostHog will give you the opportunity to buy stock at the grant-date fair market value, which may be lower than what investors or an acquirer may be willing to pay in the future at a liquidity event. As we continue to grow the company, we hope that the value of our stock will exceed the exercise price on your options, which could result in substantial financial upside to you and the rest of our option holders.

What does it mean to "exercise" a stock option?

This simply means you decide to buy the underlying stock covered by your option at the price set out in your option agreement. The price you pay is called the “exercise price” or the “strike price”; both terms are widely used and mean the same thing here. When you exercise a stock option, the exercise price is paid to PostHog as consideration for the stock you're buying.

You should be careful here, as exercising stock options can have personal tax consequences. We always recommend you consult with your own personal tax advisor before making a decision to exercise any options so that they can provide you with individualized tax advice based on your specific circumstances.

What are my stock options actually worth?

Because there is no public market for our stock, and because the stock is subject to standard private company restrictions on transfer, rights of first refusal, and consent requirements, it is not possible to assign a true “value” to the stock.

Although we can tell you what the last-round preferred stock investors paid and what our 409A (US) or HMRC (UK) appraisers assigned as the most recent “value,” there is no guarantee that any buyer or investor would pay those prices – even in the event of a sale or acquisition.

That being said, you can use this handy calculator to model what your options might be worth in the future, under certain assumptions about liquidity, sales price, dilution, etc. You'll need to make a copy first, and be signed in with your PostHog email address. You can also find estimates in your Employee page in the PostHog Ops platform.

Since these numbers are based on assumptions, we cannot promise you that value, but in any case it can give you a sense of what the stock may be worth.
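As a rough illustration of the kind of back-of-envelope modelling the calculator does, here's a minimal Python sketch. Every input here (option count, strike, exit price, dilution) is an assumption invented for the example, not a real PostHog figure, and the result is pre-tax.

```python
def option_value_at_exit(num_options: int, strike: float,
                         exit_price: float, future_dilution: float = 0.0) -> float:
    """Rough pre-tax payoff of exercising and selling all options at an exit.

    Every input here is an assumption -- none of these numbers come from PostHog.
    `future_dilution` crudely models later fundraising shrinking each share's slice.
    """
    effective_price = exit_price * (1.0 - future_dilution)
    return max(0.0, effective_price - strike) * num_options

# Hypothetical: 4,800 options, $2.00 strike, $10.00/share exit, 20% future dilution
value = option_value_at_exit(4800, 2.00, 10.00, 0.20)
print(f"${value:,.0f} before tax")  # ($10 * 0.8 - $2) * 4,800 = $28,800
```

If the effective exit price ends up below the strike, the options are simply worth nothing (you wouldn't exercise), which is why the payoff is floored at zero.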

What if I leave PostHog before this exit event happens?

Happily, we have set up terms that are industry-leading in their friendliness to team members! If you leave PostHog, you will have 10 years from the date your stock option was _granted_ to exercise any vested portion. Note that the exact deadline you have for exercise is whatever is written on your stock option agreement under "Expiration Date".

The industry standard is to give only 90 days to exercise after leaving, which we believe is overly restrictive.

Are there any tax issues I should be aware of?

You should always consult with your own tax advisor before making decisions about exercising your options. That being said, there are a few important tax issues we want to highlight here that may apply if you live in or are a taxpayer in the US or the UK.

US stock options

To the extent legally possible, we grant stock options to our employees as Incentive Stock Options (ISOs), which can be tax-advantaged assuming the following two holding period requirements are met:

If you do not exercise the option within 3 months of leaving, you still keep the stock option to the extent vested, but any non-exercised portion will legally convert into a Non-qualified Stock Option (NSO), which does not have the same tax advantages. Generally, no tax is payable upon exercise of ISOs (except for potential liability under the Alternative Minimum Tax (AMT)), whereas income tax is payable upon exercise of NSOs. US taxpayers may therefore wish to exercise stock options within 3 months of leaving to retain ISO tax benefits, though this requires paying the exercise price out of pocket and may reduce optionality.

After 3 months, if you _exercise_ (not sell) your stock options, you will be liable for income tax at exercise on the difference between the market value at exercise and the exercise price. Within 3 months, no tax is payable upon exercise (other than potentially AMT) – you will only pay tax upon _selling_ the shares (generally at a lower capital gains rate, if ISO holding period requirements are met). We can't give you personal advice here, so please consult a tax advisor to see if exercising your ISO options makes sense for you.
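To make the ISO/NSO difference concrete, here is a hedged back-of-the-envelope sketch in Python. Every number (share count, strike, FMV, tax rate) is an illustrative assumption, not a figure from PostHog, and this is not tax advice – real outcomes depend on AMT, holding periods, and your personal rates:

```python
# Illustrative only – not tax advice. All inputs are made-up assumptions.

def nso_tax_at_exercise(shares: int, strike: float, fmv: float,
                        income_tax_rate: float) -> float:
    """NSO: income tax is generally due at exercise on the spread (FMV - strike)."""
    spread = shares * (fmv - strike)
    return spread * income_tax_rate

def iso_tax_at_exercise() -> float:
    """ISO: generally no ordinary income tax at exercise (AMT may still apply)."""
    return 0.0

# Example: 10,000 options, $2 strike, $10 FMV, 37% marginal income tax rate.
print(nso_tax_at_exercise(10_000, 2.0, 10.0, 0.37))  # income tax due ≈ $29,600
print(iso_tax_at_exercise())                          # tax deferred until sale
```

The point of the sketch is simply that exercising after the ISO-to-NSO conversion can create an immediate tax bill on the spread, whereas an ISO exercise generally defers tax until you sell.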

UK stock options

If you are an eligible UK taxpayer, you will be granted stock options under either an EMI scheme or a CSOP scheme, both of which can be tax-advantaged. We aim to grant employees the most tax-advantaged options possible, subject to eligibility rules. Since EMI options are generally seen as more favorable than CSOP options, we will default to granting EMI options if we are eligible to do so at such time. In the event we do not have EMI eligibility, grants will be made under a CSOP plan instead, and if we are not CSOP eligible, the grants will instead be non tax-advantaged.

EMI Options: EMI Options are similar to ISOs in that they are tax-advantaged in the UK, but the tax advantage is lost 90 days after you leave PostHog. After 90 days, if you _exercise_ (not sell) your EMI options, you will be liable for income tax and potentially national insurance contributions on the difference between the market value at the time of exercise and the exercise price. Within 90 days, no tax is payable upon exercise – you will only pay capital gains tax upon _selling_ the shares (which is generally a lower rate than income tax). In addition, EMIs have the added benefit that if you sell after holding for 2 years from the grant date, you may be eligible for more favorable capital gains rates due to business asset disposal relief. Again, we can't give you personal advice here, so please talk to a tax advisor if you're not sure whether exercising your EMI options makes sense for you.

CSOP Options: CSOP Options are similar to EMI Options in that they are also tax-advantaged in the UK, but there are a few key distinctions:

Please check with your tax advisor if you plan to exercise your CSOP Options (especially prior to 3 years) as you may be losing out on critical tax benefits if this is done incorrectly!

As with everything here, this is highly fact- and circumstance-specific, so please consult with your individual tax advisor to make sure you don’t lose out on any key tax benefits.

Other countries / non-tax favored options

At the time of grant, we check for eligibility factors and do what we can to provide tax-advantaged treatment where possible. However, not every option will necessarily be eligible for tax-advantage status (whether due to lack of company eligibility, ineligible tax jurisdiction/residence, employment requirements, caps on issuance amounts, or otherwise). Any option that is not eligible to be issued as either an ISO, EMI, or CSOP will be granted as an NSO.

Historically, we designated all non-US and non-UK grants as "ISOs" for consistency, though in practice, none of these grants were ever eligible for true ISO beneficial tax treatment under the law since they were made to non-US taxpayers. As of July 2025, we revised this practice to avoid confusion and to align with recommended best practices, and now all grants we make to non-US and non-UK service providers are issued as NSOs. NSOs are not tax-advantaged, and generally speaking, income tax will be due upon _exercise_ of the option.

In all of the above cases, your exercise price remains fixed at the time of issuance no matter what.

Does it make any difference how I leave PostHog – what if I am fired or made redundant?

We have taken a very broad, market-standard, and team-friendly approach to what we do and do not consider "cause" in the event of departures:

The concept of “cause” is similar (though not identical) to the concepts of “good leaver” and “bad leaver” under UK law. We aim to align our option agreements across jurisdictions in an employee-favorable way, but note that local law classifications may not perfectly match the contractual provisions in your option agreement. As such, you should check with your tax advisor to confirm your individual circumstances.

We also have a special provision in place in case PostHog is acquired by another company and, in connection with such acquisition, you are let go without "cause". In this case "double-trigger vesting" applies, which means 100% of your unvested options immediately vest. This benefit is usually only offered to executives at startups (if at all), but we thought it was fair that everyone should benefit from this.

While we cannot guarantee that an acquirer will agree to assume these provisions without issue, including them in our option agreements gives us a strong position to advocate for maintaining them at such time.

What if I move jurisdictions but stay employed by PostHog?

As a remote company with global hiring practices, we often get questions about what happens to outstanding options if an employee moves to a different country. Generally, provided that your option does not cease vesting upon move due to legal requirements, you should expect them to continue vesting on the same schedule and keep the same strike price. However, the tax treatment of your options may change depending on your new jurisdiction. Not all countries recognize the same tax-advantaged schemes, so your options, or a part of them, may lose favorable tax status even though they continue to exist and vest.

You should not assume that PostHog can accommodate cancellations, re-grants, or restructurings based on personal tax circumstances or decisions to relocate. For example, if you were granted NSOs and later move to the UK where you might otherwise be eligible for EMI, your existing NSOs will generally continue as-is. We would not cancel and re-grant them as EMI options.

Given the flexibility we offer around work location, it is not operationally feasible for us to tailor equity treatment to each individual’s circumstances and decisions to relocate. If you already anticipate a move at the time of hiring, let us know. We may be able to delay your grant until after your move. However, this comes with the risk that your strike price may be higher at the time of such grant.

As always, jurisdictional moves are highly fact-specific. Please reach out with questions, but note that in many cases we may recommend seeking independent tax advice.

Exceptions

There are a few notable exceptions to the general approach described above:

  1. EMI options and moves to EOR arrangements

If you hold EMI options and move out of the UK while continuing employment at PostHog via an Employer of Record (EOR), unfortunately this is treated as a cessation of continuous service under EMI rules. In this case:

In such a circumstance, since the option ceases to vest due to legal requirements, PostHog will re-grant the unvested portion as NSOs after your move, on the same vesting schedule. However:

We do not have flexibility on this point, so this is an important factor to consider when deciding whether to relocate, as the total number of options you ultimately retain won't change, but the economic impact to you of the change in strike price or tax treatment could be significant.

  2. Broad-based changes (not individual requests)

In some cases, we may make changes that benefit groups of employees, such as canceling and re-granting options or changing option types due to regulatory or eligibility changes.

For example, if we previously issued CSOP options due to lack of EMI eligibility and later became EMI-eligible again, we might evaluate whether to make a broader change. These decisions:

If a change in tax treatment or eligibility applies only to you, you should assume we will not make an exception.

What is "vesting"?

Vesting means that you don’t receive all your stock options immediately; otherwise, you could work at PostHog for a week, leave, and still receive a significant portion of your options.

Instead we follow the standard industry vesting schedule over 4 years:

Vesting starts on the day that you started at PostHog, not the date that your stock options were granted.
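As a hedged illustration of how such a schedule plays out, here is a minimal Python sketch. The 1-year cliff and monthly granularity are assumptions based on common industry practice – the actual terms are whatever is written in your own option agreement:

```python
# A minimal sketch of a standard 4-year vesting schedule.
# The 1-year cliff and monthly vesting are illustrative assumptions.

def vested_fraction(months_since_start: int) -> float:
    TOTAL_MONTHS = 48  # 4-year schedule
    CLIFF_MONTHS = 12  # assumed 1-year cliff
    if months_since_start < CLIFF_MONTHS:
        return 0.0  # nothing vests before the cliff
    return min(months_since_start, TOTAL_MONTHS) / TOTAL_MONTHS

# Example: 18 months after your start date on a 10,000-option grant,
# 18/48 = 37.5% of the grant has vested.
print(int(10_000 * vested_fraction(18)))  # 3750
```

Note that, per the above, the clock starts on your PostHog start date, not the grant date.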

How did you decide the strike price that I should pay to exercise my stock options?

PostHog doesn't decide the price – we get an external company to conduct a valuation and determine the "fair market value" (FMV) of the stock. Note that this is different from (and often lower than) the price from the last funding round, due to the way the price is calculated, and because the stock options you receive cover common stock while investors instead buy preferred stock.

For UK grantees, similar criteria are used by HMRC to determine the valuation.

In either case, we don't have any flexibility here – if we set an exercise price lower than the FMV, this would create serious tax issues for both you and PostHog.

These valuations are typically valid for at most 1 year (US) and 90 days (UK), so we have to redo them periodically.

Why do we allocate stock options in batches?

Two reasons – because valuations (mentioned above) need to be re-run, and because each time we allocate stock options we need to get them formally approved by the board.

As a result, it is normal for companies to grant stock options at set intervals (e.g. 1-2 times per year), rather than individually at the exact time of hire.

Why don't you just give me the shares?

Under most countries' tax laws, including the US and UK, a direct issuance of stock would be considered income, and you would immediately have to pay income tax on the stock received. This would mean getting hit with a tax bill of tens/hundreds of thousands of $$$ with no direct cash compensation to help you pay the tax liability, due to the illiquid nature of the stock. Stock options are a much more tax-efficient way to compensate team members, as you don't pay tax today when you are granted the stock options, and as mentioned above, you are often able to take advantage of tax-favored schemes that can further reduce your liability.

Can PostHog help me figure out what tax I will have to pay in the future though?

We cannot give you personal tax advice – you need to talk to an accountant. We're happy to ask around our network for recommendations.

I received stock options under the EMI and/or CSOP plan as I'm based in the UK – how are these different from our regular stock options?

EMI and CSOP options have various additional tax benefits associated with them that we're able to offer because PostHog Inc. has a UK subsidiary, Hiberly Ltd. The option is still for stock of the parent company, PostHog Inc., even though you are employed by Hiberly Ltd. Please see the section titled “UK stock options” above for some key differences, but as always, please make sure to consult with your own personal tax advisor for any specific questions about tax treatment.

It is worth noting that you will lose EMI and/or CSOP tax benefits if you stop being a UK tax resident.

How do I track my vesting and manage my options?

We use a tool called Carta to virtually manage our cap table and stock options. You can sign in to the platform using your PostHog email, and you will be able to see all of the option grants you have received, the start date and how much you have vested thus far, the strike price of your options, and how much it would cost to exercise a certain amount of options.

Non-UK employees can exercise options via Carta by sending PostHog the exercise price via ACH. Due to additional jurisdiction-specific requirements in the UK, EMI and CSOP holders cannot exercise via Carta – if you are in the UK and would like to exercise, please reach out to Hector or the ops team.

I have a question that is not covered here!

Ask Hector or Fraser – ideally in a public Slack channel (if appropriate) for better visibility.

May I suggest a change to our stock option plan or my stock option documents?

Unfortunately this isn't possible – we have a standard set of agreements that we use with everyone which are pre-approved by the board and our investors. Making any changes would not be feasible, unless you spot an obvious error in your option agreement.

That being said, we do not include any terms that are not either completely standard or (in many cases) as team-friendly as possible.

Side gigs

People | Source: https://posthog.com/handbook/people/side-gigs

PostHog looks for passion in the people it hires. This often correlates with people who have side projects as a hobby. For example, we view pre-existing open source work as a strong qualifier that you're good enough at programming that it's fun to do rather than frustrating and hard!

These side gigs may sometimes earn you money. Sometimes, you may one day want your side gig to become your main gig.

We have deliberately called them "side gigs", as we are ok with you earning money on the side. We are not ok with this being your main focus and PostHog being just a paycheck. Quite simply, we are too small for PostHog not to be your main motivation. For this reason, we also currently don't offer part time work as an option at PostHog.

Managing time

The key factor in deciding whether something is a side gig, and thus appropriate, is its impact on your work and the amount of time involved.

A few hours a month on a paid side gig is acceptable. In any case, side gigs should by default be something you work on in your personal time, and they should not impact the work you do at PostHog.

In a few cases, you may want your side gig to become your full time work one day. That is ok - please just let us know, so we can create a plan. We know the key to motivated people is to help you achieve your long term goals, and to align this with what PostHog needs, whether or not you eventually achieve them with us.

Above everything else, if you are going above and beyond for PostHog and you're still able to look after yourself properly, side gigs (whether paid or unpaid) are totally fine. We don't think that's possible beyond a certain level of time/energy commitment to them, but we are very happy for you to spend a little time on them each week.

Intellectual property

Just to reassure you, PostHog won't try to claim ownership of any intellectual property (IP) you create in your personal time, e.g. if you are contributing to another open source project as a hobby. However, you need to be _really_ careful that you do not introduce any of PostHog's non-open source IP into any project that you work on - this can cause serious legal headaches. As a rule, anything from PostHog that is explicitly MIT-licensed is fine to use, anything else is not.

Ideas that start at PostHog

If an idea, project, or product comes out of your work at PostHog (for example during hackathons, offsites, team projects, or internal experiments), it should be treated as PostHog work by default.

When thinking about whether something is truly a personal side gig or PostHog work, it’s important to consider where the idea originated, who it was built for, how it’s been shared or used internally, and whether PostHog data, tools, equipment, or infrastructure are involved.

If you’d like to take an idea that started at PostHog and develop it as a personal or external side project, please get explicit sign-off first so we can avoid any confusion around ownership, data use, or future plans. Without that sign-off, these projects should live within PostHog repos and follow PostHog processes.

If you are ever worried about this, please talk to Fraser and he can help you figure out the best solution here, especially if what you are working on directly competes with something PostHog has built or is on our roadmap.

Getting signoff

We ask you to please just check in with the relevant Exec team member for your team (i.e. James, Tim, or Charles) to get their confirmation before going ahead with any side gigs.

If your side gig existed before you joined PostHog, this will usually be covered as part of the onboarding process.

Spending money

People | Source: https://posthog.com/handbook/people/spending-money

There are many occasions when you will need to spend company money. PostHog is a lean organization - the less we spend, the more time we have to make sure the company takes off. However, it is more important you are productive, healthy, and happy.

Guiding principles

We have a context-based expense policy inspired by the book 'No Rules Rules'. We're empowering team members to make good decisions while maintaining transparency and accountability.

Ask yourself:

If the answer is yes, it is likely the expense is in the best interest of the company and supports your productivity.

If not, think again.

The goal here is to empower you. However, when in doubt, ask your team lead for context, not permission.

How it works

Transparency & accountability

Logistics

UK employees

Use your Revolut if the expense is over £75 _and_ has UK VAT on it. If not, use your Brex.

The company can claim back VAT on these larger purchases - the more money we claim back, the more money we have to #DoMoreWeird

Please make sure that the invoice is addressed to Hiberly Ltd and our registered address (this can be found in the Important Company Details sheet).

Receipts

All expenses over $75 or £75 must have itemized receipts attached and memo updated within 14 days of the charge, as this is what our auditors require.

In extreme cases, expenses with no receipts above $75 or £75 may be deducted from your pay if we can't verify the business purpose - this is mainly for repeat offenders where someone's clearly ignoring the policy.

We need an _itemized receipt or invoice_ because Brex auto-verifies these using the information in the file. For example, this means the full booking confirmation email for flights/hotels, the detailed bill from the restaurant for a team dinner, etc. Please do not upload cropped images that show just the amount or just the credit card machine confirmation slip – without context, the receipt is pointless.

Template for a thorough memo

- What: [item/service purchased]

- Why: [business reason - how this helps PostHog/your work]

- Context: [relevant details - who attended, what project, etc.]

Reviewing expenses

Finance reviews:

Team Leads

You'll get monthly expense reports for your direct reports. You have context Finance doesn't, which will help justify spending decisions to the auditors. How much you dig into these is up to you - the goal is catching patterns, not micromanaging.

Why documentation matters

Missing receipts and memos create extra work during audit season - auditors will need to dig into those expenses, which means hours of back-and-forth with you about charges from several months ago! A quick receipt upload and memo update now saves everyone time later.

We're legally prohibited from paying personal expenses on behalf of team members, and are at risk of penalties/fines if this happens.

The flexible, trust-based policy we have only works when everyone maintains proper documentation. If team members consistently fail to upload receipts and add memos, we'll either need to implement rigid pre-approvals for all spending, or treat repeated violations as performance management issues. Which we really don't want to do!

Budget structure on Brex

We're not going to police your spending with hard limits but we will continue to use budgets and limits on Brex for audits, and because categorizing this stuff properly is essential for things like Board reporting, tax compliance, and seeing how we're doing against our budget.

Joining an offsite? _Only use the offsite budget_, not your User Limit – it helps the People & Ops team track travel spend accurately against budgets. Let Kendal know in #team-people-and-ops.

Frequently asked questions 🤨

How we handle inappropriate spending

Expenses that could be construed as personal will be flagged as non-business expenses by auditors, as they will be considered a taxable benefit that PostHog has provided to you.

Examples of inappropriate spend include:

If the inappropriate spend was due to a misunderstanding, e.g. you genuinely thought an expense was in PostHog’s best interest, but we disagree, we’ll provide clarification and context. If you knowingly and deliberately spent money in ways that are not in PostHog’s best interest, or tried to intentionally circumvent the guidelines, we will probably treat this as serious misconduct.

Expense guidelines

Equipment

Laptop & monitor

Talk to Tara who handles most Macbook and Apple Studio Display purchases - ping her on #team-people-and-ops.

Yubikey (for specific roles only)

Passkeys are the preferred way of securing accounts. In some cases Passkeys aren't supported by the service provider.

If you find yourself in a team requiring access to these kinds of tools where a Yubikey is required, then you should purchase one as recommended on the MFA page using your Brex card. If you aren't sure whether you need one, then you probably don't, and should instead be using Passkeys.

Other equipment

As a guide, here's what we'd consider reasonable spend:

Software

We are _strongly opposed_ to introducing new software that is designed for collaboration by default. There needs to be a very significant upside to introducing a new piece of software to outweigh its cost.

The cost of introducing new collaborative software is that it creates another place where todo items/comments/communication can exist. This creates a disproportionate amount of complexity.

Individual software is down to your personal preference, and we encourage you to share cool software.

You can ask for access to team/company tools by submitting a request in Slack:

  1. Find the Zluri app in Slack, type /accessrequest, and press enter.
  2. You'll get a pop-up that allows you to search for the app you'd like access to. Add any specific information about license level/type if necessary.
  3. The request will then be sent to the team member who owns/manages access to the platform.
  4. Once they have provided access, they'll confirm via the Zluri task and you'll also receive confirmation. You should then receive an email invite, or be able to log in via SSO depending on the tool.

If you do not see a tool in the app that you believe is centrally managed, drop a line in the #team-people-and-ops channel.

Travel

If you find yourself needing to do extra travel outside of the regular things listed above, e.g. you've been asked to take a last minute trip to work on an emergency project, we may pay for a nicer seat here, especially if you are traveling at very short notice or long haul. Ask on #team-people-and-ops if you think this may apply to you. This is intended for genuine one-offs, not where you've decided you'd like to come along to an extra offsite!

Sponsorships

If you believe an open-source project is fundamentally important to the success of PostHog then we should set up a recurring sponsorship. In this case, see the open-source sponsorship Marketing initiative.

Talent

People | Source: https://posthog.com/handbook/people/talent

Talent Principles

The talent team is uniquely placed at PostHog to have an outsized influence on the people that join the company in comparison to the rest of the business. This is why we ask our talent partners to think like owners.

PostHog's business model requires us to build and automate more products, and to do that we need more engineers. Once we have more products, we need commercial team members to market those products and look after the customers that sign up. We also need to support those customers and the growth of the business. This means the people who join PostHog directly impact our growth, which is why we invest so heavily in finding and retaining great people.

At some companies, the Talent team is seen as a support function; at PostHog, we view them as a growth function. We need our talent partners to be thinking about the growth of the business as a whole, not just about headcount.

This means taking ownership of understanding what products we need to build, the types of people we need to build those products, understanding how our commercial team make our customers successful and what types of people fit into those commercial roles. Talent partners should also understand how our funnels are working and what is needed to solve any problems within them.

Talent partners at PostHog do not just interview candidates and put them through, they own the process from beginning to end and should use all their skills to make that process successful.

Removing distractions from the rest of the team

As a talent partner, part of the role is to ensure we are removing distractions from the rest of the business where we can. Practically, what this looks like in talent is that other team members really only need to concentrate on assessment and shouldn't need to concern themselves with other areas of the process.

Throughout the recruitment process there are different ways that talent partners can remove distractions.

Sourcing Talent

Our default recruitment strategy is 100% inbound. We are not an outbound-first recruiting style organization and don't want to become one. That said, there are times when sourcing is the right tool for acquiring new candidates. This section explains when to reach for it, how to do it well, and how to know if it's working.

When To Source

We should source when inbound alone isn't generating enough qualified candidates at the top of the funnel for a specific role. This can happen for many reasons, including:

How We Source

Be targeted, not generic. We don't do high-volume spray-and-pray outreach. We'd rather send 15 highly personalized messages than 150 templated auto-messages. Sourcing at PostHog should feel like a curated referral from someone who's done their homework, not a LinkedIn InMail blast.

Practically, this looks like:

What A Good Sourcing Message Looks Like

Here's the kind of thing that usually works:

Hey (name), I was taking a look at your work on (specific project/contribution). We're building (specific thing) at PostHog and it really seems like the kind of problem that needs someone with your background in (specific skill).

If you haven’t heard of us at PostHog, we're fully remote, pay transparently, and our entire company handbook is public; you can read exactly how the team you'd join operates before you even start day 1: (link to small team page).

If this seems even 1% interesting to you, let me know, and I can set up a call for you with one of our talent partners handling the role!

And here's what doesn't usually work:

Hi (name), I came across your profile and was impressed! PostHog is a fast-growing product analytics company backed by Y Combinator. We're looking for world-class talent to join our world-class team. Would you be open to a conversation?

The first message is specific and gives the candidate real information. The second is generic and could be about any company hiring any role.

Sourced Candidates In the Process

Sourced candidates follow a slightly different path at the start of the process. See Managing sourced candidates for the exact mechanics.

The key difference to keep in mind: sourced candidates didn't come to us; we went to them.

Some context related to this, to end this sourcing section:

Top of the funnel

Talent partners should be speaking to hiring managers before a job goes live to understand the types of candidate they should be looking for at the top of the funnel. This understanding should continue to be honed as you learn more from feedback at the various stages of the process. We want to avoid passing through inappropriate candidates, which wastes everyone's time. It is a talent partner's responsibility to screen applications at the top of the funnel; here we are looking for signals that people can be effective at PostHog. We look for things like:

Once you’ve screened the application and moved them forward, you will have a culture screening call. We also want to ensure that we are putting relevant and motivated candidates through to the next round. At the screening stage it is important to make sure your notes are well organized and clear, not just for the next stage: these notes will be reviewed at SuperDay and taken into consideration if we are going to hire this person, so make sure to have them in good order.

Post-screen, pre-SuperDay

Talent partners are responsible for moving candidates through the next two stages. We rely on automation from Ashby to allow candidates to book directly into calendars, but talent partners are responsible for the candidate experience, so if a candidate or an interviewer can't make a time, you need to step in and resolve the issue. It is important that interviewers know they need to maintain an up-to-date calendar; however, last-minute changes do sometimes occur.

SuperDay

Talent partners are responsible for scheduling these; you can read more in the hiring process SuperDay section. The people involved should be focused on assessing the candidate, so talent partners need to be alert and help with any logistics to make sure the SuperDay runs smoothly.

Speed v quality

At PostHog, we have two major forces playing against each other when it comes to hiring. We want to move quickly – when we want to hire somebody, they should usually have started last week. At the same time, we always want to hire for quality, and this takes time. This makes life as a talent partner a constant balance between the two. The way we try to balance them is to move as quickly as possible on the things in our control. We aim to review applications ASAP, usually within 48 hours of application. Then we want to make sure that candidates can book their first-round call with us within 2 business days. This keeps the momentum going from a candidate deciding to apply to speaking to somebody. Our aim is to get back to every candidate by the end of the day after their interview; this is difficult, so while it's an aim, we don't always hit it. We should be pushing interviewers to get feedback to us ASAP. The longer we leave candidates without a decision, the slower we will move.

Speed is a team effort, and we should always keep things moving for each other. This means that if we see something that needs arranging and you can do it now, do it. There is no need to wait for the person who was last in contact to come online – just give them a heads up that it's done. This shared ownership mentality around speed is what will help us succeed.

When it comes to quality, we are always looking to be impressed. We use a rating system out of 4 and it is rare that we would hire somebody without receiving a 4 from somebody in the process. We know that exceptional people are usually spiky, they don't have an evenly distributed skill set so people can have reservations about a certain area and they can still be exceptional. Talent partners need to be aware of when to push and pull when it comes to hiring decisions. There will be circumstances when there are hard decisions over whether we should hire somebody or not and talent partners should be prepared to offer opinions. This could be vouching for a candidate that is similar to other successful team members who we might be hesitating on and conversely stepping in when it looks like we might be making a hiring that isn't appropriate.

When a hiring process is moving slowly, or we seem to be rejecting lots of candidates at SuperDay, it is up to the talent partner to realize and own this. They should review what is going on, understand the problem and aim to fix this. They should get in front of this as soon as possible.

Maintaining quality is also about ensuring that our interviewers assess candidates in a consistent way, so it's important to spot when inconsistencies come up in feedback and do something about them.

Evaluating success

A talent partner will be judged on how many excellent candidates they get into the business and how those candidates impact the business's performance. We would much rather have consistently great people coming in and moving the business forward than lots of people joining without helping us grow. We want talent partners not just to assess a candidate on a screening call, but to figure out how we continue to scale the growth of PostHog, with great people at the heart of it.

Time off

People | Source: https://posthog.com/handbook/people/time-off

We offer our team unlimited time off, but with an expectation that you take _at least 25 days off a year_, including national holidays. This is to make sure that people can take time off flexibly while not feeling guilty about being on vacation.

The reason for this policy is that it's critical for PostHog that we hire people we can trust to be responsible with their time off - enough that they can recharge, but not so much that it means we don't get any work done. The People & Ops team will look into holiday usage occasionally to encourage people who haven't taken the minimum time off to do so. The 25 days is a minimum, not a guide.

As general guidance, we don't care about a few days here and there. If you are taking significantly more vacation time than most - for example, 40 days - we would be very surprised if you aren't causing a strain on the rest of your team as a result.

Permissionless time off

We care about your results, not how long you work.

You do not need to get approval for time off from your manager. Instead, we expect everyone to coordinate with their team to make sure that we're still able to move forwards in your absence. You should avoid things like:

How to book time off in Time Off by Deel

Before you start, make sure that:

If you don't do this, your holiday won't show up in the team time off calendar.

To book a day off:

Please manually book in public holidays you plan to take off as well. We have team members working in countries all over the world, so it is not practical for us to book these all in on your behalf. Some people also prefer to work on certain days even if they're considered a public holiday in the country they are living in or visiting. In the Time Off by Deel app, you can use the Bulk Add by Region feature to quickly identify and add the public holidays you want off.

The same rules as above apply regardless of the holiday length and type. Sick leave and any other types of time off should also be booked in the same way.

How to cancel time off

If you decide to cancel your holiday, drop a message in #team-people-and-ops and a member of the team will cancel the holiday for you, as only admins can delete holidays.

Flexible working

We operate on a trust basis and we don't count hours or days worked. We trust everyone to manage their own time.

Whether you have an appointment with your doctor, school run with your kids, or you want to finish an hour early to meet friends or family - we don't mind and you don't need to tell us. Please just add it to your calendar and, if you are doing anything that could require you to be immediately available (ie support hero / or any customer-facing role), please make sure you have cover.

When you should have time off

You are sick

If you are sick, you don't need to work and you will be paid - the upper limit for paid sick leave for your country will be specified in your contract. If you just need a day or two off, take them.

Please let your manager know as soon as you are able if you need time off due to illness, and add it to Time Off by Deel. You shouldn't pre-emptively book a bunch of sick days, as you can't know how long you will actually be sick for, and you may trigger the need for a doctor's note (see below). Just book the day or two that you are sick, then add more if you still feel unwell.

For extended periods of illness (5+ work days), or if you are going over the limit in your country/state, please speak to Fraser so we can work out a plan. In most countries, we will need a doctor's note from you.

If you have a medical condition you know will take you away from work regularly, please let Fraser know so we can work out accommodations with you and your manager.

Bereavements / Child loss

We do not define “closeness” and we won't ask about your relationship to the person or what they meant to you. Please just let us know up front how much time you would like to take.

Our bereavement policy also covers pregnancy and child loss for both parents, with no questions asked. Please take at least 2 weeks of paid leave.

If you need extended time for physical or mental health reasons, we will treat it as extended sick leave - just chat to Fraser.

Jury duty / voting / childcare disasters, aka 'life stuff'

There are lots of situations where life needs to come first. Please let it - just be communicative with your team and fit your work around it as you need. We trust you will do the right thing here.

If you are summonsed for jury duty, please let Fraser know right away - we can often get an exception granted if we have enough notice.

Parental leave

Parental leave is exceptional as it needs to be significantly longer than a typical vacation. Anyone at PostHog, regardless of gender, is able to take parental leave, and regardless of whether you've become a parent through childbirth or adoption.

This table explains the amount of paid time off, depending on how long you've been at PostHog:

| Time at PostHog | Maternity leave | Paternity leave |
| - | - | - |
| Under 6 months | 3 weeks | 2 - 3 weeks |
| 6 - 12 months | 12 weeks | 2 - 3 weeks |
| Over 12 months | Up to 24 weeks | 6 weeks |

Parental leave at PostHog is designed to be more generous than your local jurisdiction's legal requirements. In most cases, this means you will receive the PostHog policy; if you live in a country with more generous parental leave, you will receive that instead. The PostHog policy is not designed to be _in addition_ to your specific state/country policy.

We only pay the enhanced parental leave in one continuous block.

Parental leave isn't supposed to be combined with our unlimited PTO policy - we aren't prescriptive and will trust your judgement. If you need a longer break after childbirth or a staggered return, reach out to Fraser or your manager. But please note that we usually won't allow you to combine parental leave with an additional long holiday to extend your time off.

Please communicate parental leave to Fraser as soon as you feel comfortable doing so, and in any case at least 4 months before it will begin. They will let the People & Ops team know, who will follow up on any logistical arrangements around salary etc. and any statutory paperwork that needs doing.

Maternity leave

The above is in reference to Paid Time Off (PTO). Maternity leave can be extended using unpaid time off - please work with your team to find a reasonable solution for both your family and your team, and let Fraser know the total amount of time you expect to take off as soon as possible.

For quota-carrying Sales roles taking 12 weeks or longer, your OTE will be calculated by averaging your sales quota attainment for the prior two full quarters (capped at 100% OTE).

Paternity leave

We do not offer unpaid leave for Paternity leave.

Birthday and anniversaries

We celebrate all the big and little milestones at PostHog, including birthdays and work anniversaries. We celebrate each team member as a reminder of how much we appreciate them. Kendal is currently responsible for organizing these.

Birthdays

We have partnered with Wellbox to send all team members a personalized giftset for their birthday.

These are the steps for making an order:

  1. Log into our Wellbox account (details in 1Password)
  2. Select the birthday gift to send
  3. Fill out delivery information
  4. All set!

The birthday gift usually arrives on the birthday itself or 1-3 days before it. Shipping fees: UK shipping is free, while all other countries will have shipping fees.

Anniversaries

On your first anniversary with PostHog, you will receive a giftcard from Giftogram or Prezzee (if you are based in the UK) which can be used on a wide selection of brands. On your second anniversary you'll be gifted a customized Lego minifig in a display case, and on your third anniversary, you'll receive a personalized gift from Wellbox.

1st year anniversary

For the first year anniversary, we give a $50 gift card for the US, or $55 for all other countries to cover service fees:

  1. Log into Giftogram using your Gmail credentials
  2. There are two ways to create a new Giftogram: use "Create and Send" on the toolbar at the top, or click the blue "Send a Giftogram" button on the right-hand side.
  3. Walk through the following steps:
2nd year anniversary

The second year anniversary gets you a customized Lego figurine:

  1. Log into Fab-brick (login credentials are shared in People & Ops 1Password vault)
  2. Select the third tab “MiniFig Creator” and design your mini fig to look like the individual you’re celebrating!
  3. Make sure to include a display case and the three tier brick option
  4. After you’ve completed your design, check out. There should already be a Brex card on file. Please make sure you add the individual’s correct mailing address.
3rd year anniversary

The third year anniversary is a pack of gifts provided via Wellbox.

  1. Select the 3rd Anniversary gift in our profile
  2. Fill out delivery info
  3. You're all set!

The gift usually arrives on the anniversary date itself or 1-3 days before it. Shipping fees: UK shipping is free, while all other countries will have shipping fees.

4th year anniversary

On your 4th anniversary at PostHog, as a big thank you for sticking with us, we give you a choice of 3 gifts:

  1. Sage Barista Touch coffee machine
  2. Apple 27-inch 5K Retina Studio Display with standard glass and tilt-adjustable stand
  3. Rimowa luggage set (large trunk, cabin bag, packing cube, toiletries bag)

In the run-up to your anniversary, our Ops team will send you a link to the gift options questionnaire and will order your 4 year anniversary gift once we receive your completed form. Thank you for making PostHog great!

Training

People | Source: https://posthog.com/handbook/people/training

The better you are at your job, the better PostHog is overall!

Books

Everyone at PostHog is eligible to buy books to help you in your job.

The reason we think books can be more helpful than just Googling is that the quality bar for getting published is higher.

You may buy a couple of books a month without asking for permission. As a general rule, spending up to $50/month on books is fine and requires no extra permission. You can use your books budget towards audiobooks and podcasts as well, if you prefer.

Books do not have to be tied directly to your area - they only need to be loosely relevant to your work. For example, biographies of leaders can help a manager learn, and can in fact be more valuable than a tactical book on management. Likewise, if you're an engineer, a book on design can also be particularly valuable for you to read. Additionally, we host a monthly book club called BookHog, and the budget can be used for those books as well.

Training budget

We have an annual training budget for every team member, regardless of seniority. The budget can be used for relevant courses, training, formal qualifications, or attending conferences. You do not need approval to spend your budget, but you might want to speak to your manager first, in case they have some useful feedback or pointers to a better idea.

We strongly encourage all non-technical team members to take some kind of entry level programming course - it's part of our 'you're the driver' culture that everyone can at least understand very basic concepts around software development. Codecademy is a good place to start and they cover many of the technologies we use, such as Python and React.

The training budget is $1000 per calendar year, but this _isn't_ a hard limit - if you want to spend in excess of this, request an increase to your budget in Brex and it should usually be granted.

If possible, please share your learnings with the team afterwards!

Conferences

You can use your training budget for time spent talking at conferences and user groups, including coaching others. It is expected that you would spend up to half a day a month on these activities. Like the training budget, this isn't a hard limit - if you think you need more than that, talk to your manager in the first instance.

Product metrics

Product | Source: https://posthog.com/handbook/product/metrics

We track a short list of metrics in each per-product growth review. The idea of a standardised list is that each product team cares about roughly the same metrics, and we can compare them - such as revenue growth - across products and over time.

Our growth review metrics strike a balance between depth, efficiency and "measuring what matters". We want to make sure our metrics alert us of potential negative (or positive!) developments, giving us enough signal to dive deeper into lower-level metrics.

If as a new product manager or team lead you want to look at a wider range of metrics, please do! These can either be incorporated in the growth reviews, or in ad-hoc metrics reviews you or your team are doing.

Metrics we use in growth reviews

Revenue

These queries are written and owned by the billing team. They are standardised across products, and match the combined PostHog revenue queries. If you are intending a change, please chat to the billing team first.

Note that currently, refunds are not removed from per-product revenue, which is something to note in a growth review if there is a sizable refund that month.

| Metric | Notes |
| --- | --- |
| Monthly recurring revenue (MRR) | |
| Annual recurring revenue (ARR) | |
| MoM growth rate | For more mature products, we want this to be over 9%; for newer products, between 15-20% on average |
| New revenue growth rate | |
| Revenue expansion rate | |
| Revenue contraction rate | |
| Revenue churn rate | |
| Revenue retention rate | |
| Total paying customers count | |
| Paying customers growth rate | |
| Quarterly net revenue retention (NRR) | Instead of a rolling metric, we use the quarterly values and report on it once a quarter. The rolling metric is available on the dashboard too, as it can be helpful for debugging |
| Annualised NRR | Same as above |
| Revenue share | For revenue products like data pipelines or product analytics, it makes sense to calculate CDP/batch exports/anonymous-only share of revenue, to understand individual product contribution better |
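As a rough illustration of two of the metrics above, here is how MoM growth rate and net revenue retention could be computed from monthly MRR movements. This is only a sketch with made-up numbers - the function names and inputs are assumptions, not the billing team's actual queries.

```python
# Illustrative sketch (not PostHog's real billing queries).
# All dollar amounts below are made up.

def mom_growth_rate(mrr_prev: float, mrr_curr: float) -> float:
    """Month-over-month MRR growth rate, e.g. 0.12 == 12%."""
    return (mrr_curr - mrr_prev) / mrr_prev

def net_revenue_retention(start_mrr: float, expansion: float,
                          contraction: float, churn: float) -> float:
    """NRR over a period: revenue retained and expanded from the
    starting cohort, excluding new customers. > 1.0 means the
    existing base grew on its own."""
    return (start_mrr + expansion - contraction - churn) / start_mrr

print(mom_growth_rate(100_000, 112_000))                      # 0.12
print(net_revenue_retention(100_000, 15_000, 4_000, 6_000))   # 1.05
```

A 0.12 MoM growth rate would sit in the 15-20% "newer product" band's lower neighbourhood, while an NRR above 1.0 indicates net expansion from existing customers.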

Usage

Product usage metrics are defined by the PM or small team lead. When setting up metrics for a new product, it’s recommended to start with a longer list and trim it once user behavior is better understood. We recommend adding all relevant product metrics to one dashboard that is accessible, kept up to date and reviewed by the whole small team. For better discoverability, some of us use the appendix ™ to mark the primary usage dashboard.

This dashboard can also include NPS & support metrics (see below).

| Metric | Notes |
| --- | --- |
| Unique monthly users - count | As defined by a key product action we also use in the activation definition, such as “flag created” or “recording analyzed” |
| Unique monthly users - growth rate | |
| Unique monthly organizations - count | Same definition as unique monthly users |
| Unique monthly organizations - growth rate | |
| Activation | Guide how to define activation for a new product; dashboard that contains all per-product activation queries |
| Usage retention (1, 3, 6-month) | Report on it once a quarter. Retention changes slowly, so it is easier to see changes zoomed out |

NPS

We have an NPS survey set up for each product. They need regular updates due to some survey limitations. If you want to set up a new NPS survey, speak to the Surveys PM (Cory Slater) - he can help you set one up and keep it updated.

We track a 4-week NPS score, but we don’t have the volume of responses we need to get reliable results. This is why we include the open-ended feedback in the growth reviews, as this is usually more actionable.

| Metric | Notes |
| --- | --- |
| NPS score - last 4 weeks | Include constructive feedback as a comment in the spreadsheet for context |

Support

Similar to our revenue metrics, we are reusing queries the support team has set up, broken down by product. If you need to make a change or want to understand how SLA reporting works in detail, speak to the support team.

| Metric | Notes |
| --- | --- |
| Created tickets | |
| Escalated tickets | |
| Escalated tickets - SLA | The insight also tracks non-escalated tickets' SLA, which is useful to be aware of, but we don't need to report on it in every growth review |
| Ratio of users vs. tickets | Formula: no. of tickets / unique monthly users |

Metrics outside of growth reviews

If there are any other metrics you want to track to understand how well your product is doing, or which areas need improvement, go for it! Just make sure you are not tracking too many metrics, causing you to lose sight of what matters.

Tips for increasing metrics awareness in a small product team

If you are a PM at PostHog, you will be more successful if your whole team is aware of and keeps track of your per-product metrics, instead of just you summarising growth review insights once a month. Here are some tips we've found work well:

Per-product cost & margin analysis

Product | Source: https://posthog.com/handbook/product/per-product-cost-margin-analysis

Understanding your product's infrastructure costs helps you make pricing decisions, contextualize growth reviews, and catch problems early. This guide covers how to build per-product cost allocations.

Why this matters

As products scale, margins matter. A product with healthy margins can afford aggressive pricing; one with tight margins needs to be more careful. You can't know which you're in without understanding costs.

Cost visibility also helps you:

When to do this

This makes sense for:

For early-stage products, don't bother. Ship first, optimize later.

The process

1. Map your architecture

Before talking to infra, sketch out what your product actually uses.

Write path: How does data get in?

Read path: How does data get back to users?

Storage: Where does data live?

You don't need to be perfect. The infra team will validate. But a rough diagram saves time.

2. Understand the cost buckets

Infrastructure costs fall into two types:

| Type | What it means | Examples | How to allocate |
|------|---------------|----------|-----------------|
| Direct | Tagged specifically for your product | Product-specific k8s nodepools, dedicated S3 buckets | 100% to your product |
| Shared | Used by multiple products | Load balancers, reverse proxies, shared caches | Proportional (e.g., 20% of traffic = 20% of cost) |

Direct costs are easy. Shared costs require estimating your product's share of usage.
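The allocation logic above can be sketched in a few lines. Everything here is a hypothetical illustration - the line items, dollar amounts, and the 20% usage share are made-up placeholders, not real PostHog costs.

```python
# Sketch of direct + proportional shared cost allocation.
# All numbers are illustrative assumptions.

def allocate(direct_costs: dict, shared_costs: dict, usage_share: float) -> float:
    """Total monthly cost attributed to one product.

    direct_costs: tagged, product-specific line items ($/month)
    shared_costs: line items used by multiple products ($/month)
    usage_share:  estimated fraction of shared usage (0-1),
                  e.g. your product's share of traffic
    """
    direct = sum(direct_costs.values())              # 100% yours
    shared = sum(shared_costs.values()) * usage_share  # proportional slice
    return direct + shared

cost = allocate(
    direct_costs={"product-nodepool": 12_000, "product-s3": 8_000},
    shared_costs={"load-balancers": 5_000, "reverse-proxy": 3_000},
    usage_share=0.20,  # assumption: product handles ~20% of shared traffic
)
print(cost)  # 21600.0
```

The `usage_share` input is the weakest link - it is an estimate, which is why the sensitivity checks later in this guide matter.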

3. Work with infra on tagging

Reach out to #team-infrastructure to kick off the process – they can help you estimate your product's traffic share and navigate the tooling.

We use a FinOps tool for cost allocation. The infrastructure team sets up allocation tags that group AWS resources by product/function.

What you bring:

What infra does:

Expect iteration. First allocations are rarely complete. Common gaps:

ClickHouse costs (separate process)

If your product queries ClickHouse, you'll need to work with #team-clickhouse to get query cost attribution. This is separate from FinOps tagging.

ClickHouse costs are attributed by analyzing query_log to see which queries belong to your product. The ClickHouse team can help set up a query or dashboard to track this. Note that query attribution may require code changes to tag queries with a product identifier — this isn't just a dashboard exercise.

For some products (like Session Replay), ClickHouse query costs are a small percentage of total – queries are lightweight (list/fetch metadata). For analytics-heavy products, ClickHouse costs will be a much larger share.

We run ClickHouse in multiple regions (e.g., US and EU), so make sure you account for costs in each.

4. Interpret the tags

The FinOps tool organizes costs using allocation tags. Here's how to think about them:

Product-specific tags (direct costs)

Shared infrastructure tags (need a proxy)

Network transfer tags

When pulling reports, make sure you're not double-counting. If you select multiple tags, check whether they overlap.

5. Build the cost model

Once you have the cost data, build a simple model for unit economics.

Get the totals:

Calculate unit economics:

```
cost_per_unit = total_cost / total_volume
revenue_per_unit = total_revenue / total_volume
margin = (revenue_per_unit - cost_per_unit) / revenue_per_unit
```

Break down by component (optional but useful):

```
cost_per_unit = compute_cost/volume + (storage_rate × avg_size × retention_period)
```

This helps you understand what drives costs. For storage-heavy products, storage will be a significant portion of costs. For compute-heavy products, compute dominates.
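The unit economics formulas can be run directly. Here is a minimal Python sketch - the totals are made-up placeholders, so substitute your product's real numbers from the FinOps tool and billing data.

```python
# Unit economics sketch. Input totals are hypothetical placeholders.

def unit_economics(total_cost: float, total_revenue: float, total_volume: float):
    """Return (cost_per_unit, revenue_per_unit, margin) for one month."""
    cost_per_unit = total_cost / total_volume
    revenue_per_unit = total_revenue / total_volume
    margin = (revenue_per_unit - cost_per_unit) / revenue_per_unit
    return cost_per_unit, revenue_per_unit, margin

cpu, rpu, margin = unit_economics(
    total_cost=30_000,       # $/month, from cost allocation
    total_revenue=100_000,   # $/month, from billing
    total_volume=1_000_000,  # units/month (events, recordings, ...)
)
print(f"cost/unit={cpu:.4f} revenue/unit={rpu:.4f} margin={margin:.0%}")
# cost/unit=0.0300 revenue/unit=0.1000 margin=70%
```

"Unit" here should match how the product is priced (events, recordings, rows scanned), so cost per unit and revenue per unit are directly comparable.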

Test your assumptions. Key inputs like traffic share, retention period, and storage rates are estimates. Check how sensitive your margin is to these — if your traffic proxy is 20% ± 5%, what's the range? If effective retention is 60 days vs 90 days, how much does storage cost swing? If the answer changes materially, document the range rather than a point estimate.
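A sensitivity check like the one described above can be a simple sweep over the uncertain inputs. The cost breakdown and all numbers below are illustrative assumptions, purely to show the shape of the exercise.

```python
# Sensitivity sweep over the two estimated inputs: traffic share and
# effective retention. All cost/revenue figures are made up.

def gross_margin(traffic_share: float, retention_days: int,
                 direct_cost: float = 10_000,     # tagged product costs, $/mo
                 shared_cost: float = 40_000,     # total shared pool, $/mo
                 storage_per_day: float = 100,    # storage cost per retained day
                 revenue: float = 100_000) -> float:
    cost = (direct_cost
            + shared_cost * traffic_share        # proportional shared slice
            + storage_per_day * retention_days)  # retention-driven storage
    return (revenue - cost) / revenue

margins = [
    gross_margin(share, days)
    for share in (0.15, 0.20, 0.25)  # traffic proxy: 20% +/- 5%
    for days in (60, 90)             # effective retention, days
]
print(f"margin range: {min(margins):.0%} to {max(margins):.0%}")
```

If the resulting range is narrow, a point estimate is fine; if it swings materially, document the range instead.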

6. Document your assumptions

Every cost model has assumptions. Write them down so future-you (or someone else) understands what's in vs out.

Common things to document:

7. Set up monitoring

Once your allocation is stable, set up alerts:

You want to catch:

8. Add to growth reviews

Include margin metrics in your growth reviews:

| Metric | Notes |
|--------|-------|
| Total cost | From FinOps tool |
| Cost per unit | Total cost / volume |
| Gross margin | (Revenue - Cost) / Revenue |
| Cost trend | MoM change |

Healthy products have stable or improving margins as they scale. If margins are declining, investigate.

What to expect

Timeline: 2-4 weeks to get a reliable allocation, depending on how well-tagged your resources are.

Accuracy: First pass is typically 80-90% complete. Shared resources and edge cases take time. Directionally correct is fine; perfect is not required.

Keeping it current

Cost models rot. The two main causes:

Review your cost model quarterly at minimum, or whenever you ship significant infrastructure changes.

Common mistakes

Worked example

See the Session Replay unit economics RFC for a complete example of this process applied to a real product.

Contacts

Per-product growth reviews

Product | Source: https://posthog.com/handbook/product/per-product-growth-reviews

For products that have product-market fit and are generating revenue, we do monthly per-product growth reviews. We recommend doing the growth review at the start of the month, to review the previous month. Most growth reviews happen asynchronously, with the PM reviewing key metrics, analysing anomalies, and sharing an overview with the team in Slack.

Objectives

The objective of the growth review is to review key product metrics and understand changes that have occurred over the preceding four weeks. By reviewing metrics on a schedule, we can spot issues faster than when reviewing them only sporadically.

Looking at the same metrics regularly will increase our understanding of how they relate to each other and whether metric changes are expected or exceptions, and will make efforts to improve them more successful.

The growth reviews should focus on analysing anomalies instead of expected metric behavior, especially as teams become more familiar with their data.

Outside of the regular monthly reviews, it’s the job of the Product Manager to regularly monitor these metrics, becoming an expert in their nuances. Should a metric deviate from the norm, they are responsible for presenting a well-researched explanation during the review.

Contents

Recurring analysis

During these reviews, we assess both input and output metrics. Input metrics, serving as leading indicators, significantly impact output metrics like revenue and retention.

Here are some examples:

As mentioned before, we aim to analyse the same set of metrics month over month, so we can see trends and anomalies. However, there can be cases where we decide to change a metric if it’s a better indicator of long-term success, particularly for product activation and key product actions.

We’ve found that the best way to review what is a quite long list of metrics is to combine all numbers (revenue as well as usage) in one spreadsheet with a new column for each month, and only open individual graphs where required. Below is a screenshot that shows a part of our growth review document. View the internal growth review spreadsheet for internal users.

Image: Growth review template

Monthly focus areas

To make growth reviews more actionable, each of the three growth reviews per quarter should have a slightly different angle:

First growth review of the quarter (1 week in):

Second growth review of the quarter (5 weeks in):

Third growth review of the quarter (9 weeks in):

Deep dives

In each growth review, we usually do a couple of deep dives. Topics can be proposed in a preceding growth review, by team members, or it is simply something the Product Manager finds worthwhile. Here are a couple of examples:

In-sync or async?

While monthly metrics reviews are important, they are not always actionable. Or, sometimes a metric might be suboptimal, but we decide not to focus on it, because we have more important topics to work on. Since PostHog's culture leans towards no meetings by default, we are not meeting every month to review the metrics in-sync. For in-sync growth reviews, the following guidelines apply:

In-sync growth reviews are usually joined by the PM, the team lead and the responsible exec. Team members usually don't join the growth reviews, but a summary and the full analysis is accessible to everyone and shared via Slack.

Structure of the in-sync growth reviews

During the meeting

Before the meeting

After the meeting

Structure of the async growth reviews

Very similar to the above, except that the metrics walkthrough doesn't happen in a meeting. The PM shares a summary of the full growth review they prepared (key metrics, deep dives, anomalies and follow-ups) in the team's Slack with the team lead and responsible exec tagged. The whole team is encouraged to read up on the full notes & numbers that are linked, and ask follow-up questions.

Prioritizing work for mature products

Product | Source: https://posthog.com/handbook/product/prioritizing-work-for-mature-products

There's no golden rule or perfect framework for prioritization. Here are a few things we should keep in mind when building mature products (products with substantial revenue and usage).

What we look for in product managers

Product | Source: https://posthog.com/handbook/product/product-manager-hiring

This page outlines what makes a great product manager at PostHog: The traits, skills, and mindset we look for when hiring and developing PMs.

For how the role works day to day, see What product managers do at PostHog.

User closeness

We expect every PM at PostHog to be obsessed with users. Not just to enjoy talking to them, but to crave it. Great PMs feel uneasy when they or their team go too long without a real user conversation.

This obsession can show up in different ways: maybe in the past they’ve been a founder, a product engineer, a user researcher, or worked in another deeply user-facing role. The path doesn’t matter, but the curiosity does.

Talking to users is table stakes at PostHog. It’s a skill anyone who cares can learn quickly and keep refining over time, independently of their role and background.

A red flag is someone who’s worked in user-facing roles for years but shows little genuine curiosity or interest in understanding users.

---

Metrics ownership

Owning metrics requires two distinct capabilities:

Technical capability

PMs at PostHog are expected to do their own data analysis. They must be fluent in SQL and comfortable investigating metrics directly in our data warehouse. Additional experience in data modeling, analytics engineering, or building dashboards is a plus.

Depth of experience

Beyond technical skill, we look for PMs who have lived with metrics over time. Not just “churn was high, I ran five interviews, we fixed churn.”

We want PMs who have owned a product for months or years, stayed close to metrics like retention, churn, or revenue, and have gone deep into diagnosing and improving them.

Ideal candidates can share examples such as:

This experience is much harder to teach than talking to users, so we actively screen for it.

---

Product sense

Finally, PMs need strong product sense: The ability to recognize what makes a product feel powerful, intuitive, and cohesive.

This doesn’t mean micromanaging every design detail. It means having the judgment to know when the product experience is drifting away from what “feels right” and stepping in at the right level of detail.

Examples of how this shows up day to day:

Strong product sense means keeping a holistic view. Understanding not just what works, but what feels right to users and to PostHog’s product philosophy.

---

Hands-on with code (optional but valuable)

It’s not a requirement that PMs at PostHog know how to code, but it helps. PMs who can navigate a codebase, make small changes, or who have built small side projects often find it easier to empathize with engineers on their team and also with our target users (developers).

We also find that PMs who occasionally ship a small PR in their product:

They don’t need to be an engineer, but curiosity about how things work and the willingness to dive in and experiment is a strong advantage.

---

Culture fit

PostHog PMs need to combine strong opinions with deep trust in their team. We want PMs who:

For example:

Their job is to lead with context, to make a compelling case grounded in data and user insight. If the team decides differently, the PM assumes good intent and trusts that choice.

Ultimately, at PostHog:

We value PMs who show conviction where it counts, humility where it doesn’t, and trust in their team above all else.

What product managers do at PostHog

Product | Source: https://posthog.com/handbook/product/product-manager-role

This page explains what product managers do at PostHog: How the role works, what PMs are responsible for, and how they collaborate with their teams.

If you're looking for what we value in PMs and how we hire them, see What we look for in product managers.

The role at a glance

At PostHog, product managers (PMs) exist to bring clarity and context to their teams. They do not own roadmaps or dictate what to build next. Instead, they ensure the team deeply understands its users and its product’s performance, so that the right decisions happen naturally.

A PM owns the following three areas for their product and team:

Systems for user discovery & product thinking

Team metrics

We are following a defined format for growth reviews to make sure it's easy to compare performance across products. See Per-product growth reviews for more information.

Pricing

---

A PM shares ownership of the following areas with their product team. This doesn’t mean it's a lesser priority for a PM to own projects here. It simply means product engineers are equally expected to contribute.

User understanding

---

A PM supports the following areas for their product and team. In consultation with the team, they might own projects in these areas:

Everything else in the PM role builds on these foundations.

---

The product lifecycle and the PM's focus

The PM’s work evolves with the product. While the principles stay constant — context and clarity — the emphasis shifts by stage of the product.

Note on the table below: Especially in the early stages (1 & 2), a team usually doesn’t have a PM yet — product engineers own all product decisions.

Review this table with the context from “The role at a glance” (own/share/support) and “Deciding when to add a product manager to a team.”

| Stage | Goal | Key PM questions | Typical PM work | Example projects |
|-------|------|------------------|-----------------|------------------|
| 1. Idea → Beta | Get a first version of the product into users’ hands | - Who are we building for?<br/>- What is the 80/20 of features?<br/>- What needs to happen to get this into beta as quickly as possible? | - Lead or synthesize early user research<br/>- Define MVP scope and launch criteria<br/>- Shape initial positioning → who the product is for and why it matters<br/>- Coordinate beta launch activities and internal comms (beta program, website copy, docs) | — |
| 2. Beta → General Availability | Launch a product that users want to pay for | - What’s missing in the product to start charging for it?<br/>- How do we charge for it? | - Lead or synthesize early user research<br/>- Decide pricing model<br/>- Coordinate launch activities and internal comms (beta program, website copy, docs) | PostHog AI pricing, renaming “Max AI” to “PostHog AI” for clearer positioning |
| 3. General Availability → Growth Stage | Strengthen and expand adoption | - Are users truly getting value?<br/>- Where are they dropping off or getting stuck?<br/>- What drives retention and revenue growth? | - Regularly review usage data, churn patterns, and feedback loops<br/>- Run interviews and other user research to understand user sentiment and evolving needs<br/>- Research “best in class” competitors to highlight gaps<br/>- Identify the biggest opportunities for improvement or expansion<br/>- Frame problems clearly so the team can decide autonomously what to build next<br/>- Collaborate on pricing adjustments or repositioning as understanding deepens | User interviews on error tracking, Surveys pricing change, Research into needs of data engineers after re-positioning of Data Warehouse / Data Stack |
| 4. High Revenue / Mature Product | Sustain growth and manage complexity | - What’s limiting growth now?<br/>- What new segments, features, or pricing models allow us to sustain a 9% MoM revenue growth rate? | - Track advanced revenue and usage metrics (activation, retention, revenue retention)<br/>- Share context and opportunities in identified problem areas with the team<br/>- Run “conviction sprints” to refresh understanding of ICP and persona(s) as they relate to the product<br/>- Ensure product strategy stays grounded in real user and business outcomes | — |

---

Deciding when to add a product manager to a team

Because most engineers at PostHog have strong product skills, many early-stage products don’t need a PM right away (stages 1 & 2).

At this stage, it’s typically the team lead’s job to decide:

  1. What to build
  2. When to release the first version
  3. How to charge for it

...with feedback and guidance from the Blitzscale team.

There are a few exceptions: When a team is working on an infrastructure-heavy product, it’s more important that the team lead has strong engineering and infra skills than product skills. In that case, it can make sense to add a PM early who can focus on product direction, positioning, and user research while the lead focuses on technical execution.

In most cases, though, we add a PM after a product has launched and started generating revenue (stage 3). Once a product is live, product opportunities multiply, and it becomes the PM’s job to surface what matters most, connect user feedback with metrics, and ensure the team’s effort goes where it has the biggest impact.

Ultimately, it’s up to the Blitzscale team to decide when adding a PM will help the team ship faster and focus more effectively. When that becomes true, it’s time to bring in a PM.

---

Collaboration and decision-making with the team

PostHog is an asynchronous and autonomous organization. PMs do not “own” the roadmap or feature priorities. The closest thing we have to a “product owner” at PostHog is the team lead.

Instead:

Product management at PostHog

Product | Source: https://posthog.com/handbook/product/product-team

PostHog has a product-minded engineering organization. Engineers own sprint planning and spec'ing out solutions.

So, what is the role of product managers at PostHog? PMs set context across multiple products for how products are being used, what the competitive landscape is like, what users are feeling about PostHog, and how they're using things.

Among other things, they

  1. run growth reviews for products that have product-market fit
  2. organize user interviews
  3. coach product engineers on "how to do product"

For a more in-depth look at the product role at PostHog, see What product managers do at PostHog.

How PMs work

Small team membership

Each PM belongs to a small number of our small engineering teams, so that all teams have a strong sense that the PM is there to support them equally. This also ensures that the PM has the time to dive deep into issues that require it. PMs join small team standups and planning whenever it makes sense, but they are not required to attend _all_ team meetings. It is up to the PM to decide when joining makes sense and when their time is better spent elsewhere.

Here is an overview of which PMs currently work with which teams:

<legend>Anna Szell</legend>


<legend>Annika Schmid</legend>


<legend>Cory Slater</legend>


<legend>Abe Basu</legend>


<legend>Mike Warren</legend>


<legend>Product teams with no PM currently</legend>


Product goals

Product managers primarily support their teams in reaching their goals. The top two priorities of each PM are to run a growth review at the beginning of every month for each of their products, and to organize regular user interviews. (Our rule of thumb is 1 interview per week per PM).

The quarterly per-product planning typically highlights the biggest blind spots a team or product has (e.g. metrics or parts of the product that we think have potential but don't yet have enough context on). Teams are encouraged to include their "biggest unknown" as a research goal for the PM to own as part of their quarterly goals. Findings should be shared asynchronously via a GitHub PR in the product-internal repo, and in growth reviews or team standups where applicable.

To keep track of their projects across teams, PMs should track their personal quarterly goals transparently somewhere, for example in the public PostHog Meta repo.

As the PM team, we are usually also pursuing a couple of side projects each quarter with the goal of leveling up how we do Product at PostHog.

In Q2 2026, we are working on the following themes:

“Business-as-usual” themes:

Releasing new products and features

Product | Source: https://posthog.com/handbook/product/releasing-new-products-and-features

This guide walks you through the full lifecycle of releasing new products and features at PostHog, from initial planning to general availability.

For complete step-by-step checklists when creating a new product, use the new product RFC template.

Overview of the product lifecycle

New products at PostHog go through four phases:

  1. Setting up - Initial planning and alpha development behind a feature flag
  2. Alpha - Slowly adding customers you've spoken with to the feature flag
  3. Beta - Opening up to all users who want to opt-in
  4. General availability (GA) - Full launch with pricing and marketing

PostHog includes a variety of early access features in the feature previews section of a user's settings page, as well as a roadmap of feature previews which are coming soon.

Items in the feature previews section can be toggled on or off if users want to try a feature out. Items in the coming soon section enable users to register their interest so that we can contact them with updates. Both sections work only at the user level, not at the org or project level.
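Conceptually, the settings page splits a user's early access features into the two sections by stage. The sketch below illustrates that split only; the types and the `splitFeaturePreviews` helper are hypothetical, not the PostHog app's actual implementation:

```typescript
// Hypothetical shape of an early access feature, for illustration only.
interface EarlyAccessFeature {
  name: string;
  stage: "concept" | "alpha" | "beta";
}

// Split features into the two settings-page sections described above:
// alphas/betas can be toggled on or off, while concepts only collect
// registrations of interest.
function splitFeaturePreviews(features: EarlyAccessFeature[]) {
  return {
    featurePreviews: features.filter((f) => f.stage === "beta" || f.stage === "alpha"),
    comingSoon: features.filter((f) => f.stage === "concept"),
  };
}

const sections = splitFeaturePreviews([
  { name: "New query editor", stage: "beta" },     // made-up example
  { name: "Messaging", stage: "concept" },         // made-up example
]);
```

In the app this is driven by early access feature flags; moving a flag's stage from `concept` to `beta` is what moves an item from the roadmap into the toggleable section.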

Please refer to the RFC for what the actual steps are. Duplicating them here would cause them to go out-of-sync extremely quickly. We'll simply explain the rationale behind each of the stages.

Phase 1: Setting up a product

Adding items to the coming soon menu early offers several advantages. It enables us to gauge interest in a new feature via sign-ups, equips our marketing teams with news they can promote to users, and ensures that betas can have sample users ready from the moment they launch.

Coming soon features can be large or small, so use your judgement about what will interest users – but it should be something you expect to work on in the next 3-6 months.

Phase 2: Alpha

During alpha, you're testing with a small group of customers you've specifically invited. It's fine to have bugs, and your testers know that's the case. You're also actively working on fixing all known bugs before moving on to an opt-in beta.

Phase 3: Beta

Beta is when you open up the product to all users who want to opt-in. Betas do not need to have been in concept stage first.

Once you are ready to move an item from the coming soon roadmap to a beta which users can interact with, update the stage from concept to beta (or alpha). This triggers an automatic notification to all subscribed users letting them know that the beta is available. Users who registered interest during the Concept stage can then opt in to enable the feature.

Make sure your early access feature flag includes a product_key on the payload field to give people access to the product in their sidebar. Check the new product RFC for more details.
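As an illustration of that requirement, a flag payload with a `product_key` might look like the sketch below. The `product_key` field comes from the RFC requirement above; the example value and the `hasProductKey` guard are made up for illustration:

```typescript
// Hypothetical early access feature flag payload. Per the RFC, the payload
// needs a product_key so the product appears in the user's sidebar.
const flagPayload: Record<string, unknown> = {
  product_key: "error_tracking", // made-up example value
};

// Simple pre-ship check: does the payload actually unlock a sidebar product?
function hasProductKey(payload: Record<string, unknown>): boolean {
  const key = payload.product_key;
  return typeof key === "string" && key.length > 0;
}
```

Checking this before opening the beta avoids a flag that enables the feature without ever surfacing the product in users' navigation.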

Beta requirements

A beta doesn't need to be perfect, but it should provide value to the user and have base elements of functionality. It doesn't need to be feature complete, but it should provide more than a mocked up front end. We aim not to leave items in beta unless they are in active development. All betas should be clearly documented.

Betas do not need to be performant for high-volume users and can have big bugs, but should be clearly marked as such in the UI.

<CloudinaryImage src="https://res.cloudinary.com/dmukukwp6/image/upload/goodbeta_daa2ddca2a.png" alt="An example of a good beta" className="dark:hidden" /> <CloudinaryImage src="https://res.cloudinary.com/dmukukwp6/image/upload/goodbeta_dark_1dd8b2e833.png" alt="An example of a good beta" className="hidden dark:block" /> Betas should include a title, description, feedback button, payload with product_key and link to basic docs

All betas should follow the best practices below in order to provide a minimum amount of information and usability for customers.

Product teams are responsible for writing documentation, but help is available if needed. Titles, descriptions, and links can be added using the early access menu.

It's helpful to let the Marketing teams know when new betas are added. They'll then add the beta to the changelog, organize any marketing announcements, plan a full announcement for full release, create an email onboarding flow to help you collect user feedback, and anything else you need. You can let them know via the Marketing Slack channel.

Collecting beta feedback

Teams are encouraged to collect feedback from users in current betas so that they can build better products and we have some automations in place to facilitate this.

After a week in any new beta, users automatically receive an email from the beta-feedback@posthog.com Google Group asking, essentially, for any suggested changes to the beta. By default, all team leads and exec team members are in this Google Group and get daily digests of responses. Others are invited to add themselves to the group or change their notification settings.

Regardless, emails to this Google Group will sync to the PostHog Feedback Slack channel for general awareness. Team leads are encouraged to respond to beta feedback emails.
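The trigger described above boils down to "a week after a user first appears in a beta, send them the feedback email". A rough sketch of that selection step, with entirely hypothetical names (the real automation is internal and not public):

```typescript
// Hypothetical enrollment record for a user in a beta.
interface BetaEnrollment {
  email: string;
  enrolledAt: Date;
}

const WEEK_MS = 7 * 24 * 60 * 60 * 1000;

// Select users who have been in the beta for at least a week, i.e. the
// candidates for the beta-feedback email on this run of the job. (A real
// job would also track who has already been emailed, omitted here.)
function dueForFeedbackEmail(enrollments: BetaEnrollment[], now: Date): BetaEnrollment[] {
  return enrollments.filter((e) => now.getTime() - e.enrolledAt.getTime() >= WEEK_MS);
}
```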

Teams can collect additional feedback if needed, and help is available with creating feedback emails or funnels.

Phase 4: Launching to general availability

Once a beta is mature enough, you may want to launch it into general availability (GA).

If you're planning to launch your product in a specific quarter, you MUST let the Marketing team know at the start of the quarter.

Smaller features which don't require major announcements should be announced internally via the Tell PostHog Anything channel so other teams are aware.

You can set the feature flag to release to 100% of users BEFORE the Marketing launch; you don't need to wait for it.

See product announcements for marketing requirements during launch.

How do I work with marketing and billing teams?

The short version here is to try and give other teams as much notice as possible when starting a launch cycle. Marketing and billing teams typically ask for two weeks of notice before a major launch, as a minimum. It's the responsibility of the team lead to ensure these teams are aware of upcoming launches.

Who's responsible?

The Team Lead is typically responsible for:

Team members can be assigned specific tasks within the RFC checklist.

User feedback

Product | Source: https://posthog.com/handbook/product/user-feedback

😍 Want to share feedback? File a GitHub issue or reach out directly. We're always happy to hear from you!

We actively seek (outbound) input in everything we work on. In addition to having multiple channels to continuously receive inbound feedback, we generally do active outbound feedback requests for:

Feedback call process

Recruiting users

Ways to invite users for an interview:

Email writing inspiration

When crafting user outreach, just put yourself in the shoes of the person about to receive the message. How can you help each other by getting on that quick call?

Here's an example of an email from a real project, crafted by Michael Matloka to learn about the problems of top users of the PostHog AI beta:

Subject: Quick chat about your PostHog AI experience?

Hey $FIRST_NAME, Michael from PostHog engineering here!

I'm focused on improving PostHog's AI features, and I saw you've been using our AI assistant.

I want to make PostHog AI 10x better for you – and it'll be a gamechanger to hear about your personal experience with it.

What do you say about a 30min chat about your product-building workflow, this week or in a couple of weeks? I promise in return you'll get a better tool for your job, plus $40 of PostHog merch. :)

Feel free to pick any time that suits you in my calendar at $CAL_DOT_COM_LINK, or send me your own calendar! I'm excited to hear from you.

$GMAIL_SIGNATURE

Scheduling

During the call

After the call

  1. If you used BuildBetter, the tool will automatically generate a summary for you under the recording. We recommend checking this, and adding any additional thoughts, because the AI can sometimes pick up things incorrectly. You can also generate a doc using the platform, where you can give very specific prompts for the outline of the summary.
  2. We also want to keep recordings easily identifiable, therefore please rename the recording to [topic of the interview] user interview with [first name of the user], e.g. Web analytics user interview with Joe.
  3. In case recording wasn't possible, add the notes to the [Google Doc][feedback-doc].
  4. Share a short summary of the user interview in the #posthog-feedback Slack channel.
  5. If the user reported specific bugs or requested specific features, open the relevant issues in GitHub. Be sure to link to their person profile in case our engineers need more context when scoping/building.
  6. Generate the reward for the user (see below).
  7. Most of the time, the reward will be a gift card for the PostHog merch store. If it's the case, create the gift card in Shopify.
  8. Follow-up with the user. Send any applicable rewards, links to any opened GitHub issues, and answers to any outstanding questions.

Rewards

We strongly value our users' time. As such, we usually send a small gift of appreciation. We have the following general _guidelines_, but just use your best judgement.

Instructions on how to create gift cards can be found in the merch store customer section.

Repositories of information

We keep a log of user feedback in the following places:

Wherever feedback happens, share it

Any PostHog team member may receive feedback at any time, whether doing sales, customer support, on forums outside of PostHog or even friends & family. If you receive feedback for PostHog, it's important to share it with the rest of the team. To do so, just add it to the #posthog-feedback channel.

To ensure feedback durability and visibility, the #posthog-feedback channel should not be used as the primary source of <i>storage</i>. Please add the feedback to the main Google doc.

We strongly recommend that everyone joins at least one user call per month. Regardless of your role, you will always benefit from staying in the loop with our users and their pain points.

[feedback-doc]: https://docs.google.com/document/d/1762fbEbFOVZUr24jQ3pFFj91ViY72TWrTgD-JxRJ5Tc/edit

[recordings]: https://drive.google.com/drive/folders/1kmhj0GMAZTjVauN8JJKs_U7BgaD7XnUJ?usp=sharing

In-Person Customer Visits

Product | Source: https://posthog.com/handbook/product/visiting-customers

Right now, PMs conduct a lot of remote interviews with customers about their specific products to bring context to their teams. As the number of PostHog products grows, and as customers increasingly use multiple products together, small teams risk developing a siloed view of how our customers actually use PostHog.

This matters because:

One partial solution is to go meet customers in person.

The following is a guide to share what has worked, and what hasn't for others who might want to try this out.

Setting up meetings

  1. Create an open flexible script.

Develop a small, flexible set of questions related to your product area, but keep them broader than your typical product interview. Leave space to understand organizational dynamics, team workflows, and feedback across PostHog’s full product suite.

Feel free to use interview time to watch customers use PostHog directly, and ask them questions about why they take specific approaches as they navigate around the product.

  2. Pick a metro hub, and research potential customers.

Choose a city with a good number of PostHog customers. Aim to identify ~12–15 customers across different sizes and maturity levels in that region. 15 may sound like a lot, but you'll likely only end up talking to about 30% of them.

You should do deep research on each account – take notes on which products and features our customers are using in Vitally (and, of course, in PostHog itself), note who at those companies is using specific features, and have a look at a few session recordings. Refer back to these notes before meeting the customer in person so you are informed.

  3. Coordinate with Sales and Customer Success.

Post in the relevant Sales and Customer Success Slack channels about your plans. Tag the account owners for the customers you’d like to visit. Ask if it’s a good time to reach out, whether they have additional context, or if there are other relevant customers or prospects to reach out to as well.

Hey @[sales/cs members] I’m visiting [City] in 2 weeks and would love to meet some of our customers in person. I was thinking of reaching out to [customer a], [customer b], [customer c]. Is now a good time to chat with them? Is there any other company who uses [product] that would be good to visit?

  4. Join customer Slack connections.

Introduce yourself in each shared channel. A good intro might look like:

👋 Hey everyone! I’m [Your Name] from the PostHog product team — I’m visiting [City] next week and would love to meet some of our customers in person. If you’re up for a chat over coffee or lunch to share feedback on how you use PostHog please let me know

  5. Have your Sales owner tag relevant people within the customer org in that Slack thread. When the account owner directly tags people, the response rate increases significantly. Here are some examples of what worked:

@[relevant customer member] any ideas on who from [customer] may be around and interested in this? We find this kind of thing to be pretty mutually beneficial for us to learn about your needs, but also to help shape our offering as well

@[relevant customer member], [Your Name] from the product team is visiting [City] next week. They can visit and help your team better get better set up with [products]

cc @[relevant customer member]

  6. Schedule meetings.

Send calendar invites to everyone who responds positively. (We have typically found that about 30% of outreach results in a meeting.)

  7. Remind participants.

Post a friendly reminder in the thread a day before each meeting.

Conducting meetings

  1. Be flexible.

Some meetings will last 30 minutes, others 2 hours. Lunch-time slots often work well: you can grab food together, build rapport, and then dig in. If other PostHog team members are free in the area and want to come along, feel free to bring them too.

  2. Bring merch.

Small things like hats or shirts go a long way. Drop a message in #merch and Kendal can help you place an order.

  3. Structure the time.

An effective structure we found was:

After the meetings

  1. Follow up.

Thank them for their time, and if the customers had questions you could not immediately solve in your in person meetings, message and tag employees at PostHog who could help.

  2. Reflect and share.

You should walk away with a much clearer view of PostHog from your customer’s perspective — not just how they use your product, but why they use PostHog, what types of questions or jobs they are trying to complete with PostHog, and how they use PostHog as a whole.

Share this in the posthog-feedback channel or somewhere similar. Feel comfortable sharing a longform write-up, and tag people and teams that are relevant. Here's an abbreviated example:

I had the opportunity to meet multiple customers in person over the last few weeks. I went in with the intention of focusing on data pipelines and messaging. However, I was open to receiving feedback on our entire product suite.

Two interesting customers were [customer a] [customer b]

[customer a] does xyz, and currently uses product analytics, session replay, data pipelines, and feature flags. I was able to talk to [customer employee name], who is the head of engineering, as well as another employee who ran a business unit. They were the power users.

They use product analytics in two main ways:

1. Business review. They had a high-level dashboard that was set up earlier, and the leadership team would view it every single week, tracking changes in things like MAUs and other business-critical product KPIs.

2. Ad hoc product insights. If the GM of one of the product lines had a question that popped up in his mind, he would go to PostHog to try to answer it first.

They used insights and dashboards and were pretty comfortable with breakdowns and some of the more advanced features. Interestingly, however, they did not use SQL queries – they were not even aware that SQL was available.

Data pipelines was used to inform colleagues of high-value customer interactions in the product. This was done via a Slack destination. This is a very specific job to be done that I've seen in a number of other companies as well.

They were particularly interested in organization-level views, specifically:

- Organizations who have completed key events

- Organizations who have not completed key events

The primary complaint and frustration that they had with PostHog product analytics was that it was very difficult to search for people who have not done things or organizations where things have not occurred.

I was surprised that both [customer a] and [customer b] did not know you can use SQL insights with product analytics @product-analytics-folks

One other thing to note: both customers also did not know where they could find a list of all events and their definitions (I showed them the data management tab). One customer commented it would be good if AI automatically added a description of what each event actually signifies, and if there were easy ways to delete old events. Data governance came up a lot, even with these mid-sized companies.

Brand

Strategy | Source: https://posthog.com/handbook/strategy/brand

Brand matters to us, greatly. It's one of the four major reasons people recommend PostHog, so it directly helps us grow. Everyone else is largely terrible at it, so it's a massive opportunity to build a long-term advantage as a company – and frankly, it's fun. Brand is every interaction we have with our users and comes from how the company itself is designed. It's more than hedgehogs:

The harsh truth of cat videos

When it comes to attention on the internet, you are competing with cat videos and TikTok, _not_ B2B SaaS competitors. Be realistic - if it's not actually funny (and it's "corporate try hard") then it's not good enough. At one point we realized we were getting cutesy - "ooh a hedgehog". That's not interesting enough for people outside PostHog, even if we think it's cool.

It is thus encouraged to be rogue / sarcastic / meme-y / unhinged / weird.

Our competitors are (i) more defensive and self-interested in their approach (focused on optimizing revenue growth), and (ii) more boring. Let's keep it that way. If we have fun, we'll stick it out longer and will win in the long term.

Brand first

We should always optimize to not piss users off unless they're being totally, extremely unreasonable, in which case figure out how to be the bigger person. Even when that costs us revenue.

For example, we should refund customers when they screw up their tracking and get a shock bill.

Pavlovian merch response

Give it out to people who say nice things about us. That'll create an army of developer warriors fighting for PostHog on the internet!

Breaking bad news

Sometimes you may need to tell customers something they don't want to hear - e.g. "we don't have X planned in our roadmap". Instead of a vague "I'll share this feedback" type response, be specific and give context like "Hey we don't have that planned because we're focused on X, Y, Z at the moment. If you want to suggest it to the wider team, you can do so by X".

Karma

Be helpful to other companies. We are here to increase the number of successful companies in the world – especially those with high potential that are putting in the work, like YC current batch ones. For example, if a YC company reaches out, take them seriously and buy their product (if it's genuinely valuable and safe to do so) or give direct feedback if not.

Hacker News premortem

Hacker News is a very intensely logical and critical place – in a good and bad way. If you are doing something think, "How would this go down on Hacker News?" If the answer is "poorly" then change it. This rule of thumb applies to everything, not just stuff literally getting posted there.

Brand assets

We keep a comprehensive list of brand assets and guidelines for their use on the dedicated brand assets page.

Customer support

Support | Source: https://posthog.com/handbook/support/customer-support

You can build a good company by focusing on getting lots of customers. To build a great company, you must delight your existing customers. This means that the journey doesn't simply end once we sign up a user - even more important is to ensure that PostHog is consistently delivering value for them.

How we ensure amazing customer support

It's easy for customers to reach us

We have a few different routes for users to contact us. As an open source company, our bias is towards increasing the bandwidth of communication with our users and making it easy for them to reach us through a clearly defined, simple set of channels.

These are the ways in which customers can currently reach us:

Sometimes, people reach out to us with support issues on Twitter/X. Regardless of whether someone reaches out to your personal account or to the company account, the broad approach should be as follows:

  1. Check first if they already have a ticket in Zendesk (either in-app or via /questions). There is nothing more annoying for a user than being asked to create a support ticket when they already have one. If you don't have Zendesk access, ask someone in CS.
  2. If no tickets exist, explain that we can't provide support over social media and ask them to create a support ticket within the app - this is _much_ better than trying to solve their problem over Twitter as Zendesk pulls in a bunch of contextual information and is easier to collaborate in. Do this from the PostHog Twitter account - otherwise you will get personally contacted every time this user wants help.
  3. If yes, say that we can see their ticket and reassure them that all tickets are triaged and responded to. Let CS know that you have done this. Again, use the PostHog Twitter account.

Your objective should be to get the conversation into Zendesk ASAP, because it's easier to help the person there, and it avoids setting a precedent that complaining visibly on social media results in an expedited response. An exception to this rule is if you are engaging with someone who has provided general feedback about PostHog - feel free to use your personal account if someone has a feature request or similar. If a user engages in a way which causes you _any_ distress, you can skip all of the above and just highlight it in Slack for CS to deal with.

Sometimes users ask about the progress of certain issues that are important to them on GitHub. We don't consider GitHub to be a proper 'support' channel, but it is a useful place to gauge the popularity of feature requests or the prevalence of issues.

Support is done by actual engineers

All support at PostHog is done by actual, full-time engineers. We have two types of engineers:

What do Support Engineers do?

Right now, support engineers provide the first level of support for the following teams:

Support engineers respond to and solve as many tickets as they can for these products, or escalate tickets to the appropriate product engineer if needed. For all other products, the engineers on those teams are directly responsible for support. The support runbook is maintained on the Support Hero page.

When we hire new support engineers they will usually spend the first few weeks focused just on product and web analytics tickets, until they've started to build more familiarity with the platform as a whole.

What do Support Heroes do?

One person on each product team takes on the Support Hero role each week. This is a rotating responsibility, where the person involved spends a significant chunk of their time responding to support queries across Slack, email and Zendesk, and sharing that feedback with the team and/or building features and fixes in response. We find each stint as Support Hero throws up a lot of really valuable feedback.

Response targets, SLAs, and CSAT surveys

Response Targets

We have a high volume of tickets and we're a small team, so we're not able to respond to all issues equally. For this reason we prioritize tickets according to the customer's plan. We set a response target for each plan so that we can be sure that tickets are being handled effectively.

Note that tickets are automatically prioritized in Zendesk and users are updated with information about response targets to set appropriate expectations. In all cases, tickets are routed to the appropriate team and that team is responsible for meeting the response target.

The response times listed below are targets for an initial response, and it's possible we will respond faster. These targets are listed in calendar hours Monday - Friday. Please note that we do not offer any level of weekend customer support.

| Plan level | Target response time |
|-----------|----------------------|
| Free | Community support only |
| Pay-as-you-go | 72 hours |
| Boost | 48 hours |
| Scale | 24 hours |
| Enterprise | 8 hours |

Within Zendesk, we will further prioritize tickets based on their selected severity. If you come across a ticket that doesn't have the severity set appropriately according to our severity level guidelines, then you should update the ticket with the appropriate severity level.

As a general rule, we aim to prioritize customers who pay for support, or who are otherwise considered a priority customer, to ensure they get the best possible support experience.

_NOTE:_ If a user has recently upgraded to the Enterprise plan, their tickets may not automatically be tagged as Enterprise in the PostHog Priority field in Zendesk. If this happens, manually set the Priority field to Enterprise to ensure they get in the proper queue.

Follow-up / next reply response targets

Our follow-up response targets and next reply targets are the same as the initial response targets. We believe that customers should receive regular updates on the status of their query - even if the update is that we're working on it and there's nothing meaningful to report at present.

Escalated ticket response targets

When support engineers need to escalate issues to other engineering teams for deeper investigation, the investigations can take longer but we should still check in with the customer to let them know! For escalated tickets, our response targets are the same as for all other tickets.

_NOTE:_ The targets are for a reply to the user. If the escalation turns out to be a bug or feature request, the reported issue doesn't have to be solved by the response target date - we just need to reply to the user. That reply may be to let them know it won't be fixed right away, but that we have opened a bug report or feature request. If we've opened a feature request or a bug report, you can refer the user to the GitHub issue for updates and Solve the ticket. If you're replying with info that should resolve the issue, leave the ticket in a Pending state (it will be auto-solved in 7 days if the user doesn't reply). If the user has replied to confirm the issue is resolved, Solve the ticket. Use On-Hold sparingly, e.g. if you intend to get back to the user in more than a week but less than a month.

CSAT Surveys

We send out CSAT surveys after a ticket has been closed for at least 3 days using this Automation. The emails contain a link to https://survey.posthog.com/ with their distinct_id, ticketId, and the assigned team as query parameters, which are being used alongside their satisfaction rating to capture a survey sent event. The code for the survey website is in the PostHog-csat repo and the responses can be viewed in this dashboard.
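The survey link construction described above can be sketched as follows. The `distinct_id` and `ticketId` parameter names come from the text; the name of the assigned-team parameter is an assumption, since the handbook only says the team is passed as a query parameter.

```javascript
// Sketch of the CSAT survey link described above.
// The `team` parameter name is an assumption - the handbook only states that
// the assigned team is passed alongside distinct_id and ticketId.
function buildCsatSurveyUrl(distinctId, ticketId, team) {
  const url = new URL('https://survey.posthog.com/');
  url.searchParams.set('distinct_id', distinctId);
  url.searchParams.set('ticketId', ticketId);
  url.searchParams.set('team', team);
  return url.toString();
}

console.log(buildCsatSurveyUrl('user-123', '45678', 'support'));
// e.g. https://survey.posthog.com/?distinct_id=user-123&ticketId=45678&team=support
```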

As an incentive, we offer to feed one hedgehog for every survey sent. Ben Haynes is the current holder of the hedgehog feeding rights, and takes care of this by making a quarterly donation to the Suffolk Prickles Hedgehog Rescue Charity.

Guidelines for doing support at PostHog

Dealing with difficult or abusive users

We very occasionally receive messages from people who are abusive, or who we suspect may have a mental illness. These can come via the app, or Community Questions. We do not expect support engineers to deal with abuse of any kind, ever.

If this happens, notify Charles Cook, Abigail Richardson or Fraser Hopper. They will either take this on, or advise you on how to reply.

We very rarely receive messages from people wishing to make a legal claim against PostHog, such as cease and desist letters. These can come via the app, or Community Questions. Do not respond to these requests. Instead, notify Charles Cook or Fraser Hopper immediately. They will either take this on, or advise you on how to reply.

Dealing with billing issues

Issues related to billing are handled exclusively by our billing engineers. Billing support is currently led by Eleftheria Trivyzaki. Most tickets get routed directly to the billing team, but some issues require technical investigation before the billing issue can be resolved. In such cases, add Eleftheria Trivyzaki as a follower to the support ticket from the outset, and leave an internal note briefly explaining what will eventually be required. Complete whatever technical investigation is needed, then let the customer know you are handing them over to the billing team.

Users asking for demos, consultations or partnerships

We often receive requests for demos, consultations or other sales-related requests. Most of the time these can be escalated to the Sales team if they arrive via Zendesk. If they arrive directly via email you can forward them to sales@posthog.com.

We also often get requests for partnerships, backlinks, or messages trying to sell us baby Yamaha pianos. Sometimes, people want to invest in PostHog. Most of these are obviously spam and can be ignored, but if you think an opportunity may be genuine then you can forward it to Joe Martin so he can take over.

Users asking for their data to be deleted

Most of the time users can self-serve deletion requests and should be encouraged to do so in order to save time and ensure they take responsibility for deleting their own data. Users can delete their environment, project, and organization in the appropriate 'Danger Zone' section of their settings page if they have the correct permissions. Admins can remove members from their organization in the Members page.

If a user refuses to delete their own data, you must first confirm they have the permissions to do this by checking that their email address matches that of an organization admin. As an extra layer of security, you should also ask them to confirm their address by emailing you directly from it (i.e. not through Zendesk). Only then should you delete any data on their behalf.

If a user asks for us to delete all of their _personal_ data in compliance with GDPR, you should confirm their identity as described above and delete the user from PostHog. Finally, you should notify Joe Martin so he can delete customer data from our email marketing systems, and Fraser Hopper so he can coordinate further data deletion across our systems.

Targeted deletion requests

Occasionally users will mistakenly share sensitive data which should not have been shared via event/person properties. As such they wish to be more targeted in their deletion by removing only certain properties or events instead of an entire project.

Before taking any deletion action, they should ensure that they are no longer sending the sensitive data to us either by redacting information client-side or setting up a CDP transformation. If they don't do this first they will continue to send us the sensitive data even after deletion is actioned.

Due to the nature of how our infrastructure works, events and properties cannot be amended once they are stored in ClickHouse. As such, the only way to remove sensitive data is to delete the person profile associated with the events where the sensitive data has been captured. This can be achieved in the app or via the API. As per our deletion docs, the person profile will be removed immediately, but the events will take some time (days or even weeks) to be removed.
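For the API route, the request shape looks roughly like the sketch below. The endpoint path and the `delete_events` parameter are assumptions based on PostHog's public deletion docs, so verify against the current API reference before using it; the project and person IDs are placeholders.

```javascript
// Hedged sketch of deleting a person profile (and their events) via the API.
// The endpoint path and delete_events parameter are assumptions taken from
// PostHog's deletion docs - verify against the current API reference.
function personDeleteRequest(host, projectId, personId) {
  return {
    url: `${host}/api/projects/${projectId}/persons/${personId}/?delete_events=true`,
    method: 'DELETE',
  };
}

const req = personDeleteRequest('https://us.posthog.com', 12345, 67890);
console.log(req.url);
// Actually sending it requires a personal API key with the right scope, e.g.:
// await fetch(req.url, { method: req.method, headers: { Authorization: `Bearer ${apiKey}` } });
```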

If they aren't using person profiles, they won't be able to use this method and as such will need to revert back to deleting the entire project containing the sensitive data.

For customers spending $20K and above a year, our ClickHouse team may be able to craft a more targeted event deletion/property amendment query. There are no guarantees here, and it is very time consuming, which is why we will only explore this for high-paying customers. If you have a customer in this situation and the above methods won't work for them, escalate a support ticket to the ClickHouse team with as much detail as possible on the event and property names where the data is leaked, so that they can create a query to process the deletion. To expedite this, ask the customer for a SQL query which correctly identifies the events or properties to be deleted, or help them craft one. Also verify that the numbers returned by this query match what the customer expects to see. Once started, this can also take some time (days or weeks), so you should set those expectations with the customer.

If they need to remove data immediately, the only way to do this is to delete the project. There are no other alternatives.

Handling sales leads

If a support ticket should be handled by one of the sales/onboarding teams, use the Create a lead macro in Zendesk to respond to the customer. The macro adds the sf-lead tag to the ticket, which will automatically create a new lead in Salesforce. This automation is documented in the Sales area of the handbook.

Community

Support =/= community - we consider them to be separate things.

Tutorials

We want to help teams of all sizes learn how to ask the right product analytics questions to grow their product. To help, we create content in the form of tutorials, blog posts, and videos.

We've also created a bunch of useful templates that cover many of the most popular PostHog use cases.

Support team incident response

Support | Source: https://posthog.com/handbook/support/support-incident-response

When things break, we need to make sure users know what's happening and feel supported through it. This page covers how the support team handles incidents - what we do, when we do it, and how we stay aligned with engineering, marketing, and sales.

Raising an incident

Anyone can and should raise an incident if they suspect there is one. This includes support team members. When in doubt, always raise an incident - it's much better to declare something that turns out not to be an incident than to miss a real one.

Declaring an incident doesn't trigger any external notifications. It just creates an incident channel and alerts the right people internally.

If you're seeing multiple tickets about the same issue, or if something seems seriously broken, type /incident in any Slack channel to declare one. See the full guide on raising an incident for more details.

Once you've raised the incident, you should raise your hand to watch it from a support perspective, or actively hand it over to someone else on the team.

Your role during an incident

When an incident is declared

When an incident gets declared, our incident.io workflow automatically posts to #team-support. This post asks for someone from support to raise their hand and take ownership of watching the incident. All members of the support team are responsible for making sure that an incident has a Support Watcher assigned during business hours.

Support team members aren't automatically added to incident channels. You can keep an eye on #incidents for an overview of what's currently open. When you raise your hand in #team-support to watch an incident, join the incident channel using the link in the workflow post.

When you join the incident channel, you'll be automatically assigned the Support Watcher role in incident.io. This makes it clear and visible to both the support team and the incident team who is managing the incident from a support perspective.

If nobody from support joins the incident channel, the incident lead will get a nudge reminding them to assign the Support Watcher role, along with a note that support only watches incidents during business hours.

If you're online and available during your normal working hours, raise your hand on that thread. This is informal - it's just whoever can do it. If nobody responds after a few minutes and you're around, go ahead and volunteer even if you're in the middle of something else.

We don't have on-call support coverage. You're only expected to raise your hand for incidents during your normal working hours. If an incident is declared outside of working hours, support tickets will either need to wait until support working hours resume, or be handled by the @on-call-global person from engineering.

Once you've raised your hand and joined the incident channel, you'll be assigned the Support Watcher role. You own:

Your job isn't to fix the incident. Your job is to be the bridge between the incident response and the support team, and to make sure users opening tickets get accurate information.

Using the status page

The status page is our source of truth during incidents. The incident lead is typically responsible for keeping it updated, but as the support team member watching the incident, you should make sure the messaging is clear and customer-friendly.

Review the status page messaging when it's updated to ensure:

If the status page update is too generic or unclear, work with the incident lead to improve it. You can update it yourself using /incident statuspage (/inc sp) in the incident channel.

Good status page messaging:

Feature flags are being returned but with 30-60 second delays instead of the usual <1 second response time. All other PostHog features are operating normally.

Unclear status page messaging:

Elevated errors in the feature flags service.

Always link to the status page in ticket responses. Users should be able to check it themselves for updates rather than having to ask us every hour.

Creating a macro for the incident

If the incident is likely to generate multiple support tickets (most incidents do), create a macro so the whole team can respond consistently. To create the macro:

  1. Clone the Incident information macro as your starting template
  2. Look at the incident number in incident.io (e.g., INC-123)
  3. Add the tag incident/[number] to the macro (e.g., incident/123). This ensures all tickets using this macro are automatically tagged with the incident number for tracking.

The macro should include:

Keep it simple and factual.

If there's a Comms Lead assigned:

If there's no Comms Lead assigned: Use your best judgement to create a clear, factual macro. If you think the incident warrants coordination with Marketing, mention it in the incident channel - but don't let that block you from responding to customers.

Update the macro if the situation changes significantly. If you do update it, let the Comms Lead know (if there is one). Delete the macro once the incident is resolved.

Let the team know in #team-support when you've created the macro so they know to use it.

Handling incoming tickets during an incident

If you're the person who raised your hand to watch the incident, you're also responsible for keeping an eye on the support queue during the incident. Check through the queue for any existing tickets which might have been raised before the incident was declared, and then continue to monitor for new tickets being raised related to this incident.

Sort your tickets by newest so you can easily spot new tickets coming in. This makes it much easier to catch incident-related issues as they arrive.

Don't send generic "we're working on it" messages. Use the macro if you created one, or link to the status page, explain what we know about the impact, and give them a real timeline if we have one. If we don't have a timeline, say that too.

Example response:

Hey - yes, we're seeing this too. There's an incident affecting feature flag requests right now. You can follow updates on our status page, but the short version is that flags are returning but with higher latency than normal. The team is working on it and we'll update the status page as we know more. I'll be sure to let you know when it's resolved.

Important: Always attach incident-related tickets to the open incident using the incident.io app in the right-hand sidebar in Zendesk. This helps us track the user impact and keeps everything organized. Anyone on the support team responding to incident-related tickets should do this, not just the person watching the incident.

If you're seeing the same issue across multiple tickets, drop a note in the incident channel. Sometimes support spots patterns before monitoring does. Also share this in #team-support so the rest of the team knows what to expect.

Creating proactive tickets

Sometimes we need to reach out to users proactively during an incident - for example, if a specific org caused the incident or was significantly affected. The engineering team may ask us in #team-support to create tickets for affected customers.

Before creating any proactive tickets, check with the incident lead and coordinate with the Comms Lead to ensure we're not duplicating their communications.

Keeping the team updated

As the person watching the incident, keep the rest of the support team informed in #team-support. Share:

You don't need to copy every single update from the incident channel. Just share the things that would help someone else on the team respond to a ticket about this incident accurately.

Working with the Comms Lead

For an incident that requires external comms, Marketing will appoint a Comms Lead. They own external communication strategy. We don't.

As the support team member watching the incident, you should coordinate with the Comms Lead:

If you think external comms are required but there isn't a Comms lead assigned, you can request one by asking in #team-marketing or using the @all-marketers tag in Slack.

Coordinating with TAMs and CSMs

Enterprise customers often have dedicated TAMs (Technical Account Managers) or CSMs (Customer Success Managers) from the Sales/CS team. When these customers reach out about an incident - either through their Slack channels or via tickets - we need to coordinate our response.

For minor incidents, we can usually just respond ourselves. Keep it straightforward and use the macro if you created one.

For major incidents, Sales/CS teams may want to handle communication with their customers directly. Check #cs-sales-support to see if they're coordinating a response plan. If you're unsure whether to respond to a particular customer:

Remember that TAMs and CSMs work in specific timezones. If an enterprise customer reaches out when their TAM/CSM is offline or on holiday, don't leave them waiting. Respond to their question. You can loop in their TAM/CSM as a heads-up, but the customer should get an answer from someone.

Handing over across timezones

If an incident is still ongoing when you're about to log off for the day, hand over to someone who's still working or coming online. Try to hand over to someone who has the most working hours ahead of them - this avoids multiple handovers.

Post in #team-support via the original workflow thread with:

If you're US West Coast based and logging off for the day, write detailed handover notes given that nobody in EU will be online yet. This way they can pick it up smoothly when they start their day.

If you're picking up an incident from someone in a previous timezone, read their notes, scan the incident channel for updates since those notes were written, and jump in. Raise your hand on the original #team-support workflow post if you haven't already.

After an incident resolves

Once the customer-facing impact of the incident is resolved:

For major incidents, there will be a post-mortem. Read it. If you have feedback from the support side - things we could have done better, information we were missing, communication that didn't work, patterns you saw in tickets - add it to the post-mortem document or share it in the incident channel. Your perspective on the user impact and customer communication is valuable.

Technical support subject matter experts (SMEs)

Support | Source: https://posthog.com/handbook/support/support-smes

Why we have SMEs

As we add more products to PostHog, it becomes increasingly difficult for individual support engineers to effectively work across every product. SMEs help us maintain deep expertise across our products and ensure every ticket gets answered by someone who really knows their stuff.

By allowing SMEs to own groups of PostHog products, we build the knowledge needed to delight users with better and faster answers, and develop close relationships with product teams so we can advocate for fixes and features that actually matter to users.

Product ownership

Product groups

The various PostHog products have been split into the following product groups:

A note on these groupings: These product groups are based on current ticket volumes. As products grow or new ones launch, we'll split or reorganize them. This structure will evolve with our needs.

SME ownership

All technical support engineers, regardless of SME ownership, work on:

Beyond that, we have SMEs who own specific product groups. For each product group, we select one person from EU and one from NA to maintain timezone coverage:

Flags

Data

Replay

Observability + AI & client libraries

Analytics

What SMEs actually do

Being an SME means you're the go-to person for your product group. This breaks down into three key aspects:

Own the customer perspective

Partner with engineering teams

Improve support

How to work as an SME

Your Zendesk views

SMEs each have a dedicated view in Zendesk that includes:

These views contain tickets from your specific product groups (see groupings above) and all shared product groups (analytics and unclassified tickets). If there are any unclassified tickets that appear in your view (tickets in the 'Support' group), then where possible please assign these to the correct product. Let Abigail Richardson know if there are certain types of tickets which regularly appear in the 'Support' group.

Important: These views show tickets assigned to other team members too, giving you full context of your products. Jump in if you know something off the top of your head or see someone stuck.

Your daily workflow

Start your day with your SME views. Build your knowledge. Get really good at your products. Once you're on top of your SME queue, move to the Technical support shared view which has all tickets the technical support team is responsible for.

But here's the key: you're not locked into only your SME products. The goal is expertise, not silos. If you're caught up and the shared queue needs attention, dive in. If you're swamped and someone else can help with your SME queue, ask for it.

Coverage and coordination

You and your SME counterpart in the other timezone should work together to:

As we grow, we'll need less manual coordination. For now, always consider coverage and communicate proactively.

Support team overview

Support | Source: https://posthog.com/handbook/support/support-team

The support team exists to help our users succeed with PostHog, and we do that differently than most support teams.

We're not a ticket-routing operation. We genuinely care about making our users' experience exceptional, which we do by being a deeply technical team that takes pride in solving problems ourselves. We write code, ship fixes, update docs, and build internal tooling to deliver that experience. We move fast, stay humble, and believe that great support is about empowering users, not just answering questions.

What makes us great

We communicate clearly and don't hide behind jargon. We're relentlessly curious - pulling at every thread when investigating an issue, and seeing the bigger picture beyond the immediate problem. We're always looking for ways to improve, whether that's our processes, our docs, or our own skills. We're thorough without being slow, thoughtful without overthinking, and we genuinely care about getting things right for our users.

Our values

Take ownership

Own your work from start to finish. Be proactive and self-driven. Don't wait to be told what to do. When you see a problem, jump in and solve it. Be resourceful, curious, and hands-on. If something needs doing, figure it out and make it happen. Taking ownership means being accountable for outcomes, not just tasks.

Delight users

Go beyond solving problems. Create moments that make users' days better. Be genuinely caring, reassuringly human, and empathetic in every interaction. Surprise users with your thoughtfulness and responsiveness. Bring positivity and warmth to technical conversations. When users walk away from an interaction with you, they should feel helped, valued, and hopefully a little bit delighted.

Stay humble

Check your ego at the door. Take feedback as a gift and be open to learning from anyone, regardless of their experience level or role. Share knowledge freely with the team and communicate with transparency and honesty. We get better together by staying curious, admitting what we don't know, and helping each other grow. No one has all the answers, and that's okay.

Ship fixes

Be deeply technical and hands-on. Don't just log bugs or pass tickets along - write the fix yourself. Raise PRs for docs improvements, patch code, and solve problems end-to-end. We're engineers who happen to do support, not support agents who escalate to engineers. If you can fix it, ship it. That's what makes PostHog support special.

What we do

We help users through in-app support (which routes to Zendesk), community questions, and Slack channels for enterprise customers. But we don't stop at answering questions:

We provide support Monday through Friday, 9am GMT to 5pm PST. We focus on being consistently excellent during our coverage hours, with clear expectations set for users.

What we don't do

Our long-term vision

Support should be a competitive advantage for PostHog. Users choose us partly because they know they'll get exceptional support. They stay partly because they feel valued and helped. They recommend us partly because they've had great experiences with our team.

As PostHog grows, we're scaling thoughtfully. We prioritize keeping the team technical, staying true to our values, and maintaining our user-first culture. We value attitude and aptitude over experience - we need people who can jump into the unknown and figure things out. We want to contribute code and build tools, while keeping quality and user-centricity at the core of everything we do.

The benchmark we're aiming for: other companies should measure themselves against PostHog support - not because we answer tickets quickly, but because we genuinely help users succeed.

Support zero weeks

Support | Source: https://posthog.com/handbook/support/support-zero

Support isn't just about tickets! Well... it's a lot about tickets - but we don't judge the success of support engineers solely by how many tickets they solve. Instead, we like to free up support engineers to spend some time working on other tasks which help users. These tasks can include working on their quarterly goals, building new support features, contributing small PRs for bug fixes, or whatever else they think will help us move faster.

Why are support zero weeks useful?

The goal of zero weeks is to make non-ticket time more efficient and effective, and get more of our quarterly work done as a result.

At times we can really struggle to pull ourselves away from tickets and focus on the bigger picture. Having a block of dedicated non-ticket time allows us to spend time shipping things that will help us become better as a support team, and allow us to better help our customers.

How do support zero weeks work?

Each support team member is given an allocation of 2 support zero weeks in each quarter (i.e. 10 working days). These are weeks that each team member can book.

Team members are encouraged to consider taking the same zero weeks as someone else working on the same quarterly goal (so it can be done hackathon-style, you can consider using your meetup budget, etc)

Before the quarter starts

Before your zero week

During your zero week

:warning:

Team members who are working on tickets need to keep an eye on the ticket queue and highlight in #team-support if the workload is getting too high

After your zero week

What does this mean for side quests outside of quarterly goal work?

Troubleshooting tips

Support | Source: https://posthog.com/handbook/support/troubleshooting-tips

A collection of tips & tricks on helping to troubleshoot customer issues.

General

Feature flags

Funnels

Common funnel troubleshooting steps

Connecting frontend and backend identities

To connect frontend and backend identities, you only need to use the same distinct_id in both frontend and backend events. How you sync these depends on your system but here are some ways:

Recommended: Set the distinct_id based on a known user ID: If you have a stable internal user ID, set posthog.identify('your-user-id') on the frontend, and use that same ID in backend events. This ensures alignment across both environments.

Use a signed token or cookie: Store the distinct_id in a cookie or session token shared between frontend and backend, especially if you're using server-side rendering or middleware that handles both sides.

Pass the ID from frontend to backend: When a user logs in or performs a tracked action, capture their distinct_id in the frontend (e.g., using posthog.get_distinct_id()), then include it in API requests or session headers so your backend can reuse it when sending events. Be careful: you're relying on PostHog's distinct_id here, which may not be a value your backend expects.
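The recommended approach above boils down to both sides capturing under one known ID. A minimal mock makes the idea concrete; `captureEvent` here stands in for posthog-js capture on the frontend and posthog-node's `client.capture` on the backend, and the user ID is a placeholder.

```javascript
// Minimal mock of the recommendation above: frontend and backend both
// capture with the same stable internal user ID, so events line up on
// one person. captureEvent stands in for the real PostHog SDK calls.
const events = [];
function captureEvent(distinctId, event, properties = {}) {
  events.push({ distinctId, event, properties });
}

const userId = 'user-42'; // your stable internal user ID (placeholder)

// Frontend: posthog.identify('user-42') ties the browser session to this ID,
// so subsequent posthog.capture() calls are sent under it.
captureEvent(userId, 'signed_in');

// Backend: posthog-node lets you pass the distinct ID explicitly, e.g.
// client.capture({ distinctId: userId, event: 'subscription_created' }).
captureEvent(userId, 'subscription_created');

// Both events share one distinct_id, so PostHog attributes them to one person.
console.log(events.every((e) => e.distinctId === userId)); // true
```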

A potential pitfall is posthog.reset(): calling it (e.g., on logout) gives the frontend a new anonymous distinct_id, so any backend events still sent under the old ID will no longer line up with the frontend session.