15 May 2026 | 21 min read

Continuous interviewing

Learn how to set up and run continuous interviewing for product discovery. A practical guide to building a sustainable weekly interview practice that keeps your team close to users.


Continuous interviewing is one of the most practical habits a product team can build – and one of the easiest to let slip. The idea is simple: instead of running user research as a series of big, planned studies, you talk to users every week (or close to it), building up a steady stream of small insights that inform decisions in real time. No waiting for a research sprint. No backlog of "we should really talk to users about this."

The practice of continuous product discovery has spread well beyond dedicated research teams. Designers, product managers, and engineers now run these conversations themselves – and often find it brings them closer to the problems they're trying to solve.

Whether you're a solo designer trying to stay connected to your users or a researcher looking to scale a discovery habit across a product team, this guide covers everything you need to get started and keep going.

This guide also includes video clips from a conversation with Andrew Millar, Product Design Manager at Lyssna, on how the team runs continuous interviews in practice.

Key takeaways

  • Continuous interviewing means talking to users on a regular cadence – weekly or bi-weekly – rather than in periodic, project-based studies.

  • The goal is a steady stream of small insights, not comprehensive research reports.

  • Anyone on a product team can run these sessions; you don't need a research background to get started.

  • The biggest barriers are recruiting and consistency – both are solvable with the right systems.

  • A simple question bank removes the hardest part for non-researchers: knowing what to ask.

  • Lyssna's research panel and interview tools make it easier to recruit participants and keep the practice running without manual sourcing.

What are continuous interviews?

Continuous interviews are short, recurring conversations with users that happen on a regular schedule – typically weekly or bi-weekly – rather than as part of a defined research project with a start and end date. The term is closely associated with Teresa Torres, whose book Continuous Discovery Habits made the practice a cornerstone of modern product discovery.

The key difference from standard user interviews isn't the format – sessions are still semi-structured conversations, usually 20–30 minutes long. The difference is the cadence and intent. Traditional user research tends to be episodic: a team identifies a question, recruits participants, runs a study, analyzes and synthesizes data, and presents insights. Continuous interviewing replaces that cycle with an ongoing practice.

That shift matters more than it might seem. Insights from a study conducted three months ago don't age well. Continuous interviewing keeps you current.

Why product teams use continuous interviewing

Product decisions don't pause while research catches up. Designers are making interaction choices, product managers are prioritizing features, and engineers are scoping work – all day, every day. Without a regular source of user input, those decisions get made on assumptions.

Andrew describes the difference between project-driven research and continuous interviewing this way:

"Normally we've got a particular thing we're testing... which is great, but there's also so much value in catching up with a very open agenda and just hearing from people who are using the tool day-to-day, without any specific test that you're putting in front of them or a particular goal that you have in mind."

Continuous discovery interviews address this directly. Because sessions happen regularly, insights are always recent. There's no gap between "we finished the research" and "we're now making decisions based on it" – the research is always running.

Over time, the practice builds something valuable: institutional knowledge. Themes that appear once are interesting. Themes that appear across ten sessions over two months are signals worth acting on. Those signals feed directly into the product roadmap – shaping what gets built, and when.

Andrew puts it simply: "There's something about just getting face-to-face with a customer – hearing directly from them and seeing how they use the product. It helps ground you in what you're doing every day." And when interviews feed back into what gets built, the loop closes in a way that's hard to replicate any other way: "The best part is when you can email that person back in the space of a week and say, 'You know that thing you asked for? Just refresh the app and have a look now.'"

The practice is also particularly valuable for larger organizations, where researchers are stretched across multiple teams and designers or PMs often have to make decisions without easy access to research support. Continuous interviewing distributes the research habit – it doesn't require a dedicated researcher to run.

In the video below, Andrew shares what that looks like in practice at Lyssna:

How to set up continuous interviews

Getting the structure right from the start makes the difference between a practice that sticks and one that quietly fades out.

Cadence

Weekly is ideal. It's frequent enough to build a real rhythm, and over time sessions start to connect in ways that produce genuine insight patterns. The table below gives a sense of what different cadences produce over time:

| Cadence | Sessions per year | What it typically produces |
| --- | --- | --- |
| Weekly | ~50 | Strong insight patterns; themes emerge quickly |
| Bi-weekly | ~25 | A solid discovery rhythm; realistic for most teams |
| Monthly | ~12 | Useful contextual knowledge over time |
| Quarterly | ~4 | A good starting point |

The goal isn't perfection – it's consistency. One session a month, maintained over a year, will produce more useful discovery knowledge than an occasional formal study.

As Andrew shares: "I aim for one interview a fortnight if I can. Sometimes I might get three scheduled in a week, and other times it will be months depending on availability."


A note on terminology: "Bi-weekly" in this context means every two weeks, not twice a week. If you're setting expectations with your team, "fortnightly" or "every other week" is clearer.

Who to interview

Most continuous interviewing programs recruit from two pools:

  • Existing customers, who can speak to current product experience in specific terms.

  • Target users who match the profile you're designing for – useful for understanding the broader problem space and unmet needs the product might address.

Think carefully about rotation. Talking to the same handful of people every week creates familiarity bias – you start to optimize for their specific experience rather than the broader user base. A rotating pool of participants, with some continuity for context and some fresh voices for new perspectives, tends to produce the most useful data.

Recruiting participants

Participant recruitment is the biggest friction point for most teams – and the most common reason a continuous interview practice fades out. When sourcing participants requires manual effort every week (reaching out to customers individually, coordinating schedules, managing responses), it becomes a job in itself.

The most effective solution is to use a research panel for ongoing recruitment. Rather than sourcing participants from scratch each week, you tap a pre-vetted pool of people who match your target profile and are ready to take part in research. Lyssna's panel includes 690,000+ participants across 124 countries, with 395+ targeting options – so you can find the right people without the sourcing overhead.

Session length and format

20–30 minutes is the sweet spot. Long enough to go deep on one or two topics; short enough to fit into a participant's working day.

Sessions should be semi-structured – you have a set of questions you want to explore, but you follow the conversation where it goes. Non-researchers sometimes try to run these sessions with a rigid script, which tends to produce surface-level answers. A natural conversation with well-chosen questions is more useful than a formal interview protocol.

You don't need to record every session, but recordings are worth capturing when a participant raises something unexpected or important. They also reduce the note-taking burden significantly.

Who runs continuous interviews?

Continuous interviewing is most commonly associated with research teams – and if you have dedicated researchers, they'll often lead the charge. But it's a much broader practice than that.

Andrew describes who typically leads these sessions:

"Quite often it's coming out of people from the UX space, whether that's designers or researchers, and then product managers as well – people who are making daily decisions about the product and just want to have more insight into the people who are using it."

For teams just getting started, joining as an observer is a low-commitment way in. Andrew's own approach when he started at Lyssna was to ask the sales team if he could sit in on customer conversations – not leading the sessions, just listening. "Even that alone is hearing from the customer," he says. "Those customer conversations are happening in different ways."

What to ask: A continuous interview question bank

The following questions are a resource to pull from, not a script to follow. The goal is to give you a bank of reliable, open-ended questions that work across a range of discovery conversations – especially useful if you're newer to interviewing and not sure where to start.

At Lyssna, the team uses a similar format: a small set of consistent opening questions, followed by a bank of optional prompts to draw from depending on what's topical. Andrew describes the approach:

"The way we set it up is we have a question bank of potential topics that we might want to talk about. We book the interviews for half an hour, and if we've got 10 minutes left and we've got through those key questions already, there's normally a bunch of things we could ask them about – like, if we're working on AI features, let's talk about how they feel about AI."

He expands on this below:

Use the questions below selectively. A 25-minute session might draw on five or six from different categories. Let the conversation guide which ones matter.

Context setting

  • Walk me through what a typical [day / week] looks like for you in terms of [relevant area].

  • How long have you been dealing with [problem or workflow you're exploring]?

  • What does your current setup look like for handling [task]?

  • Who else is involved when you're working through [problem]?

Current behavior and pain points

  • Can you walk me through the last time you had to [do the thing you're exploring]?

  • What did you do when [specific moment in their workflow]?

  • What does that part of the process actually look like right now?

  • What's the part of this that takes the most time?

  • What usually goes wrong?

  • When something doesn't work the way you expected, what do you do?

Decision-making and context

  • Who else is involved in decisions like this?

  • What would have to be true for you to change the way you currently handle this?

  • What would make you trust a new solution in this area?

  • What are the constraints you're working within?

Reactions to specific problems or concepts

Use sparingly – this category is closer to evaluation than discovery. Reserve for when you have a specific assumption to test.

  • Have you ever run into [specific problem]? What happened?

  • What's your reaction when you encounter [situation]?

  • If that changed, how would that affect the way you work?

  • We've been hearing from other users that [problem]. Does that resonate with your experience?

  • Some teams handle this by [approach]. Is that something you've tried?

Pro tip: The line between discovery and evaluation is worth highlighting. Discovery interviews surface what users do and why; evaluative questions test a specific hypothesis. Mixing the two in one session is fine, but be aware that evaluative questions can anchor the conversation and make it harder to hear things you weren't expecting.

Forward-looking questions

  • If you could change one thing about the way you currently handle [task], what would it be?

  • What would a good solution actually need to do for you?

  • What have you tried before? What worked, and what didn't?

Andrew also recommends leaving room for a more open-ended "magic wand" question: "If you had a magic wand, what would you add or what would you change about the product?" It doesn't always land, but when it does, it opens up a different way of thinking about the problem.

One more thing worth noting: if you have a second person in the session, they often make the best observers. Andrew encourages them to chime in: "I always encourage people who are joining just to chip in and say, 'Can you just tell me a bit more about that?' When you're leading the interview, it's more conversational, so you're not always there to capture everything."

What not to ask

Avoid the following question types – they tend to produce answers that reflect what participants think you want to hear rather than what they actually do:

  • Leading questions – "Would you find it helpful if...?"

  • Feature requests dressed as discovery – "If we added X, would you use it?"

  • Double-barreled questions – "Do you prefer A or B, and why?"

  • Behavior prediction questions – "Would you pay for this?"


Want a version you can use in your sessions? We've put together a free Notion template with the full question bank, moderator notes, and pre- and post-session checklists. 

How to analyze and share what you learn

The approaches here are lightweight by design – not a shortcut, but a recognition that continuous interviewing lives alongside your main job. A process that requires hours of post-session synthesis won't fit into a normal workweek.

During the session

  • Take lightweight notes in real time rather than trying to transcribe everything.

  • Tag themes as they come up – a simple system of keywords or color codes works fine.

  • Aim to capture the three or four most significant moments from the session while they're fresh, rather than trying to reconstruct the entire conversation afterward.

If you're using Lyssna to run interviews, automatic recordings and transcripts mean you can focus on the conversation during the session and refer back to specific moments later.

After each session

Build a quick synthesis habit rather than a formal debrief. Drop the transcript into an AI tool and prompt it to pull out key themes, surprises, and open questions – a process that takes minutes rather than hours. However you approach it, aim to capture:

  • Three things you learned.

  • One thing that surprised you.

  • One thing to follow up on in a future session.

This produces a running log that's far more useful than a set of raw notes.

At Lyssna, the team keeps a light, consistent format for sharing session insights. Andrew describes it:

"We take the transcript and might use AI to help pull out some of the themes. It's kind of like a structure of: a bit about who they are, what they love about Lyssna, what they would love to improve, and then any other interesting insights. Just a few bullets, and we'll post that into our customer insight channel on Slack."

He also clips short video moments from sessions when something stands out: "If there's a nice little clip, I might go in and cut that out – like a 30-second snippet. That often gets quite a bit of engagement from the team. There's something about seeing bullet points and then seeing someone actually talking about the product."

For anything actionable, the team attaches quotes and requests directly to tasks in their product management tool: "If there's a particular request or insight that can be attached to an existing request, or if it's something brand new, I'll often go and put those things into Linear so it's captured somewhere in our backlog – attached to a customer, so we could in theory go back to them in the future."

Over time

Patterns in continuous interviewing don't always announce themselves. A single participant raising something unexpected is interesting. Three participants raising the same thing across different sessions is a signal. Five is worth acting on.

Review your notes periodically – monthly works for most teams – and look for recurring themes. When the same frustration, workaround, or unmet need keeps appearing, that's a discovery worth bringing to the wider team.


Pro tip: These numbers are rough guides. A single data point is an anecdote; the same thing coming up across unrelated participants is a pattern worth paying attention to.

Sharing without a formal report

Short recurring updates work better than formal reports. A weekly or bi-weekly Slack message with two or three things the team learned keeps research visible without requiring anyone to read a ten-page document. A shared notes doc that anyone can drop into is even lower friction.

Lyssna's transcripts and recordings are useful here – rather than writing up a lengthy summary, you can share a specific quote or moment from a session that illustrates a theme, with the recording available for anyone who wants more context.

How to run continuous interviews with Lyssna

Lyssna's interview tools are designed to support this kind of ongoing practice – not just individual studies.

At Lyssna, the team uses the product to run its own continuous interview program. Here's how Andrew describes the day-to-day workflow:

"You can set your availability and connect your calendar, and co-host calendars as well – so if I want someone else to join a session, I can add them to it. When I contact potential participants, they get sent a link and it gives them an invite page where they can pick a time for themselves and book it in. I don't have to do that calendar Tetris, back and forth, being like, 'Can you do 10:00 AM?' They get sent a link, they book their own slot, and I just get an email that tells me I'm talking to this person next week."

For synthesis after sessions, Zoom recordings can be pulled through automatically: "You can upload the recordings, or the recordings can get pulled through from Zoom automatically, which is quite useful for transcripts. And then if you want to take the transcript and summarize that – it's one less step to have to do somewhere else. There are definitely some time savings there."

Scheduling recurring sessions

Set up a recurring interview schedule and manage participants without rebuilding your setup from scratch each week. Once you have your screener and session structure in place, the logistics of running sessions become much lighter.

Participant recruitment

Lyssna's panel of 690,000+ participants across 124 countries, with 395+ targeting options, means you can find the right people for your sessions without manual sourcing. You can save your demographic filters as a group, so placing a new order each week takes a few clicks rather than starting from scratch every time.

Transcripts and recordings

Sessions are recorded and transcribed automatically, so you're not spending half your attention on note-taking. Transcripts make it easy to pull specific quotes, revisit moments from a session, and share evidence with stakeholders without writing a full report.

Whether you're running continuous discovery interviews yourself or scaling the practice across a larger team, Lyssna gives you the infrastructure to keep the cadence going.

Common mistakes in continuous interviewing

Even well-intentioned continuous interview practices can develop habits that quietly undermine them.

Over-formalizing sessions

Non-researchers often try to "do it properly" by adding rigid protocols, detailed moderation guides, and formal question sets. This makes sessions feel like work – for you and for participants – and usually produces worse data than a natural conversation.

Treating every session like a study

Continuous interviewing isn't a series of mini-studies. It's an ongoing conversation with your user base. Not every session needs a research question, an analysis plan, or a readout. Some sessions surface useful context that informs decisions you weren't even aware you needed to make.


Pro tip: This is a harder adjustment for people with a research background than for newcomers. If you're trained to treat every project as a study with a defined question and a synthesis phase, continuous interviewing can feel methodologically incomplete. It isn't – it's a different mode with a different purpose.

Interviewing the same people repeatedly

Familiarity is comfortable but limiting. A rotating participant pool gives you a broader, more accurate picture of your users. If you find yourself talking to the same five people every month, it's time to refresh the pool.

Letting recruiting friction kill the cadence

This is the most common failure mode. If recruiting requires manual effort every week, the practice will eventually lose to other priorities. Solving the recruiting problem – with a panel, a standing opt-in list, or another sustainable source – is the most important operational decision you'll make.

Collecting insights with no system for using them

Insights that aren't visible don't influence decisions. If your notes live in a private doc that no one reads, the practice isn't contributing to your team's work in any meaningful way. A lightweight sharing system – even a Slack message – makes a significant difference.

A final word from Andrew on getting started with continuous interviews:

FAQs about continuous interviewing

How many continuous discovery interviews should you run per week?
Do you need to be a researcher to run continuous discovery interviews?
What if we don't have existing customers to interview?
How is continuous interviewing different from customer success calls?
How long should each session be?

Diane Leyman

Senior Content Marketing Manager

Diane Leyman is the Senior Content Marketing Manager at Lyssna. She brings extensive experience in content strategy and management within the SaaS industry, along with editorial and content roles in publishing and the not-for-profit sector.

