Research leaders are navigating uncharted territory in 2026. From integrating AI into workflows to proving ROI under tighter budgets, the challenges facing research teams have never been more complex – or more urgent.

We brought together Julian Della Mattia (Senior User Insights Manager at DuckDuckGo) and Nikki Anderson (Founder of Drop In Research) with Lyssna CEO Mateja Milosavljevic to explore the biggest issues research practitioners are tackling right now – and the practical approaches they're testing to solve them.

Watch the full conversation below, or read on for the top highlights on AI ethics, demonstrating value, democratization risks, and building research culture when stakeholders just want answers yesterday.

Using AI ethically in research workflows

Both Julian and Nikki are actively experimenting with AI, but they're approaching it with clear boundaries and a healthy dose of skepticism.

Nikki describes her journey: "I remember when I first heard about ChatGPT, I was actually in the UK visiting friends, and somebody turned to me and was like, oh, have you heard of this? And I was like, how stupid. This is never going to be anything."

But after experimenting – including intentionally pushing boundaries to understand where ethical lines should be drawn – she's found her approach. "I was scared of AI and still am a bit scared of AI. I was mainly scared of it because of the sheer possibility of it and what it actually meant for all of us."

She's discovered that AI works best for streamlining administrative tasks, not replacing the creative research process itself. "I think that AI can be great in streamlining some of the administrative tasks, but asking it to give me the answer to my research, it's not really provided anything that I can use without such heavy editing that I might as well do a lot of this stuff myself."

Julian has run into similar limitations. At DuckDuckGo, his team uses a fully private AI tool where everything is anonymized. "We use that especially for crafting documents or polishing – when you have a big data set and then you want to parse some stuff. Once you know what you want to say, once you do the analysis yourself and you have the materials to craft a report, the AI can really help you craft a report really quickly."

The key takeaway from both? You need to know what good research looks like before AI can be helpful. Nikki puts it plainly: "I feel very fortunate that I became an expert in what I did before this came along because I know what good looks like. For me, I don't need AI to write a screener. I don't need AI to write a research plan. I can do those in my sleep."

Where AI might actually help

While cautious about AI-generated research, both see potential in specific, carefully bounded use cases.

Training and education: Julian suggests AI-generated synthetic users work well as a "research gym" – a low-stakes environment where people can practice interviewing before working with real participants. "You can access it any time of the day at a really low cost as your research gym, and have a sparring partner. So for that kind of training, I think there's a lot of potential in AI. I don't see that much potential in the actual research, but I do see it in this education and training."
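To make the "research gym" idea concrete, here's a minimal sketch of what such a sparring partner could look like, assuming the OpenAI Python SDK. The persona, model, and prompt are illustrative assumptions – this is not the private tool Julian's team uses.

```python
# A minimal "research gym" sparring partner, assuming the OpenAI Python SDK
# (pip install openai). The persona and model are illustrative; this is a
# sketch of the idea, not DuckDuckGo's internal tool.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PERSONA = (
    "You are a synthetic research participant: a busy project manager who "
    "recently stopped using a scheduling app. Answer interview questions in "
    "character, with realistic hesitations, tangents, and occasional vagueness."
)

history = [{"role": "system", "content": PERSONA}]
while True:
    question = input("Interviewer: ")
    history.append({"role": "user", "content": question})
    reply = client.chat.completions.create(model="gpt-4o", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    print("Participant:", answer)
```

Because the synthetic participant is deliberately imperfect, a trainee gets to practice follow-up probes and recovering from tangents without burning a real session.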

Custom GPT agents for democratization: Nikki has created single-purpose AI agents trained on specific knowledge with no internet access. "I was working at an organization recently where I did a training on how to ask open-ended questions. I trained an agent on that. It only spit out open-ended, unbiased, neutral, non-leading questions. That single purpose GPT agent allowed them to play around with things."

But she emphasizes guardrails are critical: "We just need to make sure that those environments are super guarded by creating single purpose agents that are highly trained on custom instructions, really great knowledge documents, and don't have access to the web."
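Nikki doesn't detail her exact configuration, but a single-purpose agent along these lines can be sketched with a pinned system prompt, a knowledge document, and no browsing tools enabled. Everything below – the file name, model, and instructions – is an assumption for illustration, not her actual setup.

```python
# A minimal sketch of a single-purpose agent that only rewrites questions as
# open-ended, neutral, non-leading versions. Assumes the OpenAI Python SDK;
# the knowledge file and instructions are hypothetical, not Nikki's setup.
from openai import OpenAI

client = OpenAI()

INSTRUCTIONS = (
    "You rewrite research questions. Only ever return open-ended, unbiased, "
    "neutral, non-leading versions of the question you are given. Politely "
    "refuse any other task."
)

# Hypothetical knowledge document distilled from the kind of training Nikki describes.
with open("open_ended_questions_training.md") as f:
    knowledge = f.read()

def rewrite_question(question: str) -> str:
    # Plain chat completions with no tools enabled: the agent has no web access.
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": INSTRUCTIONS + "\n\n" + knowledge},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(rewrite_question("Don't you find the new dashboard confusing?"))
```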

Democratizing research responsibly

The conversation around democratization revealed a recurring theme – AI is just the latest version of an old problem.

"AI is just the latest representation of the latest phase of an underlying problem of people trying to bypass research or skip it," Julian explains. "The people who are willing to go and replace research with answers from ChatGPT or Claude are the same people who before said we don't have time for research. Why do we need research? I know what my customers want. It is just a 2026 addition of 'let's not spend time in research.'"

Nikki's approach to democratization has shifted based on hard-won experience. "I've worked at organizations where we try to democratize, which is like everybody should do this, everybody can do this. Everybody's sitting in a three-hour workshop. Versus now, what I do is much more 'select champion' based. The way that I try and enable research across organizations is by people who actually are interested in learning the things."

Her method involves identifying stakeholders who genuinely want to learn research, creating custom AI agents trained on specific frameworks, providing low-stakes environments where people can experiment, and maintaining human oversight on all outputs.

"What I would be okay with is you using AI as a thought partner, for brainstorming. But I would still want to see that because I'm not wasting anybody's time – yours or the participants or the organization's. If we do that due diligence early on in a project, that's when research can have a really good outcome. If it sucks from the beginning – garbage in, garbage out."

Getting stakeholders to actually use research

When it comes to demonstrating value and getting buy-in, both emphasize the same hard truth – research needs to speak the language of business, not methodology.

Nikki shares her evolution bluntly: "I used to create a deck painstakingly. That deck had a whole lot of stuff in it. It was probably 97 slides. I could have presented a thesis on the topics that I researched. But nobody cares. That's the thing – nobody cares about the methodology."

She's learned to focus ruthlessly on what stakeholders actually need: "My research is only impactful if I talk to what people actually care about. You're only going to impact somebody if you know what that person cares about. So the people that I am presenting research to now, I talk to them about what they care about, which is normally revenue or some sort of company-based or team-based metric."

The fundamental insight is about understanding your role: "When you are an information giver, which is usually what researchers are, we give information to people so that they are making decisions. We're usually not the decision makers. You need to give the information that people care about and need in order to make decisions. If you don't know what that is, you are going to go into a presentation saying 'what am I supposed to present?'"

Julian frames this around risk: "Decisions are never fully rational. You need to cater to the emotional side and try to understand the psychology behind what other roles might be experiencing. I always use 'risk' kind of speech, which is, if we made the wrong decision, how much would that cost us? If we invest two weeks doing some research on this and we can de-risk the decision a little bit, isn't that worth not spending $100K redeveloping?"
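To make that back-of-envelope math concrete: only the $100K figure comes from the panel, so the research cost and probabilities below are illustrative assumptions. The expected-cost comparison looks like this:

```python
# Back-of-envelope de-risking math. Only the $100K figure comes from the
# panel; the research cost and probabilities are illustrative assumptions.
redevelopment_cost = 100_000   # cost of shipping the wrong thing
research_cost = 8_000          # roughly two weeks of research time
p_wrong_without = 0.30         # assumed chance of a wrong call with no research
p_wrong_with = 0.10            # assumed chance after research de-risks the decision

expected_cost_without = p_wrong_without * redevelopment_cost             # $30,000
expected_cost_with = research_cost + p_wrong_with * redevelopment_cost   # $18,000

print(f"Without research: ${expected_cost_without:,.0f} expected")
print(f"With research:    ${expected_cost_with:,.0f} expected")
```

Even with conservative assumptions, the research pays for itself whenever it meaningfully shifts the odds of an expensive wrong call.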

Building research culture

Nikki acknowledges the shift in expectations when it comes to building a culture that supports UX research: "I feel like that experimentation and failure isn't there like it used to be. I used to run more creative research sessions that I don't have the ability to run now because if it fails, or if it doesn't get the exact results, or if it takes a little longer, I get in trouble."

Her strategy now involves running micro-experiments in low-stakes situations: "What I recommend doing is book an extra 15 minutes for every usability test and for that 15 minutes ask a generative research question, or ask a question in a different way that you would never normally ask it. Or with a team that you trust, run a workshop that you've never run before, but do it on a very low stakes project."

Julian's advice for the "you get one shot" scenario is to think minimum viable: "You need to think of what a minimum viable project looks like and try to really be sharp. Answer two questions and reduce the scope and make it really specific. So that way you ensure that even if you have limited resources, limited time and limited everything, you can still get some answers. If you try to boil the ocean, that's probably not going to work. Go minimal and try to get a good answer to one question, and then that will lead to a second question. It's a little flywheel."

Building cross-functional relationships

When asked which relationships matter most for research impact, Nikki offers unexpected advice: "I actually think that building relationships with marketing, with sales, with customer support, customer solutions, account management – literally every single department has something where there's a symbiotic relationship with research."

Her approach is pain-point focused: "Look to solve people's problems, because your user research is a product that solves people's pain points at your organization. Your stakeholders are your users. Find the stakeholders whose pain points you can solve and start with them and keep going and building it because that's how it gets amplified."

These insights only scratch the surface of what Julian and Nikki shared about navigating research leadership in 2026. The full conversation digs deeper into ethical AI boundaries, organizational politics, and experimenting responsibly when resources are tight.

Watch the full panel discussion to hear more honest perspectives on the challenges research teams are facing – and the practical approaches that are actually working.
