How to Conduct Remote Ethnographic Research at Scale
Let me be upfront: scaling ethnographic research is genuinely difficult. The whole point of ethnography is depth—immersing yourself in someone's context, observing the small details, building rapport over time. "Scale" feels like the opposite of that.
But here's what I've learned running research across South Africa, Nigeria, the UK, and Australia: you can scale ethnographic work without gutting what makes it valuable. It requires being thoughtful about methodology, honest about trade-offs, and choosing the right tools for your specific situation.
This guide covers what actually works—and what doesn't.
First, Be Honest About What You're Trading Off
Traditional ethnography means spending days or weeks embedded with participants, interviewing and observing. You’re watching their environment, picking up on things they wouldn’t think to mention, building the kind of trust that surfaces genuine behaviour rather than performed responses.
When you scale remotely, you lose some of that. There’s no way around it. Unfiltered emotional reactions, in particular, are harder to capture without in-person interaction.
What you gain:
- Geographic reach (participants across countries, not just whoever’s near your office)
- Speed (weeks instead of months)
- Cost efficiency (no travel, accommodation, or field teams)
- Larger sample sizes for pattern recognition
- Access to moments you’d never catch in person (2am behaviours, private spaces)
What you sacrifice:
- Serendipitous observation (you only see what participants choose to share)
- Environmental immersion (photos and videos are curated, not lived)
- Non-verbal cues and physical context
- The rapport that comes from being physically present
The question isn’t whether remote is “as good as” in-person. It’s whether the trade-offs make sense for your research objectives.
If you need to understand the cultural meaning of a ritual in a specific community, remote probably won’t cut it. If you need to understand how 200 people across three countries actually use a product in their daily lives, remote ethnography might give you richer data than flying one researcher to each location for a week, because you observe daily routines and product interactions in participants’ own environments.
Understanding the Cultural Context
If there's one thing that can make or break your research, especially for global brands launching into new territories, it's getting the cultural context right. Context isn't a background detail you can ignore; it actively shapes how people interact with your products, respond to your survey questions, and interpret the photos and videos you show them.
You can't just translate a survey and call it a day, or run the same video diary tasks everywhere and expect comparable results. How people talk about a product, which moments they're willing to share, and the content formats they're comfortable with are all shaped by local values, beliefs, and everyday realities. I've seen this firsthand: a product that's a major status symbol in one country can be a basic necessity in another, and the same photo or video prompt yields completely different responses depending on where you run the study.
So dig deeper than surface-level data. Design tasks that fit the local context, using language, examples, and incentives that make sense for that specific audience, and stay flexible on response types: some participants prefer voice notes, others are more comfortable with photos or interactive video. Let the context drive your research approach instead of forcing a one-size-fits-all method.
Getting the context right isn't just about avoiding major missteps (though that matters). It makes your data more accurate, helps you spot the patterns that actually matter for your business, and produces marketing strategies that resonate with local audiences. What works perfectly in one market won't always translate to another, and you need to plan for that from the start.
When conducting remote ethnography makes sense (and when it doesn't)
Remote works well for:
- Longitudinal studies tracking behaviour over days or weeks
- Geographically dispersed populations
- Capturing in-the-moment experiences (meals, commutes, purchases)
- Sensitive topics where anonymity helps (finances, health, relationships)
- Budget-constrained projects that still need qualitative depth
- Rapid exploration before committing to deeper fieldwork
- Hard-to-reach participants (shift workers, rural populations, high-net-worth individuals who won’t sit in a focus group)
- Capturing immediate feedback from customers in real-world settings
Remote struggles with:
- Understanding physical environments in detail
- Observing group dynamics and social interactions
- Research where participants don’t know what’s relevant (you need to see it yourself)
- Populations with low digital literacy or limited device access (including anyone without a mobile phone)
- Studies requiring significant trust-building before honest disclosure
- Anything where “showing” matters more than “telling”
Be realistic. I’ve seen researchers try to force remote methods onto questions that genuinely needed in-person work, and the results were thin. I’ve also seen researchers default to expensive fieldwork when a well-designed remote study would have given them better data.
The methodology matters more than the platform
Before choosing tools, get your methodology right. A fancy platform won’t save a poorly designed study.
Sample size and saturation
For scaled ethnographic work, you’re typically looking at 20-50 participants for a focused study, or 100-300 for broader pattern recognition. The right number depends on:
- Population heterogeneity (more diverse = more participants needed)
- Research objectives (exploration vs. validation)
- Data richness per participant (10 detailed diary entries vs. one interview)
Also consider scope: the wider the range of behaviours and contexts you cover, the more participants you need to capture the full range of experiences.
Don’t just default to “as many as we can afford.” Diminishing returns kick in faster than you’d think. I’ve run studies where 30 deeply engaged participants gave us more insight than 150 who completed the bare minimum.
Study duration
For diary studies and longitudinal work:
- 3-5 days: Good for capturing a specific experience (product trial, event)
- 1-2 weeks: Standard for understanding routines and habits
- 3-4 weeks: Needed for less frequent behaviours or building deeper rapport
- Beyond a month: Participant fatigue becomes a serious issue
Longer isn’t always better. A well-structured 5-day study with multiple daily touchpoints often beats a 3-week study where participants engage sporadically. That said, rapport does build over time: participants tend to share more about their routines and experiences as a study progresses.
Task design
This is where most remote ethnography fails. Generic prompts get generic responses.
Bad: “Tell us about your morning routine”
Better: “Walk us through making your first cup of coffee this morning—film yourself doing it and talk us through what you’re thinking”
Bad: “How do you feel about [brand]?”
Better: “Next time you see a [brand] ad, screenshot it and tell us your immediate reaction before you have time to think about it”
The best remote ethnography tasks are:
- Specific and concrete (not abstract)
- Tied to real moments (not recalled experiences)
- Easy to complete in the moment (not requiring setup)
- Varied in format (mixing text, photo, video, voice keeps it interesting)
Include tasks that capture real-life actions, not just opinions: watching what participants do reveals more than asking what they think. Give participants a clear guide so they can complete tasks confidently, invite their feedback on the research process itself, and keep cognitive load low. An overwhelmed participant produces rushed, shallow responses.
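To make the "specific, concrete, short" criteria tangible, here's a minimal sketch of a diary-study schedule as a data structure. The prompts, field names, and limits are illustrative, not taken from any particular platform:

```python
from dataclasses import dataclass

@dataclass
class DiaryTask:
    """One prompt in a multi-day diary study."""
    day: int
    prompt: str       # concrete and tied to a real moment
    fmt: str          # "text" | "photo" | "video" | "voice"
    max_minutes: int  # individual tasks stay short

# Illustrative 3-day schedule: specific prompts, varied formats.
schedule = [
    DiaryTask(1, "Film yourself making your first coffee this morning "
                 "and talk us through what you're thinking", "video", 5),
    DiaryTask(2, "Photograph where you keep the product and tell us "
                 "why it lives there", "photo", 3),
    DiaryTask(3, "Send a voice note with your honest verdict after "
                 "using the product today", "voice", 4),
]

# Design checks before launch: every task completable in under
# 5 minutes, and no format repeated on consecutive days.
assert all(t.max_minutes <= 5 for t in schedule)
assert all(a.fmt != b.fmt for a, b in zip(schedule, schedule[1:]))
```

A pilot with a handful of participants is where you find out whether these limits are realistic for your audience.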
Managing participant fatigue
Drop-off is the enemy of longitudinal research. People start enthusiastic and fade.
What helps:
- Front-load your most important questions (assume you’ll lose 20-30% by the end)
- Vary the task types (text one day, photo the next, voice note after)
- Keep individual tasks short (under 5 minutes)
- Send prompts at times that make sense for the behaviour you’re studying
- Provide feedback that participants are being heard (even simple acknowledgments help)
- Be realistic about incentives (underpaying = low effort responses)
Prompt participants to submit diary entries promptly to maintain momentum, keep your participant management and data organised so engagement problems surface early, and refine the study design mid-flight when feedback shows a task is causing fatigue.
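Planning for that 20-30% loss is simple arithmetic, and worth doing explicitly rather than guessing. A quick sketch (the function is my own, not from any platform):

```python
import math

def recruits_needed(target_completes: int, expected_dropoff: float) -> int:
    """Participants to recruit so that, after expected drop-off,
    you still finish with the target number of completes."""
    if not 0 <= expected_dropoff < 1:
        raise ValueError("expected_dropoff must be in [0, 1)")
    return math.ceil(target_completes / (1 - expected_dropoff))

# Planning for the 20-30% drop-off typical of longitudinal studies:
print(recruits_needed(30, 0.20))  # 38
print(recruits_needed(30, 0.30))  # 43
```

Note the division: losing 30% means recruiting roughly 43% more, not 30% more, which is why under-recruited studies so often limp to the finish.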
Platform options: honest comparison
There’s no perfect tool. Each has trade-offs depending on your research context.
Whatever you evaluate, look at how well it helps you organise participant data: filtering, tagging, visualisation, and raw-data export vary widely. Recollective is strong for in-depth qualitative analysis, Sago offers advanced filtering and tagging, and Yazi is built for high response rates in emerging markets. Also check what onboarding support is offered, and whether community features such as forums let you observe and engage with participant discussions in a natural setting.
Dedicated mobile ethnography platforms
Examples: Dscout, Indeemo, Recollective, Over
Strengths:
- Purpose-built for research (good task management, media collection, analysis tools)
- Participant management and screening built in
- Often include transcription, tagging, and basic analysis features
- Professional interface that signals legitimacy to participants
- Good for B2B and professional audiences who expect "proper" research tools
Weaknesses:
- Require app download (significant friction—expect 20-40% drop-off at this step alone)
- Less familiar to participants than tools they already use
- Can feel clinical/corporate, which affects response authenticity
- Higher per-participant costs
- Less effective in emerging markets where app downloads are a barrier
Best for: Well-resourced research teams, B2B studies, Western markets with reliable internet, situations where you need robust built-in analysis tools.
Messaging platforms (WhatsApp, WeChat, Telegram)
Strengths:
- Zero friction (participants already have it, no download needed)
- Feels natural and conversational (people respond like they're texting a friend)
- Extremely high engagement rates (I've consistently seen 50-60%+ completion vs. 20-30% for app-based studies)
- Multimedia collection is native (voice notes, photos, videos are easy)
- Works in low-bandwidth environments
- Reaches populations that app-based tools can't
Weaknesses:
- Not designed for research (requires workarounds or platforms built on top)
- Less structured data output (more manual organisation needed)
- Privacy/compliance considerations vary by market
- Can feel too informal for some B2B contexts
Best for: Consumer research, emerging markets, hard-to-reach populations, studies where response rate matters more than polished interface, any situation where app-download friction would kill your sample.
Video platforms (Zoom, Teams, dedicated research tools)
Strengths:
- Closest to in-person interaction
- Can read facial expressions and body language
- Good for complex topics requiring real-time clarification
- Screen sharing enables observation of digital behaviours
Weaknesses:
- Scheduling is a nightmare at scale
- Not "in the moment"—you're capturing recall, not lived experience
- Recording and transcription adds cost and complexity
- Doesn't scale well beyond 30-50 participants without significant resources
Best for: Deep-dive interviews after initial screening, complex B2B topics, situations where real-time probing is essential, supplement to (not replacement for) diary-based methods.
Hybrid approaches
Honestly, the best results often come from combining methods:
- Diary study via messaging app → follow-up video interviews with subset
- App-based tasks for structured data → messaging check-ins for context
- Video ethnography for initial immersion → scaled survey for validation
Don't feel locked into one platform for an entire study.
Why WhatsApp specifically works for scaled ethnography
I’ll be direct: I’ve built a business around WhatsApp-based research, so take this with appropriate scepticism. But here’s what I’ve observed across hundreds of studies.
Response rates genuinely are different
In traditional email-recruited studies, 20-30% completion is considered good. With WhatsApp-native research, we consistently see 50-65%. That’s not a marginal improvement—it fundamentally changes what’s possible.
The reason isn’t magic. It’s friction. People check WhatsApp 20+ times a day. It’s where they already are. Responding feels like texting a friend, not “doing a survey”, so the exchange becomes a natural conversation rather than a form-filling exercise.
Voice notes change the game
This might be the most underrated feature for ethnographic research. When people type, they edit themselves. When they send voice notes in a familiar messaging environment, they ramble, they think out loud, and they share things they wouldn’t bother typing.
I’ve had participants send 3-minute voice notes explaining their thought process during a purchase decision. That’s data you’d never get from a text field.
It reaches people other methods can’t
In South Africa, WhatsApp penetration is 95%. Email-based research systematically underrepresents lower-income populations. WhatsApp doesn’t.
In Nigeria, participants who won’t download a research app will happily respond to WhatsApp messages, because it’s already part of their daily life.
This matters for representativeness. If your methodology excludes large segments of your target population, your insights are skewed.
When WhatsApp doesn’t work
- Markets where WhatsApp isn’t dominant (parts of Asia, some US demographics)
- B2B research with corporate security restrictions
- Studies requiring complex branching logic or sophisticated task types
- Situations where you need built-in analysis tools
- Participants who associate WhatsApp with personal life and don’t want research intrusion
The AI question: what actually helps vs. hype
There’s a lot of noise about AI in research right now. Here’s what I’ve found genuinely useful vs. what’s mostly marketing.
Actually useful:
- Automated transcription of voice notes and videos (this used to take hours, now it’s instant)
- Translation for multi-market studies (not perfect, but good enough for initial analysis)
- AI-moderated follow-up questions that adapt based on responses (extends depth without researcher time). Feeding the model the specific terms participants use helps it generate more relevant, context-aware prompts.
- Thematic coding assistance (surfaces patterns across large datasets faster than manual coding)
Overhyped:
- “AI-generated insights” that replace human analysis (the output is generic and misses nuance)
- Fully automated interviewing without human design (garbage in, garbage out)
- Sentiment analysis (still too crude for qualitative nuance)
The best use of AI in ethnographic research is augmentation, not replacement. Use it to handle the tedious parts (transcription, initial coding, translation) so researchers can focus on interpretation and insight generation.
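To make "augmentation, not replacement" concrete, here's a toy version of coding assistance: a keyword lexicon proposes candidate themes and a researcher confirms or rejects them. The lexicon and example text are invented, and real tools use language models rather than keyword matching, but the workflow (machine proposes, human disposes) is the same:

```python
# Invented theme lexicon for illustration; a production tool would use
# an LLM or embeddings, but the human-in-the-loop workflow is identical.
LEXICON = {
    "price": ["expensive", "cheap", "afford", "cost"],
    "convenience": ["quick", "easy", "fast"],
}

def suggest_codes(text: str) -> list[str]:
    """Propose candidate themes for a researcher to review."""
    lowered = text.lower()
    return [theme for theme, keywords in LEXICON.items()
            if any(kw in lowered for kw in keywords)]

print(suggest_codes("It was quick to cook but a bit expensive"))
# ['price', 'convenience']
```

The point of the structure is that suggestions are cheap and reviewable; the researcher still decides what counts as a theme.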
Analysis at scale: this is the hard part
Collecting 500 diary entries is relatively easy. Making sense of them is where most scaled ethnography falls down. How broadly you scope the analysis, across subjects and contexts, directly shapes the depth and breadth of insight you can generate.
Structure your data from the start
- Tag responses by participant segment, date, task type as they come in
- Use consistent coding frameworks across researchers
- Build your analysis categories early (you can refine, but start with structure)
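As a sketch of what "structure from the start" can look like in practice, here's tagging entries and indexing them as they arrive rather than at the end of fieldwork. The entries and field names are hypothetical:

```python
from collections import defaultdict

# Hypothetical diary entries, tagged on arrival with segment,
# date (study day), and task type, per the checklist above.
entries = [
    {"participant": "P01", "segment": "urban", "day": 1, "task": "video",
     "codes": ["price", "habit"]},
    {"participant": "P02", "segment": "rural", "day": 1, "task": "voice",
     "codes": ["price"]},
    {"participant": "P01", "segment": "urban", "day": 2, "task": "photo",
     "codes": ["packaging"]},
]

# Index by code as data comes in, so analysis categories exist from
# day one and can be refined as patterns emerge.
by_code = defaultdict(set)
for entry in entries:
    for code in entry["codes"]:
        by_code[code].add(entry["participant"])

print({code: sorted(people) for code, people in by_code.items()})
# {'price': ['P01', 'P02'], 'habit': ['P01'], 'packaging': ['P01']}
```

The specific fields matter less than consistency: every researcher on the project tags the same way, from the first entry.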
Embrace sampling within your sample
You don’t have to analyse every response with equal depth. Read everything at a surface level to identify patterns, then deep-dive into representative examples that illustrate those patterns.
Quantify where it helps, but don’t force it
Counting themes is useful for identifying prevalence. “12 of 30 participants mentioned price concerns” is more compelling than “some participants mentioned price.”
But don’t pretend qualitative data is quantitative. Small sample ethnography isn’t meant to be statistically representative—that’s what surveys are for.
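Counting participants per theme, rather than raw mentions (which over-weight chatty participants), is trivial once responses are coded. The data here is invented for illustration:

```python
from collections import Counter

# Hypothetical coded responses: participant -> set of themes mentioned.
coded = {
    "P01": {"price", "convenience"},
    "P02": {"price"},
    "P03": {"taste", "price"},
    "P04": {"convenience"},
}

# Sets deduplicate per participant, so each person counts once per theme.
prevalence = Counter(t for themes in coded.values() for t in themes)
for theme, n in prevalence.most_common():
    print(f"{n} of {len(coded)} participants mentioned {theme}")
# 3 of 4 participants mentioned price
# 2 of 4 participants mentioned convenience
# 1 of 4 participants mentioned taste
```

That gives you honest prevalence statements without pretending the sample is statistically representative.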
Preserve participant voice
The whole point of ethnographic research is hearing how people actually talk about their experiences. When you summarise everything into researcher language, you lose that.
Include raw quotes, voice note clips, and video snippets in your deliverables. Let stakeholders hear participants directly.
Practical checklist before you start
Before launching any scaled remote ethnography:
Research design:
- Clear research objectives (not just "understand the customer")
- Defined target population with realistic recruitment plan
- Sample size justified based on objectives, not budget
- Study duration matched to behaviour frequency
- Task design tested with 3-5 pilot participants
Platform choice:
- Friction level appropriate for your population
- Technical requirements match participant capabilities
- Multimedia collection supported
- Data privacy/compliance addressed
- Analysis workflow planned
Operations:
- Incentive structure that rewards quality, not just completion
- Communication plan for participant questions
- Drop-off contingency (over-recruit by 20-30%)
- Timeline with analysis buffer (always takes longer than you think)
Ethics:
- Informed consent appropriate for digital context
- Data storage and retention policy
- Clear policy for any AI tools used in analysis (what data they see, how outputs are checked)
- Anonymisation approach for reporting
- Clear participant exit process
Case examples
Product adoption research, Nigeria
A pasta company wanted to understand how consumers actually used a new product in their homes. The traditional approach would have been focus groups or in-home visits: expensive, and limited in reach.
We ran a 3-day diary study via WhatsApp with 30 participants. Day 1 captured their expectations. Day 2 asked them to document cooking the product with photos, videos, and voice notes explaining their process. Day 3 gathered final impressions.
The voice notes were revelatory. Participants talked through their thought process while cooking in ways they never would have articulated in a focus group. We heard them comparing the product to competitors in real time, explaining to family members what they were making, and expressing genuine surprise or disappointment.
Total time spent from launch to insights: 2 weeks. Cost: fraction of in-home ethnography.
Sports betting behaviour, Australia/New Zealand
A research agency needed to understand real-time betting decisions: what triggers them, what environment people are in, what they’re feeling in the moment.
This is almost impossible to capture in retrospective interviews. People don’t accurately remember their emotional state when they placed a bet three days ago.
We ran a multi-day study with prompts timed around major sporting events. Participants shared screenshots of their betting apps, voice notes explaining their decisions, and photos of where they were betting from.
The data captured actual in-the-moment behaviour and feelings that traditional methods would have reconstructed (and distorted) through recall.
Community feedback, conservation sector
A biosphere reserve needed to gather community input for its 10-year strategic plan. The traditional approach, town halls and paper surveys, systematically misses working populations and younger residents.
WhatsApp link shared through existing community groups reached 500+ participants, including demographics that would never attend a town hall. Voice note responses gave context that tick-box surveys can’t capture.
The multimedia responses—photos of local areas, voice notes explaining concerns—provided richer input for planning than standardised surveys ever could.
The honest summary
Scaled remote ethnography isn't a perfect substitute for traditional fieldwork. It's a different tool with different strengths.
When done well, it captures behaviour in context, at moments that matter, from populations that other methods miss. When done poorly, it produces shallow data that looks qualitative but lacks the depth that makes ethnography valuable.
The methodology matters more than the platform. The task design matters more than the technology. And the analysis approach matters more than the sample size.
Start with your research question. Be honest about the trade-offs. Choose tools that match your population. And don't let the "scale" part make you forget that the goal is understanding humans, not just collecting data.