This is the third blog in a series of monthly reflections about our team’s work. My post follows those of my colleagues Andrea Ngan and Danita Reese.  

My reflection dives deep into our qualitative research practice and gets technical about some of the strategies we use to select participants for research. 


Context

A key area of what we do at the City of Philadelphia (i.e., the City) is supporting the service improvement efforts of public-facing agencies through focused project work. These projects are research-centered, meaning we use qualitative research methods, like interviews and observation, to gather insights from a range of stakeholders to help our agency partners identify opportunities to improve service experiences holistically. 

We conduct research with those who have lived experience and are closest to on-the-ground service experiences, so we can understand where there are bright spots and barriers to service access and delivery. 

Examples of who we may include in research: 

  • Internal- and external-to-government advocates, staff, and leaders who sit across the hierarchy of an organization and who support, administer, and deliver City programs or services. 
  • Community groups and members who advocate for, access, and use City programs or services. (Note: Resident-facing research practices are the focus of this blog.)

The approach we use is effective when service administrators have questions like: 

  • WHY did only a few people sign up for a low-barrier program?
  • WHY did people drop out of the process at a specific point?

While quantitative data (e.g., numbers) shows broad behaviors, qualitative data (e.g., words and stories) can give us textured insight into WHY those behaviors occur. But this approach can also raise an important question: WHO should we speak with and learn from to inform our understanding of THE WHY? For example, someone who requires language interpretation may draw attention to language barriers. Meanwhile, an experienced lawyer may point out legal risks or liabilities.

This blog offers a short reflection on our approach to determining WHO we should include in our service-related research. Note that the literature on this question is considerable.

 

How we identify research participants 

When we define who we’ll involve in our research, we balance practicalities, like time and budget, with the diversity of experiences and perspectives residents embody.

 

APPROACH ONE: Understanding research constraints 

The reality of research is that it’s always constrained. Our research can be constrained by budgets, project timelines, and stakeholder priorities. Because we’re a government organization funded by taxpayer dollars in the poorest big city in the United States, we don’t have unlimited time or money to recruit and compensate members of the public to participate in our work.   

Considering these realities, we can be tempted to use practical recruitment criteria when determining our demographic of focus (i.e., sample group and size) for research because it’s a low-barrier approach. 

Examples of practical recruitment criteria can include:

  • Who’s readily available? We interview residents who are waiting to meet with social work staff at a government building.
  • Who’s willing to participate? We interview residents who show an interest in participating.
  • Who’s close by? We interview residents who are paying their bills in the concourse of the Municipal Services Building, which sits just below our team’s office.  

When we use practical recruitment criteria, we can unintentionally limit our ability to include diverse resident experiences. For example, we’re more likely to speak to those with more time, emotional bandwidth, and flexibility — and not gather important perspectives from residents who have limited time or mistrust government. 

While we’re aware of constraints, we try not to design our research in full service of those constraints. Below are several ways we build our knowledge, so we can be more strategic about who we include in research.

 

APPROACH TWO: Developing a preliminary understanding of resident experiences

Before defining a demographic of focus for research, it’s helpful to first explore the spectrum of experiences residents have with a service.  

To develop a cursory but expansive view, we examine: 

  • What’s known about how residents interact with elements of a service.  
  • What’s similar across those interactions.
  • What’s different about how and why residents show up.

We probe different aspects of residents’ experiences by trying to answer the questions below.   

  • Material components or service channels: How is the public accessing the service? Is it through eligibility forms, digital service pages, and office buildings? What’s driving resident preference and choice?  
  • Circumstances: Why is the public accessing a service? What’s prompting their use?  
  • Common challenges: Are there barriers that get in the way of the public accessing services?
  • Behavioral tendencies: How does the public approach interacting with a service? Do they prefer to problem-solve the service on their own? Or do they prefer to receive a verbal explanation from service staff?
  • Assumptions: How does the public perceive the service? What assumptions are they carrying? How do these perceptions and assumptions impact how they show up? 

We look at existing data and speak with staff to develop a preliminary understanding of resident experiences. With this baseline understanding, we can then have strategic conversations with each other and project stakeholders to determine whose experience would be most helpful to learn from during deep research, given the project’s goals.

 

APPROACH THREE: Centering specific people, voices, and lived experiences

Approach two informs approach three: after we have a cursory view of resident experiences across their similarities and differences, we have the information we need to pause and ask:  

  • What resident experiences should we prioritize?  
  • What residents must we learn from?  
  • Why is that the case?  

Sometimes, we’ll focus on a specific set of service experiences to answer these questions. For example, we may be primarily interested in the online experience of a service, supported by Phila.gov, because that’s where our project partners have the strongest interest in change.  

In most cases, we prioritize the perspectives of residents who’ve been marginalized by government actions. This is because — like most systems and structures in the United States — government services have historically been designed in service of white supremacy. As a result, services are structured around a fictional “average Philadelphian” who’s able-bodied, a citizen, English-speaking, highly literate with technical expertise, and relatively well-resourced, among other attributes. When communities’ lived realities don’t reflect the fictional average (e.g., people with disabilities, people with limited English proficiency, low-wealth communities, communities of color, and other historically marginalized groups), they can experience some of the greatest barriers to accessing municipal services. Services can feel unwelcoming, hostile, and burdensome. Think of a service that’s only offered online; many residents without consistent access to a computer or the Internet may not be able to access the service.  

When we’re committed to more equitable service experiences and outcomes, we center the voices of communities who’ve been marginalized by government actions. This and other considerations help us gain perspective on whose voices we should center in research and why those perspectives are crucial to informing equity-centered government decision-making.

 

APPROACH FOUR: Creating the space to learn, evolve, and change

Finally, we try to give ourselves the space to add nuance to what we thought we knew when we set up our research. 

For example, once we start building relationships and speaking in depth with residents about their complex experiences with government services, we’re forced to grapple with:   

  • Our and our project partners’ initial assumptions and biases when setting up research. 
  • Our misunderstandings of residents’ lived experiences that were based on our initial assumptions. 

Learning is ongoing. So, as we gain new information, we push against our initial research design and adjust. We might determine we need to add a new demographic of focus or ask different questions. And that’s okay.

 

Conclusion  

Although this blog attempts to tidy up our approach to selecting participants for qualitative research, the reality is that it can be very messy, non-linear, and iterative. For example, our recruitment efforts rarely work as intended. Many who are invited don’t respond. Others refuse to engage because of their previous relationship with a department. And there are limits to what can be known. This blog tries to name a few of our considerations in a project. But each project reveals new complexity and more considerations.  

We’d love to hear what you keep in mind when you conduct research. Please email us at service.design@phila.gov.