Data should inform every stage of crisis response program implementation, from the earliest planning through ongoing monitoring and evaluation once a program launches. Collecting and using data is critical for understanding what’s working well and how programs can improve and adapt throughout implementation. Data should be collected, analyzed, and shared regularly with program stakeholders, including community advocates and groups tasked with oversight.

Effective data collection will help identify where there are gaps in access and delivery to ensure more equitable implementation and outcomes.

Key recommendations

  • Track key performance metrics to evaluate for equity
  • Collect feedback from a wide range of stakeholders
  • Regularly share data and evaluation updates with program and community stakeholders

Track key performance metrics to evaluate for equity

Crisis response programs should collect data on key performance metrics to evaluate whether the program is achieving its intended goals and to identify areas for improvement.

Some key metrics may relate to the calls that are being responded to, for example:

  • volume of calls;
  • origin of calls; and
  • call types.

Other key metrics may relate to the on-scene response itself and to its interactions and outcomes, for example:

  • response time and time on-scene;
  • types of services and supports offered;
  • post-crisis follow-up;
  • transporting and/or completing referrals to other services;
  • involuntary mental health holds or transport; and
  • calls that result in police involvement.138

Metrics on the presence of police at crisis incidents—whether dispatched by call operators or requested by the response team—and the use of involuntary hospitalization are especially crucial for transparency with community members who are concerned about these outcomes.

For all metrics and outcomes tracked, programs should collect information about the clients they serve, to identify any gaps in access and outcomes; a minimal data-record sketch follows the list. Key information includes:

  • race and ethnicity;
  • gender;
  • mental health needs;
  • substance use needs;
  • other health needs; and
  • other basic needs, such as housing.
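
To make these metrics concrete, the sketch below shows one way a program might structure a per-call record combining call, response, and client information. This is a minimal illustration in Python; every field name here is an assumption for illustration, not a schema used by any of the programs discussed.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class CrisisCallRecord:
    """Hypothetical per-call record combining call, response, and client data."""
    # Call metrics
    call_id: str
    received_at: datetime
    origin_neighborhood: str          # or census tract / dispatch zone
    call_type: str                    # e.g., "welfare check"
    # On-scene response metrics
    response_minutes: Optional[float] = None
    on_scene_minutes: Optional[float] = None
    services_offered: list[str] = field(default_factory=list)
    follow_up_completed: Optional[bool] = None
    referral_completed: Optional[bool] = None
    involuntary_hold: Optional[bool] = None
    police_involved: Optional[bool] = None
    # Client characteristics (None = unknown or not disclosed)
    race_ethnicity: Optional[str] = None
    gender: Optional[str] = None
    mental_health_needs: Optional[bool] = None
    substance_use_needs: Optional[bool] = None
    housing_status: Optional[str] = None
```

Allowing None for the client characteristics keeps "unknown or not disclosed" distinguishable from a recorded value, which matters for the data-quality challenges discussed below.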

Some crisis response programs have adopted a focus on antiracism and equity as part of their data collection and evaluation strategies.139 For example, in San Francisco, the Street Crisis Response Team (SCRT) has stated that “each measured outcome, such as linkage to care, [police] involvement, and 5150 involuntary holds, will be measured for its ability to reduce disparities by race, ethnicity, gender identity and sexual orientation to the extent the data allow.”140
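
As a minimal sketch of the kind of disaggregation SCRT describes, assuming the hypothetical record structure above: for each tracked outcome, compute the rate within each demographic group so that disparities are visible. The function and field names are illustrative assumptions.

```python
from collections import defaultdict

def outcome_rates_by_group(records, outcome_attr, group_attr="race_ethnicity"):
    """Rate of a yes/no outcome (e.g., police_involved or involuntary_hold)
    within each group; clients with an unrecorded group are reported separately."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positive outcomes, total]
    for record in records:
        outcome = getattr(record, outcome_attr)
        if outcome is None:
            continue  # the outcome itself was not recorded for this call
        group = getattr(record, group_attr) or "unknown/not disclosed"
        counts[group][0] += int(outcome)
        counts[group][1] += 1
    return {group: positives / total for group, (positives, total) in counts.items()}

# Usage sketch: review several outcomes for disparities by race/ethnicity.
# for outcome in ("police_involved", "involuntary_hold", "follow_up_completed"):
#     print(outcome, outcome_rates_by_group(records, outcome))
```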

In practice, programs have encountered challenges in collecting this information in consistent and client-centered ways. For Denver’s Support Team Assisted Response (STAR) program, advocates have been frustrated that race/ethnicity was recorded as unknown for more than 30 percent of clients served in its first six months.141 Greg Townley, the lead evaluator for Portland Street Response (PSR), explained that it can be difficult for program staff to collect this information in the field, and that staff have been uneasy about making assumptions about a client’s mental health or substance use needs or demographic characteristics when clients are not able to report that information themselves.142

Asante Haughton of Toronto’s Reach Out Response Network (RORN) noted that being asked to share one’s demographics and disability status can be very uncomfortable for some community members, and crisis responders should always ask for this information in a trauma-informed way.143 Rachel Bromberg of RORN shared a helpful suggestion: “perceived race might be noted, and might be easier to note, because that's really what we care about.”144 PSR’s six-month evaluation similarly recommended “noting whether or not the client is a person of color based on visual identification (which is likely already occurring internally or subconsciously).” The evaluators acknowledged “the limitation of this approach but believe it is a critical step toward enhancing our understanding of who the program is (or is not) serving.”145

Another potential solution: programs that are connected to health and human services systems may be able to fill in missing demographic information from other databases for clients who have accessed services in the past and consented to data sharing.146
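
A sketch of that backfill step, assuming each record carries a client identifier shared with the external database and a consent flag; both fields, and the lookup structure, are hypothetical.

```python
def backfill_demographics(records, hhs_lookup):
    """Fill missing demographic fields from a health and human services
    lookup (client_id -> {"race_ethnicity": ..., "gender": ...}), only for
    clients who previously consented to data sharing."""
    for record in records:
        if not getattr(record, "consented_to_data_sharing", False):
            continue  # no consent: leave the record untouched
        prior = hhs_lookup.get(getattr(record, "client_id", None))
        if prior is None:
            continue  # client not found in the external database
        if record.race_ethnicity is None:
            record.race_ethnicity = prior.get("race_ethnicity")
        if record.gender is None:
            record.gender = prior.get("gender")
    return records
```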

As for tracking who is calling for help: 911 call centers do not collect demographic information about callers. However, if the location of calls can be tracked and analyzed, this may reveal which neighborhoods are or are not receiving timely, appropriate responses.147 As discussed earlier, inequity may also be introduced by screening questions, dispatch protocols, and the implicit biases of operators.
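
A sketch of that neighborhood-level analysis, again using the hypothetical records above: summarizing response times by call origin can flag areas that are not receiving timely responses.

```python
from collections import defaultdict
from statistics import median

def response_times_by_neighborhood(records):
    """Median response time in minutes per neighborhood of call origin."""
    times = defaultdict(list)
    for record in records:
        if record.response_minutes is not None:
            times[record.origin_neighborhood].append(record.response_minutes)
    return {neighborhood: median(ts) for neighborhood, ts in times.items()}
```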

Collect feedback from a wide range of stakeholders

In every community, a range of stakeholder groups will have important feedback on the implementation of a crisis response program. Arguably the most important group is community members who are directly served by the program. Other important groups to gather feedback from include community members who are calling for help on someone else’s behalf (such as family members and friends, business owners and residents, and other service providers), operational partners (such as 911 and other first responders), and program staff.

In Portland, Oregon, the evaluators of the PSR program use interviews and surveys to gather feedback from clients and other community stakeholders. Acknowledging that it would not be appropriate to seek feedback from clients during or right after the moment of crisis, the team developed the following approaches.

  • Interviews: PSR staff asks clients if they are willing to be interviewed and refers them to the evaluation team. The evaluation team then interviews clients at the time and place of their preference, “after they [have] had some time to process what happened.”148 The interviews are used to gather clients’ feedback regarding their experiences with the PSR response and any follow-up services.
  • Surveys: Street Roots, the local community organization that campaigned for the creation of PSR, helped develop and conduct a survey of unhoused community members. The survey included people who had been directly served by PSR and those who had not, and asked about their experiences with 911, PSR, and other first responders. These surveys generated valuable information about whether community members feel safe calling 911 and whether they are aware of PSR as a resource. The surveys collected demographic information and revealed that white community members were more aware of PSR than people of color, which has helped PSR understand the need for further outreach.149

Post-call surveys are another tool crisis response programs can use to gather feedback from community members. These surveys can be very brief, asking callers about their satisfaction with the experience of calling for help and with the services and support provided. They can also collect demographic and geographic information that can be reviewed to identify disparities in caller experiences.150
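
As an illustration of that review step, the sketch below averages satisfaction scores from brief post-call surveys by any grouping field; the survey fields shown are hypothetical.

```python
from collections import defaultdict
from statistics import mean

def satisfaction_by_group(survey_responses, group_key):
    """Average satisfaction score per group (e.g., by "race_ethnicity" or
    "neighborhood") from post-call survey records, each a dict such as
    {"satisfaction": 4, "race_ethnicity": "...", "neighborhood": "..."}."""
    scores = defaultdict(list)
    for response in survey_responses:
        score = response.get("satisfaction")
        if score is not None:
            scores[response.get(group_key, "unknown")].append(score)
    return {group: mean(vals) for group, vals in scores.items()}
```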

Feedback from program staff is also essential for the ongoing implementation and adaptation of programs. It can be gathered as part of a formal program evaluation or through regular team practice facilitated by leadership and staff. Interviews, focus groups, and ride-alongs with staff were a key component of PSR’s evaluation; they informed important considerations and recommendations for program expansion and sustainability, such as the types of calls and level of risk that PSR staff feel comfortable taking on and the support and supervision needed to avoid burnout and compassion fatigue.151 In Olympia, Washington, feedback from Crisis Response Unit (CRU) and Familiar Faces program team members informed the evolution of the staffing structure, leading to the addition of a nurse and a designated crisis responder to better serve client needs.152

Regularly share data and evaluation updates with program and community stakeholders

Programs should provide regular updates to key stakeholders who are in positions to learn from and implement changes based on program data. These stakeholders may include program staff and leadership, as well as government officials who may have decision-making power over program expansion and funding.

Crucially, data and evaluation updates should also be shared with the public, community advocates, and any advisory and oversight groups—something that advocates in many communities have specifically called for.153

Many programs have committed to publishing regular reports with data on their key performance metrics. For example, Portland Street Response maintains a dashboard that is updated weekly; San Francisco’s SCRT provides monthly summaries of its key performance indicators; and New York City’s B-HEARD has provided summaries at one, three, and six months to date.154

Updates to stakeholders should include qualitative data, in addition to quantitative metrics, to provide the full context of the program’s impact.155 This can help prevent people from forming overly simplistic conclusions based on a few data points.

A robust, comprehensive approach to data collection and evaluation requires resources. Portland Street Response has benefited from contracting with Portland State University researchers and Street Roots to conduct its program evaluation, as well as from budgeting for gift cards to compensate community members for their time completing surveys and interviews.156 San Francisco’s SCRT secured additional grant funding to engage external research partners and complete a rigorous evaluation, including longer-term, post-crisis outcomes.157 Depending on where a program is housed and how it is funded, access to resources for data collection and evaluation may be more limited.158 For those designing and funding crisis response programs, this is an important piece to plan for early on.