The spinal cord injury (SCI) peer support evaluation tool: the development of a tool to assess outcomes of peer support programs within SCI community-based organizations

Design

This study used and adapted the four-step process for selecting a core outcome set developed by the Core Outcome Measures in Effectiveness Trials (COMET) initiative and the Consensus-based Standards for the selection of health Measurement INstruments (COSMIN) initiative [9]. The four-step process has been used to develop outcome measurement instruments (OMIs) for various disability groups [11] and for programs delivered in community settings [12]. Importantly, according to the COSMIN guideline, OMIs are not homogenous in structure and can include various measurement approaches (e.g., single-item measures, questionnaires, a score obtained through physical examination, a laboratory measurement) [9]. These steps provided a methodological process for identifying and selecting outcomes that would be important for a community-based evaluation tool for SCI peer support programs. Appropriate methodologies were used for each step: Delphi consensus, measurement literature review, quality rating and community consensus methods, think-aloud interviews, and test-retest reliability testing. Ethics certificates were obtained for the steps involving participant data collection (McGill REB file #: 21-11-011). Participants provided informed consent before data collection.

This work was conducted by a community-university partnership. We used the integrated knowledge translation guiding principles for SCI research to guide this partnership [13]. The roles of each team member across research phases are presented in Supplemental 1.

Procedures

Step 1. Conceptual considerations

Prior to selecting outcomes of SCI peer support programs delivered by community-based organizations, an understanding of the outcomes relevant to this context was needed. In two separate papers, our group identified 87 outcomes relevant to SCI peer support through a meta-synthesis and a qualitative study among peer support users [14, 15] (the 87 outcomes are listed in Supplemental 6). The peer support outcome model developed from the meta-synthesis also guided this study to ensure that each category was represented by at least one outcome (Fig. 1). We then conducted two Delphi consensus studies, one among peer support users (Delphi 1) and one among peer support program coordinators and directors (Delphi 2), to identify the most important outcomes for SCI peer support. Detailed information on the Delphi methodology is available in our previous publication [16].

Fig. 1: Outcomes identified and selected, organized by a peer support outcome categorization.

Note. Figure adapted from Rocchi et al. [14], reprinted by permission of the publisher (Taylor & Francis Ltd, http://www.tandfonline.com).

Step 2. Finding existing outcome measurement instruments

To reduce the burden on peer support users/peer mentees who will be completing the core outcome set (hereafter referred to as the SCI Peer Support Evaluation Tool), our partnership decided to use a single-item measure for each outcome. This decision aligns with a recent call for single-item measures in time-restricted conditions to facilitate responding and to reduce data-processing costs [17], both of which are important considerations for community-based organizations. For the outcomes identified in Step 1, two researchers (ZS, OP) independently searched online databases (e.g., Neuro-QOL, NIH Toolbox, PROMIS, ASCQ-Me, PsycINFO, MEDLINE) for validated single-item and multi-item measures for each outcome (see Supplemental 2). ZS and SS also searched surveys used by our partnered community-based organizations to identify items related to the outcomes. For each outcome, ZS and OP each selected validated multi-item and single-item measures that aligned with the outcome definitions, which were informed by the data from the meta-synthesis [14].

Step 3. Quality assessment of outcome measurement instruments

Four researchers (SS, KMG, OP, ZS) independently reviewed and rated the 97 items on their conceptual alignment with the respective outcomes and definitions, using a four-point scale ranging from 0 (Does not match the definition) to 3 (Greatly matches the definition). When multiple items were rated as “3” for the same outcome, the researchers marked with an asterisk the item they felt was the best conceptual match. The four researchers then summed the rating scores and met to select the top two items for each outcome. Next, four community-based partners (CM, TC, HF, SC) and two researcher partners (HG, VN) rated the remaining items using the same scale and procedures. The item that received the highest sum score was retained for each outcome. In some instances, items received the same sum score for an outcome or ratings were inconsistent across team members (CM, TC, HF, SC, HG, VN). The team held two online meetings to discuss and decide on the best-matching items for these outcomes. The team also identified outcomes that were conceptually overlapping or not relevant. As a result, the team removed overlapping item(s) and made minor wording modifications to fit the goal of the SCI Peer Support Evaluation Tool (see Fig. 2 for the flow chart). Through the team meetings and follow-up email exchanges, we created a preliminary version of the SCI Peer Support Evaluation Tool consisting of the remaining outcomes and items.
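The sum-and-rank selection step can be illustrated with a brief sketch (the input file and column names below are hypothetical; only the 0–3 rating scale and the sum-score ranking reflect the procedure described above):

```python
# Illustrative sketch of the item-rating aggregation described above.
# The input file and column names are hypothetical; only the 0-3 rating
# scale and the sum-score ranking reflect the procedure in the text.
import pandas as pd

# Long-format ratings: one row per (outcome, item, rater) with a 0-3 score.
ratings = pd.read_csv("item_ratings.csv")  # columns: outcome, item, rater, score

# Sum the researchers' scores for each candidate item.
summed = ratings.groupby(["outcome", "item"], as_index=False)["score"].sum()

# Keep the top two items per outcome for the partner rating round.
top_two = (
    summed.sort_values(["outcome", "score"], ascending=[True, False])
    .groupby("outcome")
    .head(2)
)

# Flag tied sum scores within an outcome; ties were resolved in team meetings.
ties = top_two[top_two.duplicated(subset=["outcome", "score"], keep=False)]
print(top_two)
print("Tied items needing discussion:", ties["item"].tolist())
```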

Fig. 2: PRISMA-inspired flow chart for item identification.

Step 4. Generic recommendations on the selection of outcome measurement instruments

We consulted with Executive Directors/Chief Executive Officers (CEOs) and Peer Support Program Coordinators from ten Canadian provincial community-based SCI organizations to discuss the preliminary version of the SCI Peer Support Evaluation Tool. The Directors/CEOs and Coordinators provided feedback, via an online survey, on the relevance, appropriateness of language, clarity, specificity/unambiguity, and unintended adverse effects of each item [Supplemental 3]. They also participated in a 2-h online consultation meeting, during which items that received lower ratings were discussed in breakout rooms and large-group discussions to determine how they should be modified. Our team then met to modify the items for each outcome to ensure content validity.

Next, we assessed the face validity of responses and the test-retest reliability of the evaluation tool items based on recent recommendations [17]. For face validity, we used a think-aloud method [18]; for test-retest reliability, we used a 10-day recall period. Adults with SCI who had received peer support from the partnered community-based SCI organizations (SCI BC, SCI Saskatchewan, SCI Ontario, Ability New Brunswick) were recruited. Participants first responded to the items with a researcher present in an online meeting. The researcher prompted participants to voice their thoughts to capture their understanding of, and approach to answering, each item. Once participants had answered all the questions, they were asked to reflect on the process of responding to the evaluation tool and provide feedback [Supplemental 4: think-aloud interview guide]. Ten days later, participants were asked to complete the evaluation tool again without thinking aloud. All interviews were recorded and transcribed. Participants’ utterances for each item were pulled from the transcripts. Six researchers (including two people with SCI who were also peer supporters/mentors) (a) rated the correspondence between participants’ responses and the outcome definitions, (b) in smaller groups, discussed the main issues (e.g., clarity) and suggested changes for items with poor correspondence ratings, and (c) met as a larger group to reach consensus on the suggested changes before bringing them forward to the partnership. Reliability of participants’ responses at the two time points was tested using intraclass correlation coefficients (ICCs). See Fig. 3 for an overview of the SCI Peer Support Evaluation Tool development procedures.
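The analysis software is not specified here; as a minimal sketch, assuming the two time points' responses are stored in long format, per-item ICCs could be computed as follows (the file, column, and variable names are hypothetical, and pingouin's intraclass_corr is only one of several ways to obtain ICCs):

```python
# Minimal sketch of the test-retest reliability analysis described above.
# File, column, and variable names are hypothetical; pingouin's
# intraclass_corr is one common way to obtain ICCs in Python.
import pandas as pd
import pingouin as pg

# One row per (participant, item, time_point) with the item response.
data = pd.read_csv("evaluation_tool_responses.csv")

results = []
for item, item_data in data.groupby("item"):
    icc = pg.intraclass_corr(
        data=item_data,
        targets="participant",  # the people being measured
        raters="time_point",    # time 1 vs. time 2 treated as "raters"
        ratings="response",
    )
    # ICC2: two-way random effects, absolute agreement, single measurement.
    row = icc.loc[icc["Type"] == "ICC2"].iloc[0]
    results.append({"item": item, "ICC": row["ICC"], "CI95%": row["CI95%"]})

print(pd.DataFrame(results))
```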

Fig. 3: SCI Peer Support Evaluation Tool development procedures.
