Measuring Technical Exchange Networks (TENs) at MSH
GHKC's Guide to Monitoring and Evaluating Knowledge Management in Global Health Programs was developed to describe key components of KM activities and to help measure outcomes in learning and action. This resource is part of a series of case examples developed by GHKC members highlighting ways the Guide has been used and offering suggestions for future editions.
Contact: Luis Ortiz Echevarria, Manager for Knowledge Management and Learning, Management Sciences for Health (email@example.com)
Management Sciences for Health (MSH) supports online Technical Exchange Networks (TENs) for staff to share ideas and discuss a wide range of technical and cross-cutting topics, from maternal, newborn, and child health to monitoring and evaluation and gender. The TENs are intended to build staff capacity by increasing access to information, strengthening connections between globally dispersed staff, and catalyzing the application of knowledge. In 2015, MSH carried out an assessment that went beyond the routine annual review of the communities. The assessment explored different dimensions of a community of practice and used a mix of methods to answer key questions: Are community members satisfied with the technical content and discussions? Are community members using information exchanged in the community? Do community members see the added value of the TENs to their work and to MSH? The primary aim of the assessment was to demonstrate how the TENs add value to MSH; secondary aims were to understand why some communities work better than others and to develop a set of core practices in community administration.
By measuring the performance of the TENs in this way, we are speaking in a language familiar to our health technical colleagues. It gives us a lot more credibility in highlighting the role of the TENs in meeting the organization’s mission.
Management Sciences for Health (MSH) created Technical Exchange Networks (TENs) as a way for staff to exchange ideas and discuss a wide range of technical and cross-cutting topics, including maternal, newborn, and child health; infectious disease; leadership development; health systems strengthening; monitoring and evaluation; and gender. The TENs are intended to build staff capacity by increasing access to technical information, strengthening connections between staff working across different projects and countries, and catalyzing action by providing opportunities for participation and for putting knowledge into practice.
Evaluating KM Activities
In 2015, knowledge management staff at MSH carried out an assessment of the TENs to explore different dimensions of these communities of practice (CoPs): Are community members satisfied with the technical content and discussions? Are community members using information exchanged in the community? Do community members see the added value of the TENs to their work and to MSH?
The assessment used a mix of methods to answer these questions including an online survey, key informant interviews, web analytics, and content analysis. The aim of the assessment was to demonstrate how CoPs add value to MSH. A secondary aim of the assessment was to better understand why some communities work better than others and to develop a set of core practices in community administration.
Using the KM M&E Guide
MSH regularly monitors the number of community members, the number of discussions and posts, and the number of countries represented. Using indicators from the Guide to Monitoring and Evaluating Knowledge Management in Global Health Programs (KM M&E Guide), the team designed an assessment to look at other indicators of a successful CoP, including user satisfaction, engagement, learning, and action. The assessment also included questions on less tangible dimensions of a community, for example whether the TENs provide a safe space for honest discussion or whether they reduce duplication of effort. By measuring the performance of the TENs in this way, we are speaking in a language familiar to our health technical colleagues. It gives us a lot more credibility in highlighting the role of the TENs in meeting the organization’s mission.
The table below shows some of the indicators that MSH used and adapted for this assessment.
Future Suggestions for the Guide
The appendix on measuring CoPs is fairly well developed. At MSH, it was important for stakeholders to see the difference between the web analytics, output, and outcome indicators that are relevant to CoPs. Particularly important was the difference between absolute numbers and proportions, for example pairing the number of posts with the proportion of discussions, and the number of users with the proportion of active users.
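The pairing of absolute counts with proportions can be made concrete with a small calculation. The sketch below is illustrative only: the function name, field names, and all figures are hypothetical and do not come from MSH's actual analytics.

```python
# Hypothetical example of reporting each raw count alongside its
# corresponding proportion, as described above. All figures are invented.

def community_metrics(total_members, active_members,
                      total_discussions, discussions_with_replies):
    """Return paired count/proportion indicators for one community."""
    return {
        "members": total_members,
        "active_member_rate": active_members / total_members,
        "discussions": total_discussions,
        "discussion_reply_rate": discussions_with_replies / total_discussions,
    }

metrics = community_metrics(
    total_members=250,
    active_members=60,
    total_discussions=40,
    discussions_with_replies=28,
)

# Report counts and proportions together rather than in isolation.
print(f"{metrics['members']} members, "
      f"{metrics['active_member_rate']:.0%} active")          # 250 members, 24% active
print(f"{metrics['discussions']} discussions, "
      f"{metrics['discussion_reply_rate']:.0%} with replies")  # 40 discussions, 70% with replies
```

Reporting the pairs side by side guards against misleading headline numbers: a community with many posts but a low proportion of active users tells a very different story than the raw count alone suggests.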
Below are a few suggestions for improving this section:
- Telling the right story right. Evaluators must be guided to ask the right questions about the return on investment of KM activities. Using recognized frameworks and methods to assess KM interventions is important; it makes stakeholders more receptive to KM interventions and helps facilitate improvements to KM activities that are informed by meaningful user feedback and performance data.
- Each community is unique. Remind evaluators and stakeholders that not all communities function in the same way, nor are they intended to do the same things. It is critical to discuss with stakeholders exactly what they want to learn from the assessment, what they want to test, and what they want to change based on the data. This allows for an informed, streamlined set of survey questions or a focused questionnaire.
- Examples of adapted questions. Provide a list of the specific indicators that can be used for CoPs and examples of how questions can be adapted. It may also be helpful to include sample composite questions or Likert scales that have been tested.
- Measurement across agencies. Given the importance of connecting KM practitioners in global health through communities and networks, we may want to agree on a set of sentinel indicators for global health CoPs that can be compared across agencies.
- Further develop initial outcome level indicators. While any indicator may play an important role for an agency, we found that demonstrating the added value of the TENs was more effective the closer we got to measuring initial outcome level indicators. These should be further developed in the guide.
- Always use a mix of assessment methods. The indicators in the survey were a powerful tool for understanding how the TENs work. However, using a mixed-methods approach enabled us to get richer, more nuanced answers and ultimately to better understand the survey results.
The TEN Assessment was designed, implemented, and analyzed with support from Luis Ortiz Echevarria, Sara Holtz, Dan Brame, Willow Gerber, Billy McGilvray, Marlene Mouanga, and Lorine Ghabranious.