
Melissa Chiappetta is an international development, education and evaluation expert who is the Founder and CEO of Sage Perspectives. She recently developed a new measurement and evaluation training program with SVP Denver to help local social mission organizations improve the way they collect and use data. We sat down with Melissa to learn more about her work and how nonprofits can make sure time spent on evaluation drives real results. 


What interested you in taking on this leadership role as the trainer for this evaluation cohort?

I recently moved back to Colorado and had been looking for ways to get more involved in the community. I have always liked volunteering, but when I was in DC I was working too many hours to do it. This cohort seemed like a great opportunity to help a lot of organizations at once. And I often feel that with volunteering you get more out of it than you offer, but when you have specific expertise to bring, it is really useful. So I was excited about that.


What is the most important takeaway you’d like the social mission organizations to get from this experience? 

I think evaluation is often seen as a box we check or something we do in order to report metrics to our board members or stakeholders. It is less often seen as a tool for really identifying how you can improve your impact. I know the cohort participants are all mission-driven organizations focused on making a significant impact in the community. I hope they will come to see measurement and evaluation as an opportunity to identify how to improve and increase the impact they are able to have in the community.


What do you feel that most social mission organizations “miss” when it comes to authentic evaluation of their efforts and impact?

A lot of times we measure inputs and outputs, but we are not measuring outcomes and impacts, so we miss the opportunity to really learn what is making a difference. And oftentimes we don’t take the time to understand the hows and whys behind the outcomes and impacts, and that kind of understanding is what helps organizations adapt their programming and make change. So I think focusing more on those higher-level outcomes and impacts, and following up with qualitative measurement, is really important.


What is the best way to measure qualitatively?

After I have identified the outcomes and impacts, I like to follow up with key stakeholders, including the beneficiaries, other organizations that work in the field, or government officials. I then ask them about the hows and whys behind the outcomes, along with what they think has worked particularly well and what has not. I like to dive deep into understanding that background.


What have you learned or what has inspired you thus far during your interactions with the nonprofits in the cohort?

It’s been really interesting for me to see how much measurement some of the organizations are already doing, and to hear the great ideas they have about where they want to take that measurement in the future. The organizations tend to be pretty small, so it is great to see that they already have some robust measurements in place.

They are all doing great work, and it has been great to hear what they are doing in the community. One of the cohort participants, Convivir, showed that they had really thought through what to measure, and I think it is to Convivir’s benefit that they are starting from scratch. The organizations with existing measurements are now trying to figure out “what do we do with what we have, and how do we fit what we are learning into what we already have?” And I think that is a little more complicated.


So, what would take them to the next level?

I think it’s really about looking at evaluation as an opportunity to learn and adapt programming. Part of that is focusing on the key learning questions rather than measuring everything, because there are so many things you could measure. I think it is best for them to identify where they can have the most impact with the small amount of dollars they have to spend on evaluation, by looking at which measures could have the most impact on their programming. Narrowing down their measurements is something the organizations are really struggling with and should continue to have conversations about. I’m hopeful that they will come out with a clear understanding of how to best focus their efforts.


What else would you like to share about your experience with the evaluation cohort?

One thing you may hear from the participants, too, is that it is a lot of material to cover in five sessions. So this evaluation cohort pilot will be a great opportunity to hear back from the organizations about what is working well for them and what is not, so we can identify how to make this work really well for other organizations moving forward.

And, of course, we are evaluating our evaluation cohort: we are doing a pre- and post-test, and we are planning a focus group discussion and some follow-up interviews to figure out what worked well and what didn’t. It is so important. One of the things I’ve encouraged the cohort participants to do is to prioritize pilot projects and new initiatives in their evaluation efforts, because those are the efforts where you really want to know whether or not they had the impact you expected.

In leading this cohort, I also observed that the organizations have needs for evaluation assistance beyond what we were able to cover in five weeks. I believe they could use more feedback on their results frameworks and on where they might have gaps in what they are measuring. That goes a bit beyond the scope of this particular cohort, so I would say the participants are off to a great start, but they could always use more feedback and assistance.