NOMCOM: measuring impact

The question that nags me possibly more than any other at PS1 is: how do we know how we’re doing?

The short answer is, we don’t. PS1 makes no formal attempt, or really even an informal one, to gauge our success against our stated mission. We are drowning in anecdotes but operating in a near desert of actual data.

Well, not quite. One of the advantages we enjoy over a lot of makerspaces in this regard is that we are 100% member-supported. So at the very least, we know how fast we are growing, and we can treat membership numbers as a rough proxy for success. This sets us apart, for example, from a maker lab that is part of a public school and serves a fixed population.

Still, this isn’t much to go on. For starters, topline growth numbers mask some worrying underlying trends, such as a high churn rate. And while it is comforting that members find enough value in PS1 to continue to pay their dues, this isn’t quite the same thing as measuring impact. Are people actually making things at PS1? Can anyone be successful at PS1, or does the space only really serve certain members? Is the community at PS1 healthy? Are we getting better or worse over time?

So I was looking forward to the Measuring Impact session. Unfortunately, the session ended up being tilted fairly heavily toward grant-supported organizations, which in retrospect makes sense: they care about measuring impact because they need to prove their worth to funders. That public school maker lab wants to justify its existence by tracking the change in self-reported “STEM identity” of its student population. PS1 doesn’t share these concerns.

Still, there were useful takeaways from the session, and the session notes contain links to some helpful resources. (If you’re looking for a single overview article to read on measuring social impact, this one is a decent place to start. I also picked up this book.)

A lot of the advice around measuring impact has a common-sense aspect:

  • Start with the mission itself. Make sure you are clear on your objectives before you try to measure results.
  • Pick metrics that are relevant. How well do the metrics actually capture the outcome you are interested in?
  • Keep it simple. Data collection should be relatively easy and the results should also be easy to explain.
  • Iterate often. Trend lines are more valuable than individual data points.
  • Use the data. The data doesn’t matter if it doesn’t inform decision making.

This may be commonsensical, but that doesn’t necessarily make it easy. It’s trivial for us to track the number of members at PS1. Measuring PS1’s local economic impact, on the other hand? Good luck with that.

The truth is, though, that I don’t actually spend much time thinking about the local economic impact of PS1. It’s not really actionable information for me, nor does it seem central to our mission.

So what metrics and outcomes should we be focusing on? My short list would look something like this:

  1. Member growth
  2. Member churn
  3. Member satisfaction
  4. Space utilization
  5. Member engagement

The first two metrics, growth and churn, fall straight out of our member database, although the truth is that this data has been surprisingly hard to get at in the past. The changeover to Wild Apricot will help a lot.
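
To make that concrete, here is a minimal sketch of what a monthly churn calculation might look like once join and lapse dates can be exported cleanly. The CSV layout and the join_date / end_date field names are assumptions for illustration, not Wild Apricot’s actual export format.

    # Minimal sketch: monthly churn from a hypothetical member export.
    # Assumes a CSV with one row per member and ISO dates in "join_date"
    # and "end_date" (a blank end_date means the membership is still active).
    import csv
    from datetime import date

    def monthly_churn(csv_path, year, month):
        """Return (active_at_start, cancellations, churn_rate) for one month."""
        month_start = date(year, month, 1)
        if month == 12:
            next_month = date(year + 1, 1, 1)
        else:
            next_month = date(year, month + 1, 1)

        active_at_start = 0
        cancellations = 0
        with open(csv_path, newline="") as f:
            for row in csv.DictReader(f):
                joined = date.fromisoformat(row["join_date"])
                ended = date.fromisoformat(row["end_date"]) if row["end_date"] else None
                # Member was active going into the month...
                if joined < month_start and (ended is None or ended >= month_start):
                    active_at_start += 1
                    # ...and lapsed before the month ended.
                    if ended is not None and ended < next_month:
                        cancellations += 1

        rate = cancellations / active_at_start if active_at_start else 0.0
        return active_at_start, cancellations, rate

    # Example: churn for a single month from a hypothetical export file.
    # print(monthly_churn("members.csv", 2018, 6))

The point isn’t the code; it’s that once the export exists, the metric is a one-liner to explain: of the members active at the start of the month, what fraction lapsed before the end of it?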

Member satisfaction remains a mystery, one that I hope to address through an upcoming member survey. Plenty more on that to come.

The survey will also shed some light on space utilization, although there are lots of people in the broader maker community attacking this problem through tooling. RFID-based systems for unlocking tools are one source of such data. There are also passive — and anonymous — data collection approaches that rely on equipment monitoring or motion sensing to measure space utilization.
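
For illustration, here is a rough sketch of how RFID unlock logs could be turned into a simple utilization number. The log format, the tool names, and the assumption of round-the-clock access are all hypothetical; any real system would have its own schema.

    # Sketch: rough weekly utilization per tool from hypothetical RFID unlock logs.
    # Each log entry records a tool name plus session start/end timestamps (ISO format).
    from collections import defaultdict
    from datetime import datetime

    LOG = [
        ("laser_cutter", "2018-06-04T18:00", "2018-06-04T19:30"),
        ("laser_cutter", "2018-06-05T20:00", "2018-06-05T21:00"),
        ("table_saw",    "2018-06-06T17:15", "2018-06-06T17:45"),
    ]

    HOURS_PER_WEEK = 24 * 7  # assumes members can use the space around the clock

    def weekly_utilization(log):
        """Fraction of the week each tool spent in use."""
        in_use = defaultdict(float)
        for tool, start, end in log:
            session = datetime.fromisoformat(end) - datetime.fromisoformat(start)
            in_use[tool] += session.total_seconds() / 3600
        return {tool: hours / HOURS_PER_WEEK for tool, hours in in_use.items()}

    print(weekly_utilization(LOG))  # e.g. {'laser_cutter': ~0.015, 'table_saw': ~0.003}

Even a number this crude starts to answer questions like which tools are actually in demand, without identifying who was using them.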

Of course, these techniques can throw off a lot of data, and it is important to loop back to the principles outlined above: what are the relevant metrics? How can they be made simple to explain? What decisions will they inform?

Member engagement is an interesting puzzle. There are data-driven approaches possible here, such as measuring activity on message boards and social media. Perhaps more important, though, is the question of volunteerism and member participation in the process of running PS1. Conventional wisdom at PS1 is that our volunteer rates are low and that this is a problem. Some actual data here would be welcome.
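
As one example of the data-driven angle, here is a sketch of a simple engagement metric: the share of current members who have posted on the forum recently. The (author, timestamp) pairs and the member list are placeholders for illustration, not an existing PS1 data source.

    # Sketch: share of current members who posted on the forum in the last 90 days.
    # Assumes a forum export reduced to (author, timestamp) pairs plus a member list.
    from datetime import datetime, timedelta

    def active_poster_share(posts, members, days=90, now=None):
        """Fraction of current members with at least one post in the last `days` days."""
        now = now or datetime.now()
        cutoff = now - timedelta(days=days)
        recent_authors = {author for author, posted_at in posts if posted_at >= cutoff}
        return len(recent_authors & set(members)) / len(members) if members else 0.0

    # Hypothetical inputs for illustration.
    members = ["alice", "bob", "carol", "dave"]
    posts = [
        ("alice", datetime(2018, 6, 20, 12, 0)),
        ("bob",   datetime(2018, 3, 1, 9, 30)),   # too old to count as recent
    ]
    print(active_poster_share(posts, members, now=datetime(2018, 7, 1)))  # 0.25

The same shape of calculation would work for volunteer sign-ups or event attendance, which is probably where the more interesting engagement story lives.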

Perhaps the most interesting question is what is missing from this list. In particular, it feels light on the topics of community health and the quality of actual making at PS1. One of our stated goals is to “foster a creative, collaborative environment for experimentation and development in technology and art.” How are we doing?

I don’t have an immediate answer, although there are resources out there that may be relevant to the question. For example, the Community Canvas is a framework for assessing and building meaningful communities. Traditionally we have treated the community at PS1 as something that just happens rather than something that needs to be tended. But we’ll never really know how well that approach is working unless we attempt to measure it.
