Last year at about this time I was knee-deep in survey redesign. I had joined an awesome project that had been conducting an annual survey for over 10 years. The surveyed parties are grantees in one of NSF's program streams. Nice as they are, they'd been quite vocal that the survey didn't meet their own evaluation needs, was difficult to complete, was too long, etc.

When I joined the project, I was put on the task of redesign. As a team, we cut over a quarter of the questions and replaced another 15 percent. A better layout trimmed several more pages (the survey was, like, over 25 pages to begin with). We'd even conducted three public forums with the grantees to get their input on what could be more useful. I felt pretty great when we released the revised version of the survey last winter.

Then I got the request from my boss – the one where he tasked me with completing part of the survey. (In a weird twist, because we too were funded by the NSF program stream, we also had to complete the survey. It's like the snake eating its own tail.) So I found myself trying to answer questions about the number of students we have. What? We don't have students – we serve the other grantees! Do I write in "0" or "n/a"? How many collaborations do we have with other organizations? Geez, it depends on what you mean by "collaboration." I'd like to think our partners find mutual benefit, but you'd really have to ask them.

In short, it was torturous. I had always known that the grantees were so diverse that making a survey applicable to everyone would be difficult. But I hadn't even thought about our own work and how we would answer these questions. Half of me wants to throw the whole thing away, and the other half wants to be content knowing no instrument is perfect. But when staying in the congressional budget depends on these data and the project has had over 10 years to get it right, we ought to be a bit closer to perfection.