
Don’t Even Try

I love being on the other side. I am in the midst of reviewing letters of interest – miniproposals – from evaluators hoping to evaluate one of my work projects. Rarely am I in the position of needing the evaluator; usually I am the one submitting my ideas and credentials. The pile sitting in front of me holds an incredible range of quality. For some, I am honored that they would be interested in working with us. For others, I am reminded of a mistake I made early in my professional evaluation career.

I was hired onto a grant, which had proposed to evaluate a community initiative, after the proposal was accepted and the funding had landed. My team was geeked, particularly because the local community initiative had been so successful that other cities were adopting the model. We saw this rapid replication as an opportunity – perhaps even as a meat market. Hmmmm, which one of these pretties shall we go after? We, naturally, went for the largest, the richest, the most popular options and courted those community leaders around the country. We submitted evaluation proposals to them that were all basically the same, with some selective search-and-replacing. At the time, I had never actually written an evaluation proposal, and I use my naivete as an excuse, thankyouverymuch.

When the first rejection letter was returned to us, I was devastated (I mean, I cried. First rejection.) It was from Denver. And their chief complaint was that the proposal didn’t reflect an understanding of the Denver context. We had talked about this particular community initiative being so necessary because the larger community of Fill-In-The-Blank was a waning industrial center that needed revitalization. Hello? Been to Denver lately? That’s not them at all. They were right to reject us. We should have done more homework before submitting that proposal.

The same mistakes are sitting in front of me: boilerplate language that shows no evidence of even trying to understand who we are and what we do. While this might seem like an easy strategy (and who knows, one of the 400 letters sent out might actually land a job…), one shouldn’t be surprised by rejection. Just like with the guy who sidles up to me at the bar, I am thinking in my head, “don’t even try.”

Vocabulary Quiz

This post has been a long time coming.

In the not-so-distant past, I tried to publicly criticize (I know, I know…) how the authors of an evaluation book mis-taught formative and summative evaluation. Not such a big deal if they are personally in error, but a much larger offense when they publish it. As a brief review:

Formative: When evaluation findings are used, typically internally, to make improvements to the organization. As Stake put it, “when the cook tastes the soup.”

Summative: When evaluation findings are used, typically externally, to make judgments about the organization. As Stake put it, “when the guests taste the soup.”

The authors in question tried to establish that formative means an evaluation looks at the activities of an organization, while summative means the evaluation looks at the impacts of those activities. Of course, this is not the case. Evaluative information about the impacts of an organization can be used to make judgments, yes (that’s summative), but it can also be used to make improvements to the organization (that’s formative). So the authors were incorrectly conflating why organizations do evaluation (formative or summative) with the organizational areas an evaluation can examine (activities v. impacts).

My rant about this mistake began with “These ‘experts’…” and ended with “…and make twice as much as me.” (In other words, a typical tirade from me.)

But my listeners shut it down. They agreed that I was correct, but condemned my urge to be so public in my critique, saying something to the effect of “a lot of people make this same mistake.” I am fairly sure the larger mistake is to let such misconceptions go unchallenged.

And now you have had your vocabulary lesson for the day. It might make you smarter than your average evaluator.

Nix the Table of Contents

If the evaluation report is so long it needs a table of contents, you know you have gone too far.

I have been researching the communication of evaluation findings in preparation for an upcoming webinar on the topic and because I have a horse I’m currently riding called How Not to Annoy People with Evaluation. Experts in the field rarely say much about communicating findings. Those who do give decent attention to getting the right stakeholders at the table and even think about different ways to display findings. But invariably, the evaluation seems to produce a written report. Many evaluation budgets aren’t large enough to rework the written tome into brochures, newsletters, and interpretive dance routines that tailor the findings to different audiences. We’re often stuck with the written report.

So then why do we torture the readers with dozens of pages of inane technical information before getting to the findings? (Rhetorical. I think I have an answer for another blog post.)

Reports 200 pages in length are not useful. Plain and simple. The narrative and graphics must be concise and to the point. I was sitting in a meeting at a local foundation about two weeks ago, with two foundation folks in the room, representing different institutions. They were lamenting, as we all do, not having enough time to fully catch up on every activity of their grantees. They pinpointed annual reports, saying even executive summaries can be too long (and I recently read an “expert” in evaluation advise an executive summary of 4 to 20 pages!), and then they begged to the ether, “Bullet points! Please, bullet points!”

To make evaluation useful, we must stop producing documents that better serve as doorstops. One good sign: if you have to create a table of contents, you have too many pages.