Why Proposals Fail

Summer, as you know, is proposal season. I’ve been up to my neck (literally – these proposals are huge) in stacks of papers, reviewing ideas seeking support from various federal agencies. Regardless of the agency, proposals tend to fare poorly for the same common reasons. Here’s my breakdown (and strictly mine – the weaknesses I identified were not always shared by my fellow panelists) of why proposals fail, in no particular order:

1. The evaluation plans don’t clearly match the project’s goals and objectives. If the project is seeking to change the consumer experience but the evaluation is only looking at production of the consumer good, it will never be able to tell whether the project has met its goal. This could mean a review of the evaluation plans OR a revision to the project’s goals and objectives.

2. The evaluation is not evaluative. No targets or performance standards are set. The way the evaluation is structured will only allow the evaluators, in the end, to say descriptive things about what the project did – not how good or worthwhile it was.

3. Experimental designs typically, and surprisingly, lacked a power analysis to establish how many participants are needed and whether the project’s recruitment targets will get there. In the era of accountability – and at a time when technology allows us to see ahead of time where we should focus our efforts – there is no excuse for a missing power analysis, at least in those designs where it is called for. (A minimal sketch follows this list.)

4. Letters of support were clearly written by project staff and cut-and-pasted by the supporters. Letter content was identical, save for the letterhead and signature line. I know it is unrealistic, in most cases, to expect supporters to draft their letters from scratch. However, the letters I saw frequently left out key responsibilities of the supporting organizations. For example, if the school district will need to commit to providing control condition classrooms, where no benefit from participation will be derived, that needs to be clearly agreed to up front. The danger is looking like your evaluation isn’t well planned and hasn’t been thoroughly communicated to all parties.

5. The evaluation organization appears to have the collective experience necessary, but the specific individuals assigned in the proposal have no direct, relevant experience with the tasks on the table. Too much narrative space is spent defending the established history of Evaluation Consultants, LLC, particularly when the actual evaluation staff bios – buried in the appendices – are weeeeeeeak.

6. It pains me to even have to write this one – but sometimes I saw proposals that did not yet have an evaluator identified. Sheesh! It is okay, Principal Investigators, to contact an evaluation team during proposal development and ask them to help you draft your evaluation plan. They will probably even help you write the evaluation section of the proposal. You might want to draft a memorandum of understanding that ensures they will be selected as your evaluator, should the award be granted. In the evaluation business, most of us are used to devoting a little free time to writing up plans that may (or may not) be funded down the road. It is part of our work, and it is okay to start talking to your evaluator the moment you start thinking about your program. In fact, it is highly encouraged. What? You don’t have one now? Go forth!
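For the curious, here is a minimal sketch of the kind of power analysis I mean, using Python’s statsmodels library. The effect size, power level, and enrollment figure are hypothetical placeholders – swap in whatever your pilot data or the literature actually supports.

```python
# A minimal power-analysis sketch using statsmodels. The effect size (d = 0.5),
# alpha, power, and enrollment figure below are hypothetical placeholders.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# How many participants per condition to detect d = 0.5 with 80% power at alpha = .05?
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"Recruit at least {round(n_per_group)} participants per condition.")

# The same tool runs in reverse: given the recruitment you can realistically
# achieve, how much power does the design actually have?
achieved_power = analysis.solve_power(effect_size=0.5, nobs1=40, alpha=0.05)
print(f"With 40 per condition, power is only about {achieved_power:.2f}.")
```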

Okay, there it is – the top six reasons I saw proposals fail this summer. I’m hoping next year it will be a totally different bag. Did you see something different? Post it in the comments!

Oil In My Backyard

Right now, the Midwest’s largest oil spill in history is flowing through my backyard. The pipeline, carrying oil from Indiana to Canada, burst sometime Sunday or Monday of this past week (3-4 days ago), sending roughly 840,000 gallons of oil into a creek that flows to the Kalamazoo River, which flows to Lake Michigan. As of this writing, the oil has been spotted just past a nearby dam, right outside of Kalamazoo, where state workers are doing their best to clean it up before it fills my town and heads to the lake.

On the heels of the disaster in the Gulf, the community is hyper angry and action-oriented. Their questions are these:

1. Do we have the resources to stop the spill before it reaches the Great Lakes?

2. How much oil are we talking about here?

and

3. How in the hell did this happen?

Due to my disposition, I immediately saw these as the evaluation questions. In fact, these seem to be the most common evaluation questions of all time: What was the impact? To what extent? and What was the cause?

In the case here,

1. Yes, we have tons of resources. The oil spill hotline has turned down volunteers, saying it has had an overwhelming number of calls. We can stop this thing, if they’ll let us (outcome). But the leader of the cleanup is the very company that owns the pipeline and, like BP, they are keeping others at bay (unexpected consequence).

2. New estimates from the EPA raise the total to 1 million gallons (output) – enough to fill a football field two feet deep and then a lot more. This illustrative description comes courtesy of the Freep and demonstrates another skill needed by evaluators: describing the extent of the impact in a way that a wide audience can understand.

Freep photographer Andre J. Jackson also snapped this picture, a necessary visual of the impact at the riverside:

Canada Geese covered in oil sit along the Kalamazoo River after a pipeline ruptured in Marshall on Tuesday. (ANDRE J. JACKSON/Detroit Free Press)

3. The cause? Enbridge Energy, whose PR-controlled Wikipedia page puts the spill at 19,500 barrels – roughly 820,000 gallons, still shy of the EPA’s estimate (the math is sketched below). Well, they are the guilty party, though maybe not the cause per se. The cause is really their shoddy internal evaluation. According to the aforementioned wiki page, they have had 610 spills in the last 12 years. 610! If that sort of error rate were allowed in schools or social service organizations, they’d be run out of town. No sound internal quality control would allow an average of 51 spills per year.
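For the skeptics, here is a quick back-of-the-envelope check of those figures in Python. The only assumptions are the standard 42 US gallons per barrel and a football field of 360 by 160 feet, end zones included.

```python
# Back-of-the-envelope check of the spill figures above.
# Assumptions: 42 US gallons per barrel; a 360 x 160 ft football field
# (end zones included); 1 US gallon = 231 cubic inches.
GALLONS_PER_BARREL = 42
CUBIC_FEET_PER_GALLON = 231 / 12**3

epa_estimate_gal = 1_000_000
enbridge_estimate_gal = 19_500 * GALLONS_PER_BARREL
print(f"Enbridge's 19,500 barrels is about {enbridge_estimate_gal:,} gallons, "
      f"or {enbridge_estimate_gal / epa_estimate_gal:.0%} of the EPA estimate.")

field_area_sqft = 360 * 160
depth_ft = epa_estimate_gal * CUBIC_FEET_PER_GALLON / field_area_sqft
print(f"One million gallons would cover that field about {depth_ft:.1f} feet deep.")
```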

Look, I don’t pretend to know how to evaluate disaster response. That’s my friend, Liesel Ritchie. But what I do know is that cause-probing is clearly a natural phenomenon because there are a lot of Kalamazooans who want to hold Enbridge’s feet to the fire. And I know that there is a time when one should go native – I’ll see you at the river.

Rachel Maddow Probably Loves Evaluation.

Without a TV in my home, I hadn’t been privy to the awesomeness that is Rachel Maddow until last night, in my hotel room in Little Rock. She was speaking with Richard Holbrooke, US special representative for Afghanistan and Pakistan. Maddow had recently visited Afghanistan to report on the front there, so of course the topic of the interview was the ongoing war. Why am I blogging about war in the context of evaluation?

Maddow tells Holbrooke about the police forces she saw in Afghanistan, where, in January of THIS YEAR, they were exiting marksmanship classes with a 30-35 percent accuracy rate. Then the US worked out some magical intervention that, after only a few months, raised the exit accuracy rates to 95 percent. Our allied forces are now much better at killing people. Whew.

But the clincher was when Maddow, only half-rhetorically, asks her guest: What the hell have we been doing there for the last eight years???

How was the situation allowed to go on so long with such poor performance, at the expense of thousands of lives and billions of taxpayer dollars? Well?

A lack of evaluation got us there, of course. It was the simple fact that no one took the care to collect and/or use data on marksmanship accuracy.

But much like any social service intervention that promises a lot and shows early signs of success, we still have work to do. A single posttest is necessary but insufficient. Continued evaluation will be needed to demonstrate that the skill levels have been maintained two months, six months, one year down the road. The story is not yet complete, Ms. Maddow. Keep asking the hard questions.
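What might that continued evaluation look like in data terms? Here is a minimal sketch – with entirely made-up counts – of checking whether the posttest gain held at a six-month follow-up, using a two-proportion z-test from statsmodels.

```python
# A minimal sketch of the follow-up check argued for above. The hit/attempt
# counts are hypothetical; the question is whether accuracy has slipped
# between the posttest and a six-month follow-up.
from statsmodels.stats.proportion import proportions_ztest

hits = [95, 88]        # targets hit: posttest, six-month follow-up
attempts = [100, 100]  # shots attempted in each wave

# alternative='larger' asks whether the posttest proportion exceeds the
# follow-up proportion, i.e., whether skills have declined since training.
stat, p_value = proportions_ztest(hits, attempts, alternative='larger')
if p_value < 0.05:
    print(f"Accuracy appears to have slipped since the posttest (p = {p_value:.3f}).")
else:
    print(f"No evidence of decline yet (p = {p_value:.3f}).")
```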

To view part of the interview: http://www.msnbc.msn.com/id/26315908/

You Can’t Always Get What You Want

Reading a decent book, I came across a great point:

The book is Start With Why and it really labors on the fact that great businesses lead with their WHY right out front, not their WHAT (or their HOW). Okay, but how does this relate to evaluation?

The author discusses how laundry detergent brands had forever been promoting how their formulas got clothes “whiter than white.” They had, smartly, conducted focus groups and asked people what they wanted out of a great laundry detergent, and “whiter than white” was the answer. But when they rolled out that marketing campaign, competing with one another over which was whiter (that sounds weird), it didn’t have much of an effect on the consumer. Good thing they brought the scientists back in – anthropologists, to be exact – who studied people washing their clothes. That’s when they discovered – a ha! – the first thing people do when they pull a load from the wash is to smell it. Yep. The fresh smell was more important than the level of whiteness. (Now you know why that aisle is so ridiculously scented at the supermarket, with dozens of fragrance variations.)

Back to evaluation: Focus groups are so often the go-to resource for needs assessments. Close seconds might be surveys or interviews, but these are just other forms of self-report, where we are asking people directly about their needs. More often than not, what we actually end up with are their wants. As Jane Davidson calls them, unconscious needs are what we are really after when we are designing programs and interventions. Those unconscious needs are the ones people are least likely to be able to articulate, simply because we humans often lack self-awareness. Perhaps, like the anthropologists witnessing laundry day, we should be observing a great deal more than we are asking.

Confession of a Qual Lover

Sigh. For better or worse, I am known as the qualitative person around here. While I do, admittedly, have a strong fondness for qualitative work, I know better than to think it is the right tool for every job. Yet my reputation precedes me, and I’m often asked to defend qualitative work in the tired quant-qual debate that so bores me.

Mostly, I am sick of this debate because people expect me to ride the party line and I just can’t do it. The party line (for both quant and qual) is so full of assumptions about ontology and epistemology that I end up disagreeing with the side I am supposed to be defending and that sort of takes the adversarial fun out of the debate. The main issue is this:

Qualitative work is often described as indivisible from constructivism (the perspective that reality is constructed by individuals and so multiple realities exist and evaluation should represent them) and inductive reasoning (the use of case examples to frame a study that builds to a theory). While those are nice ideas for polarizing and contrasting, I don’t buy it.

I believe there can be no such thing as true inductive reasoning among paid evaluators. According to the textbook definition, that process is one in which the researcher has no working theories guiding data collection or analysis; the theory comes from the data. Yet this is much like saying there is such a thing as value-free inquiry. Even the choice of research topic in an exploratory orientation still posits that “there is something going on in X,” which is a theory (albeit with a lowercase “t,” perhaps). Though true grounded theory researchers try to go through analysis without existing theory to guide them, (1) it’s like saying their research experience and expertise in the topic can be ignored, and (2) grounded theory is itself a theory that knowledge can be generated through this process. In other words, we wouldn’t be hired for the job if we weren’t bringing something valuable (other than a method) to the table, and inductive reasoning doesn’t fit well with that.

More frustrating than my position as the token qualitative person is the rampant oversimplification of beliefs, as if we could collapse the world into two opposing camps and never the twain shall meet. That sort of either/or, only-one-can-be-right thinking is exactly the kind of debate designed strictly to annoy qual lovers like me.

If It Ain’t Broke

I found the cutest old-man optometrist. He puttered around the room, in cute old man fashion. He had a little cute old man mantra: “if it ain’t broke…”

Him: Are your contacts working okay for you?

Me: Sure, I guess.

Him: Well, if it ain’t broke…

Me: But aren’t you going to check my eyes???

He eventually did. But he must have repeated his mantra three or four more times during our appointment together.

It was while I was waiting for my eyes to dilate that I realized how “if it ain’t broke…” might be the worst phrase for an evaluator to hear. Why wait until things are broken to start fixing them? Waiting until things are broke means enduring a period of decline, a period of broken-ness, and a period of rebuilding to get things back to the same operating level as before. That sort of downtime hits an organization’s productivity, effectiveness, and bottom line. When there are clear patterns and signposts established (especially in the eyecare industry), it is far more efficient to watch for those early warning signals and take action than to wait until it is broke. This is where evaluators, who are good at pattern recognition, earn their keep.
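To make that concrete, here is a minimal sketch of watching the signposts instead of waiting for “broke.” The metric, readings, and threshold are all hypothetical – the point is simply that a trend can raise a flag well before the number that matters actually fails.

```python
# A minimal sketch of acting on early warning signals: fit a simple trend to
# periodic readings of a performance metric and flag it when the trend
# projects crossing a failure threshold within the coming year.
# The metric, readings, and threshold here are all hypothetical.
import numpy as np

readings = np.array([92, 90, 87, 84, 80])  # e.g., quarterly performance scores
quarters = np.arange(len(readings))
failure_threshold = 75                     # the point at which things are "broke"

slope, intercept = np.polyfit(quarters, readings, deg=1)
projected = slope * (quarters[-1] + 4) + intercept  # four quarters out

if projected < failure_threshold:
    print(f"Early warning: the trend projects roughly {projected:.0f} within a year.")
else:
    print(f"Holding at about {projected:.0f} for now; keep watching.")
```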

Now whenever I hear “if it ain’t broke…,” I cringe. Must be hard to examine my eyes that way.

Don’t Even Try

I love being on the other side. I am in the midst of reviewing evaluator letters of interest – miniproposals – to evaluate one of my work projects. Rarely am I in the position to need the evaluator. Usually I am the one submitting my ideas and credentials. The pile sitting in front of me holds an incredible range of quality. For some, I am honored that they would be interested in working with us. For others, I am reminded of a mistake I made early on in my professional evaluation career.

I was hired on to a grant, which had proposed to evaluate a community initiative, after the proposal was accepted and funding had landed. My team was geeked, particularly because the local community initiative had been so successful, other cities were adopting the model. We saw this rapid replication as an opportunity – perhaps even as a meat market. Hmmmm, which one of these pretties shall we go after? We, naturally, went for the largest, the richest, the most popular options and courted those community leaders around the country. We submitted evaluation proposals to them that were all basically the same, with selected search-and-replacing. At the time, I had never actually written an evaluation proposal and I use my naivete as an excuse, thankyouverymuch.

When the first rejection letter was returned to us, I was devastated (I mean, I cried. First rejection.) It was from Denver. And their chief complaint was that the proposal didn’t reflect an understanding of the Denver context. We had talked about this particular community initiative being so necessary because the larger community of Fill-In-The-Blank was a waning industrial center that needed revitalization. Hello? Been to Denver lately? That’s not them at all. They were right to reject us. We should have done more homework before submitting that proposal.

The same mistakes are sitting in front of me: boilerplate language that shows no evidence of even trying to understand who we are and what we do. While this might seem like an efficient strategy (and who knows, one of the 400 letters sent out might actually land a job…), one shouldn’t be surprised by rejection. Just like the guy who sidles up to me at the bar, I am thinking in my head, “don’t even try.”

Vocabulary Quiz

This post has been a long time coming.

In the not so distant past, I tried to publicly criticize (I know, I know…) how the authors of an evaluation book mis-taught formative and summative. Not such a big deal if they are personally in error, but a much larger offense when they publish it. As a brief review:

Formative: When evaluation findings are used, typically internally, to make improvements to the organization. As Stake put it, “when the cook tastes the soup.”

Summative: When evaluation findings are used, typically externally, to make judgments about the organization. As Stake put it, “when the guests taste the soup.”

The authors in question tried to establish that formative was when an evaluation looks at the activities of an organization. By contrast, they said summative was when the evaluation looks at the impacts of those activities. Of course, this is not exactly the case. For example, evaluative information about the impacts of an organization can be used to make judgments, yes (that’s summative), but can also be used to make improvements to the organization (formative, here). So the authors were incorrectly conflating why organizations do evaluation (formative or summative) with the organizational areas an evaluation can examine (activities v. impacts).

My rant about this mistake began with “These ‘experts’…” and ended with “…and make twice as much as me.” (In other words, a typical tirade from me.)

But my listeners shut it down. They agreed that I was correct, but condemned my urge to be so public in my critique, saying something to the effect of “a lot of people make this same mistake.” I am fairly sure the larger mistake is to let such misconceptions go unchallenged.

And now you have had your vocabulary lesson for the day. It might make you smarter than your average evaluator.

Nix the Table of Contents

If the evaluation report is so long it needs a table of contents, you know you have gone too far.

I have been researching the communication of evaluation findings in preparation for an upcoming webinar on the topic, and because I have a horse I’m currently riding called How Not to Annoy People with Evaluation. Experts in the field rarely say much about communicating findings. Those who do give a decent turn to getting the right stakeholders at the table, even thinking about different ways to display findings. But invariably, the evaluation seems to produce a written report. Many evaluation budgets aren’t large enough to rework the written tome into brochures, newsletters, and interpretive dance routines that tailor the findings to different audiences. We’re often stuck with the written report.

So then why do we torture the readers with dozens of pages of inane technical information before getting to the findings? (Rhetorical. I think I have an answer for another blog post.)

Reports 200 pages in length are not useful. Plain and simple. The narrative and graphics must be concise and to the point. I was sitting in a meeting at a local foundation about two weeks ago, with two foundation folks in the room, representing different institutions. They were lamenting, as we all do, about not having enough time to fully catch up on every activity of their grantees. They pinpointed annual reports, saying even executive summaries can be too long (and I recently read an evaluation “expert” advise an executive summary of 4 to 20 pages!), and then they begged the ether, “Bullet points! Please, bullet points!”

To make evaluation useful, we must stop producing documents that better serve as doorstops. One good sign: if you have to create a table of contents, you have too many pages.
