Summer, as you know, is proposal season. I’ve been up to my neck (literally – these proposals are huge) in stacks of papers, reviewing ideas seeking support from various federal agencies. Regardless of the agency, some proposals seem to fare less well for common reasons. Here’s my breakdown (and strictly mine – the weaknesses I identified were not always shared by my fellow panelists) of why proposals fail, in no particular order:
1. The evaluation plans don’t clearly match the project’s goals and objectives. If the project is seeking to change the consumer experience but the evaluation is only looking at production of the consumer good, it will never be able to tell whether the project has met its goal. This could mean a review of the evaluation plans OR a revision to the project’s goals and objectives.
2. The evaluation is not evaluative. No targets or performance standards are set. The way the evaluation is structured will only enable the team, in the end, to say descriptive things about what the project did – not how good or worthwhile it was.
3. Experimental designs typically, and surprisingly, lacked a power analysis to determine whether the project’s recruitment targets will be adequate. In the era of accountability – and at a time when technology allows us to see ahead of time where we should focus our efforts – there is no excuse for a missing power analysis, at least in those designs where it is called for.
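To make this concrete, here is a minimal back-of-the-envelope sketch (my own illustration, not drawn from any proposal) of the kind of power analysis reviewers look for. It uses the standard normal-approximation formula for a two-group comparison of means; an exact t-test calculation would need a stats package, but this gets you in the ballpark.

```python
from math import ceil
from statistics import NormalDist


def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate sample size per group for a two-sided, two-sample
    comparison of means, using the normal approximation:

        n = 2 * ((z_{1 - alpha/2} + z_{power}) / d) ** 2

    where d is the standardized effect size (Cohen's d).
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # about 1.96 for alpha = 0.05
    z_power = z.inv_cdf(power)          # about 0.84 for 80% power
    return ceil(2 * ((z_alpha + z_power) / effect_size) ** 2)


# A medium effect (d = 0.5) at 80% power needs roughly 63 participants
# per condition, so a recruitment plan far below that is a red flag.
print(n_per_group(0.5))  # prints 63
```

If the proposed design can realistically recruit only 20 students per condition, a calculation like this makes the mismatch visible before a reviewer has to point it out.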
4. Letters of support were clearly written by project staff and cut-and-pasted by the supporters. Letter content was identical, save for the letterhead and signature line. I know it is unreasonable, in most cases, to expect supporters to draft original letters. However, the letters I saw frequently left out key responsibilities of the supporting organizations. For example, if the school district will need to commit to providing control condition classrooms, where no benefit to participation will be derived, that needs to be clearly agreed to up front. The danger is looking like your evaluation isn’t well-planned and hasn’t been thoroughly communicated to all parties.
5. The evaluation organization appears to have the collective experience necessary, but the specific individuals assigned in the proposal have no direct relevant experience in the tasks on the table. Too much narrative space is spent defending the established history of Evaluation Consultants, LLC, particularly when the actual evaluation staff bios (buried in the appendices) are weeeeeeeak.
6. It pains me to even have to write this one – but sometimes I saw proposals that did not yet have an evaluator identified. Sheesh! It is okay, Principal Investigators, to contact an evaluation team during proposal development and ask them to help you draft your evaluation plan. They will probably even help you write the evaluation section of the proposal. You might want to draft up a memorandum of understanding that ensures they will be selected as your evaluator, should the award be granted. In the evaluation business, most of us are used to devoting a little free time to writing up plans that are (or sometimes aren’t) funded in the future. It is part of our work and it is okay for you to start talking to your evaluator the moment you start thinking about your program. In fact, it is highly encouraged. What? You don’t have one now? Go forth!
Okay, there it is – the top six reasons I saw proposals fail this summer. I’m hoping next year it will be a totally different bag. Did you see something different? Post it in the comments!