I write from Fond-des-Blancs, Haiti, where I’m visiting St Boniface Hospital, a partner hospital for my NGO, Physicians for Haiti. We just held our 3rd conference on medical education in Haiti, which by all appearances went quite well! But that raises the question I’ll focus this post on – how do we know whether it went well?
As I’ve slowly climbed through my medical education, the importance of monitoring & evaluation (M&E), as well as quality improvement, has become increasingly apparent. It is humbling to realize that the sole clinician waging battle for the health of her patients is an entirely outdated model outside of the most rural locales. Healthcare in almost all settings requires a structure of support, and such systems (and even individual clinicians, for that matter) can work to improve their ability to provide the best possible health for the populace they serve.
In the case of Physicians for Haiti (P4H), we provide educational support to partner organizations in Haiti that are involved in clinical care, which complicates our M&E efforts further. For a clinical organization, metrics such as patients seen, patients treated, vaccines delivered, and deaths within the hospital readily present themselves – present, yet still fall well short of capturing the truly desired metric: health improved per effort expended for each project. Health education ideally improves that same metric, but the chain of causation between a lecture and a patient’s outcome stretches long and thin indeed (and likely bifurcates. Repeatedly). Even within well-resourced settings there is a paucity of data linking education to hard patient outcomes (some of the examples I’m aware of deal with lowering blood pressure and reducing death rates in CAD patients), and the most readily available outcome measurements (clinician knowledge immediately after a teaching session, or on a quiz a few months later) aren’t particularly enthralling.
Hopefully it comes as no surprise that neither I nor my organization have a magic bullet for this issue. But we do realize that by keeping the question “what impact are we having?” at the forefront of our minds while designing, implementing, and monitoring programs, we can push ourselves to keep the cycle of improvement for all of our programs short and vigorous…