Morbidity and Mortality Conference

Morbidity and Mortality Conference is a monthly meeting of the entire Neurology department and nursing leadership, aimed at identifying and addressing systems-based issues that lead to adverse patient outcomes.

It is led by the senior residents who were on service during the previous month. Each resident gives a presentation on the state of the service during the month and then dives deeply into a case they think is useful for discussion.

For a review of M&M Purpose and Design, please click the link.

Guide for Case Selection

Here are some criteria to think about when you select cases to review:

  1. Adverse outcome

  2. Death, disability, harm, or injury of some sort

  3. Preventable

  4. Lessons to be learned about cognitive or system issues

  5. Near miss: there was potential for harm but ultimately there was a good outcome

Try to think about potential cases in “real time” rather than retrospectively once your block of service is done. Details are easily forgotten, and it is easier to talk to residents, nurses, and other involved parties while everything is fresh in everyone’s mind.

If you are having trouble selecting a case, you are welcome to run potential cases by either Chief Resident or Dr. Phipps.

Analyzing the Case

It is easier to analyze a case by breaking it down into the specific categories outlined below:

What cognitive factors contributed to the outcome?

There are many cognitive factors that we all face.  Here are a few that can help jump-start your analysis:

Errors of over-attachment to a particular diagnosis

  • Anchoring: the tendency to perceptually lock on to salient features in the patient’s initial presentation too early in the diagnostic process, and then to fail to adjust this initial impression in light of later information. Anchoring may be severely compounded by confirmation bias.

  • Confirmation bias: the tendency to look for confirming evidence to support a diagnosis rather than look for disconfirming evidence to refute it, despite the latter being more persuasive and definitive.

  • Premature closure of the differential diagnosis: the tendency to apply premature closure to the decision making process, accepting a diagnosis before it has been fully verified. The consequences of the bias are reflected in the maxim: ‘‘When the diagnosis is made, the thinking stops.’’

Errors due to failure to consider alternative diagnoses

  • Multiple alternatives bias: a multiplicity of options on a differential diagnosis might lead to significant conflict and uncertainty. The process might be simplified by reverting to a smaller subset with which the physician is familiar, but might result in inadequate consideration of other possibilities. One such strategy is the three-diagnosis differential: “it is probably A, but it might be B, or I don’t know (C).” Although this approach has some heuristic value, if the disease falls into the C category and is not pursued adequately, serious diagnoses may be missed.

  • Representativeness restraints: drive the diagnostician toward looking for prototypical manifestations of disease: “if it looks like a duck, walks like a duck, quacks like a duck, then it is a duck.” Yet, restraining decision making along these pattern recognition lines leads to atypical variants being missed.  

  • Search satisficing: reflects the universal tendency to call off a search once something is found. Co-morbidities, second foreign bodies, other fractures, and co-ingestants in poisoning may all be missed.

Errors due to inheriting someone else’s thinking

  • Diagnosis momentum: once diagnostic labels are attached to patients they tend to become stickier and stickier. Through intermediaries (patients, paramedics, nurses, physicians) what might have started as a possibility gathers increasing momentum until it becomes definite, and all other possibilities are excluded.

  • Framing effect: how diagnosticians see things might be strongly influenced by the way in which the problem is framed; e.g., physicians’ perceptions of risk to the patient may be strongly influenced by whether the outcome is expressed in terms of the possibility that the patient might die or might live. In terms of diagnosis, physicians should be aware of how patients, nurses, and other physicians frame potential outcomes and contingencies of the clinical problem to them.

  • Bandwagon effect: the tendency for people to believe and do certain things because many others are doing so. Group-think is an example, and it can have a disastrous impact on team decision making and patient care.

Errors in prevalence perception or estimation

  • Availability bias: the disposition to judge things as being more likely, or frequently occurring, if they readily come to mind. Thus, recent experience with a disease might inflate the likelihood of its being diagnosed. Conversely, if a disease has not been seen for a long time (is less available), it might be underdiagnosed.

  • Base-rate neglect: the tendency to ignore the true prevalence of a disease, either inflating or reducing its base rate and distorting Bayesian reasoning (see the worked sketch after this list). However, in some cases clinicians might, consciously or otherwise, deliberately inflate the likelihood of disease, such as in the strategy of “rule out worst-case scenario,” to avoid missing a rare but significant diagnosis.

  • Hindsight bias: knowing the outcome might profoundly influence perception of past events and prevent a realistic appraisal of what actually occurred. In the context of diagnostic error, it may compromise learning through either an underestimation (illusion of failure) or overestimation (illusion of control) of the decision maker’s abilities.
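
To make the distortion of Bayesian reasoning concrete, here is a minimal sketch (a hypothetical test with purely illustrative numbers, not drawn from any real study) of how neglecting a low base rate inflates the post-test probability of disease:

```python
# Minimal sketch of base-rate neglect using Bayes' theorem.
# All numbers are illustrative only, not from any real test or study.

def post_test_probability(prevalence, sensitivity, specificity):
    """P(disease | positive test) via Bayes' theorem."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# A test that is 90% sensitive and 90% specific sounds convincing...
print(post_test_probability(prevalence=0.01, sensitivity=0.90, specificity=0.90))
# ~0.083: with a 1% base rate, a positive result still leaves only about
# an 8% chance of disease; most positives are false positives.

# Neglecting the base rate is roughly like assuming a 50% prevalence:
print(post_test_probability(prevalence=0.50, sensitivity=0.90, specificity=0.90))
# 0.9: the intuitive, but wrong, answer.
```

The same arithmetic underlies availability bias: how readily a disease comes to mind is a poor substitute for its true prevalence.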

Errors involving patient characteristics or presentation context

  • Fundamental attribution error: the tendency to be judgmental and blame patients for their illnesses (dispositional causes) rather than examine the circumstances (situational factors) that might have been responsible. Psychiatric patients, minorities, and other marginalized groups are particularly vulnerable to this cognitive disposition to respond (CDR). Cultural differences exist in terms of the respective weights attributed to dispositional and situational causes.

  • Triage cueing: the triage process occurs throughout the health care system, from the self-triage of patients to the selection of a specialist by the referring physician. In the emergency department, triage is a formal process that results in patients being sent in particular directions, which cues their subsequent management. Many CDRs are initiated at triage, leading to the maxim: “Geography is destiny.” Once a patient is referred to a specific discipline, the bias within that discipline to look at the patient only from its own perspective is referred to as “déformation professionnelle.”

  • Yin-yang out: when patients have been subjected to exhaustive and unavailing diagnostic investigations, they are said to have been worked up the yin-yang. The yin-yang out is the tendency to believe that nothing further can be done to throw light on the dark place where, if anywhere, a definitive diagnosis resides, i.e., the physician is let out of further diagnostic effort. This might ultimately prove true, but adopting the strategy at the outset risks a variety of errors.

Errors associated with physician affect, personality, or decision style

  • Commission bias: results from the obligation toward beneficence, in that harm to the patient can only be prevented by active intervention. It is the tendency toward action rather than inaction. It is more likely in over-confident physicians. Commission bias is less common than omission bias.

  • Omission bias: the tendency toward inaction, rooted in the principle of nonmaleficence. In hindsight, events that have occurred through the natural progression of a disease are more acceptable than those that may be attributed directly to the action of the physician. The bias might be sustained by the reinforcement often associated with not doing anything, but it may prove disastrous. Omission biases typically outnumber commission biases.

  • Outcome bias: the tendency to opt for diagnostic decisions that will lead to good outcomes, rather than those associated with bad outcomes, thereby avoiding chagrin associated with the latter. It is a form of value bias in that physicians might express a stronger likelihood in their decision-making for what they hope will happen rather than for what they really believe might happen. This may result in serious diagnoses being minimized.

  • Overconfidence/underconfidence: a universal tendency to believe we know more than we do. Overconfidence reflects a tendency to act on incomplete information, intuitions, or hunches. Too much faith is placed in opinion instead of carefully gathered evidence.

  • Zebra retreat: occurs when a rare diagnosis (zebra) figures prominently on the differential diagnosis but the physician retreats from it for various reasons: perceived inertia in the system and barriers to obtaining special or costly tests; self-consciousness and underconfidence about entertaining a remote and unusual diagnosis and gaining a reputation for being esoteric; the fear of being seen as unrealistic and wasteful of resources; under- or overestimating the base-rate for the diagnosis; the ED might be very busy and the anticipated time and effort to pursue the diagnosis might dilute the physician’s conviction; team members may exert coercive pressure to avoid wasting the team’s time; inconvenience of the time of day or weekend and difficulty getting access to specialists; unfamiliarity with the diagnosis might make the physician less likely to go down an unfamiliar road; fatigue or other distractions may tip the physician toward retreat.

What system factors contributed to the outcome?

This includes any factor related to how things are done or how groups are organized.  Examples include:

  • Were there any issues with communication with outside facilities?

  • Were there staffing shortages?

  • Was there a technical problem with medical equipment?

  • Were there handoff issues?

  • Was there a deficiency in training (low volume of patients seen with a particular diagnosis)?

  • Was there an excessive volume of activity?

  • Were there ambiguous divisions of labor?

There are many more system factors, but this should be enough to start thinking about what contributed to the adverse outcome of the case under consideration. Avoid “systems” issues like “Neurosurgery dumped the patient on us.” Focus on concrete, actionable items that can be changed, not on the personality factors of various specialties.

What patient factors contributed to the outcome?

  • Were there particular co-morbidities that made the outcome more or less likely?

  • Were there family issues (relationship with patient or with medical staff) that made communication difficult?

Take Home Points and Bottom Line

Each case should have actionable items and take home points that can be implemented in the near future. Refrain from vague, abstract goals such as “we need to pay more attention to …” or “we need to try harder.” Make the points concrete so everyone is on the same page and can make the required changes. Some examples of changes/items that can be implemented:

  • Changes to the system and how certain processes work (e.g., changes to sign-out, rounding procedures, etc.)

  • Becoming more aware of cognitive biases

  • Discussing the process in more detail with certain stakeholders and implementing changes based on findings

Further Reading and Resources

If interested, here are a few books and other resources that discuss patient safety and quality improvement in more detail.  While this isn’t directly related to Morbidity and Mortality Conference, it may give you a better idea about which factors contribute to poor outcomes (especially cognitive and system factors).