

Judging the impact of missing participant continuous data on risk of bias in systematic reviews of randomized trials

Date and Location




Monday 23 September 2013 - 10:30 - 12:00


Presenting author and contact person

Presenting author

Shanil Ebrahim

Contact person

Shanil Ebrahim
Abstract text
Background: We developed an approach to address missing participant data for continuous outcomes in meta-analyses.

Objectives: To assist systematic review authors and guideline panels in judging the impact of missing participant data on risk of bias.

Methods: Our approach involves a complete case analysis complemented by sensitivity analyses applying four increasingly stringent imputation strategies (Table 1). When the minimally important difference (MID) is available, we calculate the proportion of patients who benefit from the treatment. Systematic review authors should test a range of thresholds that guideline panels might choose as an important effect; the guideline panel should choose the threshold for recommending treatment. If the entire confidence interval for the proportion lies above the threshold for all plausible imputation strategies, a panel should not rate down for risk of bias. If the confidence interval includes the threshold, confidence in the importance of the treatment effect decreases. We applied our approach to a systematic review of respiratory rehabilitation for chronic obstructive pulmonary disease.

Results: In the complete case analysis, the proportion of patients who achieved an improvement greater than the MID was 29% (95% CI 21% to 37%) (Figure 1). Strategies 1 to 3 yielded point estimates ranging from 24% to 18%, with lower confidence limits from 17% to 11% (Figure 1). Strategy 4 was not considered a plausible scenario. In the complete case analysis, the lower confidence limit suggests that at least 21% of patients will achieve an important improvement; the conclusion would be similar for strategies 1 and 2. For strategy 3, if 11% of patients benefiting would be insufficient to recommend treatment, a panel would rate down the quality of evidence for risk of bias.

Conclusions: We provide a useful approach to judging the impact of missing participant data for continuous outcomes on confidence in estimates of treatment effects.
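The Methods decision rule (compare the whole confidence interval for the proportion benefiting against a panel-chosen threshold) can be sketched in code. This is a minimal illustration, not the authors' implementation: the Wald interval, the sample size, and the 15% threshold used in the example are assumptions chosen for demonstration.

```python
import math

def wald_ci(p, n, z=1.96):
    """95% Wald confidence interval for a proportion (normal approximation).
    Illustrative only; the abstract does not specify how its CIs were computed."""
    se = math.sqrt(p * (1 - p) / n)
    return (max(0.0, p - z * se), min(1.0, p + z * se))

def rate_down_for_risk_of_bias(ci_low, ci_high, threshold):
    """Decision rule described in the Methods section:
    - entire CI above the threshold -> do not rate down       ('no')
    - CI includes the threshold     -> confidence decreases   ('consider')
    - entire CI below the threshold -> rate down              ('yes')
    """
    if ci_low > threshold:
        return "no"
    if ci_high < threshold:
        return "yes"
    return "consider"

# Complete case analysis from the Results: 29% benefit, 95% CI 21% to 37%.
# A hypothetical threshold of 15% lies below the whole CI, so do not rate down.
print(rate_down_for_risk_of_bias(0.21, 0.37, 0.15))   # "no"

# Strategy 3 had a lower confidence limit of 11%; against the same
# hypothetical 15% threshold the CI includes the threshold.
print(rate_down_for_risk_of_bias(0.11, 0.25, 0.15))   # "consider"
```

In this sketch the verdict depends entirely on the threshold the guideline panel selects, which mirrors the abstract's point that the panel, not the review authors, should fix that value.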