Direct versus indirect comparisons in systematic reviews of test accuracy studies: An IPD case study in ovarian reserve testing

Date and Location

Session: P2.032

Date: Saturday 21 September 2013, 10:30 - 12:00
Presenting author and contact person

Presenting author: Junfeng Wang

Contact person: Junfeng Wang
Abstract text
Background: Comparative systematic reviews of diagnostic test accuracy compare the relative accuracy of two or more tests. Direct comparisons, in which all tests are evaluated in the same study and ideally in the same patients, are the most valid and are regarded as the reference approach. Indirect comparisons are more prone to bias than direct comparisons, but excluding them may lead to a loss of precision in the summary estimates.

Objectives: To investigate how the results of indirect comparisons differ from those of direct comparisons in meta-analysis, and to develop appropriate methods of adjusting indirect comparisons to improve their comparability.

Methods: A dataset from an Individual Patient Data (IPD) meta-analysis of the accuracy of Anti-Müllerian Hormone (AMH), Antral Follicle Count (AFC) and Follicle-Stimulating Hormone (FSH) tests in relation to ovarian response was used in this case study. Test accuracy was measured by the area under the ROC curve (AUC), and each pair of tests was compared under both direct and indirect comparisons. Inconsistency was defined as a statistically significant difference in comparative results between the direct and indirect evidence.

Results: 32 studies were included, with IPD from 4762 women undergoing IVF. Comparing AUCs, the difference between AFC and FSH was significant in the direct comparison (0.0948, p < 0.001) but not in the indirect comparison (0.0678, p = 0.09), whereas the difference between AFC and AMH was significant in the indirect comparison (-0.0830, p < 0.001) but not in the direct comparison (-0.0176, p = 0.29). Adjusting for indirectness by modelling covariate effects improved comparability, but these differences persisted after covariate adjustment.

Conclusions: Comparative results of test accuracy obtained through indirect comparisons are not always consistent with those obtained through direct comparisons, and there is no straightforward way to make indirect comparisons more comparable. Evidence from indirect comparisons should be assessed carefully and combined with direct comparisons only after adequate assessment of consistency and with appropriate adjustment.
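The direct-versus-indirect distinction above can be sketched in a few lines of code. This is an illustrative toy, not the authors' IPD analysis: the AUC is computed via the Mann-Whitney statistic, the patient data are invented for the example, and the numbers bear no relation to the AUC differences reported in the abstract. A direct comparison differences AUCs estimated in the same (hypothetical) patients, while an indirect comparison differences AUCs estimated in two different study populations, so any difference in case mix between the populations is confounded with the test difference.

```python
def auc(scores, labels):
    """AUC as the Mann-Whitney probability that a randomly chosen
    positive scores higher than a randomly chosen negative (ties = 1/2)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Direct comparison: a hypothetical cohort in which every woman
# underwent both tests (labels: 1 = adequate ovarian response).
labels   = [1, 1, 1, 0, 0, 0, 0, 1]
test_afc = [9, 12, 5, 4, 6, 3, 7, 11]   # e.g. antral follicle count
test_fsh = [6, 5, 9, 10, 6, 12, 8, 5]   # higher basal FSH predicts poorer
fsh_flip = [-x for x in test_fsh]       # response, so negate to orient the AUC

diff_direct = auc(test_afc, labels) - auc(fsh_flip, labels)

# Indirect comparison: the FSH AUC now comes from a *different*
# hypothetical study population, then the AUCs are differenced.
labels_b   = [1, 0, 1, 0, 0, 1, 0]
test_fsh_b = [5, 11, 6, 9, 13, 4, 8]
diff_indirect = auc(test_afc, labels) - auc([-x for x in test_fsh_b], labels_b)

print(round(diff_direct, 3), round(diff_indirect, 3))
```

In this toy example the two approaches even disagree on the sign of the AFC-FSH difference, purely because the second population happens to separate more cleanly on FSH, which is the kind of population-driven inconsistency the abstract describes.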