Sensitivity vs Specificity vs PPV: Avoiding Common MRCP Exam Traps

A 57-year-old man is invited to participate in a population-based bowel cancer screening programme. He has no known medical illnesses, feels completely well, and is physically active. Before deciding whether to proceed, he asks you a very specific question:

“If my screening test turns out to be positive, what is the likelihood that I truly have colorectal cancer?”

Which statistical concept best answers his question?

A. Diagnostic accuracy
B. Sensitivity
C. Specificity
D. False discovery rate
E. Positive predictive value


Correct Answer

E. Positive predictive value (PPV)


Detailed discussion for MRCP

This is a classic MRCP screening-statistics question, testing whether you understand what question each statistic actually answers.

The patient is not asking:

  • “How good is this test at detecting cancer?” → sensitivity
  • “How good is this test at excluding cancer?” → specificity

Instead, he is asking a post-test, patient-centred question:

“Given that my test is positive, what is the chance that I really have the disease?”

That definition corresponds precisely to positive predictive value (PPV).

The 2 × 2 (contingency) table — MUST know for MRCP

                Disease present       Disease absent
Test positive   True positive (TP)    False positive (FP)
Test negative   False negative (FN)   True negative (TN)

Core screening statistics (exam-defining)

  • Sensitivity = TP / (TP + FN)
    → Of all patients with disease, how many test positive?
  • Specificity = TN / (TN + FP)
    → Of all patients without disease, how many test negative?
  • Positive predictive value (PPV) = TP / (TP + FP)
    → If the test is positive, what is the chance disease is present?
  • Negative predictive value (NPV) = TN / (TN + FN)
    → If the test is negative, what is the chance disease is absent?
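The four formulas above can be checked numerically. This is a minimal sketch using hypothetical counts (a 2,000-person screening cohort with 100 true cases), chosen purely for illustration:

```python
# Four core screening statistics from a 2x2 table.
# Hypothetical counts: 2,000 people screened, 100 with disease.
tp, fp, fn, tn = 90, 95, 10, 1805

sensitivity = tp / (tp + fn)  # 90 / 100    = 0.90
specificity = tn / (tn + fp)  # 1805 / 1900 = 0.95
ppv = tp / (tp + fp)          # 90 / 185    ~ 0.49
npv = tn / (tn + fn)          # 1805 / 1815 ~ 0.99

print(f"Sensitivity {sensitivity:.2f}  Specificity {specificity:.2f}")
print(f"PPV {ppv:.2f}  NPV {npv:.2f}")
```

Note that even with 90% sensitivity and 95% specificity, barely half of the positive results in this cohort are true positives.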

Likelihood ratios — increasingly important for MRCP

Likelihood ratios tell us how much a test result changes disease odds and are independent of prevalence.

  • Likelihood ratio for a positive test (LR+)
    = Sensitivity ÷ (1 − Specificity)

    → How much the odds of disease increase when the test is positive

  • Likelihood ratio for a negative test (LR−)
    = (1 − Sensitivity) ÷ Specificity

    → How much the odds of disease decrease when the test is negative

Interpretation shortcuts (very exam-useful):

  • LR+ >10 → strong evidence to rule IN disease
  • LR− <0.1 → strong evidence to rule OUT disease
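The two ratios, combined with the odds form of Bayes' theorem (post-test odds = pre-test odds × LR), can be sketched as follows. The test characteristics and the 10% pre-test probability are arbitrary assumptions for illustration:

```python
# Likelihood ratios and the odds form of Bayes' theorem.
sens, spec = 0.90, 0.95  # illustrative test characteristics

lr_pos = sens / (1 - spec)  # 0.90 / 0.05 = 18  -> >10, strong rule-in
lr_neg = (1 - sens) / spec  # 0.10 / 0.95 ~ 0.11 (close to the 0.1 cut-off)

# Post-test probability after a positive result,
# assuming a 10% pre-test probability:
pretest = 0.10
pre_odds = pretest / (1 - pretest)      # ~0.11
post_odds = pre_odds * lr_pos           # ~2.0
posttest = post_odds / (1 + post_odds)  # ~0.67

print(f"LR+ {lr_pos:.0f}, LR- {lr_neg:.2f}, post-test P {posttest:.2f}")
```

A positive result here lifts the probability of disease from 10% to about 67%, which is what "how much the odds increase" means in practice.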

High-yield MRCP insights

  1. PPV and NPV are prevalence dependent
    • In screening populations (low prevalence), PPV is often low, even with excellent tests.
    • This explains why screening programmes generate many false positives.
  2. Sensitivity and specificity are intrinsic test properties
    • They do not change with prevalence.
  3. Screening vs diagnostic tests
    • Screening → prioritise high sensitivity
    • Diagnostic confirmation → prioritise high specificity
  4. Common MRCP pitfall
    • Confusing sensitivity with PPV.
    • Sensitivity is disease-centred; PPV is patient-centred.
  5. Clinical counselling
    • When patients ask “What does my result mean?”, the correct framework is PPV or NPV.
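Point 1 is easiest to see with a worked Bayes calculation. The sketch below keeps the same hypothetical test (90% sensitivity, 95% specificity) and varies only the prevalence:

```python
# PPV as a function of prevalence (Bayes' theorem in probability form).
def ppv(sens: float, spec: float, prevalence: float) -> float:
    true_pos = sens * prevalence                # P(test+ and disease)
    false_pos = (1 - spec) * (1 - prevalence)   # P(test+ and no disease)
    return true_pos / (true_pos + false_pos)

# Same test, two settings:
print(f"Screening (0.5% prevalence): PPV = {ppv(0.90, 0.95, 0.005):.2f}")  # ~0.08
print(f"Symptomatic clinic (30%):    PPV = {ppv(0.90, 0.95, 0.30):.2f}")   # ~0.89
```

At 0.5% prevalence an excellent test yields roughly one true positive per twelve positive screens, which is exactly why screening programmes generate so many false positives.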

Cheat Sheet (Exam-Ready)

  • Sensitivity = TP / (TP + FN)
  • Specificity = TN / (TN + FP)
  • PPV = TP / (TP + FP)
  • NPV = TN / (TN + FN)
  • LR+ = Sensitivity / (1 − Specificity)
  • LR− = (1 − Sensitivity) / Specificity

Golden rules

  • Low prevalence → ↓ PPV, ↑ NPV
  • PPV/NPV change with prevalence
  • Likelihood ratios do NOT
  • “Test positive — do I have it?” → PPV
  • “Test negative — am I safe?” → NPV

Flash Cards (20)

  1. Q: What does PPV measure?
    A: Probability of disease given a positive test
    Explanation: Post-test probability
  2. Q: PPV formula?
    A: TP / (TP + FP)
    Explanation: Uses only positive results
  3. Q: What does NPV measure?
    A: Probability of no disease given a negative test
    Explanation: Reassurance value
  4. Q: Sensitivity definition?
    A: Ability to detect disease
    Explanation: True positive rate
  5. Q: Specificity definition?
    A: Ability to exclude disease
    Explanation: True negative rate
  6. Q: Which measures depend on prevalence?
    A: PPV and NPV
    Explanation: Change with population risk
  7. Q: Which do not depend on prevalence?
    A: Sensitivity, specificity, likelihood ratios
    Explanation: Intrinsic test properties
  8. Q: Screening tests favour?
    A: High sensitivity
    Explanation: Minimise missed disease
  9. Q: Why is PPV low in screening?
    A: Low disease prevalence
    Explanation: More false positives
  10. Q: LR+ formula?
    A: Sensitivity ÷ (1 − Specificity)
    Explanation: Strength of positive result
  11. Q: LR− formula?
    A: (1 − Sensitivity) ÷ Specificity
    Explanation: Strength of negative result
  12. Q: LR+ >10 indicates?
    A: Strong evidence for disease
    Explanation: Useful to rule in
  13. Q: LR− <0.1 indicates?
    A: Strong evidence against disease
    Explanation: Useful to rule out
  14. Q: Best statistic for patient counselling after a positive test?
    A: PPV
    Explanation: Personal risk estimate
  15. Q: TP means?
    A: Disease present, test positive
    Explanation: Correct detection
  16. Q: FP means?
    A: Disease absent, test positive
    Explanation: Common in screening
  17. Q: FN means?
    A: Disease present, test negative
    Explanation: Missed disease
  18. Q: TN means?
    A: Disease absent, test negative
    Explanation: Correct exclusion
  19. Q: Which statistic reassures after a negative test?
    A: NPV
    Explanation: Probability of being disease-free
  20. Q: Most common MRCP error in screening statistics?
    A: Confusing sensitivity with PPV
    Explanation: Different conditional probabilities

MCQs to test yourself

1. A patient asks what a positive screening result means for him personally. Which statistic applies?
A. Sensitivity
B. Specificity
C. Diagnostic accuracy
D. Likelihood ratio
E. Positive predictive value
Answer: E – Post-test probability for the patient.

2. Which statistic is most influenced by disease prevalence?
A. Sensitivity
B. Specificity
C. LR+
D. LR−
E. Positive predictive value
Answer: E – PPV changes with prevalence.

3. Screening programmes usually prioritise:
A. High PPV
B. High specificity
C. High accuracy
D. High NPV
E. High sensitivity
Answer: E – To avoid missed disease.

4. In a low-prevalence population, PPV is typically:
A. High
B. Unchanged
C. Low
D. Equal to sensitivity
E. Zero
Answer: C – Many false positives.

5. Which statistic reassures a patient after a negative test?
A. Sensitivity
B. Specificity
C. Accuracy
D. LR+
E. Negative predictive value
Answer: E

6. Sensitivity measures:
A. Ability to exclude disease
B. Post-test probability
C. Prevalence
D. True positive rate
E. False positive rate
Answer: D

7. Specificity measures:
A. True positive rate
B. False negative rate
C. Disease prevalence
D. Post-test odds
E. True negative rate
Answer: E

8. Which statistic is prevalence independent?
A. PPV
B. NPV
C. Accuracy
D. Likelihood ratios
E. Predictive values
Answer: D

9. A test with very high sensitivity but modest PPV is best described as:
A. Poor screening test
B. Poor diagnostic test
C. Good screening test
D. Invalid test
E. Gold standard
Answer: C

10. An increase in false positives primarily reduces:
A. Sensitivity
B. Specificity
C. NPV
D. Accuracy
E. PPV
Answer: E

11. LR+ of 15 implies:
A. Weak evidence for disease
B. No diagnostic value
C. Moderate evidence against disease
D. Strong evidence for disease
E. Test is invalid
Answer: D

12. LR− of 0.05 implies:
A. Disease very likely
B. Test is inaccurate
C. Strong evidence disease is absent
D. High false-positive rate
E. High PPV
Answer: C

13. Which statistic best answers “How good is this test at detecting disease?”
A. PPV
B. NPV
C. Specificity
D. Likelihood ratio
E. Sensitivity
Answer: E

14. Which statistic best answers “How good is this test at ruling out disease?”
A. Sensitivity
B. PPV
C. Specificity
D. Accuracy
E. Prevalence
Answer: C

15. PPV increases when:
A. Sensitivity decreases
B. Specificity decreases
C. Disease prevalence increases
D. Disease prevalence decreases
E. Sample size decreases
Answer: C

16. NPV decreases when:
A. Prevalence decreases
B. Prevalence increases
C. Sensitivity increases
D. Specificity increases
E. False positives increase
Answer: B

17. Screening tests tolerate false positives mainly to avoid:
A. Overdiagnosis
B. Radiation exposure
C. Missed disease
D. Cost
E. Anxiety
Answer: C

18. Which is most useful for counselling an asymptomatic individual?
A. Sensitivity
B. Specificity
C. Accuracy
D. PPV
E. Prevalence
Answer: D

19. True negatives contribute directly to calculation of:
A. Sensitivity
B. PPV
C. LR+
D. NPV
E. False discovery rate
Answer: D

20. The most common statistical mistake in MRCP screening questions is:
A. Miscalculating prevalence
B. Ignoring false negatives
C. Confusing sensitivity with PPV
D. Confusing specificity with NPV
E. Misusing likelihood ratios
Answer: C


Summary for quick exam revision

Positive predictive value is the probability that a patient truly has a disease when a screening test is positive and is the statistic patients intuitively ask about. It is calculated from true positives divided by all positive results and is highly dependent on disease prevalence. In screening programmes, where prevalence is low, PPV is often modest despite high sensitivity. Sensitivity measures how well a test detects disease, while specificity measures how well it excludes disease. Predictive values change with prevalence, whereas likelihood ratios do not. Likelihood ratios quantify how much a test result shifts disease odds, with LR+ ruling in disease and LR− ruling it out. Screening tests prioritise sensitivity to minimise missed disease, accepting more false positives. A common MRCP error is confusing sensitivity with PPV. Understanding the 2 × 2 table is essential for solving exam questions.
