Evaluation

To select a winner, the ACER (Average Classification Error Rate) will be calculated over all the test samples for this competition by taking the average value of the APCER and the BPCER:

ACER = (APCER + BPCER) / 2

where:

APCER (Attack Presentation Classification Error Rate) is the proportion of attack presentations incorrectly classified as bona fide (genuine) presentations. This error metric is analogous to the false match rate (FMR) in biometric matching, which measures how often samples belonging to two different subjects are falsely matched. APCER is a function of a decision threshold t.

BPCER (Bona Fide Presentation Classification Error Rate) is the proportion of bona fide (genuine) presentations incorrectly classified as presentation attacks. This error metric is analogous to the false non-match rate (FNMR) in biometric matching, which measures how often samples belonging to the same subject are falsely non-matched. Like APCER, BPCER is a function of a decision threshold t.

Since we require that all algorithms deliver a liveness score in the range of 0-100, t = 50 will be used as the threshold to calculate the APCER and BPCER.
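The computation above can be sketched as follows. This is a minimal illustration, not the official scoring script; it assumes higher liveness scores indicate bona fide presentations, and a label convention of 1 for bona fide and 0 for attack:

```python
import numpy as np

def acer_at_threshold(scores, labels, t=50):
    """Compute (APCER, BPCER, ACER) at decision threshold t.

    scores: liveness scores in [0, 100]; >= t is classified as bona fide
            (assumed convention, not specified by the competition text).
    labels: 1 for bona fide presentations, 0 for attack presentations.
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    predicted_bona_fide = scores >= t

    # APCER: fraction of attack presentations classified as bona fide.
    apcer = np.mean(predicted_bona_fide[labels == 0])
    # BPCER: fraction of bona fide presentations classified as attacks.
    bpcer = np.mean(~predicted_bona_fide[labels == 1])
    # ACER: average of the two error rates.
    return apcer, bpcer, (apcer + bpcer) / 2.0
```

For example, with scores [80, 30, 60, 10] and labels [1, 1, 0, 0], one of the two attacks scores above 50 (APCER = 0.5) and one of the two bona fide samples scores below 50 (BPCER = 0.5), giving ACER = 0.5.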