I am reading a paper that uses the F-measure, and thought I would do a little poking around to get a good feeling for what it embodies. In a detection problem, we have true positives (TP), true negatives (TN), and the errors: false alarms, or false positives (FP), and missed detections, or false negatives (FN). The precision of a detection system is defined, when \(TP+FP > 0\), as

$$

P := \frac{TP}{TP+FP}

$$

and its recall is defined, when \(TP+FN > 0\), as

$$

R := \frac{TP}{TP+FN}.

$$

The F-measure of a detection system is defined as the harmonic mean of the two:

$$

F := \frac{2PR}{P + R}.

$$
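These three definitions are easy to sketch in code; here is a minimal Python version (the function name and the example counts are mine, not from the paper):

```python
def precision_recall_f(tp, fp, fn):
    """Precision, recall, and F-measure from raw counts.

    Assumes tp + fp > 0, tp + fn > 0, and P + R > 0, matching the
    conditions under which each quantity is defined above.
    """
    p = tp / (tp + fp)       # precision
    r = tp / (tp + fn)       # recall
    f = 2 * p * r / (p + r)  # harmonic mean of P and R
    return p, r, f

# Example: 8 hits, 2 false alarms, 4 misses.
p, r, f = precision_recall_f(8, 2, 4)
print(round(p, 3), round(r, 3), round(f, 3))  # → 0.8 0.667 0.727
```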

Substituting \(P\) and \(R\) into this, we find

$$

F = \frac{TP}{|+| - \frac{FN-FP}{2}}

$$

where \(|+| := TP+FN\) is the total number of positives in the sample.
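This identity is easy to sanity-check numerically; a quick sketch (the count triples are arbitrary):

```python
# Verify that 2PR/(P+R) equals TP / (|+| - (FN - FP)/2), with |+| = TP + FN.
for tp, fp, fn in [(8, 2, 4), (5, 5, 5), (100, 1, 0), (1, 0, 10)]:
    p = tp / (tp + fp)
    r = tp / (tp + fn)
    f_direct = 2 * p * r / (p + r)
    pos = tp + fn                        # |+|, total positives in the sample
    f_identity = tp / (pos - (fn - fp) / 2)
    assert abs(f_direct - f_identity) < 1e-12
print("identity holds on all test triples")
```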

Now we can ask: what does it mean to say \(F \le 1/\alpha\) for some \(\alpha \ge 1\)?

We see that \(\alpha\) bounds the number of true detections.

If \(F \le 1/\alpha\) for \(\alpha > 1\), then, since \(TP\) is an integer,

$$

TP \le \left \lfloor \frac{FN+FP}{2(\alpha - 1)} \right \rfloor.

$$
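Since \(TP\) is an integer, the bound amounts to \(TP \le \lfloor (FN+FP)/(2(\alpha-1)) \rfloor\), which can be brute-forced over small counts. A sketch (\(\alpha = 3\) is an arbitrary choice):

```python
from math import floor

# Brute-force check: whenever F <= 1/alpha (alpha > 1), the number of
# true positives is bounded by floor((FN + FP) / (2 * (alpha - 1))).
alpha = 3.0
for tp in range(20):
    for fp in range(20):
        for fn in range(20):
            if tp + fp == 0 or tp + fn == 0:
                continue  # precision or recall undefined
            f = 2 * tp / (2 * tp + fn + fp)
            if f <= 1 / alpha:
                assert tp <= floor((fn + fp) / (2 * (alpha - 1)))
print("bound holds for all counts up to 20")
```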

At the extremes: when \(\alpha = 1\) and the bound is tight, \(F = 1\) forces \(FP = FN = 0\), or perfect detection and discrimination; and as \(\alpha \to \infty\), \(F \to 0\) forces \(TP = 0\), or "Is the system even plugged in? Is the light on? Hold this, while I reach around the back."

So, for \(\alpha > 2\), the number of correct detections is less than the mean number of failures:

$$

TP < \frac{FN+FP}{2}.

$$
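This consequence checks out the same way (again a sketch; \(\alpha = 2.5\) is an arbitrary value above 2):

```python
# For any alpha > 2, F <= 1/alpha forces TP strictly below (FN + FP)/2.
alpha = 2.5
for tp in range(15):
    for fp in range(15):
        for fn in range(15):
            if tp + fp == 0 or tp + fn == 0:
                continue  # precision or recall undefined
            f = 2 * tp / (2 * tp + fn + fp)
            if f <= 1 / alpha:
                assert tp < (fn + fp) / 2
print("strict inequality holds for all counts up to 15")
```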
