Bytes IT Community

Understanding precision/recall graph

Two questions related to the topic

1. If I have an empty set of relevant results, then it would be better
to have no answers from the system at all. But neither precision nor
recall gives a penalty for returning false positives in this case
(0/1 = 0/2 = ... = 0/100). How do people handle this? Is there another
measure for such cases?
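A quick sketch of what I mean (the helper function and names are just
for illustration):

```python
# With an empty set of relevant results, precision is 0/k for any
# number k of returned answers, all of which are false positives --
# so returning 1 wrong answer scores the same as returning 100.
def precision(relevant_retrieved, retrieved):
    return relevant_retrieved / retrieved

for k in (1, 2, 100):
    print(k, precision(0, k))  # 0.0 every time
```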

2. Let's say I have a ranking:

1. A
2. B *
3. C
4. D *
5. F

Where the relevant answers are B, D, E, and the relevant answers found
by the system are marked with a star (*).

Then for recall level 1/3 I have precision 1/2,
and for recall level 2/3 I have precision 2/4 = 1/2.

The last position in the ranking, a false positive, is not counted in
the precision/recall measure, since the measure considers "only
positions where an increase in recall is produced". I have a system
which returns some false positives at the end of the ranking, but how
can I measure/compare it with other systems in terms of effectiveness,
if precision/recall does not take this into account?
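To make the numbers above concrete, here is a short script recomputing
the precision/recall points for the example ranking (the variable names
are mine):

```python
# Record (recall, precision) only at ranks where recall increases --
# exactly as in the measure described above. Note the false positive
# at rank 5 never appears in the resulting points.
ranking = ["A", "B", "C", "D", "F"]
relevant = {"B", "D", "E"}

hits = 0
points = []
for rank, doc in enumerate(ranking, start=1):
    if doc in relevant:
        hits += 1
        points.append((hits / len(relevant), hits / rank))

print(points)  # [(0.3333333333333333, 0.5), (0.6666666666666666, 0.5)]
```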

TIA,
Maciej
Oct 3 '08 #1
1 Reply


Maciej Gawinecki wrote:
Two questions related to the topic
WTF has this got to do with SGML or XML?

///Peter

[Followups reset]
Oct 3 '08 #2

This discussion thread is closed; replies have been disabled.