
What is a good starting number for confidence_penalty #13

Hi,

What is a good number for confidence_penalty? From the user guide, I assume it is a number in [0, 1), like robust_lambda. Is that true, and if so, what does it mean? The user guide only mentions robust_lambda, not the confidence penalty.

EDIT:
I also have trouble understanding when I should use a combiner rather than simply the fc_layers dict of the output. What is the difference here?

Best regards,
SJF

#2

Hi SJF,
I’ll try to improve the doc on that. Anyway, the intuition is that the more penalty you add, the more you expect your predictions to be entropic and less sharp. This may be useful when you have noisy labels you don’t trust too much.
As for the specific value to use, that depends on your problem. I would suggest starting with a small value and gradually increasing it up to the point where performance degrades; then you will have a range of options to choose from.
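
For concreteness, here is a minimal NumPy sketch of how a confidence penalty of this kind typically works, following the Pereyra et al. (2017) formulation: cross-entropy minus a weighted prediction entropy, where the `beta` argument plays the role of confidence_penalty. The function name and exact formulation are illustrative, not necessarily what the library implements internally.

```python
import numpy as np

def confidence_penalized_loss(probs, targets, beta):
    """Cross-entropy minus beta times the prediction entropy.

    probs:   (batch, classes) softmax outputs
    targets: (batch,) integer class labels
    beta:    confidence-penalty weight; beta=0 recovers plain CE
    """
    eps = 1e-12
    # standard cross-entropy on the true class
    ce = -np.log(probs[np.arange(len(targets)), targets] + eps)
    # entropy of the predicted distribution; higher = flatter, less sharp
    entropy = -np.sum(probs * np.log(probs + eps), axis=1)
    # subtracting entropy rewards more entropic (less confident) predictions
    return np.mean(ce - beta * entropy)

# a very sharp prediction is rewarded less as beta grows
probs = np.array([[0.98, 0.01, 0.01]])
targets = np.array([0])
for beta in (0.0, 0.1, 0.5):
    print(beta, confidence_penalized_loss(probs, targets, beta))
```

You can see from the sketch why a small starting value makes sense: as `beta` grows, the entropy term starts to dominate the cross-entropy term and the model is pushed toward uniform predictions.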

Regarding having FCs in the combiner or in the output feature: if you have only one output feature there is no difference, but if you have more than one, the combiner weights receive gradients backpropagated from all the outputs, while the output-specific ones do not.
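
To make that concrete, here is a hypothetical config sketch (as a Python dict) contrasting the two placements; the exact keys (fc_layers, fc_size, etc.) may vary by version, so check them against the user guide rather than treating this as canonical.

```python
# Illustrative config sketch, not a verified working example.
config = {
    "input_features": [
        {"name": "text", "type": "text"},
    ],
    "combiner": {
        "type": "concat",
        # FC layers here are shared: with two output features below,
        # gradients from BOTH losses flow back through these weights.
        "fc_layers": [{"fc_size": 64}],
    },
    "output_features": [
        {
            "name": "label_a",
            "type": "category",
            # FC layers here are private to label_a: only its own loss
            # updates these weights.
            "fc_layers": [{"fc_size": 32}],
        },
        {"name": "label_b", "type": "category"},
    ],
}
```

So with multiple outputs, put layers in the combiner when you want a representation shaped by all the tasks, and in the output feature when you want capacity dedicated to a single task.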