Hi, I have a question about the result confidence part. In both the extraction and classification models, how does Azure set the confidence level? When does the user confirm (validate) whether the information the model read and understood was correct, so that the model can be x% confident about the result? (I'm thinking of it in terms of the y_pred and y_actual we use during model training in Python with sklearn, which lets the model be evaluated for accuracy, recall, etc.)
@nicknelson1975 · 5 days ago
Doesn't seem like it's possible with Document Intelligence. If you process a document that had low confidence or incorrect answers, I think you'd just add it to your training set.
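To make the distinction concrete: the confidence score is returned by the service per field (and per document type for classification) at analysis time, with no ground truth involved, whereas accuracy/recall in the sklearn sense only exists if you hand-label a held-out set of documents and compare the extracted values yourself. Here's a minimal sketch of that, assuming the azure-ai-formrecognizer Python SDK (v3.x); the endpoint, key, model ID, file name, and field names are all placeholders:

```python
# Sketch only -- "my-custom-model", "sample-invoice.pdf", and the field
# names below are hypothetical; swap in your own resource details.
from azure.ai.formrecognizer import DocumentAnalysisClient
from azure.core.credentials import AzureKeyCredential
from sklearn.metrics import accuracy_score

client = DocumentAnalysisClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

with open("sample-invoice.pdf", "rb") as f:
    poller = client.begin_analyze_document("my-custom-model", document=f)
result = poller.result()

# The service attaches a confidence score (0-1) to each result at
# analysis time -- there is no y_actual involved here.
predictions = {}
for doc in result.documents:
    print(f"Doc type: {doc.doc_type}, classification confidence: {doc.confidence}")
    for name, field in doc.fields.items():
        print(f"  {name}: {field.value} (confidence {field.confidence})")
        predictions[name] = field.value

# Accuracy/recall in the sklearn sense requires your own ground-truth
# labels for a held-out document, compared field by field.
ground_truth = {"InvoiceTotal": "1,250.00", "VendorName": "Contoso"}  # hand-labeled
y_true = [ground_truth[k] for k in ground_truth]
y_pred = [predictions.get(k, "") for k in ground_truth]
print("Field-level accuracy:", accuracy_score(y_true, y_pred))
```

So the evaluation loop @nicknelson1975 describes is something you build around the service: measure against your own labels, and fold the low-confidence or wrongly extracted documents back into the training set.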