No human review:
Every prediction is assigned the label with the highest confidence score, with no human intervention at all. If an occasional wrong label doesn't bother you and your main goal is 100% no-touch automation, this is the choice for you. However, keep in mind that your model does not learn and improve this way.
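As a minimal sketch of what this setting does (the function name and example scores are our own, not the product's API): every prediction is simply auto-labeled with its top-scoring class, no matter how low that score is.

```python
# Sketch of "no human review": always pick the highest-confidence label.
# Function name and example scores are illustrative, not a real API.

def auto_label(confidences: dict[str, float]) -> str:
    """Return the label with the highest confidence score."""
    return max(confidences, key=confidences.get)

# Even a weak 0.48 top score is auto-accepted -- nothing is routed to a human.
print(auto_label({"invoice": 0.48, "receipt": 0.32, "contract": 0.20}))
# → invoice
```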
Standard human review:
Here, you can set up basic error minimization. First, select the maximum error rate you are comfortable with. The model will then estimate how much data will have to be reviewed by a human pair of eyes (through our Slack integration). The model learns from your decisions and becomes more accurate over time, so you might want to come back and decrease the maximum error rate later. Keep in mind the trade-off between model accuracy and manual human labor.
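The mechanics behind this setting can be sketched roughly as confidence-threshold routing (the threshold value and function below are illustrative assumptions, not the product's internals): predictions above the threshold are auto-labeled, the rest go to a human, and a stricter maximum error rate implies a higher threshold and therefore more manual review.

```python
# Hypothetical sketch of standard human review: route low-confidence
# predictions to a human reviewer, auto-accept the rest. A lower maximum
# error rate would raise the threshold, sending more items to review.

def route(predictions: list[tuple[str, float]], threshold: float):
    """Split predictions into auto-labeled and human-review lists."""
    auto, review = [], []
    for label, confidence in predictions:
        (auto if confidence >= threshold else review).append(label)
    return auto, review

preds = [("invoice", 0.97), ("receipt", 0.62), ("invoice", 0.88), ("contract", 0.41)]
auto, review = route(preds, threshold=0.80)
print(len(auto), len(review))
# → 2 2  (two auto-labeled, two sent to a human)
```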
Custom human review:
You can set up custom error settings for each label. These settings let you decide in advance how many false positives and false negatives you want to allow per label. This makes sense if, for example, you want your document classifier to detect all invoices but don't mind if another document type is occasionally missed. For more detail, check out our blog post on the topic.
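One way to picture per-label settings is per-label confidence thresholds (the threshold values and function below are our own illustration): giving "invoice" a low threshold means anything that even looks like an invoice is accepted as one, trading a few false positives for very few missed invoices.

```python
# Hypothetical sketch of custom error settings as per-label thresholds.
# Threshold values are illustrative: "invoice" is deliberately low so
# invoices are rarely missed (few false negatives).

THRESHOLDS = {"invoice": 0.30, "receipt": 0.80, "contract": 0.80}

def classify(confidences: dict[str, float]):
    """Return the best label that clears its own threshold, else None (review)."""
    passing = {l: c for l, c in confidences.items() if c >= THRESHOLDS[l]}
    if not passing:
        return None  # nothing confident enough -> human review
    return max(passing, key=passing.get)

# "receipt" scores higher, but only "invoice" clears its (low) threshold.
print(classify({"invoice": 0.35, "receipt": 0.40, "contract": 0.25}))
# → invoice
```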
If you get stuck or want to discuss your options in more detail, feel free to send us a message in the in-app chat and we'll take a look at it together.