You can set up different human review rules for each AI Block.

Essentially, this works just as it would if you were training a new employee: if an unfamiliar file, perhaps even in an unfamiliar language, comes across their desk, they will ask for your feedback. The next time they encounter a similar file, they will likely know what to do without asking.

To implement human review in Levity, just follow these steps:

Human-in-the-loop at Levity

Step 1: Click on the AI Block (left sidebar) for which you want to include human labelers.

Step 2: Click on "Human review"

Here you have three options:

  • No human review: Each prediction is assigned to the label with the highest confidence score, with no human intervention at all.

  • Standard human review: Set up basic error minimization by choosing the maximum error rate you want to allow. The model will give you an estimate of how much data will have to be reviewed by a human pair of eyes.

  • Advanced settings: Set up custom error settings for each label. These settings let you predetermine the number of false positives and false negatives you want to allow for each label. For more explanation, check out our blog post on the topic. A short sketch of how these three options route predictions follows this list.
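To make the three options more concrete, here is a minimal Python sketch of the routing idea. This is not Levity's actual implementation: the label names, confidence scores, and thresholds are made-up examples, and in practice Levity derives the thresholds from the error rates you configure.

```python
# Conceptual sketch (not Levity's code): how the three review modes decide
# whether a prediction is applied automatically or sent to a human reviewer.
# Label names, confidences, and thresholds are hypothetical.

def route_prediction(confidences, mode="standard",
                     global_threshold=0.80, per_label_thresholds=None):
    """Return (label, needs_review) for a dict of label -> confidence."""
    label = max(confidences, key=confidences.get)  # highest-confidence label
    score = confidences[label]

    if mode == "none":
        # No human review: always apply the top label automatically.
        return label, False
    if mode == "standard":
        # Standard review: anything below a single confidence threshold
        # (derived from your maximum error rate) goes to a human.
        return label, score < global_threshold
    if mode == "advanced":
        # Advanced settings: each label gets its own threshold, reflecting
        # the false-positive / false-negative budget you set per label.
        threshold = (per_label_thresholds or {}).get(label, global_threshold)
        return label, score < threshold
    raise ValueError(f"unknown mode: {mode}")

# Example: an ambiguous prediction under "standard" review is flagged.
print(route_prediction({"invoice": 0.55, "receipt": 0.45}, mode="standard"))
# -> ('invoice', True)  # sent to a human reviewer
```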

Step 3: Select your preference, for example, "Standard human review", by clicking on the box.

Step 4: Set your maximum error rate, keeping in mind the trade-off between model accuracy and manual human labor: the lower the error rate you allow, the more data will be routed to human review. The model will learn from your decisions and become more accurate, so you might want to come back here and decrease the maximum error rate after some time. The short sketch below illustrates this trade-off.
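As a rough illustration of the trade-off, the following sketch counts how many of a handful of made-up past predictions would fall below different confidence thresholds. The numbers are invented and only show the direction of the effect, not what Levity actually computes for your data.

```python
# Illustrative only: how the review workload grows as the threshold tightens.
# These confidence scores are made up; Levity derives the real threshold
# from the maximum error rate you set.

past_confidences = [0.99, 0.97, 0.95, 0.91, 0.86, 0.78, 0.64, 0.52]

for threshold in (0.60, 0.80, 0.95):
    to_review = sum(score < threshold for score in past_confidences)
    share = to_review / len(past_confidences)
    print(f"threshold {threshold:.2f}: {share:.0%} of items routed to a human")

# A stricter (higher) threshold catches more potential errors but sends more
# items to your reviewers. As the model improves, fewer items fall below the
# same threshold, so you can afford a lower error rate later on.
```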

You can now either do the reviewing by yourself or set up a team on Slack to help you out:

Step 5: Click on "Add reviewers"

  1. Connect your Slack workspace and allow Levity to interact with it. This is needed because the app sends an interactive message to the labelers you select, in which they can label edge cases. (A rough sketch of what such a message contains follows these steps.)

  2. Select the labelers from your Slack workspace.

  3. Click on "Save changes".
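You do not need to write any code for this; once your workspace is connected, Levity builds and sends the interactive message for you. Purely as an illustration, here is roughly what such a labeling message could look like if built with Slack's standard Block Kit format and the slack_sdk Python library; the token, channel, and label names are hypothetical.

```python
# Illustration only: Levity handles this automatically. This sketch shows the
# kind of interactive message a reviewer receives, expressed with Slack's
# Block Kit. Token, channel, and labels below are placeholders.
from slack_sdk import WebClient

client = WebClient(token="xoxb-your-bot-token")  # placeholder token

client.chat_postMessage(
    channel="#levity-review",  # hypothetical reviewer channel
    text="A prediction needs your review",
    blocks=[
        {"type": "section",
         "text": {"type": "mrkdwn",
                  "text": "This file fell below the confidence threshold. "
                          "Which label fits best?"}},
        {"type": "actions",
         "elements": [
             {"type": "button", "action_id": "label_invoice",
              "text": {"type": "plain_text", "text": "Invoice"},
              "value": "invoice"},
             {"type": "button", "action_id": "label_receipt",
              "text": {"type": "plain_text", "text": "Receipt"},
              "value": "receipt"},
         ]},
    ],
)
```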

Step 6: If you want to test the labeling feature, upload a difficult data point to the "Testing" tab, one that will fall below the confidence threshold you selected in Step 4. This will trigger the interactive Slack labeling message.
