Best practices for quality workflow

You can build your quality workflow in many different ways. Here are the approaches we recommend for the best possible results:

Using continuous feedback to collaborate with your team

Questions from labelers

When in doubt or unsure how to label an asset, labelers should ask questions through the issues and questions panel. Reviewers and managers then answer those questions, giving the labeler the information they need.

Labeler asks a question

Issues raised by reviewers and managers

Reviewers and managers can create issues on annotated assets and then send the assets back to the same (or different) labeler for correction.

Reviewer adds an issue

Performing random and targeted reviews

Perform focused reviews in the Explore view

Use the Explore view and its filters to focus your review on the most relevant assets, classes, and labelers. For instance, you can show only labels generated today to run a regular daily check, focus on new labelers to inspect their work more closely, or target complex categories with a high error probability.

Use Explore filters for a more focused review
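Outside the app, the same filtering logic can be applied to exported labels. Here is a minimal, SDK-free sketch; the record fields (`labeler`, `category`, `created_at`) and the email address are hypothetical and only illustrate the kind of daily, labeler-focused check described above:

```python
# Hypothetical label records, mimicking fields you might export from a project
labels = [
    {"asset_id": "a1", "labeler": "new.hire@acme.com", "category": "DEFECT", "created_at": "2024-05-02"},
    {"asset_id": "a2", "labeler": "senior@acme.com", "category": "OK", "created_at": "2024-05-01"},
    {"asset_id": "a3", "labeler": "new.hire@acme.com", "category": "DEFECT", "created_at": "2024-05-01"},
]

def filter_labels(labels, created_on=None, labelers=None, categories=None):
    """Keep only labels matching all provided criteria (None = no filter)."""
    result = []
    for label in labels:
        if created_on and label["created_at"] != created_on:
            continue
        if labelers and label["labeler"] not in labelers:
            continue
        if categories and label["category"] not in categories:
            continue
        result.append(label)
    return result

# Daily check: only labels produced today by a newly onboarded labeler
todays_work = filter_labels(labels, created_on="2024-05-02", labelers={"new.hire@acme.com"})
```

Each filter narrows the review scope, so a reviewer only sees the slice of work that matters for the day's check.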

Automated review

Activate review queue automation to have the Kili app randomly pick assets for review. Random selection eliminates human bias when deciding which assets get reviewed.

Activate review queue automation to make the Kili app randomly pick assets for review
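The core idea behind an automated review queue is plain random sampling. As a sketch (the asset IDs and the 10% review share are assumptions for illustration, not Kili defaults):

```python
import random

# Hypothetical pool of asset IDs awaiting review
asset_ids = [f"asset-{i}" for i in range(100)]

def pick_for_review(asset_ids, review_share=0.1, seed=None):
    """Randomly sample a share of assets for review, removing human bias
    from the choice of which assets get a second look."""
    rng = random.Random(seed)
    n = max(1, round(len(asset_ids) * review_share))
    return rng.sample(asset_ids, n)

to_review = pick_for_review(asset_ids, review_share=0.1, seed=42)
```

Because every asset has the same chance of being picked, the resulting quality estimate is not skewed toward assets a reviewer happened to find interesting.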

Refer to this short video for information on how to handle the Explore view and the automated review queue:

Adding quality metrics

Consensus

Consensus is the perfect choice if you want to evaluate simple classification tasks.
In the project quality settings, you can set the percentage of assets covered by consensus and the number of labelers per consensus.
Consensus results can be checked in the Explore view and in the Analytics page of your project.

Quality insights based on consensus
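To make the metric concrete, a simple consensus score for a classification task is the fraction of labeler pairs that agree on an asset. This is a minimal sketch of that idea, not Kili's exact formula; the asset IDs, labeler names, and categories are hypothetical:

```python
from itertools import combinations

# Hypothetical classification labels: asset_id -> {labeler: chosen category}
labels_by_asset = {
    "a1": {"alice": "CAT", "bob": "CAT", "carol": "CAT"},
    "a2": {"alice": "CAT", "bob": "DOG", "carol": "CAT"},
}

def consensus_score(annotations):
    """Fraction of labeler pairs that agree on a single asset."""
    pairs = list(combinations(annotations.values(), 2))
    if not pairs:
        return 1.0  # a single labeler cannot disagree with anyone
    return sum(a == b for a, b in pairs) / len(pairs)

scores = {asset: consensus_score(ann) for asset, ann in labels_by_asset.items()}
# a1: all three labelers agree -> 1.0; a2: one of three pairs agrees -> ~0.33
```

Low-scoring assets are the ones worth surfacing first in Explore, since disagreement usually signals either an ambiguous asset or unclear labeling instructions.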

Honeypot

Use Honeypot to measure the performance of a new team working on your project. Simply set some of your annotated assets as Honeypot (ground truth).
Honeypot results can be checked in the Explore view and in the Analytics page of your project.
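A honeypot score boils down to accuracy against the ground truth you set aside. Here is a minimal sketch under that assumption; the asset IDs and categories are made up for illustration:

```python
# Hypothetical honeypot assets with known ground-truth categories
ground_truth = {"h1": "OK", "h2": "DEFECT", "h3": "OK"}

# Labels submitted by a new labeler on those same assets
submitted = {"h1": "OK", "h2": "OK", "h3": "OK"}

def honeypot_score(ground_truth, submitted):
    """Share of honeypot assets the labeler annotated correctly."""
    graded = [submitted.get(asset) == category for asset, category in ground_truth.items()]
    return sum(graded) / len(graded)

score = honeypot_score(ground_truth, submitted)  # 2 of 3 correct
```

Because the labeler does not know which assets are honeypots, the score is an unbiased spot check of their overall accuracy.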

Setting up programmatic QA

Simplify and speed up your QA process by using a QA bot, avoiding a lot of back-and-forth between your labelers and reviewers. To illustrate how automated QA works, here are example videos:

For more information, refer to our QA bot use case.
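At its simplest, a QA bot applies automatic rule checks to each submitted label and raises issues for anything suspicious. The sketch below shows the idea only; the label structure, field names, and the 5-pixel threshold are assumptions for illustration, not part of Kili's API:

```python
def qa_check(label, min_box_size=5):
    """Return a list of issue messages for rule violations (empty list = label passes)."""
    issues = []
    boxes = label.get("bounding_boxes", [])
    if not boxes:
        issues.append("No bounding box drawn on the asset.")
    for box in boxes:
        if box["width"] < min_box_size or box["height"] < min_box_size:
            issues.append(f"Bounding box is suspiciously small: {box}")
    return issues

# A label with one degenerate box: the bot flags it instead of a human reviewer
label = {"bounding_boxes": [{"width": 3, "height": 40}]}
problems = qa_check(label)
```

In a real setup, such checks would run on every label submission, and only labels with a non-empty issue list would be routed back to the labeler, leaving human reviewers free to focus on genuinely ambiguous cases.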

Learn more

For more information on the Kili QA process, refer to Quality management in Kili.