When in doubt or lacking context, labelers should ask questions through the Issues and Questions panel. Reviewers and managers then answer those questions, giving the labeler the information they need.
Use the Explore view and available filters to focus your review on the most relevant assets, classes, and labelers. For instance:
- Show only labels generated today to run a regular, daily check.
- Focus on new labelers to check their work more closely.
- Focus on complex categories with a high error probability.
Activate Review queue automation to have the Kili app randomly pick assets for review. Random selection eliminates human bias when assigning assets to be reviewed.
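Under the hood, unbiased review selection amounts to uniform random sampling over the asset pool. The sketch below illustrates the idea in plain Python; the function name and the fixed 10% coverage are illustrative assumptions, not part of the Kili app or SDK.

```python
import random

def pick_assets_for_review(asset_ids, coverage=0.1, seed=None):
    """Randomly sample a fraction of assets for review.

    Uniform sampling removes human bias from the choice of which
    assets get reviewed. `coverage` is the fraction of the pool to
    review; `seed` makes the draw reproducible.
    """
    rng = random.Random(seed)
    k = max(1, round(len(asset_ids) * coverage))
    return rng.sample(asset_ids, k)

# Example: review 10% of a 50-asset pool -> 5 assets picked at random
queue = pick_assets_for_review([f"asset-{i}" for i in range(50)], seed=42)
print(len(queue))  # 5
```

With a fixed seed the same queue is drawn every time, which is convenient for auditing which assets were selected.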
Consensus is the perfect choice if you want to evaluate simple classification tasks.
In the project quality settings, you can set the percentage of assets covered by consensus and the number of labelers per consensus.
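Conceptually, consensus measures how strongly the labelers assigned to an asset agree with each other. A minimal sketch of one such metric, majority-vote agreement for a single classification asset, is shown below; this is an illustration of the general idea, not Kili's exact consensus formula.

```python
from collections import Counter

def consensus_agreement(labels):
    """Fraction of labelers who agree with the majority class
    for one asset (1.0 means full agreement)."""
    counts = Counter(labels)
    majority_size = counts.most_common(1)[0][1]
    return majority_size / len(labels)

# Three labelers per consensus; two of them agree -> 2/3 agreement
print(round(consensus_agreement(["cat", "cat", "dog"]), 2))  # 0.67
```

Assets with low agreement scores are the ones worth prioritizing in your review.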
Consensus results can be checked in the Explore view and in the Analytics page of your project.
Use Honeypot to measure the performance of a new team working on your project: simply set some of your already-annotated assets as honeypots (ground truth).
Honeypot results can be checked in the Explore view and in the Analytics page of your project.
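A honeypot score boils down to comparing each labeler's answers on the ground-truth assets against the known correct labels. The sketch below shows this for simple classification; the function name and dictionary shape are assumptions for illustration, not the Kili API.

```python
def honeypot_score(ground_truth, submitted):
    """Share of honeypot assets a labeler classified correctly.

    `ground_truth` maps asset id -> correct class; `submitted` maps
    asset id -> the labeler's answer. Missing answers count as wrong.
    """
    correct = sum(
        1 for asset_id, truth in ground_truth.items()
        if submitted.get(asset_id) == truth
    )
    return correct / len(ground_truth)

truth = {"a1": "cat", "a2": "dog", "a3": "bird"}
labels = {"a1": "cat", "a2": "dog", "a3": "cat"}
print(round(honeypot_score(truth, labels), 2))  # 0.67
```

A consistently low score flags a labeler who needs extra training before their labels are trusted.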
Simplify and speed up your QA process by using a QA bot; it saves a lot of back-and-forth between your labelers and reviewers.
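The core of any QA bot is a set of automated rule checks that run on each submitted label and flag problems before a human reviewer ever sees them. The sketch below shows that pattern on a hypothetical object-detection label; the rules, label shape, and function name are all illustrative assumptions.

```python
def run_qa_checks(label, checks):
    """Run automated rule checks on a submitted label.

    `checks` is a list of (predicate, message) pairs; the returned
    list contains the messages of every failed check (empty means
    the label passes and no issue needs to be opened).
    """
    return [message for predicate, message in checks if not predicate(label)]

# Hypothetical rules for an object-detection label
checks = [
    (lambda l: len(l["boxes"]) > 0,
     "Label has no bounding boxes"),
    (lambda l: all(b["width"] > 0 and b["height"] > 0 for b in l["boxes"]),
     "Label contains a zero-area bounding box"),
]

label = {"boxes": [{"width": 10, "height": 0}]}
print(run_qa_checks(label, checks))  # ['Label contains a zero-area bounding box']
```

Each failed check can then be turned into an issue for the labeler, replacing a manual review round-trip.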
For more information on the Kili QA process, refer to Quality management.