Kili enables users to upload plugins and run them on Kili resources. Plugins add extensive modularity, making labeling processes significantly more efficient and robust. Users can develop plugins, upload them to Kili, and set them up to be triggered by specific events in the application.
A plugin can be triggered by the following events:
- Asset has been labeled (Submit button clicked)
- Asset has been reviewed (Review button clicked)
For details on triggering events, refer to our SDK documentation.
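To make the event model concrete, here is a minimal sketch of how a plugin might route the two trigger events to handlers. The event names, handler signatures, and return values are illustrative assumptions, not the actual Kili SDK interface; refer to the SDK documentation for the real API.

```python
# Illustrative sketch only: event names and handler shapes are assumptions,
# not the real Kili plugin interface.

def on_submit(label: dict, asset_id: str) -> str:
    # Runs when a labeler clicks Submit on an asset.
    return f"checked label on asset {asset_id}"

def on_review(label: dict, asset_id: str) -> str:
    # Runs when a reviewer clicks Review on an asset.
    return f"re-checked label on asset {asset_id}"

# Map each trigger event to its handler.
HANDLERS = {
    "onSubmit": on_submit,
    "onReview": on_review,
}

def dispatch(event: str, label: dict, asset_id: str) -> str:
    # Route an incoming event to the matching handler.
    return HANDLERS[event](label, asset_id)
```

In a real plugin, the handlers would call back into the Kili API (for example, to create issues or update metadata) instead of returning strings.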
One application of plugins is to automate your quality checks: you can directly write your business rules in a Python script and upload them to Kili to have each new label automatically checked.
Let's imagine a project where we process invoices. The project has two jobs, each with several transcription tasks. One of the jobs covers payment information and must contain a valid IBAN as well as a currency. The IBAN must start with FR, and the currency must be USD. Kili's interface customization options are powerful and flexible, but they can't enforce rules like these, so we turn to Kili plugins. We'll set up our plugin to check these two rules when labelers click Submit. If the annotations don't match the predefined rules, our QA bot will add issues to the asset and send it back to the labeling queue. Finally, our script will calculate labeling accuracy and store the metric in the asset's json_metadata. All this with no need to engage a human reviewer.
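The two business rules above can be sketched as plain Python functions that a plugin's submit handler could call. The field names (`IBAN`, `CURRENCY`) are hypothetical placeholders for the project's transcription jobs, not actual Kili job names:

```python
# Hypothetical business rules for the invoice project described above:
# the IBAN must start with "FR" and the currency must be USD.
# Field names are illustrative, not actual Kili job identifiers.

def check_label(annotations: dict) -> list:
    """Return the issues to raise on the asset (empty if the label passes)."""
    issues = []
    if not annotations.get("IBAN", "").startswith("FR"):
        issues.append("IBAN must start with FR")
    if annotations.get("CURRENCY") != "USD":
        issues.append("Currency must be USD")
    return issues

def accuracy(annotations: dict) -> float:
    # Share of rules passed; a plugin could store this figure
    # in the asset's json_metadata.
    n_rules = 2
    return 1 - len(check_label(annotations)) / n_rules
```

If `check_label` returns a non-empty list, the plugin would create one issue per entry and send the asset back to the labeling queue.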
Refer to this short video for information on how to handle Kili plugins:
Refer to this end-to-end example: Developing a QA bot.
When working with consensus for object detection tasks, it is often handy for a reviewer to access the annotations of all the labelers involved, compare them, and choose the best one. With Kili plugins, this task becomes much simpler.
For example: you can program your plugin to create an additional annotation that combines the annotations created by all the labelers. This way, you can instantaneously get a big-picture overview and only act when the situation calls for it. In large projects, this can save you a significant amount of time.
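As a minimal sketch of such an aggregation, the plugin could average the corner coordinates of each labeler's bounding box to build one combined annotation for the reviewer. The `(x, y, w, h)` box format here is an assumption for illustration, not Kili's jsonResponse schema:

```python
# Minimal sketch: average each labeler's bounding box into one consensus box.
# The {"x", "y", "w", "h"} format is illustrative only.

def combine_boxes(boxes: list) -> dict:
    """Average a list of {'x','y','w','h'} boxes into a single consensus box."""
    n = len(boxes)
    return {k: sum(b[k] for b in boxes) / n for k in ("x", "y", "w", "h")}
```

Averaging is only one possible strategy; a plugin could just as well take the union of the boxes or keep the box from the labeler with the highest consensus score.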
For projects with a specialized workforce, you can split your workforce into several groups, each focused on a specific portion of your labeling tasks. For instance, group A (experts in domain A) only labels job A, and group B (experts in domain B) only labels job B. With Kili plugins, you can then combine these workflows by aggregating the annotations of multiple labelers into one label. Your reviewers are then presented with the combined results.
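A sketch of this per-job merge, assuming each group's label is a dictionary keyed by job name (`JOB_A`, `JOB_B` are hypothetical names, and the layout is an assumption, not Kili's actual jsonResponse schema):

```python
# Illustrative sketch: merge group A's and group B's per-job responses
# into a single combined label. Job names and layout are assumptions.

def merge_job_responses(label_a: dict, label_b: dict) -> dict:
    """Combine per-job responses from two labelers into one label."""
    merged = {}
    merged.update(label_a)  # contributes JOB_A
    merged.update(label_b)  # contributes JOB_B
    return merged
```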
Learn more about plugins from our SDK documentation.
For details on how to develop Kili plugins, refer to our tutorial notebook How to build a Kili plugin.