
Object detection jobs can be:

  • Standard object detection jobs
  • Semantic segmentation jobs
  • Pose estimation jobs

For general information on the JSON structure, refer to Generic JSON template.

For information on how to use object detection in real-life labeling projects, refer to Object/entity detection jobs.

Standard object detection jobs

Object detection job template

"OBJECT_DETECTION_JOB": {
      "content": {
        "categories": {
          "A": {
            "children": [],
            "color": "#472CED",
            "name": "A",
            "id": "category104"
          },
          "B": {
            "children": [],
            "name": "B",
            "id": "category107",
            "color": "#5CE7B7"
          }
        },
        "input": "radio"
      },
      "instruction": "<instruction>",
      "mlTask": "OBJECT_DETECTION",
      "required": 1,
      "tools": [
        "rectangle"
      ],
      "isChild": false,
      "isNew": true
    }

Job-specific settings

| Parameter | Value | Description |
| --- | --- | --- |
| mlTask | "OBJECT_DETECTION" | N/A |
| input | "radio" | N/A |
| tools | ["<tool name>"] | The available values for tools are: "rectangle", "polygon", "marker" (for point), "polyline" (for line), "vector" |
| isOriented | true / false | Adds a visible orientation marker to bounding boxes, so that bbox orientation stays consistent in projects that require it. Can be set only from the project JSON settings, and works only with bounding boxes. |

The isOriented entry must be at the same level as isChild. Refer to this example:

"instruction": "Categories",
"isChild": false,
"isOriented": true,
"tools": ["rectangle"],
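The settings above can be assembled programmatically before sending them to the API. The sketch below builds the standard object detection job from the template as a Python dict and runs a few sanity checks; it assumes the job sits under a top-level "jobs" key, as in Kili project interfaces, and reuses the template's example job and category names.

```python
# A minimal sketch of the standard object detection interface shown above,
# assuming the job lives under a top-level "jobs" key.
json_interface = {
    "jobs": {
        "OBJECT_DETECTION_JOB": {
            "content": {
                "categories": {
                    "A": {"children": [], "color": "#472CED", "name": "A"},
                    "B": {"children": [], "color": "#5CE7B7", "name": "B"},
                },
                "input": "radio",
            },
            "instruction": "Categories",
            "mlTask": "OBJECT_DETECTION",
            "required": 1,
            "tools": ["rectangle"],
            "isChild": False,
            "isOriented": True,  # same level as isChild; bounding boxes only
        }
    }
}

# Sanity checks before uploading the interface.
job = json_interface["jobs"]["OBJECT_DETECTION_JOB"]
assert job["mlTask"] == "OBJECT_DETECTION"
assert set(job["tools"]) <= {"rectangle", "polygon", "marker", "polyline", "vector"}
```

Building the interface as a dict like this makes it easy to validate the tool names and the isOriented placement in code review rather than after an API error.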

For information on how the object detection jobs look in the exported jsonResponse, refer to The structure of jsonResponse for exported object/entity detection and relation jobs.

Semantic segmentation jobs

📘

In semantic segmentation, two jobs must exist together:

  • An object detection job
  • An ML-model-based interactive object detection job

Semantic segmentation job template

"OBJECT_DETECTION_JOB": {
      "content": {
        "categories": {
          "A": {
            "children": [],
            "color": "#472CED",
            "name": "A",
            "id": "category124"
          },
          "B": {
            "children": [],
            "name": "B",
            "id": "category126",
            "color": "#5CE7B7"
          }
        },
        "input": "radio"
      },
      "instruction": "<instruction>",
      "mlTask": "OBJECT_DETECTION",
      "required": 1,
      "tools": [
        "semantic"
      ],
      "isChild": false,
      "isNew": false,
      "models": {
        "interactive-segmentation": {
          "job": "OBJECT_DETECTION_JOB_INTERACTIVE"
        }
      }
    },
    "OBJECT_DETECTION_JOB_INTERACTIVE": {
      "content": {
        "categories": {
          "A": {
            "children": [],
            "color": "#472CED",
            "name": "A"
          },
          "B": {
            "name": "B"
          }
        },
        "input": "radio"
      },
      "instruction": "<instruction>",
      "isChild": false,
      "isModel": true,
      "isVisible": false,
      "mlTask": "OBJECT_DETECTION",
      "required": 0,
      "tools": [
        "marker"
      ]
    }

Job-specific settings

OBJECT_DETECTION_JOB

| Parameter | Value | Description |
| --- | --- | --- |
| mlTask | "OBJECT_DETECTION" | N/A |
| input | "radio" | N/A |
| tools | ["semantic"] | N/A |
| models | "interactive-segmentation": {"job": "<name of the interactive marker job>"} | JSON object that links the semantic segmentation job to its interactive model job |

OBJECT_DETECTION_JOB_INTERACTIVE

| Parameter | Value | Description |
| --- | --- | --- |
| mlTask | "OBJECT_DETECTION" | N/A |
| input | "radio" | N/A |
| isModel | true | Tells Kili that the job is in fact backed by a model |
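Because the two jobs reference each other through the models entry, a broken reference is an easy mistake to make. This sketch (plain dicts, not SDK calls) reproduces the pairing from the template above and checks that the "models" entry points at a job that actually exists in the same interface and is model-backed.

```python
# Sketch of the two jobs a semantic segmentation setup needs, trimmed to the
# fields relevant to their linkage.
jobs = {
    "OBJECT_DETECTION_JOB": {
        "mlTask": "OBJECT_DETECTION",
        "tools": ["semantic"],
        "isChild": False,
        "models": {
            "interactive-segmentation": {"job": "OBJECT_DETECTION_JOB_INTERACTIVE"}
        },
    },
    "OBJECT_DETECTION_JOB_INTERACTIVE": {
        "mlTask": "OBJECT_DETECTION",
        "tools": ["marker"],
        "isModel": True,     # marks the job as model-backed
        "isVisible": False,  # hidden from labelers
        "required": 0,
    },
}

# The job referenced by "models" must be defined in the same interface
# and flagged with isModel.
target = jobs["OBJECT_DETECTION_JOB"]["models"]["interactive-segmentation"]["job"]
assert target in jobs
assert jobs[target].get("isModel") is True
```

Running this kind of check before creating a project catches a renamed or missing interactive job early.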

For information on how the semantic segmentation jobs look in the exported jsonResponse, refer to The structure of jsonResponse for exported object/entity detection and relation jobs.

Pose estimation jobs

Pose estimation job template

"OBJECT_DETECTION_JOB": {
      "content": {
        "categories": {
          "A": {
            "children": [],
            "color": "#472CED",
            "name": "A",
            "id": "category117",
            "points": [
              {
                "code": "POINT_1",
                "id": "point119",
                "name": "Point 1"
              },
              {
                "code": "POINT_2",
                "id": "point121",
                "name": "Point 2"
              }
            ]
          },
          "B": {
            "children": [],
            "name": "B",
            "id": "category120",
            "color": "#5CE7B7",
            "points": [
              {
                "code": "POINT_3",
                "id": "point122",
                "name": "Point 3"
              },
              {
                "code": "POINT_4",
                "id": "point123",
                "name": "Point 4"
              }
            ]
          }
        },
        "input": "radio"
      },
      "instruction": "<Job title>",
      "mlTask": "OBJECT_DETECTION",
      "required": 1,
      "tools": [
        "pose"
      ],
      "isChild": false,
      "isNew": true
    }

Job-specific settings

| Parameter | Value | Description |
| --- | --- | --- |
| points | Array of JSON objects | An array of pose estimation points |
| mlTask | "OBJECT_DETECTION" | N/A |
| input | "radio" | N/A |
| tools | ["pose"] | N/A |
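For categories with many points, the points array is tedious to write by hand. The helper below (its name and the id-numbering scheme are illustrative, not part of the Kili API) generates point entries in the same shape as the template above from a list of point names.

```python
# Illustrative helper: build the "points" array for a pose estimation
# category from a list of point names. Ids follow the template's example
# numbering (point119, point121, ...), which is arbitrary.
def make_points(names, start_id=119):
    return [
        {
            "code": name.upper().replace(" ", "_"),  # e.g. "Point 1" -> "POINT_1"
            "id": f"point{start_id + 2 * i}",
            "name": name,
        }
        for i, name in enumerate(names)
    ]

points = make_points(["Point 1", "Point 2"])
```

The resulting entries match the template's category A: codes POINT_1 and POINT_2 with ids point119 and point121.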

For information on how the pose estimation jobs look in the exported jsonResponse, refer to The structure of jsonResponse for exported object/entity detection and relation jobs.