Reference

For object and entity export format details, refer to the sections below.

For general export format information, refer to Exported data format.



Standard object detection

📘

This structure applies to geospatial imagery as well. The only difference is that in geospatial labels x stands for longitude and y for latitude.

Check the detailed JSON property descriptions for 2D labeling tools
  • annotations: List of annotations
    • boundingPoly: Polygon of the object contour
      • normalizedVertices: List of normalized vertices of the polygon (relative to the original image: range from 0 to 1). In geospatial labels, `x` stands for longitude and `y` for latitude.
        • x: Abscissa of the vertex position from top left
        • y: Ordinate of the vertex position from top left
      • vertices: List of vertices of the polygon, listed as pixel coordinates (built by multiplying the normalized coordinates by the asset resolution; see the sketch after this list). In geospatial labels, `x` stands for longitude and `y` for latitude.
        • x: Abscissa of the vertex position from top left
        • y: Ordinate of the vertex position from top left
    • categories: Category of the object
      • name: Name of the category
      • confidence: Confidence (100 by default when done by a human)
    • mid: A unique identifier of the object
    • score: When a pre-annotation model is used, the confidence of the object detection
    • type: Type of tool used for the annotation (one of: "rectangle", "polygon", or "semantic")
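In the examples below, pixel vertices are written as normalized-value-times-dimension products to make the derivation visible; an actual export contains plain numbers. A minimal sketch of the conversion, assuming a parsed json_response and known asset dimensions (variable names are illustrative, not a fixed API):

# Sketch: derive pixel vertices from normalized ones.
# `json_response` is the parsed export; `width` and `height` are the
# asset resolution (example values below).
width, height = 474, 842

for annotation in json_response["JOB_0"]["annotations"]:
    for poly in annotation["boundingPoly"]:
        pixel_vertices = [
            {"x": vertex["x"] * width, "y": vertex["y"] * height}
            for vertex in poly["normalizedVertices"]
        ]
        print(annotation["mid"], pixel_vertices)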

Examples of exported standard 2D object detection jobs:

"json_response": {
    "JOB_0": {
        "annotations": [{
            "boundingPoly": [
                {
                    "normalizedVertices": [
                        {
                            "x": 0.7412807669633257,
                            "y": 0.11831185681407619
                        },
                        {
                            "x": 0.7412807669633257,
                            "y": 0.07455291056382807
                        },
                    ],
                    "vertices": [
                        {
                            "x": 0.7412807669633257 * 474,
                            "y": 0.11831185681407619 * 842,
                        },
                        {
                            "x": 0.7412807669633257 * 474,
                            "y": 0.07455291056382807 * 842,
                        },
                    ],
                }
            ],
            "categories": [{ "name": "TESLA", "confidence": 100 }],
            "mid": "unique-tesla",
            "type": "rectangle",
        }]
    }
}
"json_response": {
    "JOB": {
        "annotations": [{
            "boundingPoly": [
                {
                    "normalizedVertices": [
                        {
                            "x": 0.7412807669633257,
                            "y": 0.11831185681407619
                        },
                        {
                            "x": 0.7412807669633257,
                            "y": 0.07455291056382807
                        },
                    ],
                    "vertices": [
                        {
                            "x": 0.7412807669633257 * 474,
                            "y": 0.11831185681407619 * 842,
                        },
                        {
                            "x": 0.7412807669633257 * 474,
                            "y": 0.07455291056382807 * 842,
                        },
                    ],
                }
            ],
            "categories": [{ "name": "FERRARI", "confidence": 100 }],
            "mid": "unique-ferrari",
            "type": "rectangle",
            "children": {
                "CLASSIFICATION_JOB": {
                    "categories": [{ "name": "GREY", "confidence": 100 }]
                }
            }
        }]
    }
}
"json_response": {
    "JOB_0": {
        "annotations": [{
            "boundingPoly": [
                {
                    "normalizedVertices": [
                        {
                            "x": 0.7412807669633257,
                            "y": 0.11831185681407619
                        },
                        {
                            "x": 0.7412807669633257,
                            "y": 0.07455291056382807
                        },
                    ],
                    "vertices": [
                        {
                            "x": 0.7412807669633257 * 474,
                            "y": 0.11831185681407619 * 842,
                        },
                        {
                            "x": 0.7412807669633257 * 474,
                            "y": 0.07455291056382807 * 842,
                        },
                    ],
                }
            ],
            "mid": "car",
            "type": "rectangle",
            "categories": [{ "name": "WHOLE_CAR", "confidence": 100 }],
        },
        {
            "boundingPoly": [
                {
                    "normalizedVertices": [
                        {
                            "x": 0.7412807669633257,
                            "y": 0.11831185681407619
                        },
                        {
                            "x": 0.7412807669633257,
                            "y": 0.07455291056382807
                        },
                    ],
                    "vertices": [
                        {
                            "x": 0.7412807669633257 * 474,
                            "y": 0.11831185681407619 * 842,
                        },
                        {
                            "x": 0.7412807669633257 * 474,
                            "y": 0.07455291056382807 * 842,
                        },
                    ],
                }
            ],
            "mid": "left-front-wheel",
            "type": "rectangle",
            "categories": [{ "name": "WHEELS", "confidence": 100 }],
        },
        {
            "boundingPoly": [
                {
                    "normalizedVertices": [
                        {
                            "x": 0.7412807669633257,
                            "y": 0.11831185681407619
                        },
                        {
                            "x": 0.7412807669633257,
                            "y": 0.07455291056382807
                        },
                    ],
                    "vertices": [
                        {
                            "x": 0.7412807669633257 * 474,
                            "y": 0.11831185681407619 * 842,
                        },
                        {
                            "x": 0.7412807669633257 * 474,
                            "y": 0.07455291056382807 * 842,
                        },
                    ],
                }
            ],
            "mid": "left-back-wheel",
            "type": "rectangle",
            "categories": [{ "name": "WHEELS", "confidence": 100 }],
        }]
    },
    "RELATION_JOB": {
        "annotations": [
            {
                "categories": [{ "name": "WHOLE_CAR_AND_WHEELS", "confidence": 100 }],
                "startObjects": [{ "mid": "car" }],
                "endObjects": [{ "mid": "left-front-wheel" }, { "mid": "left-back-wheel" }],
            },
        ]
    },
}
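
Relation annotations point at object annotations through their mid values. A minimal sketch of resolving a relation back to the objects it links (variable names are illustrative):

# Index object annotations by mid, then resolve each relation.
objects = {ann["mid"]: ann for ann in json_response["JOB_0"]["annotations"]}

for relation in json_response["RELATION_JOB"]["annotations"]:
    start_mids = [o["mid"] for o in relation["startObjects"]]
    end_mids = [o["mid"] for o in relation["endObjects"]]
    print(relation["categories"][0]["name"], start_mids, "->", end_mids)
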
"json_response": {
    "JOB_0": {
        "annotations": [{
            "boundingPoly": [
                {
                    "normalizedVertices": [
                        {
                            "x": 0.7412807669633257,
                            "y": 0.11831185681407619
                        },
                        {
                            "x": 0.7412807669633257,
                            "y": 0.07455291056382807
                        },
                    ],
                    "vertices": [
                        {
                            "x": 0.7412807669633257 * 474,
                            "y": 0.11831185681407619 * 842,
                        },
                        {
                            "x": 0.7412807669633257 * 474,
                            "y": 0.07455291056382807 * 842,
                        },
                    ],
                }
            ],
            "mid": "unique-tesla",
            "type": "polygon",
            "categories": [{ "name": "TESLA", "confidence": 100 }],
        }]
    }
}
"json_response": {
    "JOB_0": {
        "annotations": [{
            "boundingPoly": [
                {
                    "normalizedVertices": [
                        {
                            "x": 0.7412807669633257,
                            "y": 0.11831185681407619
                        },
                        {
                            "x": 0.7412807669633257,
                            "y": 0.07455291056382807
                        },
                    ],
                    "vertices": [
                        {
                            "x": 0.7412807669633257 * 474,
                            "y": 0.11831185681407619 * 842,
                        },
                        {
                            "x": 0.7412807669633257 * 474,
                            "y": 0.07455291056382807 * 842,
                        },
                    ],
                }
            ],
            "mid": "unique-tesla",
            "type": "semantic",
            "categories": [{ "name": "TESLA", "confidence": 100 }],
        }]
    }
}

📘

If the image has been rotated, the json_response contains an additional top-level ROTATION_JOB with information on the angle that the image was rotated by. For example: 'ROTATION_JOB': {'rotation': 90}. After rotation, all the coordinates of a bounding box get automatically adjusted.
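
A minimal sketch of reading the rotation angle, assuming `json_response` holds the parsed export:

# ROTATION_JOB is only present when the image was rotated.
rotation = json_response.get("ROTATION_JOB", {}).get("rotation", 0)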

Check the detailed JSON property descriptions for 1D labeling tools
  • annotations: List of annotations
    • categories: Category of the object
      • confidence: Confidence (100 by default when done by a human)
      • name: Name of the category
    • mid: A unique identifier of the object
    • children: Nested jobs
    • point or polyline: Normalized (relative to the original image: range from 0 to 1) coordinates of the point or line. In geospatial labels, `x` stands for longitude and `y` for latitude.
      • x: Point abscissa
      • y: Point ordinate
    • pointPixels or polylinePixels: Coordinates of the point, line, or vector, listed as pixel coordinates (built by multiplying the normalized coordinates by the asset resolution; see the sketch after this list). In geospatial labels, `x` stands for longitude and `y` for latitude.
      • x: Point abscissa
      • y: Point ordinate
    • type: Type of tool used for the annotation (one of: "marker", "vector", or "polyline")
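
As with the 2D tools, the pixel coordinates in the examples below are written as products for readability; a real export contains plain numbers. A minimal sketch of the conversion for markers and lines (variable names are illustrative):

width, height = 474, 842  # example asset resolution

def to_pixels(point, width, height):
    # Scale one normalized point to pixel coordinates.
    return {"x": point["x"] * width, "y": point["y"] * height}

annotation = json_response["OBJECT_DETECTION_JOB_0"]["annotations"][0]
if annotation["type"] == "marker":
    print(to_pixels(annotation["point"], width, height))
else:  # "polyline" or "vector"
    print([to_pixels(p, width, height) for p in annotation["polyline"]])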

Examples of exported standard 1D object detection jobs:

"jsonResponse": {
    "OBJECT_DETECTION_JOB_0": {
                    "annotations": [
                        {
                            "categories": [
                                {
                                    "confidence": 100,
                                    "name": "POINT"
                                }
                            ],
                            "children": {},
                            "mid": "20220701113558339-67037",
                            "point": {
                                "x": 0.6396378516199591,
                                "y": 0.2192328091653808
                            },
                            "pointPixels": {
                            	"x": 0.8727983223923027 * 474,
                            	"y": 0.2035857007889187 * 842,
                            },
                            "type": "marker"
                        }
                    ]
                }
            }
"jsonResponse": {
    "OBJECT_DETECTION_JOB_2": {
                    "annotations": [
                        {
                            "categories": [
                                {
                                    "confidence": 100,
                                    "name": "LINE"
                                }
                            ],
                            "children": {},
                            "mid": "20220701115017888-45281",
                            "polyline": [
                                {
                                    "x": 0.38086698287342297,
                                    "y": 0.44925135916230174
                                },
                                {
                                    "x": 0.4255140693825084,
                                    "y": 0.6355339876809349
                                }
                            ],
                            "polylinePixels": [
                            	{
                                    "x": 0.6555950869111132 * 474,
                                    "y": 0.2574428654046088 * 842
                            	},
                            	{
                                    "x": 0.5659240263913562 * 474,
                                    "y": 0.17890116700672754 * 842
                            	},
                            ],
                            "type": "polyline"
                        }
                    ]
                }
            }
"jsonResponse": {
    "OBJECT_DETECTION_JOB": {
                        "annotations": [
                            {
                                "categories": [
                                    {
                                        "confidence": 100,
                                        "name": "LINE"
                                    }
                                ],
                                "children": {},
                                "mid": "20220701113556786-8668",
                                "polyline": [
                                    {
                                        "x": 0.28428348960887073,
                                        "y": 0.33424208416384127
                                    },
                                    {
                                        "x": 0.4264252344133061,
                                        "y": 0.2548694859254671
                                    }
                                ],
                        	"polylinePixels": [
                            	    {
                                	"x": 0.6555950869111132 * 474,
                                	"y": 0.2574428654046088 * 842
                            	    },
                            	    {
                                	"x": 0.5659240263913562 * 474,
                                	"y": 0.17890116700672754 * 842
                            	    },
                        	],
                                "type": "vector"
                            }
                        ]
                    }
                }


Object detection in video assets

📘

  • In object detection jobs, native video and frame assets use an additional isKeyFrame property: a Boolean indicating whether the timestamp or frame is used for interpolation (see the sketch after this list).
  • If you are importing labels into a Video project, the mid field (id of an object detection annotation) cannot be longer than 50 characters.
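
A minimal sketch of walking a video json_response and keeping only key frames (variable names are illustrative):

# The top-level keys of a video json_response are timestamp or frame numbers.
for frame, jobs in json_response.items():
    for job_name, job in jobs.items():
        for annotation in job.get("annotations", []):
            if annotation.get("isKeyFrame"):
                print(frame, job_name, annotation["mid"])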

Examples of exported object detection video jobs:

"jsonResponse": {
  "0": { //Timestamp number
    "job_0": {
      "annotations": [
        {
            "boundingPoly": [
                {
                    "normalizedVertices": [
                        {
                            "x": 0.7412807669633257,
                            "y": 0.11831185681407619
                        },
                        {
                            "x": 0.7412807669633257,
                            "y": 0.07455291056382807
                        },
                    ],
                    "vertices": [
                        {
                            "x": 0.7412807669633257 * 474,
                            "y": 0.11831185681407619 * 842,
                        },
                        {
                            "x": 0.7412807669633257 * 474,
                            "y": 0.07455291056382807 * 842,
                        },
                    ],
                }
            ],
          "categories": [{ "confidence": 100, "name": "BIG" }],
          "isKeyFrame": true,
          "mid": "2020110316040863-98540",
          "score": null,
          "type": "rectangle"
        }
      ]
    }
  }
}
"jsonResponse": {
  "0": { //Frame number
    "job_0": {
      "annotations": [
        {
            "boundingPoly": [
                {
                    "normalizedVertices": [
                        {
                            "x": 0.7412807669633257,
                            "y": 0.11831185681407619
                        },
                        {
                            "x": 0.7412807669633257,
                            "y": 0.07455291056382807
                        },
                    ],
                    "vertices": [
                        {
                            "x": 0.7412807669633257 * 474,
                            "y": 0.11831185681407619 * 842,
                        },
                        {
                            "x": 0.7412807669633257 * 474,
                            "y": 0.07455291056382807 * 842,
                        },
                    ],
                }
            ],
          "categories": [{ "confidence": 100, "name": "BIG" }],
          "isKeyFrame": true,
          "mid": "2020110316040863-98540",
          "score": null,
          "type": "rectangle"
        }
      ]
    }
  }
}


Pose estimation

Check the detailed JSON property descriptions
  • annotations: List of annotations
    • categories: Category of the object
      • name: Name of the category
    • children: Nested jobs
    • mid: A unique identifier of the object
    • points: List of the points composing the object (see the sketch after this list)
      • code: Identifier (unique for each point in an object)
      • mid: Id of the point
      • point: Normalized (relative to the original image: range from 0 to 1) coordinates of the point
        • x: Point abscissa
        • y: Point ordinate
      • type: Tool used to annotate the point. In pose estimation jobs, it's `marker`
    • type: Tool used to annotate the pose. In pose estimation jobs, it's `pose`
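
A minimal sketch of looking up one of a pose's points by its code (variable names are illustrative):

def find_point(pose_annotation, code):
    # Return the normalized coordinates of the first matching point, or None.
    for point in pose_annotation["points"]:
        if point["code"] == code:
            return point["point"]
    return None

head = json_response["JOB_0"]["annotations"][0]
print(find_point(head, "NOSE"))  # {"x": 0.1603..., "y": 0.4334...}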

Examples of exported pose estimation jobs:

"jsonResponse": {
                "JOB_0": {
                    "annotations": [
                        {
                            "categories": [
                                {
                                    "name": "HEAD"
                                }
                            ],
                            "children": {},
                            "mid": "20220511111820SS-508",
                            "points": [
                                {
                                    "children": {},
                                    "code": "NOSE",
                                    "mid": "20230220105343039-43309",
                                    "point": {
                                        "x": 0.16032453806928879,
                                        "y": 0.43348072704538043
                                    },
                                    "type": "marker"
                                },
                                {
                                    "children": {},
                                    "code": "LEFT_EYE",
                                    "mid": "20230220105343039-34486",
                                    "point": {
                                        "x": 0.21863386451307548,
                                        "y": 0.3704089988436732
                                    },
                                    "type": "marker"
                                },
                                {
                                    "children": {},
                                    "code": "LEFT_EARBASE",
                                    "mid": "20230220105343039-78183",
                                    "point": {
                                        "x": 0.2556378601408632,
                                        "y": 0.32873660699611673
                                    },
                                    "type": "marker"
                                }
                            ],
                            "type": "pose"
                        }
                    ]
                }
            }
"annotations": [
                        {
                            "categories": [
                                {
                                    "name": "HEAD"
                                }
                            ],
                            "children": {},
                            "mid": "20230220115140640-7038",
                            "points": [
                                {
                                   "children": {
                                        "CLASSIFICATION_JOB": {
                                            "categories": [
                                                {
                                                    "confidence": 100,
                                                    "name": "CATEGORY_1"
                                                }
                                            ]
                                        }
                                    },
                                    "code": "NOSE",
                                    "mid": "20230220105343039-43309",
                                    "point": {
                                        "x": 0.16032453806928879,
                                        "y": 0.43348072704538043
                                    },
                                    "type": "marker"
                                },
                                {
                                    "children": {},
                                    "code": "LEFT_EYE",
                                    "mid": "20230220105343039-34486",
                                    "point": {
                                        "x": 0.21863386451307548,
                                        "y": 0.3704089988436732
                                    },
                                    "type": "marker"
                                },
                                {
                                    "children": {},
                                    "code": "LEFT_EARBASE",
                                    "mid": "20230220105343039-78183",
                                    "point": {
                                        "x": 0.2556378601408632,
                                        "y": 0.32873660699611673
                                    },
                                    "type": "marker"
                                }
                            ],
                            "type": "pose"
                        },






 


Named entity recognition (NER)

Check the detailed JSON property descriptions
  • annotations: List of annotations
    • beginOffset: Position of the first character of the entity mention in the text (see the sketch after this list)
    • categories: Category of the object
      • name: Name of the category
      • confidence: Confidence (100 by default when done by a human)
    • content: Content of the mention
    • mid: A unique identifier of the object
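
Because beginOffset counts characters from the start of the text, each annotated mention can be checked against the source document. A minimal sketch, assuming a hypothetical `text` variable that holds the labeled document:

for entity in json_response["JOB"]["annotations"]:
    start = entity["beginOffset"]
    end = start + len(entity["content"])
    # The slice of the source text should reproduce the annotated mention.
    assert text[start:end] == entity["content"]
    print(entity["categories"][0]["name"], repr(entity["content"]))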

Examples of exported named entity recognition jobs:

"json_response": {
    "JOB": {
        "annotations": [
            {"categories": [{ "name": "NAME", "confidence": 100 }],
            "beginOffset": 0,
            "content": "Chicago Bulls",
            "mid": "chicago"},
            {"categories": [{ "name": "NAME", "confidence": 100 }],
            "beginOffset": 30,
            "content": "Jerry Krause",
            "mid": "krause"},
            {"categories": [{ "name": "NAME", "confidence": 100 }],
            "beginOffset": 63,
            "content": "Gatorade",
            "mid": "gatorade"},
            {"categories": [{ "name": "NOUN", "confidence": 100 }],
            "beginOffset": 22,
            "content": "Manager",
            "mid": "manager"},
            {"categories": [{ "name": "NOUN", "confidence": 100 }],
            "beginOffset": 14,
            "content": "General",
            "mid": "general"},
            {"categories": [{ "name": "NOUN", "confidence": 100 }],
            "beginOffset": 84,
            "content": "medicine",
            "mid": "medicine"},
            {"categories": [{ "name": "NOUN", "confidence": 100 }],
            "beginOffset": 104,
            "content": "players",
            "mid": "players"},
            {"categories": [{ "name": "NOUN", "confidence": 100 }],
            "beginOffset": 116,
            "content": "coaches",
            "mid": "coaches"},
            {"categories": [{ "name": "VERB", "confidence": 100 }],
            "beginOffset": 124,
            "content": "milled",
            "mid": "milled"},
            {"categories": [{ "name": "VERB", "confidence": 100 }],
            "beginOffset": 43,
            "content": "was standing",
            "mid": "standing"},
            {"categories": [{ "name": "NOUN", "confidence": 100 }],
            "beginOffset": 96,
            "content": "hand",
            "mid": "hand"},
            {"categories": [{ "name": "NOUN", "confidence": 100 }],
            "beginOffset": 72,
            "content": "cooler",
            "mid": "cooler"}
        ]
    }
}
"json_response": {
    "JOB": {
        "annotations": [
            {
                "beginId": "PART1",
                "beginOffset": 14,
                "categories": [{ "name": "NOMINAL_GROUP", "confidence": 100 }],
                "content": "Declaration of the thirteen United States of America",
                "endId": "PART2",
                "endOffset": 41,
                "mid": "declaration"
            }
        ]
    }
}
"json_response": {
    "NAMED_ENTITIES_RECOGNITION_JOB": {
        "annotations": [
            {"categories": [{ "name": "SUBJECT", "confidence": 100 }],
            "beginOffset": 0,
            "content": "Jordan",
            "mid": "Jordan"},
            {"categories": [{ "name": "VERB", "confidence": 100 }],
            "beginOffset": 84,
            "content": "was wearing",
            "mid": "wearing"},
            {"categories": [{ "name": "VERB", "confidence": 100 }],
            "beginOffset": 111,
            "content": "tucked",
            "mid": "tucked verb"},
            {"categories": [{ "name": "COMPLEMENT", "confidence": 100 }],
            "beginOffset": 96,
            "content": "a blue sweater tucked into high-rise pants",
            "mid": "blue sweater complement"},
            {"categories": [{ "name": "VERB", "confidence": 100 }],
            "beginOffset": 47,
            "content": "peered down",
            "mid": "peered"},
            {"categories": [{ "name": "COMPLEMENT", "confidence": 100 }],
            "beginOffset": 62,
            "content": "the hefty Krause",
            "mid": "Krause complement"},
            {"categories": [{ "name": "SUBJECT", "confidence": 100 }],
            "beginOffset": 62,
            "content": "the hefty Krause",
            "mid": "Krause subject"}
        ]
    },
    "NAMED_ENTITIES_RELATION_JOB": {
        "annotations": [
            {"categories": [{ "name": "VERB_AND_SUBJECT_S", "confidence": 100 }],
            "startEntities": [{ "mid": "peered" }],
            "endEntities": [{ "mid": "Jordan" }]},
            {"categories": [{ "name": "VERB_AND_COMPLEMENT_S", "confidence": 100 }],
            "startEntities": [{ "mid": "peered" }],
            "endEntities": [{ "mid": "Krause complement" }]},
            {"categories": [{ "name": "VERB_AND_SUBJECT_S", "confidence": 100 }],
            "startEntities": [{ "mid": "wearing" }],
            "endEntities": [{ "mid": "Krause subject" }]},
            {"categories": [{ "name": "VERB_AND_COMPLEMENT_S", "confidence": 100 }],
            "startEntities": [{ "mid": "wearing" }],
            "endEntities": [{ "mid": "blue sweater complement" }]}
        ]
    }
}
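
Entity relations reference entities by mid, in the same way object relations do. A minimal sketch of resolving them (variable names are illustrative):

entities = {
    e["mid"]: e
    for e in json_response["NAMED_ENTITIES_RECOGNITION_JOB"]["annotations"]
}

for relation in json_response["NAMED_ENTITIES_RELATION_JOB"]["annotations"]:
    for start in relation["startEntities"]:
        for end in relation["endEntities"]:
            print(entities[start["mid"]]["content"],
                  relation["categories"][0]["name"],
                  entities[end["mid"]]["content"])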


NER in PDFs

📘

The annotation structure for NER in PDFs is different: instead of beginOffset, annotations are located by the coordinates of the polygon containing the data and by the page number.

Check the detailed JSON property descriptions
  • annotations: List of annotations
    • annotations: List of positions of the annotation (for NER, when an annotation spans multiple lines, there will be multiple polys and a single boundingPoly)
      • boundingPoly: Object contour
        • normalizedVertices: List of vertices of the object contour, with coordinates normalized relative to the total document size (range from 0 to 1)
          • x: Abscissa of the vertex position from top left
          • y: Ordinate of the vertex position from top left
        • vertices: List of vertices of the object contour, listed as pixel coordinates (built by multiplying the normalized coordinates by the asset resolution)
          • x: Abscissa of the vertex position from top left
          • y: Ordinate of the vertex position from top left
      • pageNumberArray: The numbers of the pages where the annotation appears
      • polys: Coordinates of the different rectangles in the annotation. An annotation can have several rectangles, for example if the annotation covers more than one line (see the sketch after this list)
    • categories: Category of the object
      • name: Name of the category
      • confidence: Confidence (100 by default when done by a human)
    • mid: A unique identifier of the object
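
A minimal sketch of iterating over the rectangles of a PDF annotation (variable names are illustrative; note that in the example below, normalizedVertices is a list of vertex lists, one per rectangle):

for annotation in json_response["NAMED_ENTITIES_RECOGNITION_JOB"]["annotations"]:
    for position in annotation["annotations"]:
        print(annotation["mid"], "pages:", position["pageNumberArray"])
        for poly in position["polys"]:
            for rectangle in poly["normalizedVertices"]:
                print("  rectangle with", len(rectangle), "vertices")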

Example of an exported named entity recognition job with a PDF asset:

"json_response": {
            "NAMED_ENTITIES_RECOGNITION_JOB": {
                "annotations": [{
                    "annotations": [{
			"boundingPoly": [
                            {											 
                             "normalizedVertices": [													 
                                  [
                                      {
                                      	  "x": 0.24565469293163383,
                                          "y": 0.07735917978757553
                                      },
                                      {
                                          "x": 0.24565469293163383,
                                          "y": 0.14694151081343712
                                      },
                                      {
                                          "x": 0.40961761297798377,
                                          "y": 0.14694151081343712
                                      },
                                      {
                                          "x": 0.40961761297798377,
                                          "y": 0.07735917978757553
                                      }
                                  ]
                              ],											 
                              "vertices": [
                                  [
                                      {
                                      	  "x": 367.74507531865584,
                                          "y": 163.92410196987257
                                      },
                                      {
                                      	  "x": 367.74507531865584,
                                          "y": 311.36906141367325
                                      },
                                      {
                                          "x": 613.1975666280417,
                                          "y": 311.36906141367325
                                      },
                                      {
                                          "x": 613.1975666280417,
                                          "y": 163.92410196987257
                                      }
                                  ]
                              ]
                           }
                        ],
                        "polys": [
                           {
                              "normalizedVertices": [
                                   [  
                                      {
                                          "x": 0.24565469293163383,
                                          "y": 0.07735917978757553
                                      },
                                      {
                                          "x": 0.24565469293163383,
                                          "y": 0.14694151081343712
                                      },
                                      {
                                          "x": 0.40961761297798377,
                                          "y": 0.14694151081343712
                                      },
                                      {
                                          "x": 0.40961761297798377,
                                          "y": 0.07735917978757553
                                      }
                                  ]
                              ],										 
                              "vertices": [
                                  [
                                      {
                                          "x": 367.74507531865584,
                                          "y": 163.92410196987257
                                      },
                                      {
                                          "x": 367.74507531865584,
                                          "y": 311.36906141367325
                                      },
                                      {
                                          "x": 613.1975666280417,
                                          "y": 311.36906141367325
                                      },
                                      {
                                          "x": 613.1975666280417,
                                          "y": 163.92410196987257
                                      }
                                  ]
                               ]
                            }
                        ],
                        "pageNumberArray": [
                            1
                        ]
                    }],
                    "categories": [{"name": "TITLE", "confidence": 100}],
                    "content": "Learning Active Learning from Data",
                    "mid": "article-title"
                }],
            }
        }