For object and entity export format details, refer to these sections:
- Standard object detection
- Object detection in video assets
- Pose estimation
- Named Entity Recognition
- Named Entity Recognition in PDFs
For general export format information, refer to Exported data format.
Standard object detection
Applies to geospatial imagery as well. The only difference is that in geospatial labels `x` stands for longitude and `y` for latitude.
Check the detailed JSON property descriptions for 2D labeling tools:
- `annotations`: List of annotations
  - `boundingPoly`: Polygon of the object contour
    - `normalizedVertices`: List of vertices of the polygon. In the case of a bounding box, 4 vertices
      - `x`: Abscissa of the vertex position from top left, expressed as a percentage of the total image size. In geospatial labels `x` stands for longitude.
      - `y`: Ordinate of the vertex position from top left, expressed as a percentage of the total image size. In geospatial labels `y` stands for latitude.
  - `categories`: Category of the object
    - `name`: Name of the category
    - `confidence`: Confidence (100 by default when done by human)
  - `mid`: A unique identifier of the object
  - `score`: When a pre-annotation model is used, the score is the confidence of the object detection
  - `type`: Type of tool used for the annotation (one of: "rectangle", "polygon", or "semantic")
Examples of exported standard 2D object detection jobs:
'json_response': {
"JOB_0": {
"annotations": [{
"boundingPoly": [{
"normalizedVertices": [
{ "x": 0.16, "y": 0.82},
{ "x": 0.16, "y": 0.32 },
{ "x": 0.82, "y": 0.32 },
{ "x": 0.82, "y": 0.82 }
]}
],
"categories": [{ "name": "TESLA", "confidence": 100 }],
"mid": "unique-tesla",
"type": "rectangle",
}]
}
}
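Because the vertices are normalized, you typically convert them to pixel coordinates before drawing or post-processing them. The following is a minimal sketch (not part of the export itself), assuming `json_response` is the parsed dictionary from the example above and that the image dimensions are known (the width and height below are placeholder values):

```python
# Minimal sketch: convert normalized vertices to pixel coordinates.
# Assumes `json_response` is the parsed dictionary shown above and that
# the image dimensions are known (placeholder values here).
IMAGE_WIDTH, IMAGE_HEIGHT = 1920, 1080

for annotation in json_response["JOB_0"]["annotations"]:
    for poly in annotation["boundingPoly"]:
        pixel_vertices = [
            (round(v["x"] * IMAGE_WIDTH), round(v["y"] * IMAGE_HEIGHT))
            for v in poly["normalizedVertices"]
        ]
        print(annotation["mid"], annotation["type"], pixel_vertices)
```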
'json_response': {
"JOB": {
"annotations": [{
"boundingPoly": [{
"normalizedVertices": [
{ "x": 0.09, "y": 0.84 },
{ "x": 0.09, "y": 0.36 },
{ "x": 0.92, "y": 0.36 },
{ "x": 0.92, "y": 0.84 }
]
}],
"categories": [{ "name": "FERRARI", "confidence": 100 }],
"mid": "unique-ferrari",
"type": "rectangle",
"children": {
"CLASSIFICATION_JOB": {
"categories": [{ "name": "GREY", "confidence": 100 }]
}
}
}]
}
}
'json_response': {
"JOB_0": {
"annotations": [{
"boundingPoly": [{
"normalizedVertices": [
{ "x": 0.16, "y": 0.82 },
{ "x": 0.16, "y": 0.32 },
{ "x": 0.82, "y": 0.32 },
{ "x": 0.82, "y": 0.82 }
]}
],
"mid": "car",
"type": "rectangle",
"categories": [{ "name": "WHOLE_CAR", "confidence": 100 }],
},
{
"boundingPoly": [{
"normalizedVertices": [
{ "x": 0.54, "y": 0.59 },
{ "x": 0.43, "y": 0.59 },
{ "x": 0.43, "y": 0.83 },
{ "x": 0.54, "y": 0.83 }
]}
],
"mid": "left-front-wheel",
"type": "rectangle",
"categories": [{ "name": "WHEELS", "confidence": 100 }],
},
{
"boundingPoly": [{
"normalizedVertices": [
{ "x": 0.81, "y": 0.57 },
{ "x": 0.74, "y": 0.57 },
{ "x": 0.74, "y": 0.77 },
{ "x": 0.81, "y": 0.77 }
]}
],
"mid": "left-back-wheel",
"type": "rectangle",
"categories": [{ "name": "WHEELS", "confidence": 100 }],
}]
},
'RELATION_JOB': {
'annotations': [
{
'categories': [{'name': 'WHOLE_CAR_AND_WHEELS', 'confidence': 100}],
'startObjects': [{'mid': 'car'}],
'endObjects': [{'mid': 'left-front-wheel'}, {'mid': 'left-back-wheel'}],
},
]
},
}
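Relation annotations reference other annotations by their `mid`. A minimal sketch for resolving them into readable triples, assuming `json_response` is the dictionary from the example above (job and category names are taken from that example):

```python
# Minimal sketch: resolve relation annotations back to the objects they link.
# Assumes `json_response` is the dictionary from the example above.
objects_by_mid = {
    ann["mid"]: ann for ann in json_response["JOB_0"]["annotations"]
}

for relation in json_response["RELATION_JOB"]["annotations"]:
    relation_name = relation["categories"][0]["name"]
    for start in relation["startObjects"]:
        for end in relation["endObjects"]:
            start_category = objects_by_mid[start["mid"]]["categories"][0]["name"]
            end_category = objects_by_mid[end["mid"]]["categories"][0]["name"]
            print(f"{relation_name}: {start_category} ({start['mid']}) -> "
                  f"{end_category} ({end['mid']})")
```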
'json_response': {
"JOB_0": {
"annotations": [{
"boundingPoly": [{
"normalizedVertices": [
{ "x": 0.16, "y": 0.52},
{ "x": 0.16, "y": 0.76 },
{ "x": 0.49, "y": 0.83 },
{ "x": 0.82, "y": 0.76 },
{ "x": 0.82, "y": 0.49 },
{ "x": 0.70, "y": 0.32 },
{ "x": 0.48, "y": 0.32 },
]}
],
"mid": "unique-tesla",
"type": "polygon",
"categories": [{ "name": "TESLA", "confidence": 100 }],
}]
}
}
'json_response': {
"JOB_0": {
"annotations": [{
"boundingPoly": [{
"normalizedVertices": [
{ "x": 0.16, "y": 0.52},
{ "x": 0.16, "y": 0.76 },
{ "x": 0.49, "y": 0.83 },
{ "x": 0.82, "y": 0.76 },
{ "x": 0.82, "y": 0.49 },
{ "x": 0.70, "y": 0.32 },
{ "x": 0.48, "y": 0.32 },
]}
],
"mid": "unique-tesla",
"type": "semantic",
"categories": [{ "name": "TESLA", "confidence": 100 }],
}]
}
}
If the image has been rotated, the `json_response` contains an additional top-level `ROTATION_JOB` with information on the angle that the image was rotated by, for example: `'ROTATION_JOB': {'rotation': 90}`. After rotation, all the coordinates of a bounding box get automatically adjusted.
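A minimal sketch for reading the rotation angle, assuming a parsed `json_response` dictionary that may or may not contain a `ROTATION_JOB` key:

```python
# Minimal sketch: read the rotation angle if the image was rotated.
# Assumes `json_response` is the parsed export dictionary; the key is absent
# when the image was not rotated, so default to 0.
rotation = json_response.get("ROTATION_JOB", {}).get("rotation", 0)
print(f"Image rotated by {rotation} degrees; vertices are already adjusted.")
```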
Check the detailed JSON property descriptions for 1D labeling tools:
- `annotations`: List of annotations
  - `categories`: Category of the object
    - `confidence`: Confidence (100 by default when done by human)
    - `name`: Name of the category
  - `mid`: A unique identifier of the object
  - `children`: Nested jobs
  - `point` or `polyline`: List of points
    - `x`: Point abscissa. In geospatial labels `x` stands for longitude.
    - `y`: Point ordinate. In geospatial labels `y` stands for latitude.
  - `type`: Type of tool used for the annotation (one of: "marker", "vector", or "polyline")
Examples of exported standard 1D object detection jobs:
"jsonResponse": {
"OBJECT_DETECTION_JOB_0": {
"annotations": [
{
"categories": [
{
"confidence": 100,
"name": "POINT"
}
],
"children": {},
"mid": "20220701113558339-67037",
"point": {
"x": 0.6396378516199591,
"y": 0.2192328091653808
},
"type": "marker"
}
]
}
}
"jsonResponse": {
"OBJECT_DETECTION_JOB_2": {
"annotations": [
{
"categories": [
{
"confidence": 100,
"name": "LINE"
}
],
"children": {},
"mid": "20220701115017888-45281",
"polyline": [
{
"x": 0.38086698287342297,
"y": 0.44925135916230174
},
{
"x": 0.4255140693825084,
"y": 0.6355339876809349
},
{
"x": 0.5339427080474303,
"y": 0.6225751961318127
},
{
"x": 0.6861072681906399,
"y": 0.5869385193717263
}
],
"type": "polyline"
}
]
}
}
"jsonResponse": {
"OBJECT_DETECTION_JOB": {
"annotations": [
{
"categories": [
{
"confidence": 100,
"name": "LINE"
}
],
"children": {},
"mid": "20220701113556786-8668",
"polyline": [
{
"x": 0.28428348960887073,
"y": 0.33424208416384127
},
{
"x": 0.4264252344133061,
"y": 0.2548694859254671
}
],
"type": "vector"
}
]
}
}
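Like 2D vertices, `point` and `polyline` coordinates are normalized, so converting them to pixels works the same way. A minimal sketch computing the pixel length of the polyline from the second example, assuming `json_response` holds the `jsonResponse` dictionary shown above and that the image dimensions are known (placeholder values here):

```python
import math

# Minimal sketch: compute the pixel length of an exported polyline.
# Assumes `json_response` holds the `jsonResponse` dictionary from the example
# above and that the image dimensions are known (placeholder values here).
IMAGE_WIDTH, IMAGE_HEIGHT = 1280, 720

annotation = json_response["OBJECT_DETECTION_JOB_2"]["annotations"][0]
points = [
    (p["x"] * IMAGE_WIDTH, p["y"] * IMAGE_HEIGHT) for p in annotation["polyline"]
]
length = sum(math.dist(a, b) for a, b in zip(points, points[1:]))
print(f"{annotation['mid']}: {length:.1f} px over {len(points) - 1} segments")
```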
Object detection in video assets
In object detection jobs, native video and frame assets use an additional `isKeyFrame` property. This is a Boolean indicating if the timestamp or frame is used for interpolation.
Examples of exported object detection video jobs:
"jsonResponse": {
"0": { //Timestamp number
"job_0": {
"annotations": [
{
"boundingPoly": [
{
"normalizedVertices": [
{ "x": 0.24283568900708732, "y": 0.5538364851214209 },
{ "x": 0.24283568900708732, "y": 0.3020356974943339 },
{ "x": 0.3729654281518853, "y": 0.3020356974943339 },
{ "x": 0.3729654281518853, "y": 0.5538364851214209 }
]
}
],
"categories": [{ "confidence": 100, "name": "BIG" }],
"isKeyFrame": true,
"mid": "2020110316040863-98540",
"score": null,
"type": "rectangle"
}
}
}
}
"jsonResponse": {
"0": { //Frame number
"job_0": {
"annotations": [
{
"boundingPoly": [
{
"normalizedVertices": [
{ "x": 0.24283568900708732, "y": 0.5538364851214209 },
{ "x": 0.24283568900708732, "y": 0.3020356974943339 },
{ "x": 0.3729654281518853, "y": 0.3020356974943339 },
{ "x": 0.3729654281518853, "y": 0.5538364851214209 }
]
}
],
"categories": [{ "confidence": 100, "name": "BIG" }],
"isKeyFrame": true,
"mid": "2020110316040863-98540",
"score": null,
"type": "rectangle"
}
}
}
}
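A minimal sketch that walks the per-frame responses and keeps only the key frames of each tracked object, assuming `json_response` holds the frame-indexed `jsonResponse` dictionary shown above:

```python
from collections import defaultdict

# Minimal sketch: collect the key frames of each tracked object.
# Assumes `json_response` is the frame-indexed dictionary shown above
# (frame/timestamp numbers as keys, jobs as values).
key_frames_by_mid = defaultdict(list)

for frame_number, jobs in json_response.items():
    for job_name, job in jobs.items():
        for annotation in job.get("annotations", []):
            if annotation.get("isKeyFrame"):
                key_frames_by_mid[annotation["mid"]].append(int(frame_number))

for mid, frames in key_frames_by_mid.items():
    print(f"{mid}: key frames at {sorted(frames)}")
```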
Pose estimation
Check the detailed JSON property descriptions:
- `annotations`: List of annotations
  - `categories`: Category of the object
    - `name`: Name of the category
    - `confidence`: Confidence (100 by default when done by human)
  - `kind`: Job kind. In pose estimation jobs, this is always "POSE_ESTIMATION"
  - `mid`: A unique identifier of the object
  - `points`: List of the points composing the object
    - `categories`: Category of the object that the point belongs to
    - `code`: Identifier (unique for each point in an object)
    - `jobName`: The job that the annotated point belongs to
    - `mid`: Id of the point
    - `name`: Name of the point
    - `point`: Coordinates of the point
      - `x`: Point abscissa
      - `y`: Point ordinate
  - `type`: Tool used to annotate the point. In pose estimation jobs, it's `marker`
Examples of exported pose estimation jobs:
"jsonResponse": {
"JOB_0": {
"annotations": [
{
"categories": [
{
"confidence": 100,
"name": "POSE_A"
}
],
"children": {},
"jobName": "JOB_0",
"kind": "POSE_ESTIMATION",
"mid": "20220511111820SS-508",
"points": [
{
"categories": [
{
"confidence": 100,
"name": "POSE_A"
}
],
"children": {},
"code": "POINT_A1",
"jobName": "JOB_0",
"mid": "20220511111820SS-508",
"name": "Point A1",
"point": {
"x": 0.3817781479042206,
"y": 0.10908308099784103
}
},
{
"categories": [
{
"confidence": 100,
"name": "POSE_A"
}
],
"children": {},
"code": "POINT_A2",
"jobName": "JOB_0",
"mid": "20220511111820SS-508",
"name": "Point A2",
"point": {
"x": 0.31799659574838424,
"y": 0.3245229905019995
}
},
{
"categories": [
{
"confidence": 100,
"name": "POSE_A"
}
],
"children": {},
"code": "POINT_A3",
"jobName": "JOB_0",
"mid": "20220511111820SS-508",
"name": "Point A3",
"point": {
"x": 0.5129859123390841,
"y": 0.3569199693748055
}
},
{
"categories": [
{
"confidence": 100,
"name": "POSE_A"
}
],
"children": {},
"code": "POINT_A4",
"jobName": "JOB_0",
"mid": "20220511111820SS-508",
"name": "Point A4",
"point": {
"x": 0.5439655233862045,
"y": 0.12366172149060362
}
}
],
"type": "marker"
}
]
}
}
"jsonResponse": {
"JOB_0": {
"annotations": [
{
"categories": [
{
"confidence": 100,
"name": "POSE_A"
}
],
"children": {},
"jobName": "JOB_0",
"kind": "POSE_ESTIMATION",
"mid": "20220511112452SS-21114",
"points": [
{
"categories": [
{
"confidence": 100,
"name": "POSE_A"
}
],
"children": {
"CLASSIFICATION_JOB": {
"categories": [
{
"confidence": 100,
"name": "CATEGORY_1"
}
]
}
},
"code": "POINT_A1",
"jobName": "JOB_0",
"mid": "20220511112452SS-21114",
"name": "Point A1",
"point": {
"x": 0.4853515625,
"y": 0.203125
}
},
{
"categories": [
{
"confidence": 100,
"name": "POSE_A"
}
],
"children": {
"CLASSIFICATION_JOB_0": {
"categories": [
{
"confidence": 100,
"name": "CATEGORY_2"
}
]
}
},
"code": "POINT_A2",
"jobName": "JOB_0",
"mid": "20220511112452SS-21114",
"name": "Point A2",
"point": {
"x": 0.400390625,
"y": 0.43229166666666685
}
},
{
"categories": [
{
"confidence": 100,
"name": "POSE_A"
}
],
"children": {
"CLASSIFICATION_JOB_1": {
"categories": [
{
"confidence": 100,
"name": "CATEGORY_3"
}
]
}
},
"code": "POINT_A3",
"jobName": "JOB_0",
"mid": "20220511112452SS-21114",
"name": "Point A3",
"point": {
"x": 0.625,
"y": 0.4878472222222222
}
},
{
"categories": [
{
"confidence": 100,
"name": "POSE_A"
}
],
"children": {
"CLASSIFICATION_JOB_2": {
"categories": [
{
"confidence": 100,
"name": "CATEGORY_4"
}
]
}
},
"code": "POINT_A4",
"jobName": "JOB_0",
"mid": "20220511112452SS-21114",
"name": "Point A4",
"point": {
"x": 0.685546875,
"y": 0.22048611111111116
}
}
],
"type": "marker"
}
]
}
}
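A minimal sketch that flattens a pose annotation into a mapping from point code to its normalized coordinates and any child classification, assuming `json_response` is the dictionary from the example above:

```python
# Minimal sketch: flatten a pose estimation annotation into {code: data}.
# Assumes `json_response` is the parsed dictionary from the example above.
pose = json_response["JOB_0"]["annotations"][0]

points_by_code = {}
for point in pose["points"]:
    classifications = [
        category["name"]
        for child_job in point.get("children", {}).values()
        for category in child_job.get("categories", [])
    ]
    points_by_code[point["code"]] = {
        "name": point["name"],
        "x": point["point"]["x"],
        "y": point["point"]["y"],
        "classifications": classifications,
    }

print(pose["categories"][0]["name"], points_by_code)
```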
Named entity recognition (NER)
Check the detailed JSON property descriptions:
- `annotations`: List of annotations
  - `beginOffset`: Position of the entity mention
  - `categories`: Category of the object
    - `name`: Name of the category
    - `confidence`: Confidence (100 by default when done by human)
  - `content`: Content of the mention
  - `mid`: A unique identifier of the object
Examples of exported named entity recognition jobs:
'json_response': {
"JOB": {
"annotations": [
{"categories": [{ "name": "NAME", "confidence": 100 }],
"beginOffset": 0,
"content": "Chicago Bulls",
"mid": "chicago"},
{"categories": [{ "name": "NAME", "confidence": 100 }],
"beginOffset": 30,
"content": "Jerry Krause",
"mid": "krause"},
{"categories": [{ "name": "NAME", "confidence": 100 }],
"beginOffset": 63,
"content": "Gatorade",
"mid": "gatorade"},
{"categories": [{ "name": "NOUN", "confidence": 100 }],
"beginOffset": 22,
"content": "Manager",
"mid": "manager"},
{"categories": [{ "name": "NOUN", "confidence": 100 }],
"beginOffset": 14,
"content": "General",
"mid": "general"},
{"categories": [{ "name": "NOUN", "confidence": 100 }],
"beginOffset": 84,
"content": "medicine",
"mid": "medicine"},
{"categories": [{ "name": "NOUN", "confidence": 100 }],
"beginOffset": 104,
"content": "players",
"mid": "players"},
{"categories": [{ "name": "NOUN", "confidence": 100 }],
"beginOffset": 116,
"content": "coaches",
"mid": "coaches"},
{"categories": [{ "name": "VERB", "confidence": 100 }],
"beginOffset": 124,
"content": "milled",
"mid": "milled"},
{"categories": [{ "name": "VERB", "confidence": 100 }],
"beginOffset": 43,
"content": "was standing",
"mid": "standing"},
{"categories": [{ "name": "NOUN", "confidence": 100 }],
"beginOffset": 96,
"content": "hand",
"mid": "hand"},
{"categories": [{ "name": "NOUN", "confidence": 100 }],
"beginOffset": 72,
"content": "cooler",
"mid": "cooler"}
]
}
}
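`beginOffset` is the character index of the mention in the asset text, so the end offset can be recovered from the length of `content`. A minimal sketch, assuming `json_response` is the dictionary above and `text` is the raw asset text (a hypothetical variable, not part of the export itself):

```python
# Minimal sketch: turn exported NER annotations into (start, end, category) spans.
# Assumes `json_response` is the dictionary above and `text` is the raw asset
# text (hypothetical variable, not included in the export snippet).
spans = []
for entity in json_response["JOB"]["annotations"]:
    start = entity["beginOffset"]
    end = start + len(entity["content"])
    spans.append((start, end, entity["categories"][0]["name"], entity["content"]))
    # Optional sanity check against the source text:
    # assert text[start:end] == entity["content"]

for start, end, category, content in sorted(spans):
    print(f"[{start:>3}-{end:>3}] {category:<10} {content}")
```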
'json_response': {
"JOB": {
"annotations": [
{
"beginId": 'PART1',
"beginOffset": 14,
"categories": [{ "name": "NOMINAL_GROUP", "confidence": 100 }],
"content": "Declaration of the thirteen United States of America",
"endId": 'PART2',
"endOffset": 41,
"mid": "declaration",
},
]
}
}
'json_response': {
"NAMED_ENTITIES_RECOGNITION_JOB": {
"annotations": [
{"categories": [{ "name": "SUBJECT", "confidence": 100 }],
"beginOffset": 0,
"content": "Jordan",
"mid": "Jordan"},
{"categories": [{ "name": "VERB", "confidence": 100 }],
"beginOffset": 84,
"content": "was wearing",
"mid": "wearing"},
{"categories": [{ "name": "VERB", "confidence": 100 }],
"beginOffset": 111,
"content": "tucked",
"mid": "tucked verb"},
{"categories": [{ "name": "COMPLEMENT", "confidence": 100 }],
"beginOffset": 96,
"content": "a blue sweater tucked into high-rise pants",
"mid": "blue sweater complement"},
{"categories": [{ "name": "VERB", "confidence": 100 }],
"beginOffset": 47,
"content": "peered down",
"mid": "peered"},
{"categories": [{ "name": "COMPLEMENT", "confidence": 100 }],
"beginOffset": 62,
"content": "the hefty Krause",
"mid": "Krause complement"},
{"categories": [{ "name": "SUBJECT", "confidence": 100 }],
"beginOffset": 62,
"content": "the hefty Krause",
"mid": "Krause subject"}
]
},
"NAMED_ENTITIES_RELATION_JOB": {
"annotations": [
{"categories": [{ "name": "VERB_AND_SUBJECT_S", "confidence": 100 }],
"startEntities": [{ "mid": "peered" }],
"endEntities": [{ "mid": "Jordan" }]},
{"categories": [{ "name": "VERB_AND_COMPLEMENT_S", "confidence": 100 }],
"startEntities": [{ "mid": "peered" }],
"endEntities": [{ "mid": "Krause complement" }]},
{"categories": [{ "name": "VERB_AND_SUBJECT_S", "confidence": 100 }],
"startEntities": [{ "mid": "wearing" }],
"endEntities": [{ "mid": "Krause subject" }]},
{"categories": [{ "name": "VERB_AND_COMPLEMENT_S", "confidence": 100 }],
"startEntities": [{ "mid": "wearing" }],
"endEntities": [{ "mid": "blue sweater complement" }]}
]
}
}
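Entity relations reference entities by `mid`, just like object relations. A minimal sketch that resolves them into readable pairs, assuming `json_response` is the dictionary from the example above:

```python
# Minimal sketch: resolve entity relations into (relation, start, end) pairs.
# Assumes `json_response` is the dictionary from the example above.
entities_by_mid = {
    e["mid"]: e
    for e in json_response["NAMED_ENTITIES_RECOGNITION_JOB"]["annotations"]
}

for relation in json_response["NAMED_ENTITIES_RELATION_JOB"]["annotations"]:
    name = relation["categories"][0]["name"]
    for start in relation["startEntities"]:
        for end in relation["endEntities"]:
            print(f"{name}: '{entities_by_mid[start['mid']]['content']}' -> "
                  f"'{entities_by_mid[end['mid']]['content']}'")
```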
NER in PDFs
The annotation structure for NER in PDFs is different. Instead of `beginOffset`, the annotations work with the coordinates of the polygon that the data belongs to and the page number.
Check the detailed JSON property descriptions:
- `annotations`: List of annotations
  - `annotations`: List of positions of the annotation (for NER, when an annotation spans multiple lines, there will be multiple `polys` and a single `boundingPoly`)
    - `boundingPoly`: Polygon of the object contour
      - `normalizedVertices`: List of vertices of the polygon. In the case of a bounding box, 4 vertices
        - `x`: Abscissa of the vertex position from top left, expressed as a percentage of the total image size
        - `y`: Ordinate of the vertex position from top left, expressed as a percentage of the total image size
    - `pageNumberArray`: The pages where the annotation appears
    - `polys`: Coordinates from the different rectangles in the annotation. An annotation can have several rectangles (for example, if the annotation covers more than one line)
  - `categories`: Category of the object
    - `name`: Name of the category
    - `confidence`: Confidence (100 by default when done by human)
  - `mid`: A unique identifier of the object
Example of an exported named entity recognition job with a PDF asset:
'json_response': {
'NAMED_ENTITIES_RECOGNITION_JOB': {
'annotations': [{
'annotations': [{
'boundingPoly': [{
'normalizedVertices': [[
{'x': 0.28, 'y': 0.12},
{'x': 0.28, 'y': 0.15},
{'x': 0.72, 'y': 0.12},
{'x': 0.72, 'y': 0.15}
]]
}],
'polys': [{
'normalizedVertices': [[
{'x': 0.28, 'y': 0.12},
{'x': 0.28, 'y': 0.15},
{'x': 0.72, 'y': 0.12},
{'x': 0.72, 'y': 0.15}
]]
}],
'pageNumber': 1
}],
'categories': [{'name': 'TITLE', 'confidence': 100}],
'content': 'Learning Active Learning from Data',
'mid': 'article-title'
}],
}
}
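A minimal sketch that lists, for each PDF entity, the page and the rectangles it covers, assuming `json_response` is the dictionary from the example above (note that each poly carries a list of vertex lists, and that the example uses `pageNumber` for the page index):

```python
# Minimal sketch: list the pages and rectangles covered by each PDF entity.
# Assumes `json_response` is the dictionary from the example above.
for entity in json_response["NAMED_ENTITIES_RECOGNITION_JOB"]["annotations"]:
    category = entity["categories"][0]["name"]
    for position in entity["annotations"]:
        page = position["pageNumber"]
        for poly in position["polys"]:
            for vertices in poly["normalizedVertices"]:
                xs = [v["x"] for v in vertices]
                ys = [v["y"] for v in vertices]
                print(f"{category} '{entity['content']}' on page {page}: "
                      f"x in [{min(xs)}, {max(xs)}], y in [{min(ys)}, {max(ys)}]")
```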