This step-by-step guide will help you get started quickly. The full documentation and specifications are provided here.

Please note that all calls to the Zenus API require an authorization header, which looks like: {"Authorization": [api_token]}
Your API token can be found on your account summary page.
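As a minimal Python sketch, the header can be attached to every request as below. The URL shown is a placeholder, and sending the raw token as the header value is an assumption; check the full documentation for the exact format.

```python
import json
import urllib.request

API_TOKEN = "your-api-token"  # copy this from your account summary page

def authorized_request(url: str, payload: dict) -> urllib.request.Request:
    """Build a POST request carrying the required Authorization header.

    NOTE: using the raw token as the header value is an assumption;
    consult the full documentation for the exact header format.
    """
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": API_TOKEN,
            "Content-Type": "application/json",
        },
    )
```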

Basic Usage

Step 1: Create a person

{"descriptor": [string] }
Response Content: {"descriptor": [string], "person_id": [int]}

Each time you create a person, our system generates a unique person_id. Please check the response object to keep track of the person_ids.
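As an illustrative Python sketch (the helper names are ours, not part of the API), the Step 1 request body and the recommended person_id bookkeeping look like this:

```python
import json

def create_person_body(descriptor: str) -> bytes:
    """Request body for Step 1: {"descriptor": [string]}."""
    return json.dumps({"descriptor": descriptor}).encode("utf-8")

def record_person_id(response_body: str, registry: dict) -> int:
    """Store descriptor -> person_id so person_ids are not lost."""
    body = json.loads(response_body)
    registry[body["descriptor"]] = body["person_id"]
    return body["person_id"]
```

Keeping a descriptor-to-id registry makes the later steps (enrolling images, adding persons to groups) straightforward.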

Step 2: Create more persons
Repeat Step 1 to create as many persons as you will need.

Step 3: Enroll images for each person
This API call enrolls one image for one person at a time. The images should include only the person’s face. Please consult the full documentation for detailed image specifications.

files:{"file": [image_file_object] }
Response Content: {"box": {"top": [int], "left": [int], "right": [int], "bottom": [int]}}
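The box in the response can be unpacked as below. This sketch assumes the common image convention of a top-left origin with y increasing downward; that convention is not stated in this guide.

```python
import json

def face_box_size(response_body: str) -> tuple:
    """Width and height of the detected face box from an enroll response.

    Assumes top-left origin with y increasing downward (an assumption).
    """
    box = json.loads(response_body)["box"]
    return (box["right"] - box["left"], box["bottom"] - box["top"])
```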

Step 4: Enroll more images
You may upload between 1 and 10 different photos of each person's face. The more images you enroll, the more reliably our system can recognize the person in the future.

Step 5: Create a group and add persons to it

{"descriptor": [string]}
Response Content: {"group_id": [int], "descriptor": [string], "verification_threshold": [number], "identification_threshold": [number]}

Each time you create a group, our system generates a unique group_id. Please check the response object to keep track of group_ids.
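A small bookkeeping sketch (the helper name is ours): keep each group_id together with the thresholds returned at creation time, since Steps 7 and 9 refer back to them.

```python
import json

def record_group(response_body: str, groups: dict) -> int:
    """Store a created group's descriptor and thresholds under its group_id."""
    body = json.loads(response_body)
    groups[body["group_id"]] = {
        "descriptor": body["descriptor"],
        "verification_threshold": body["verification_threshold"],
        "identification_threshold": body["identification_threshold"],
    }
    return body["group_id"]
```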

Step 6: Add more persons
Each group should have at least a few persons added to it. The maximum is 1,000 persons per group.

Response Content: {"message": "success"}

Step 7: Automated computation of the verification and identification thresholds
This API call computes the optimal thresholds for a group. We recommend using this endpoint only after enrolling all images and adding all persons to the group. Processing takes a few seconds, so sit tight.

{"target": "FRR", "target_value": [number]}
{"target": "FAR", "target_value": [number]}
Response Content: {"verification_threshold": [number], "verification_frr": [number], "verification_far": [number], "identification_threshold": [number], "identification_frr": [number], "identification_far": [number]}

Please pass either an appropriate false-acceptance rate (FAR) or false-rejection rate (FRR). Both are real numbers between 0 and 1 (see definitions). We recommend values between 0.005 and 0.02; values smaller than 0.005 will result in either a large FAR or a large FRR.
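These constraints can be enforced client-side before calling the endpoint; a sketch, with a helper name of our own choosing:

```python
import json

def threshold_request_body(target: str, target_value: float) -> bytes:
    """Build the Step 7 request, enforcing the documented constraints."""
    if target not in ("FAR", "FRR"):
        raise ValueError('target must be "FAR" or "FRR"')
    if not 0 < target_value < 1:
        raise ValueError("target_value must be a real number between 0 and 1")
    # The guide recommends values between 0.005 and 0.02; smaller values
    # can inflate the other rate, so consider flagging them in your client.
    return json.dumps({"target": target, "target_value": target_value}).encode("utf-8")
```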

Step 8: Beginning a session
In order to use the verification and identification API endpoints, you must use some of the credit you have been allotted. This endpoint lets you start a session and begin testing REST calls immediately. The minutes argument must be an integer divisible by 10. The start endpoint is intended for testing only and should not be used in production environments.

{"minutes": [int] }
Response Content: {"message": "engine access granted"}

To declare usage ahead of time for multiple devices, use the scheduling endpoint. Sessions must be scheduled at least five (5) minutes in advance in order to give Zenus time to prepare resources.

{
  "service_name": "images" | "stream",
  "group_id": [int],
  "start_time": [utc timestamp],
  "minutes": [int],
  "qps": [int] (queries per second; required when "service_name" is "images")
}
Response Content: {"session_id": [string]}
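A sketch that validates the documented constraints (minutes divisible by 10, scheduling at least 5 minutes ahead, qps required for the images service) before building the request; the helper name and the ISO-8601 timestamp format are assumptions:

```python
import json
from datetime import datetime, timedelta, timezone

def schedule_body(service_name, group_id, start_time, minutes, qps=None) -> bytes:
    """Validate and build the Step 8 scheduling request.

    The ISO-8601 "start_time" serialization is an assumption; check the
    full documentation for the expected UTC timestamp format.
    """
    if minutes % 10 != 0:
        raise ValueError("minutes must be an integer divisible by 10")
    if start_time < datetime.now(timezone.utc) + timedelta(minutes=5):
        raise ValueError("sessions must be scheduled at least 5 minutes ahead")
    payload = {
        "service_name": service_name,
        "group_id": group_id,
        "start_time": start_time.isoformat(),
        "minutes": minutes,
    }
    if service_name == "images":
        if qps is None:
            raise ValueError("qps is required for the images service")
        payload["qps"] = qps
    return json.dumps(payload).encode("utf-8")
```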

Step 9: Call the Identification and/or Verification API endpoints

Verification: you provide a group_id, a claimed identity (person_id) from that group, and an image of the person who made the identity claim. The returned boolean is true if the matching score is above the verification_threshold for that group.

files:{"file": [image_file_object]}
Response Content: {"box": {"top": [int], "left": [int], "right": [int], "bottom": [int]}, "score": [number], "result": [boolean]} 
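The boolean is computed server-side, but the decision rule can be mirrored client-side for sanity checks. This sketch assumes a strict comparison, which the guide does not specify:

```python
def verify(score: float, verification_threshold: float) -> bool:
    """Accept when the matching score is above the group's threshold.

    Strict comparison is an assumption; the API's "result" field is the
    authoritative answer.
    """
    return score > verification_threshold
```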

Identification: you provide a group_id and an image of an unidentified person. The response includes the bounding box of the detected face and a list of all person_ids whose corresponding scores are above the identification_threshold for the given group_id. If no score exceeds the threshold, the list will be empty.

files:{"file": [image_file_object]}
Response Content: {"box": {"top": [int], "left": [int], "right": [int], "bottom": [int]}, "matches": [{"score": [number], "id": [int]}, ...]}
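Client code typically reduces the matches list to a single best candidate; a sketch (the helper name is ours):

```python
import json

def best_match(response_body: str):
    """Highest-scoring (id, score) pair from an identification response,
    or None when the matches list is empty (no score cleared the
    group's identification_threshold)."""
    matches = json.loads(response_body)["matches"]
    if not matches:
        return None
    top = max(matches, key=lambda m: m["score"])
    return (top["id"], top["score"])
```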

Step 10: Streaming

Please see the document How to Stream for an explanation of streaming over WebSockets.