To set up a Benchmark QA Workflow, you need to create two distinct Projects:

  • Benchmark Project: This Project establishes the “ground-truth” labels, which serve as the benchmark for evaluating annotator performance.
  • Production Project: In this Project, annotators generate the production labels. Annotator performance is scored against the ground-truth labels from the first Project.

STEP 1: Register Files with Encord

You must first register your files with Encord. This includes files that are used to establish ‘ground-truth’ labels, and your production data.

2. Create a Folder to Store your Files

  1. Navigate to Files under the Index heading in the Encord platform.
  2. Click the + New folder button to create a new folder. A dialog to create a new folder appears.
  3. Give the folder a meaningful name and description.
  4. Click Create to create the folder. The folder is listed in Files.

3. Create a JSON file for Registration

To register files from cloud storage into Encord, you must create a JSON file specifying the files you want to upload.

While you can use a CSV file, we strongly recommend using JSON files for uploading cloud data to Encord for better compatibility and performance.

Find helpful scripts for creating JSON files for the data registration process here.

All types of data (videos, images, image groups, image sequences, and DICOM) from a private cloud are added to a Dataset in the same way, by using a JSON or CSV file. The file includes links to all images, image groups, videos and DICOM files in your cloud storage.
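
The registration file is a plain JSON document listing the cloud URLs of your files. As a rough illustration, the sketch below generates such a file with a short Python script; the "videos"/"objectUrl" keys are an assumption for illustration only, so check Encord's data registration format documentation for the exact schema for your data types and cloud integration.

make_registration_json.py
# Illustrative sketch only. Builds a minimal registration JSON file for video
# files held in cloud storage. The "videos"/"objectUrl" keys are an assumption;
# consult Encord's registration format documentation for the exact keys required
# for images, image groups, image sequences, and DICOM.
import json

# Replace the placeholders with the URLs of your files in cloud storage
video_urls = [
    "<cloud-storage-url-1>",
    "<cloud-storage-url-2>",
]

registration = {"videos": [{"objectUrl": url} for url in video_urls]}

with open("registration.json", "w") as f:
    json.dump(registration, f, indent=2)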

For a list of supported file formats for each data type, go here.
Encord supports file names of up to 300 characters for any file or video you upload.

4. Import your Files

Import your files into Encord using the JSON file you created in the previous step.

STEP 2: Create Benchmark Project

The Benchmark Project establishes ground truth labels.

1. Create a Benchmark Dataset

Create a Dataset containing tasks designed to establish ground truth labels. These files will be used to generate ‘gold-standard’ labels against which annotator performance will be evaluated. Be sure to give the Dataset a clear and descriptive name.

Learn how to create Datasets here.

2. Create an Ontology

Create an Ontology to label your data. The same Ontology is used in the Benchmark Project AND the Production Project.

Learn how to create Ontologies here.

3. Create a Workflow Template

Create a Workflow template to establish ground truth labels and give it a meaningful name like “Establishing Benchmarks”. The following example template is just one approach; however, the process for creating benchmark labels is flexible, allowing you to choose any Workflow that suits your requirements.

For information on how to create Workflow templates, see our documentation here.

4. Create the Benchmark Project

Ensure that you:

  • Attach ONLY the Benchmark Dataset to the Project.
  • Attach the Benchmark Workflow Template to the Project.
  1. In the Encord platform, select Projects under Annotate.
  2. Click the + New annotation project button to create a new Project.
  3. Give the Project a meaningful title and description, for example “Benchmark Labels”.
  4. Click the Attach ontology button and attach the Ontology you created.
  5. Click the Attach dataset button and attach the Dataset you created.
  6. Click the Load from template button to attach the template you created in STEP 2.3.
  7. Click Add collaborators. Add collaborators to the Project and add them to the relevant Workflow stages.
  8. Click Create project to finish creating the Project. You have now created the Project used to establish ground-truth labels.

STEP 3: Create Benchmark Labels

Complete the Benchmark Project created in STEP 2 to establish a set of ground truth labels for all data units in the Benchmark Dataset.

To learn how to create annotations, see our documentation here.

STEP 4: Create Production Project

Create a Project where your annotation workforce labels data and is evaluated against benchmark labels.

1. Create a Production Dataset

Create a Dataset using your Production data. Give the Dataset a meaningful name and description to distinguish it from the Benchmark Dataset created in STEP 2.

2. Create a Production Workflow Template

Create a Workflow template for labeling production data using Benchmark QA and give it a meaningful name like “Benchmark QA Production Labels”.

The following Workflow template is an example showing how to set up a Workflow for Benchmark QA.

  • A Task Agent is used to route tasks depending on whether they originate in the Benchmark Dataset or the Production Dataset.

  • A script is added to the Consensus block of the Production Workflow to evaluate annotator performance.

3. Create the Production Project

Ensure that you:

  • Attach both the Benchmark Dataset AND the Production Dataset when creating the Production Project.
  • Attach the SAME Ontology you created for the Benchmark Project.
  • Attach the Production Workflow Template to the Project.
  1. In the Encord platform, select Projects under Annotate.
  2. Click the + New annotation project button to create a new Project.
  3. Give the Project a meaningful title and description, for example “Benchmark QA Production Labels”.
  4. Click the Attach ontology button and attach the SAME Ontology you created for the Benchmark Project.
  5. Click the Attach dataset button and attach the Benchmark AND the Production Datasets.
  6. Click the Load from template button to attach the “Benchmark QA Production Labels” template you created in STEP 4.2.
  7. Click Add collaborators. Add collaborators to the Project and add them to the relevant Workflow stages.
  8. Click Create Project to create the Project. You have now created the Project to label production data and evaluate annotators against the benchmark labels.
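
Before moving on to the SDK scripts in the next two steps, you can optionally confirm that the stage names in your Production Project match the names those scripts expect (“Benchmark Task?” and “Consensus 1 Review” in the examples below). The following is a minimal sketch, assuming the stages are exposed through project.workflow.stages:

list_stage_names.py
# Optional check: print the Workflow stage names of the Production Project so you
# can confirm they match the stage names used by the scripts in the next two steps.
from encord.user_client import EncordUserClient

# Replace the placeholders with your private key path and Production Project hash
user_client = EncordUserClient.create_with_ssh_private_key(
    ssh_private_key_path="<private_key_path>"
)
project = user_client.get_project("<production_project_hash>")

for stage in project.workflow.stages:
    print(stage.title)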

4. Create and run the SDK script for the Agent node

Create and run the following benchmark_routing.py script to check whether a data unit is part of the Benchmark Dataset or the Production Dataset.

  • If a task is part of the Benchmark Dataset, the task is routed along the “Yes” pathway and proceeds to the Consensus 1 stage of the Production Project, where annotator performance is evaluated.
  • If the task is not part of the Benchmark Dataset it is routed along the “No” pathway and proceeds to the Annotate 1 stage of the Production Project, where production data is labeled.

Run this script each time new production data is added to the Production Dataset.

benchmark_routing.py
# Import dependencies
from encord.user_client import EncordUserClient
from encord.workflow import AgentStage

# Replace <project_hash> with the hash of your Production Project
PROJECT_HASH = "<project_hash>"

# Replace <benchmark_dataset_hash> with the hash of your Benchmark Dataset
BENCHMARK_DATASET_HASH = "<benchmark_dataset_hash>"

# Replace <private_key_path> with the full path to your private key
SSH_PATH = "<private_key_path>"

# Authenticate using the path to your private key
user_client = EncordUserClient.create_with_ssh_private_key(
    ssh_private_key_path=SSH_PATH
)

# Specify the Project that contains the Task Agent
project = user_client.get_project(PROJECT_HASH)

# Specify the Task Agent stage
agent_stage = project.workflow.get_stage(name="Benchmark Task?", type_=AgentStage)

# Collect the data hashes of all data units in the Benchmark Dataset
benchmark_dataset = user_client.get_dataset(BENCHMARK_DATASET_HASH)
benchmark_data_hashes = {data_row.uid for data_row in benchmark_dataset.data_rows}

# Route each task waiting at the Task Agent. The pathway names must match the
# pathway names in your Workflow template.
for task in agent_stage.get_tasks():
    if task.data_hash in benchmark_data_hashes:
        task.proceed(pathway_name="YES")
    else:
        task.proceed(pathway_name="NO")
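
The script assumes the Task Agent stage is named “Benchmark Task?” and the pathways are named “YES” and “NO”; if your Workflow template uses different names, update the script to match. Run it with the Encord SDK installed (pip install encord), either manually (python benchmark_routing.py) or on a schedule, so that newly registered production data is routed before annotators pick up the tasks.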

5. Create a script for the Review & Refine stage

Create the following sample_evaluation.py script for the Consensus 1 stage in the Production Project. The script compares the annotator’s labels in the Production Project with the ground truth labels established in the Benchmark Project.

All tasks in this stage are rejected and routed to the Archive stage, as they do not constitute production data. The point of the Consensus block is to evaluate annotator performance.

sample_evaluation.py
from encord import EncordUserClient
from encord.workflow import ConsensusReviewStage

# Authenticate using the path to your private key
user_client: EncordUserClient = EncordUserClient.create_with_ssh_private_key(
    ssh_private_key_path="<private_key_path>"
)

# Replace <production_project_hash> with the hash of your Production Project
production_project_id = "<production_project_hash>"
production_project = user_client.get_project(production_project_id)

# Replace <groundtruth_project_hash> with the hash of your Benchmark Project
groundtruth_project_id = "<groundtruth_project_hash>"
groundtruth_project = user_client.get_project(groundtruth_project_id)


# Specify the Consensus review stage in the Production Project
review_stage = production_project.workflow.get_stage(
    name="Consensus 1 Review", type_=ConsensusReviewStage
)
task_data_hashes = [t.data_hash for t in review_stage.get_tasks()]

# Download all submitted labels and ground-truth labels
label_rows_to_evaluate = production_project.list_label_rows_v2(
    data_hashes=task_data_hashes,
    include_all_label_branches=True,
)
with production_project.create_bundle() as bundle:
    for lr in label_rows_to_evaluate:
        lr.initialise_labels(bundle=bundle)

groundtruth_label_rows = groundtruth_project.list_label_rows_v2(data_hashes=task_data_hashes)
with groundtruth_project.create_bundle() as bundle:
    for lr in groundtruth_label_rows:
        lr.initialise_labels(bundle=bundle)


# For each ground-truth label row from the Benchmark Project:
# 1. Grab the corresponding task to be benchmarked.
# 2. Evaluate the submitted labels against the ground truth, specifying the annotator being evaluated.

for gtlr in groundtruth_label_rows:
    # Get all label row submissions against the ground-truth task
    benchmark_label_rows = [bmlr for bmlr in label_rows_to_evaluate if bmlr.data_hash == gtlr.data_hash]
    target_task = next(iter(review_stage.get_tasks(data_hash=gtlr.data_hash)), None)

    if target_task is None:
        print("Could not find corresponding benchmark task in production project")
        continue

    # For each label row to be benchmarked, run the evaluation, passing in the annotator
    for bmlr in benchmark_label_rows:
        # Skip the 'main' branch, it does not yet have any annotator submissions
        if bmlr.branch_name == "main":
            continue

        annotator = next(
            task_option.annotator
            for task_option in target_task.options
            if task_option.branch_name == bmlr.branch_name
        )
        print(f"Evaluate task. Data UUID: {gtlr.data_hash}, Label Branch: {bmlr.branch_name}, Annotator: {annotator}")
        # my_eval_function(gtlr, bmlr, annotator) <- fill in your logic here!
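
The evaluation logic itself depends on your Ontology and quality criteria, which is why the script above leaves my_eval_function as a placeholder. The sketch below is one hypothetical implementation, assuming an object-based Ontology: it compares how many object instances the annotator created per ontology class against the ground truth and prints a score for the annotator. Replace it with whatever metric suits your labels, for example IoU for bounding boxes or per-class accuracy for classifications.

my_eval_function sketch
# Illustrative sketch only, not part of the Encord SDK. Assumes gtlr and bmlr are
# LabelRowV2 objects with their labels initialised, and that the Ontology uses
# object annotations.
from collections import Counter


def my_eval_function(gtlr, bmlr, annotator) -> float:
    # Count object instances per ontology class (feature_hash) in the ground
    # truth and in the annotator's submission
    gt_counts = Counter(obj.feature_hash for obj in gtlr.get_object_instances())
    submitted_counts = Counter(obj.feature_hash for obj in bmlr.get_object_instances())

    total_gt = sum(gt_counts.values())
    if total_gt == 0:
        # No ground-truth objects: full score only if the annotator also labeled nothing
        score = 1.0 if sum(submitted_counts.values()) == 0 else 0.0
    else:
        # Fraction of ground-truth instances matched per class, ignoring geometry
        matched = sum(
            min(count, submitted_counts.get(feature_hash, 0))
            for feature_hash, count in gt_counts.items()
        )
        score = matched / total_gt

    print(f"Annotator {annotator} scored {score:.2f} on data unit {gtlr.data_hash}")
    return score

Once scores have been recorded, the benchmark tasks in the Consensus 1 stage are rejected and routed to the Archive stage, as described above, so that only the scores, and not the benchmark labels, feed into your production output.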

STEP 5: Create labels

Once your Production Project is set up, annotators can begin labeling the production data. Tasks from both the Benchmark Dataset and the Production Dataset are assigned to annotators. Their performance is then assessed based on how accurately they label the Benchmark tasks.

To learn how to create annotations, see our documentation here.

STEP 6: Evaluate Annotator Performance

Run the sample_evaluation.py script created in STEP 4.5 to evaluate annotator performance. Run it once annotators have submitted their benchmark tasks and those tasks have reached the Consensus 1 stage of the Production Project.