Automated QA Workflow
To set up a Benchmark QA Workflow, you need to create two distinct Projects:
- Benchmark Project: This Project establishes the “ground-truth” labels, which serve as the benchmark for evaluating annotator performance.
- Production Project: In this Project, annotators generate the production labels. Annotator performance is scored against the ground-truth labels from the first Project.
STEP 1: Register Files with Encord
You must first register your files with Encord. This includes both the files used to establish ‘ground-truth’ labels and your production data.
Create a Cloud Integration
Create a Folder to Store your Files
- Navigate to Files under the Index heading in the Encord platform.
- Click the + New folder button to create a new folder. A dialog to create a new folder appears.
- Give the folder a meaningful name and description.
- Click Create to create the folder. The folder is listed in Files.
Create JSON file for Registration
To register files from cloud storage into Encord, you must create a JSON file specifying the files you want to upload.
While you can use a CSV file, we strongly recommend using JSON files for uploading cloud data to Encord for better compatibility and performance.
Find helpful scripts for creating JSON files for the data registration process here.
All types of data (videos, images, image groups, image sequences, and DICOM) from a private cloud are added to a Dataset in the same way, by using a JSON or CSV file. The file includes links to all images, image groups, videos and DICOM files in your cloud storage.
Encord enforces the following upload limits for each JSON file used for file registration:
- Up to 1 million URLs
- A maximum of 500,000 items (e.g. images, image groups, videos, DICOMs)
- URLs can be up to 16 KB in size
Optimal upload chunking can vary depending on your data type and the amount of associated metadata. For tailored recommendations, contact Encord support. We recommend starting with smaller uploads and gradually increasing the size based on how quickly jobs are processed. Generally, smaller chunks result in faster data reflection within the platform.
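As a reference point, the sketch below writes a minimal registration JSON from Python. The bucket URLs are placeholders, and the top-level keys (videos, images) and the objectUrl field follow the general pattern of Encord's data registration format; verify them against the current format documentation before uploading.

```python
# create_registration_json.py
# A minimal sketch that writes a registration JSON file for cloud data.
# The "videos"/"images" keys and the "objectUrl" field are assumptions based
# on the general registration format; confirm them against the current docs.
import json

registration = {
    "videos": [
        # Placeholder URLs pointing at files in your own cloud storage.
        {"objectUrl": "https://my-bucket.example.com/benchmark/video_001.mp4"},
        {"objectUrl": "https://my-bucket.example.com/production/video_101.mp4"},
    ],
    "images": [
        {"objectUrl": "https://my-bucket.example.com/production/image_001.jpg"},
    ],
}

with open("registration.json", "w") as f:
    json.dump(registration, f, indent=2)

print(f"Wrote registration.json with "
      f"{sum(len(v) for v in registration.values())} items")
```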
Use the clientMetadata field in the JSON file to specify key frames, custom metadata, and custom embeddings. For more information go here, or here for information on using the SDK.
Import your Files
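You can import the files through the Encord platform or with the SDK. The sketch below takes the SDK route and registers the registration.json file created above against the folder and Cloud Integration from STEP 1. The SSH key path, integration title, and folder UUID are placeholders, and the method names are taken from recent SDK versions, so confirm them against the SDK reference for the version you have installed.

```python
# import_files.py
# A sketch of registering registration.json via the Encord SDK. Placeholder
# values: the SSH key path, the integration title, and the folder UUID.
from encord import EncordUserClient

user_client = EncordUserClient.create_with_ssh_private_key(
    # Depending on your SDK version you may need to pass the key contents
    # instead of a path.
    ssh_private_key_path="/path/to/encord-ssh-key"
)

# Look up the Cloud Integration created in STEP 1 by its title (placeholder).
integration = next(
    i for i in user_client.get_cloud_integrations()
    if i.title == "my-cloud-integration"
)

# The folder created under Index > Files (placeholder UUID).
folder = user_client.get_storage_folder("00000000-0000-0000-0000-000000000000")

# Start the registration job with the JSON file created earlier, then fetch
# the result so any per-file errors are surfaced.
upload_job_id = folder.add_private_data_to_folder_start(
    integration_id=integration.id,
    private_files="registration.json",
    ignore_errors=False,
)
result = folder.add_private_data_to_folder_get_result(upload_job_id)
print(result)
```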
STEP 2: Create Benchmark Project
The Benchmark Project establishes ground truth labels.
Create a Benchmark Dataset
Create a Dataset containing tasks designed to establish ground truth labels. These files will be used to generate ‘gold-standard’ labels against which annotator performance will be evaluated. Be sure to give the Dataset a clear and descriptive name.
Create an Ontology
Create an Ontology to label your data. The same Ontology is used in the Benchmark Project AND the Production Project.
Create a Workflow Template
Create a Workflow template to establish ground truth labels and give it a meaningful name like “Establishing Benchmarks”. The following example template is just one approach; however, the process for creating benchmark labels is flexible, allowing you to choose any Workflow that suits your requirements.
Create the Benchmark Project
Ensure that you:
- Attach ONLY the Benchmark Dataset to the Project.
- Attach the Benchmark Workflow Template to the Project.
- In the Encord platform, select Projects under Annotate.
- Click the + New annotation project button to create a new Project.
- Give the Project a meaningful title and description, for example “Benchmark Labels”.
- Click the Attach ontology button and attach the Ontology you created.
- Click the Attach dataset button and attach the Dataset you created.
- Click the Load from template button to attach the template you created in STEP 2.3.
- Click Add collaborators. Add collaborators to the Project and add them to the relevant Workflow stages.
- Click Create project to finish creating the Project. You have now created the Project to establish ground-truth labels.
STEP 3: Create Benchmark Labels
Complete the Benchmark Project created in STEP 2 to establish a set of ground truth labels for all data units in the Benchmark Dataset.
STEP 4: Create Production Project
Create a Project where your annotation workforce labels data and is evaluated against benchmark labels.
Create a Production Dataset
Create a Dataset using your Production data. Give the Dataset a meaningful name and description to distinguish it from the Benchmark Dataset created in STEP 2.
Create a Production Workflow Template
Create a Workflow template for labeling production data using Benchmark QA and give it a meaningful name like “Benchmark QA Production Labels”.
The following Workflow template is an example showing how to set up a Workflow for Benchmark QA.
- A Task Agent is used to route tasks depending on whether they originate in the Benchmark Dataset or the Production Dataset.
- A script is added to the Consensus block of the Production Workflow to evaluate annotator performance.
Create the Production Project
Ensure that you:
- Attach both the Benchmark Dataset AND the Production Dataset when creating the Production Project.
- Attach the SAME Ontology you created for the Benchmark Project.
- Attach the Production Workflow Template to the Project.
- In the Encord platform, select Projects under Annotate.
- Click the + New annotation project button to create a new Project.
- Give the Project a meaningful title and description, for example “Benchmark QA Production Labels”.
- Click the Attach ontology button and attach the SAME Ontology you created for the Benchmark Project.
- Click the Attach dataset button and attach the Benchmark AND the Production Datasets.
- Click the Load from template button to attach the “Benchmark QA Production Labels” template you created in STEP 4.2.
- Click Add collaborators. Add collaborators to the Project and add them to the relevant Workflow stages.
- Click Create project to create the Project. You have now created the Project to label production data and evaluate annotators against the benchmark labels.
Create and run the SDK script for the Agent node
Create and run the following benchmark_routing.py script to check whether a data unit is part of the Benchmark Dataset or the Production Dataset.
- If a task is part of the Benchmark Dataset, the task is routed along the “Yes” pathway and proceeds to the Consensus 1 stage of the Production Project, where annotator performance is evaluated.
- If the task is not part of the Benchmark Dataset it is routed along the “No” pathway and proceeds to the Annotate 1 stage of the Production Project, where production data is labeled.
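The exact script depends on your Dataset hash and on the names you gave the agent node and its pathways. The sketch below shows one way such a routing agent might look: it assumes an agent stage named “Benchmark routing” with pathways named “Yes” and “No”, and it uses the SDK's workflow API (AgentStage, get_tasks, proceed); check these names and method signatures against your Workflow and the SDK reference before running it.

```python
# benchmark_routing.py
# A sketch of the routing agent (not the exact script from the docs): it checks
# whether each task waiting at the agent stage belongs to the Benchmark Dataset
# and routes it along the "Yes" or "No" pathway. The hashes, stage name, and
# pathway names below are placeholders that must match your own setup.
from encord import EncordUserClient
from encord.workflow import AgentStage

SSH_KEY_PATH = "/path/to/encord-ssh-key"
PRODUCTION_PROJECT_HASH = "<production-project-hash>"
BENCHMARK_DATASET_HASH = "<benchmark-dataset-hash>"
AGENT_STAGE_NAME = "Benchmark routing"  # name of the Task Agent node in the Workflow
YES_PATHWAY = "Yes"                     # routes to Consensus 1
NO_PATHWAY = "No"                       # routes to Annotate 1

user_client = EncordUserClient.create_with_ssh_private_key(
    ssh_private_key_path=SSH_KEY_PATH
)

# Collect the data hashes of every data unit in the Benchmark Dataset.
benchmark_dataset = user_client.get_dataset(BENCHMARK_DATASET_HASH)
benchmark_hashes = {str(row.uid).lower() for row in benchmark_dataset.data_rows}

project = user_client.get_project(PRODUCTION_PROJECT_HASH)
stage = project.workflow.get_stage(name=AGENT_STAGE_NAME, type_=AgentStage)

for task in stage.get_tasks():
    # Benchmark data units proceed to Consensus 1; production data units
    # proceed to Annotate 1.
    if str(task.data_hash).lower() in benchmark_hashes:
        task.proceed(pathway_name=YES_PATHWAY)
    else:
        task.proceed(pathway_name=NO_PATHWAY)
```

Run the script periodically, or trigger it from your own infrastructure, so that newly created tasks waiting at the agent node are routed promptly.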
Create a script for the Review & Refine stage
Create the following sample_evaluation.py script for the Consensus 1 stage in the Production Project. The script compares the annotator’s labels in the Production Project with the ground truth labels established in the Benchmark Project.
All tasks in this stage are rejected and routed to the Archive stage, as they do not constitute production data. The point of the Consensus block is to evaluate annotator performance.
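A minimal sketch of such an evaluation script is shown below. It only compares per-class object counts between the two Projects as a stand-in for a fuller comparison (for example, IoU on geometry or checks on classification answers), and the attribute names it uses (data_hash, ontology_item.name) come from recent SDK versions, so verify them against the SDK reference.

```python
# sample_evaluation.py
# A sketch (not the exact script from the docs) that compares annotator labels
# on benchmark data units in the Production Project against the ground-truth
# labels from the Benchmark Project. The comparison is deliberately simple:
# per-class object counts. A fuller evaluation would also compare geometry
# (e.g. IoU) and classification answers, and would handle Consensus branches
# per annotator.
from collections import Counter

from encord import EncordUserClient

SSH_KEY_PATH = "/path/to/encord-ssh-key"
BENCHMARK_PROJECT_HASH = "<benchmark-project-hash>"
PRODUCTION_PROJECT_HASH = "<production-project-hash>"

user_client = EncordUserClient.create_with_ssh_private_key(
    ssh_private_key_path=SSH_KEY_PATH
)


def labels_by_data_hash(project_hash: str) -> dict:
    """Return {data_hash: Counter of object class names} for a Project."""
    project = user_client.get_project(project_hash)
    result = {}
    for label_row in project.list_label_rows_v2():
        label_row.initialise_labels()
        counts = Counter(
            instance.ontology_item.name
            for instance in label_row.get_object_instances()
        )
        result[label_row.data_hash] = counts
    return result


ground_truth = labels_by_data_hash(BENCHMARK_PROJECT_HASH)
production = labels_by_data_hash(PRODUCTION_PROJECT_HASH)

# Only benchmark data units appear in both Projects, so the intersection is
# exactly the set of tasks routed along the "Yes" pathway.
for data_hash in ground_truth.keys() & production.keys():
    expected, actual = ground_truth[data_hash], production[data_hash]
    status = "MATCH" if expected == actual else "MISMATCH"
    print(f"{data_hash}: {status} (expected {dict(expected)}, got {dict(actual)})")
```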
STEP 5: Create labels
Once your Production Project is set up, annotators can begin labeling the production data. Tasks from both the Benchmark Dataset and the Production Dataset are assigned to annotators. Their performance is then assessed based on how accurately they label the Benchmark tasks.
STEP 6: Evaluate Annotator Performance
Run the sample_evaluation.py script created in STEP 4.5 to evaluate annotator performance.