This guide demonstrates how to create a pre-classification task Agent that uses an OpenAI GPT-4 class model (gpt-4o-mini in the example script) to automatically classify images and route them to specific annotation stages based on their classification. This approach is particularly useful when you have specialized annotators. Images with uncertain classifications are automatically routed to the Archive for further review.

This guide makes the following assumptions:

  • You have an active OpenAI subscription with an API key.
  • Your images are stored in AWS.

STEP 1: Import your Images

1. Set Up AWS

Before you can do anything with the Encord platform and cloud storage, you need to configure your cloud storage to work with Encord. Once the integration between Encord and your cloud storage is complete, you can then use your data in Encord.

In order to integrate with AWS S3, you need to:

  1. Create a permission policy for your resources that allows appropriate access to Encord.
  2. Create a role for Encord and attach the policy so that Encord can access those resources.
  3. Activate Cross-Origin Resource Sharing (CORS), which allows Encord to access those resources from a web browser. A scripted sketch of this step is shown after this list.
  4. Test the integration to make sure it works.
See our AWS integration documentation for a detailed explanation of setting up AWS to work with Encord.
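If you manage your bucket with code, the sketch below shows one way to apply a CORS rule using boto3. The bucket name is a placeholder and the allowed origin is an assumption; confirm the exact values against the AWS integration documentation.

# Sketch: apply a CORS rule so the Encord web app can read objects from the bucket.
# "my-annotation-bucket" is a placeholder, and the allowed origin below is an assumption;
# check Encord's AWS integration documentation for the exact values to use.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_cors(
    Bucket="my-annotation-bucket",
    CORSConfiguration={
        "CORSRules": [
            {
                "AllowedOrigins": ["https://app.encord.com"],
                "AllowedMethods": ["GET"],
                "AllowedHeaders": ["*"],
                "MaxAgeSeconds": 3600,
            }
        ]
    },
)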

2. Create AWS Integration in Encord

Create an S3 bucket to store your files if you haven’t already. Your S3 bucket permissions should be set to block all public access.

In the Integrations section of the Encord platform, click +New integration to create a new integration.

Select AWS S3 at the top of the chooser.

It is essential that you do not close this tab or window until you have completed the whole integration process. If you use the AWS UI for the integration, we advise opening the AWS console in a separate tab.
See our AWS integration documentation for a detailed explanation of how to set up the AWS integration.
3. Create JSON file for import

See our documentation on JSON import files for more comprehensive information.

Create a JSON file for your images using the keys described in the table below (an illustrative example follows the table). imageMetadata is optional unless you are using a Strict client-only access integration.

The title field is optional. If omitted, the image file’s path and name are used as the default title. For example, if the file is located at https://encord-solutions-bucket.s3.eu-west-2.amazonaws.com/path/to/my/bucket/image23.jpg, the title defaults to /path/to/my/bucket/image23.jpg.

Key or Flag      | Required? | Default value
"objectUrl"      | Yes       | -
"title"          | No        | The file’s path + name
"imageMetadata"  | No        | -
"clientMetadata" | No        | -
"createVideo"    | No        | false
imageMetadata must be specified when a Strict client-only access integration is used. In all other cases, imageMetadata is optional, but including it significantly reduces import times.
Keys / Flags that are not required can be omitted from the JSON file entirely.
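For illustration only, the snippet below writes a minimal import JSON for two images. The bucket name and object paths are placeholders; adjust them to your own storage.

# Sketch: write a minimal JSON import file for two images.
# The bucket name and object paths are placeholders.
import json

import_spec = {
    "images": [
        {
            "objectUrl": "https://my-annotation-bucket.s3.eu-west-2.amazonaws.com/pets/image01.jpg",
            "title": "image01.jpg",
        },
        {
            # "title" omitted: the file path and name are used as the default title
            "objectUrl": "https://my-annotation-bucket.s3.eu-west-2.amazonaws.com/pets/image02.jpg"
        },
    ]
}

with open("image_import.json", "w") as f:
    json.dump(import_spec, f, indent=2)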
4. Create a Folder to Store Your Images

All files in Encord must be stored within folders. Therefore, you need to create a folder before uploading any data to Encord. To create a folder:

  1. Navigate to Files under the Index heading in the Encord platform.
  2. Click the + New folder button. A dialog to create a new folder appears.
  3. Give the folder a meaningful name and description.
  4. Click Create to create the folder. The folder is listed in Files.
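If you prefer to script this step, recent versions of the Encord SDK expose a create_storage_folder call on the user client. A minimal sketch, assuming the folder name and description below, might look like this:

# Sketch: create an Index folder with the Encord SDK instead of the UI.
# The folder name and description are examples.
from encord.user_client import EncordUserClient

user_client = EncordUserClient.create_with_ssh_private_key(
    ssh_private_key_path="<private_key_path>"
)

folder = user_client.create_storage_folder(
    "pet-images",
    description="Images for the pre-classification Agent",
)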

5. Upload Your Images to Encord

We recommend uploading smaller batches of data: limit uploads to 100 videos and up to 1000 images at a time. Familiarize yourself with our limits and best practices for data import before uploading data to Encord.
  1. Navigate to the Files section of Index in the Encord platform.
  2. Click + Upload files. A dialog appears.
  3. Select the folder you created in step 4.
  4. Click the Import from private cloud option.
  5. Select the integration you created in step 2 to add your cloud data.
We recommend turning on the Ignore individual file errors feature. This ensures that individual file errors do not abort the whole upload process.
  6. Click Add JSON or CSV files to add a JSON or CSV file specifying the cloud data to be added.

STEP 2: Set Up Your Project

1. Create a Dataset

  1. Click the New dataset button in the Datasets section in Annotate.
  2. Give your Dataset a meaningful title and description. A clear title and description keep your data organized.
Toggle Looking to create a mirrored dataset? if you want to create a Mirrored Dataset.
  3. Click Create dataset to create the Dataset.
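If you would rather create the Dataset programmatically, a sketch along these lines should work; the title and description are examples.

# Sketch: create a Dataset with the Encord SDK instead of the UI.
# The title and description are examples.
from encord.user_client import EncordUserClient
from encord.orm.dataset import StorageLocation

user_client = EncordUserClient.create_with_ssh_private_key(
    ssh_private_key_path="<private_key_path>"
)

dataset_response = user_client.create_dataset(
    dataset_title="Pet images",
    dataset_type=StorageLocation.AWS,
    dataset_description="Images routed by the pre-classification Agent",
)
print(dataset_response.dataset_hash)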

Attach files

We recommend uploading smaller batches of data: limit uploads to 100 videos and up to 1000 images at a time. You have the option to create multiple Datasets, all of which can be linked to a single Project. Familiarize yourself with our limits and best practices for data import before uploading data to Encord.
  1. Navigate to the Datasets section under the Annotate heading.
  2. Click the Dataset you want to attach data to.
  3. Click +Attach existing files.
If the files you want have not been uploaded to Encord yet, click +Upload files to upload new files.
  4. Select the folders containing the files you want to attach to the Dataset. To select individual files, double-click a folder to see its contents, and select the files you want to add.
  5. Click Attach data to attach the selected files to the Dataset.
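As an SDK alternative to the UI flow above, a Dataset can ingest private-cloud files directly from your import JSON. The sketch below assumes the Dataset hash, integration title, and JSON file name used earlier in this guide; all are placeholders.

# Sketch: register private-cloud images with a Dataset using the SDK.
# <dataset_hash>, the integration title, and the JSON file name are placeholders.
from encord.user_client import EncordUserClient

user_client = EncordUserClient.create_with_ssh_private_key(
    ssh_private_key_path="<private_key_path>"
)

dataset = user_client.get_dataset("<dataset_hash>")

# Look up the AWS integration created earlier by its title
integration_id = next(
    integration.id
    for integration in user_client.get_cloud_integrations()
    if integration.title == "My AWS integration"
)

dataset.add_private_data_to_dataset(integration_id, "image_import.json")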

2. Create an Ontology

Learn how to create Ontologies here.

Create a new Ontology that includes:

  1. A radio classification called “Animal” with two options: one called “Cat” and the other called “Dog”.

  2. Any other Objects you want to include in your Ontology.
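If you want to build this Ontology with the SDK instead of the UI, a sketch using the OntologyStructure builder, with the names used in this guide, could look like the following (the Ontology title and description are examples):

# Sketch: build the "Animal" radio classification with the Encord SDK.
# The Ontology title and description are examples.
from encord.user_client import EncordUserClient
from encord.objects import OntologyStructure
from encord.objects.attributes import RadioAttribute

user_client = EncordUserClient.create_with_ssh_private_key(
    ssh_private_key_path="<private_key_path>"
)

structure = OntologyStructure()
animal = structure.add_classification()
radio = animal.add_attribute(RadioAttribute, "Animal")
radio.add_option("Cat")
radio.add_option("Dog")

ontology = user_client.create_ontology(
    title="Pet pre-classification",
    description="Radio classification used by the Agent",
    structure=structure,
)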

3. Create Your Workflow template

Learn how to create Workflow templates here.

Create the following Workflow template by dragging the necessary components onto the canvas. Ensure your Agent node is named “Agent 1” and that it has pathways named “Cat”, “Dog”, and “Other” (with “Other” routing uncertain images to the Archive), because the script in STEP 3 refers to the stage and pathways by these names.

4. Create a Project

  1. In the Encord platform, select Projects under Annotate.
  2. Click the + New annotation project button to create a new Project.
  3. Give the Project a meaningful title and description.

If you are part of an Organization, an optional Project tags drop-down is visible. Project tags are useful for categorizing and finding your Projects. Select as many tags as are relevant for your Project.

  4. Click the Attach ontology button.
  5. Select the Ontology you created previously from the list using the Select button.
  6. Click OK to attach the Ontology to the Project.
  7. Click the Attach datasets button.
  8. Select the Dataset you created previously from the list using the Attach button.
  9. Click OK to attach the Dataset(s) to the Project.
  10. Click the Load from template button to use a Workflow template.
  11. Select the template you want to use and click Load template.
  12. Click Create project to finish creating the Project.

STEP 3: Configure Your Agent

  1. Create a Python file called imagePreClassification.py.

  2. Paste the following script into imagePreClassification.py replacing:

    • <your_openai_api_key> with your OpenAI API key.
    • <private_key_path> with the path to your private key.
    • <project_hash> with the hash of the Project you created in STEP 2.
Pre-Classification for Images Example
# Import dependencies
from encord.user_client import EncordUserClient
from encord.workflow import AgentStage
from encord.objects import Classification, Option
from openai import OpenAI
import base64
import requests
import json

openai_client = OpenAI(api_key="<your_openai_api_key>")

def get_classification_from_the_model(media_content):
    """
    Example function that passes media to OpenAI's ChatGPT API along with the prompt
    and parses the result.
    """
    prompt = """
    You are an image analysis expert. You're working on a project that involves annotating images of different pets.
    Your task is to assign one of the following tags to the image: "Cat", "Dog", "Other".

    Reply in JSON with the following structure: { "classification": "Cat" | "Dog" | "Other" }
    """

    completion = openai_client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": prompt},
            {
                "role": "user",
                "content": [
                    {
                        "type": "image_url",
                        "image_url": {
                            "url": f"data:image/jpeg;base64,{media_content}",
                            "detail": "auto",
                        },
                    }
                ],
            },
        ],
        response_format={"type": "json_object"},
        max_tokens=1000,
    )

    raw_text_completion = completion.choices[0].message.content
    try:
        parsed_result = json.loads(raw_text_completion)
        return parsed_result["classification"].lower()
    except Exception as e:
        print(f"Failed to process the model response: {e}")
        return None


# Authenticate using the path to your private key
user_client = EncordUserClient.create_with_ssh_private_key(
    ssh_private_key_path="<private_key_path>"
)

# Specify the Project that contains the Task agent. Replace <project_hash> with the hash of your Project
project = user_client.get_project("<project_hash>")

radio_classification = project.ontology_structure.get_child_by_title(
    title="Animal",
    type_=Classification,
)

cat_option = radio_classification.get_child_by_title(
    title="Cat", type_=Option
)

dog_option = radio_classification.get_child_by_title(
    title="Dog", type_=Option
)

# Specify the Task Agent
agent_stage = project.workflow.get_stage(name="Agent 1", type_=AgentStage)

for task in agent_stage.get_tasks():
    # Got a task for the following data unit
    print(f"{task.data_hash} -> {task.data_title}")

    # Getting a label row for the data unit
    label_row = project.list_label_rows_v2(data_hashes=[task.data_hash])[0]
    label_row.initialise_labels(include_signed_url=True)

    # Downloading the media:
    media_response = requests.get(label_row.data_link)
    media_content = base64.b64encode(media_response.content).decode("utf-8")

    # Now we can send the media to OpenAI:
    model_response = get_classification_from_the_model(media_content)

    # Map the model response to the appropriate answer and pathway.
    # Keys are lowercase because get_classification_from_the_model() lowercases the response.
    classification_mapping = {
        "cat": (cat_option, "Cat"),
        "dog": (dog_option, "Dog"),
    }

    # Create a classification instance if the response is either cat or dog
    if model_response in classification_mapping:
        answer_option, pathway = classification_mapping[model_response]

        classification_instance = radio_classification.create_instance()
        classification_instance.set_answer(answer=answer_option)
        label_row.add_classification_instance(classification_instance)
        label_row.save()  # Persist the new classification on the label row

        task.proceed(pathway_name=pathway)
    else:
        task.proceed(pathway_name="Other")
  3. Save imagePreClassification.py.

STEP 4: Run your Agent

Run imagePreClassification.py.

The script processes every image whose task is currently in the Agent 1 stage. If additional images are added later, the script must be re-run to include them.
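If images arrive continuously, one option is to wrap the per-task logic in a simple polling loop instead of re-running the script by hand. The sketch below assumes a hypothetical process_task() helper containing the body of the loop from imagePreClassification.py; the 60-second interval is arbitrary.

# Sketch: poll the Agent stage periodically instead of re-running the script manually.
# process_task() is a hypothetical helper holding the per-task logic from imagePreClassification.py;
# agent_stage is the AgentStage object defined in that script.
import time

while True:
    for task in agent_stage.get_tasks():
        process_task(task)
    time.sleep(60)  # arbitrary polling interval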

STEP 5: Start Labeling

Now your annotators can start annotating images.

See our guide on how to label here.

Flow Diagram