
XNAT OHIF Plugin Installation and Administration

Deploying the Pre-built plugin

  1. Copy the plugin jar (download here) to the plugins directory of your XNAT installation. The location of the plugins folder varies based on how and where you have installed your XNAT. If using xnat-docker-compose, the plugins folder is located at xnat-data/home/plugins relative to the root directory.
  2. Restart your Tomcat server with sudo service tomcat7 restart, or docker-compose restart xnat-web if using xnat-docker-compose (see the sketch below).
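
For example, a minimal sketch of both steps on an xnat-docker-compose deployment (the jar filename and repository path below are placeholders):

BASH
# Copy the plugin jar into the XNAT plugins folder (filename is a placeholder)
cp ohif-viewer-plugin.jar /path/to/xnat-docker-compose/xnat-data/home/plugins/
# Restart the XNAT web container so Tomcat picks up the plugin
cd /path/to/xnat-docker-compose
docker-compose restart xnat-web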


Initialising the viewer in a populated database

In the likely event you are installing this plugin on an XNAT with an already populated database, all existing JSON metadata will be invalid due to changes in the metadata required by the viewer since plugin version 2.1. The outdated metadata for a session will be automatically regenerated the first time a user views the session, but this will involve a short delay while the metadata is created.

An admin may call one of the REST commands to initiate a background regeneration of the required metadata. The REST commands have various levels of granularity, ranging from a single session, through a single subject or project, to the entire XNAT instance. This allows the admin to prioritise metadata regeneration for their most used data. For large data volumes, the REST command may appear to terminate with a timeout, e.g. a 504 return value. If this is observed, do not resubmit the command: the original server-side processing will continue, and it can be monitored in the plugin's log file ohifviewer.log in XNAT's normal logging directory. We recommend running only one metadata regeneration command at a time to avoid I/O contention.
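
As an illustration only (the exact endpoint paths depend on the plugin version, so check the OHIF Viewer entries in XNAT's Swagger page; the project-level path below is an assumption):

BASH
# Assumed project-level regeneration endpoint - verify the exact path via Swagger
curl -u $ADMIN_USER:$ADMIN_PASS -X POST "$XNAT_URL/xapi/viewer/projects/$PROJECT_NAME"
# Monitor progress in the plugin's log file in XNAT's logging directory
tail -f /path/to/xnat/logs/ohifviewer.log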


Note: the plugin fatjar includes EtherJ 1.1.3, which could cause clashes with other fatjars.


NVIDIA AI-Assisted Annotation (AIAA) server set up

From v3.0, the XNAT OHIF Viewer integrates the NVIDIA AI-Assisted Annotation tools (see the Using the XNAT OHIF Viewer section). In order to use these tools, an AIAA server needs to be set up and its URL ($AIAA_URL:LOCAL_PORT, where AIAA_URL is typically the floating IP assigned to the machine used to deploy the AIAA server) specified in XNAT by the site administrator.

This is a quick guide for site administrators to set up an AIAA server on an appropriate machine using Clara Train SDK v4.0. This version runs as two independent Docker containers, one for AIAA and one for Triton (the NVIDIA backend inference engine), via docker-compose (older versions used a single container, see below). Please make sure your system fulfils the requirements specified here: https://docs.nvidia.com/clara/tlt-mi/nvmidl/installation.html#installation and that you have installed Docker and docker-compose (installation instructions for compose: https://docs.docker.com/compose/install/). It is possible to run the AIAA server without the Triton engine and without docker-compose, but performance might be worse, especially for the pre-trained models provided by NVIDIA - please see the official documentation, or get in contact if you would like to work without the Triton engine.

The setup process requires three steps: i) pull the AIAA Docker image, ii) start the AIAA and Triton containers with docker-compose, and iii) upload models to the server. More information can be found in the official NVIDIA documentation, links below.

1) Install the AIAA Docker image. Note: check the latest version available from NVIDIA; at the time of writing it is v4.0.

BASH
docker pull nvcr.io/nvidia/clara-train-sdk:v4.0

It is recommended to create a directory that will be mounted into the container (it needs to be writable by AIAA, which runs as a non-root user), so you can easily access models, logs, etc.

BASH
mkdir aiaa_workspace
chmod 777 aiaa_workspace


2) Start the server. This will be done by running the AIAA and Triton containers using docker-compose. 

First, let's create a docker-compose.yml file that will contain the instructions for docker-compose:

YAML
version: "3.8"
services:
  clara-train-sdk:
    image: nvcr.io/nvidia/clara-train-sdk:v4.0
    command: >
      sh -c "chmod 777 /workspace &&
        start_aiaa.sh --workspace /workspace --engine TRITON --triton_ip tritonserver \
          --triton_proto ${TRITON_PROTO} \
          --triton_start_timeout ${TRITON_START_TIMEOUT} \
          --triton_model_timeout ${TRITON_MODEL_TIMEOUT} \
          --triton_verbose ${TRITON_VERBOSE}"
    ports:
      - "${AIAA_PORT}:5000"
    volumes:
      - ${AIAA_WORKSPACE}:/workspace
    networks:
      - aiaa
    shm_size: 1gb
    ulimits:
      memlock: -1
      stack: 67108864
    depends_on:
      - tritonserver
    logging:
      driver: json-file
  tritonserver:
    image: nvcr.io/nvidia/tritonserver:21.02-py3
    command: >
      sh -c "chmod 777 /triton_models &&
        /opt/tritonserver/bin/tritonserver \
          --model-store /triton_models \
          --model-control-mode="poll" \
          --repository-poll-secs=5 \
          --log-verbose ${TRITON_VERBOSE}"
    volumes:
      - ${AIAA_WORKSPACE}/triton_models:/triton_models
    networks:
      - aiaa
    shm_size: 1gb
    ulimits:
      memlock: -1
      stack: 67108864
    restart: unless-stopped
    logging:
      driver: json-file
    deploy:
      resources:
        reservations:
          devices:
          - driver: nvidia
            device_ids: ['0']
            capabilities: [gpu]
networks:
  aiaa:

Then, in a new environment file docker-compose.env, add the following. Change the workspace path if you created a directory with a different name, and change AIAA_PORT if you want to use a different port on your system:

BASH
AIAA_PORT=5000
AIAA_WORKSPACE=/path/to/aiaa_workspace
TRITON_START_TIMEOUT=120
TRITON_MODEL_TIMEOUT=30
TRITON_PROTO=grpc
TRITON_VERBOSE=0

Now let's run docker-compose with the environment file we just created:

BASH
sudo docker-compose --env-file docker-compose.env -p aiaa_triton up --remove-orphans -d

At this point, you should be able to access the AIAA server. For example, in a web browser you can go to http://$AIAA_URL:LOCAL_PORT (where LOCAL_PORT is the AIAA_PORT set in docker-compose.env).
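
To check that both containers came up, you can use standard docker-compose commands with the same project name and env file, for example:

BASH
# List the AIAA and Triton containers in the aiaa_triton compose project
sudo docker-compose --env-file docker-compose.env -p aiaa_triton ps
# Follow the logs of both services if the server is not reachable
sudo docker-compose --env-file docker-compose.env -p aiaa_triton logs -f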


3) Upload models. Once the server is up and running (check, for example, by accessing $AIAA_URL:LOCAL_PORT from a web browser), you can upload models to it. Only the models that have been uploaded to the AIAA server in use will be visible to and accessible by the OHIF viewer. In addition to the models available in the NVIDIA NGC catalogue, you can upload your own models. Please check the official NVIDIA documentation linked below for examples and tutorials.

        a) From NGC:

Check the links below for the list of available models. Once you have chosen a model to upload (e.g. clara_pt_spleen_ct_segmentation), you can upload it by simply using:

BASH
curl -X PUT "http://127.0.0.1:$LOCAL_PORT/admin/model/clara_pt_spleen_ct_segmentation" -H "accept: application/json" -H "Content-Type: application/json"  -d '{"path":"nvidia/med/clara_pt_spleen_ct_segmentation","version":"1"}'

         b) Locally: 

To download models locally from the NGC registry you need to install its command-line interface (CLI). An example of how to do so on a Linux machine (from this link) is as follows. For more information check the NVIDIA documentation in the links below.

BASH
wget -O ngccli_cat_linux.zip https://ngc.nvidia.com/downloads/ngccli_cat_linux.zip && unzip -o ngccli_cat_linux.zip && chmod u+x ngc
md5sum -c ngc.md5
echo "export PATH=\"\$PATH:$(pwd)\"" >> ~/.bash_profile && source ~/.bash_profile
ngc config set

Now you can download a model locally and upload it by pointing to its local directory (model_name):

BASH
ngc registry model download-version nvidia/med/clara_pt_liver_and_tumor_ct_segmentation:1

curl -X PUT "http://127.0.0.1:$LOCAL_PORT/admin/model/clara_pt_liver_and_tumor_ct_segmentation" -F "config=@clara_pt_liver_and_tumor_ct_segmentation_v1/config/config_aiaa.json;type=application/json" -F "data=@clara_pt_liver_and_tumor_ct_segmentation_v1/models/model.ts"

Note that from Clara v4.0, a TorchScript version of the model is needed, therefore model.ts should be used. 

      c) Check the models are correctly uploaded, either by visiting $AIAA_URL:LOCAL_PORT/logs in a web browser or by running `curl http://$AIAA_URL:LOCAL_PORT/v1/models`.


4) Tell the OHIF viewer where it can find the AIAA server. Once all the above checks have been completed, you need to register the server with XNAT, so that the OHIF viewer knows where to look. This is a task that can only be done by the XNAT administrator. Either a single AIAA server can be set up for the whole XNAT installation, or the AIAA server address can be set for individual projects.

This is achieved using the ohif-aiaa REST xapi, which can be accessed, as usual, via the command line:

BASH
curl -u $ADMIN_USER:$ADMIN_PASS -X PUT "$XNAT_URL/xapi/ohifaiaa/projects/$PROJECT_NAME/servers" -H "accept: */*" -H "Content-Type: application/json" -d "[ \"$AIAA_URL:$LOCAL_PORT\"]"

or, interactively, via XNAT's Swagger interface, which you can get to from the Administer → Site Administration menu options, followed by clicking on Miscellaneous in the tab bar on the left hand side.
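
To confirm which servers have been registered for a project, the same endpoint can be queried; this is a sketch assuming the xapi also exposes a GET handler on that path (check the Swagger page to confirm):

BASH
# Assumed GET on the same servers endpoint - confirm via the Swagger page
curl -u $ADMIN_USER:$ADMIN_PASS -X GET "$XNAT_URL/xapi/ohifaiaa/projects/$PROJECT_NAME/servers" -H "accept: application/json"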


See more documentation from NVIDIA: 

AIAA documentation: https://docs.nvidia.com/clara/tlt-mi/aiaa/index.html

Clara SDK installation: https://docs.nvidia.com/clara/clara-train-sdk/aiaa/quickstart.html#installation

Loading models: https://docs.nvidia.com/clara/clara-train-sdk/aiaa/loading_models.html

NGC Registry CLI: https://docs.nvidia.com/dgx/ngc-registry-cli-user-guide/index.html



DEPRECATED: Clara v3.1

The setup process requires three steps: i) install the Docker image, ii) start the container and iii) upload models to the server. More information can be found in the official NVIDIA documentation.

Before starting, check system requirements: https://docs.nvidia.com/clara/tlt-mi/nvmidl/installation.html#installation

1) Install the Docker image. Note: check the latest version available from the link above. At the time of writing, the most recent version is 3.1.01.

BASH
docker pull nvcr.io/nvidia/clara-train-sdk:v3.1.01


2) Start the server. There are different ways to start it; for example, add the following to a bash script and run it:

BASH
export NVIDIA_RUNTIME="--gpus all"
export OPTIONS="--shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864"
export LOCAL_WORKSPACE=/local/path/
export REMOTE_WORKSPACE=/aiaa-experiments
export LOCAL_PORT=5000
export REMOTE_PORT=80
export DOCKER_IMAGE="nvcr.io/nvidia/clara-train-sdk:v3.1.01"

docker run $NVIDIA_RUNTIME $OPTIONS -it -d --name aiaa-server --rm -p $LOCAL_PORT:$REMOTE_PORT -v $LOCAL_WORKSPACE:$REMOTE_WORKSPACE $DOCKER_IMAGE start_aas.sh --workspace $REMOTE_WORKSPACE

The command above will run the Docker container and start the AIAA server (via the script start_aas.sh) directly, returning the ID of the Docker container. Alternatively, you can run the container interactively (in bash) and manually start the server (e.g. start_aas.sh --debug 1 &), as sketched below.

The option suggested above will also create the LOCAL_WORKSPACE folder to access and persistently store the workspace data (models, sessions, inferences, etc.) 
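
A minimal sketch of that interactive alternative, reusing the environment variables above (the options mirror the detached command and may need adjusting for your setup):

BASH
# Start an interactive shell in the container instead of launching the server directly
docker run $NVIDIA_RUNTIME $OPTIONS -it --rm --name aiaa-server -p $LOCAL_PORT:$REMOTE_PORT -v $LOCAL_WORKSPACE:$REMOTE_WORKSPACE $DOCKER_IMAGE /bin/bash
# Then, inside the container, start the AIAA server manually with debug logging
start_aas.sh --workspace $REMOTE_WORKSPACE --debug 1 &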

At this point, you should be able to access the AIAA server. For example, in a web browser you can go to http://$AIAA_URL:LOCAL_PORT.



