XNAT OHIF Plugin Installation and Administration

Deploying the Pre-built plugin

  1. Copy the plugin jar (download here) to the plugins directory of your XNAT installation. The location of the plugins folder varies based on how and where you have installed your XNAT. If you are using xnat-docker-compose, the plugins folder is located at xnat-data/home/plugins relative to the root directory.
  2. Restart your Tomcat server with sudo service tomcat7 restart, or docker-compose restart xnat-web if using xnat-docker-compose. An example of both steps is shown below.
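For example, for an xnat-docker-compose installation the two steps might look like this (the jar filename below is illustrative; use the name of the file you actually downloaded):

cp ohif-viewer-plugin.jar xnat-data/home/plugins/
docker-compose restart xnat-web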

Initialising the viewer in a populated database

In the likely event that you are installing this plugin on an XNAT instance with an already populated database, all existing JSON metadata will be invalid, because the metadata required by the viewer has changed since plugin version 2.1. The outdated metadata for a session will be regenerated automatically the first time a user views that session, but this involves a short delay while the metadata is created.

An admin may call one of the REST commands to initiate a background regeneration of the required metadata. The commands offer various levels of granularity, ranging from a single session, through a single subject or a single project, up to the entire XNAT instance; this allows the admin to prioritise metadata regeneration for their most used data. In the case of large data volumes, the REST command may appear to terminate with a timeout, e.g. a 504 return value. If this is observed, do not resubmit the command: the original server-side processing will continue, and it can be monitored in the plugin's log file ohifviewer.log in XNAT's normal logging directory. We recommend only running one metadata regeneration command at a time to avoid IO contention. A sketch of such a call is shown below.
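As a sketch, a project-level regeneration call could look like the following; the exact xapi paths vary between plugin versions, so confirm them in your instance's Swagger interface before use:

curl -u $ADMIN_USER:$ADMIN_PASS -X POST "$XNAT_URL/xapi/viewer/projects/$PROJECT_ID"

If the call times out, monitor ohifviewer.log rather than resubmitting.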


Note: the plugin fatjar includes EtherJ 1.1.3, which could cause clashes with other fatjars.

NVIDIA AI-Assisted Annotation (AIAA) server setup

The XNAT OHIF Viewer v3.0 integrates the NVIDIA AI-Assisted Annotation tools (see the Using the XNAT OHIF Viewer section). In order to use these tools, an AIAA server needs to be set up, and its URL ($AIAA_URL:$LOCAL_PORT, where AIAA_URL is typically the floating IP assigned to the machine used to deploy the AIAA server) specified in XNAT by the site administrator.

This is a quick guide for site administrators to set up an AIAA server on an appropriate machine; it has been tested with v3.1 of the Clara Train SDK. The setup process requires four steps: i) install the Docker image, ii) start the container, iii) upload models to the server, and iv) register the server address with XNAT. More information can be found in the official NVIDIA documentation, linked below.

Before starting, check system requirements: https://docs.nvidia.com/clara/tlt-mi/nvmidl/installation.html#installation

1) Install the Docker image. Note: check the latest version available from the link above. At the time of writing, the most recent version is 3.1.01:

docker pull nvcr.io/nvidia/clara-train-sdk:v3.1.01


2) Start the server. There are different ways in which it can be started; for example, put the following in a bash script and run it:

export NVIDIA_RUNTIME="--gpus all"
export OPTIONS="--shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864"
export LOCAL_WORKSPACE=/local/path/
export REMOTE_WORKSPACE=/aiaa-experiments
export LOCAL_PORT=5000
export REMOTE_PORT=80
export DOCKER_IMAGE="nvcr.io/nvidia/clara-train-sdk:v3.1.01"

docker run $NVIDIA_RUNTIME $OPTIONS -it -d --name aiaa-server --rm -p $LOCAL_PORT:$REMOTE_PORT -v $LOCAL_WORKSPACE:$REMOTE_WORKSPACE $DOCKER_IMAGE start_aas.sh --workspace $REMOTE_WORKSPACE

The command above runs the Docker container and starts the AIAA server directly (via the script start_aas.sh), returning the ID of the Docker container. Alternatively, you can run the container interactively (in bash) and start the server manually (e.g. start_aas.sh --debug 1 &), as sketched below.
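A minimal sketch of the interactive alternative, reusing the environment variables defined above:

docker run $NVIDIA_RUNTIME $OPTIONS -it --rm --name aiaa-server -p $LOCAL_PORT:$REMOTE_PORT -v $LOCAL_WORKSPACE:$REMOTE_WORKSPACE $DOCKER_IMAGE /bin/bash
# then, inside the container:
start_aas.sh --debug 1 &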

The command suggested above will also create the LOCAL_WORKSPACE folder, which is used to access and persistently store the workspace data (models, sessions, inferences, etc.).

At this point, you should be able to access the AIAA server; for example, in a web browser you can go to http://$AIAA_URL:$LOCAL_PORT to see the server's status page.
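You can also check reachability from the command line; this generic curl probe should print an HTTP status code (e.g. 200) if the server is responding:

curl -s -o /dev/null -w "%{http_code}\n" http://$AIAA_URL:$LOCAL_PORT/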


3) Upload models. Once the server is up and running (check, for example, by accessing $AIAA_URL:$LOCAL_PORT from a web browser), you can upload models to it. Only the models that have been uploaded to the AIAA server in use will be visible to, and accessible from, the OHIF viewer. In addition to the models available in the NVIDIA NGC catalogue, you can upload your own. Please check the official NVIDIA documentation linked below for examples and tutorials.

        a) From NGC: check the links below to find the list of available models. Once you have chosen a model to upload (e.g. clara_ct_seg_spleen_amp), you can upload it by simply using:

curl -X PUT "http://127.0.0.1:$LOCAL_PORT/admin/model/clara_ct_seg_spleen_amp" -H "accept: application/json" -H "Content-Type: application/json" -d '{"path":"nvidia/med/clara_ct_seg_spleen_amp","version":"1"}'

         (Note that our example uses 5000 as $LOCAL_PORT.)

         b) Locally: once you have downloaded a model locally (see documentation below), you can upload it by pointing to its local directory (model_name):

model_name=clara_ct_seg_spleen_amp
ngc registry model download-version nvidia/med/$model_name:1
curl -X PUT "http://127.0.0.1:$LOCAL_PORT/admin/model/$model_name" -F "config=@$model_name/config/config_aiaa.json" -F "data=@$model_name/models/model.trt.pb"

      (Check that the name of the downloaded directory matches $model_name; otherwise, edit the last command above accordingly.)

     Note that in order to use the NGC registry as above you need to install its command-line interface (CLI). An example of doing so on a Linux machine (from this link) is as follows; for more information, check the NVIDIA documentation in the links below.

wget -O ngccli_cat_linux.zip https://ngc.nvidia.com/downloads/ngccli_cat_linux.zip && unzip -o ngccli_cat_linux.zip && chmod u+x ngc
md5sum -c ngc.md5
echo "export PATH=\"\$PATH:$(pwd)\"" >> ~/.bash_profile && source ~/.bash_profile
ngc config set


      c) Check the models are correctly uploaded, either by visiting $AIAA_URL:$LOCAL_PORT/logs in a web browser or by running `curl http://$AIAA_URL:$LOCAL_PORT/v1/models`.
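If python3 is available, piping the model listing through its built-in JSON formatter makes the output easier to read:

curl -s http://$AIAA_URL:$LOCAL_PORT/v1/models | python3 -m json.tool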


4) Tell the OHIF viewer where it can find the AIAA server. Once all the above checks have been completed, you need to register the server with XNAT, so that the OHIF viewer knows where to look. This is a task that can only be done by the XNAT administrator. Either a single AIAA server can be set up for the whole XNAT installation, or the AIAA server address can be set for individual projects.

  This is achieved using the ohifaiaa REST xapi, which can be accessed, as usual, via the command line:

curl -u $ADMIN_USER:$ADMIN_PASS -X PUT "$XNAT_URL/xapi/ohifaiaa/projects/$PROJECT_NAME/servers" -H "accept: */*" -H "Content-Type: application/json" -d "[ \"$AIAA_URL:$LOCAL_PORT\" ]"

or, interactively, via XNAT's Swagger interface, which you can reach from the Administer → Site Administration menu options, followed by clicking on Miscellaneous in the tab bar on the left-hand side.
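To verify the setting, you can read the address back; this assumes the same xapi path also supports GET (confirm this in the Swagger interface):

curl -u $ADMIN_USER:$ADMIN_PASS "$XNAT_URL/xapi/ohifaiaa/projects/$PROJECT_NAME/servers"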


See more documentation from NVIDIA: 

AIAA documentation: https://docs.nvidia.com/clara/tlt-mi/aiaa/index.html

Clara SDK installation: https://docs.nvidia.com/clara/tlt-mi/nvmidl/installation.html#installation

Loading models: https://docs.nvidia.com/clara/tlt-mi/aiaa/loading_models.html

NGC Registry CLI: https://docs.nvidia.com/dgx/ngc-registry-cli-user-guide/index.html
