
 Read further if:

  • you develop or run image processing locally in bash;
  • you want to use XNAT as a data source and/or remote archive;
  • you want to automate processing of multiple XNAT sessions and report your results in spreadsheets;
  • you want to focus on local script development and minimize the programming effort needed to interact with XNAT.

Use case: compute BET whole brain volume for all MPRAGE scans in an MRI project in XNAT and save statistics in a spreadsheet.

1. Write a bash script that computes the brain volume of a DICOM scan (assuming FSL v4+ is on the path).

brain_volume.sh
#!/bin/bash
in=$1 #input directory containing one DICOM scan
#convert DICOM to NIfTI (dcm2nii writes .nii files into $in)
dcm2nii "$in"
cp "$in"/[0-9]*.nii head.nii
if [ ! -f "head.nii" ]; then exit 1; fi
#skull-strip and write a binary brain mask (brain_mask)
bet head brain -m
#fslstats -V prints "<voxels> <mm3>"; keep the volume in mm3.
vol=(`fslstats brain_mask -V`); vol=${vol[1]}
#report the volume in mm3 to console.
echo $vol
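The array trick on the `fslstats` line works because `fslstats <image> -V` prints two numbers on one line: the voxel count and the volume in mm3. A minimal pure-bash sketch of that parsing, with `fslstats` mocked so it runs without FSL (the mocked output value is hypothetical):

```shell
#!/bin/bash
#mock of `fslstats brain_mask -V`, which prints "<voxels> <mm3>".
fslstats() { echo "1550000 1550000.000000"; }

#same pattern as in brain_volume.sh: capture both fields into an
#array, then keep only the second field (the volume in mm3).
vol=(`fslstats brain_mask -V`); vol=${vol[1]}
echo "$vol"   # prints 1550000.000000
```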

2. Create a spreadsheet listing all scans that you want to process. In many cases you will already have such a spreadsheet; if not, you can start with one generated by XNAT:


Now, modify the spreadsheet so that column names contain no spaces, empty and irrelevant cells are removed, and the "Scans" column is renamed to "T1" and filled with the scan numbers to process for each session. If Excel strips leading zeros from any fields (e.g. subject labels like 001), you'll need to edit the csv outside of Excel to restore them.
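For illustration, a cleaned-up test.csv might look like this (the values are hypothetical; the column names match the arrays used by the script in step 3):

```
MR_ID,Subject,Scans
001_obscured,001,SPGR
002_obscured,002,SPGR
003_obscured,003,SPGR
```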



3. Now we are ready to adapt brain_volume.sh to process all sessions listed in test.csv.

brain_volume.xt
#!/bin/bash
#this line will convert test.csv to a bash source test.params that assigns corresponding arrays.
rcsv test.csv `pwd`/test.params; source `pwd`/test.params; rm test.params
#as a result, the following arrays are assigned:
#MR_ID=("001_obscured" "002_obscured" ...
#Subject=("001" "002" "003" ...
#Scans=("SPGR" "SPGR" "SPGR" ...
#now, iterate over all MR sessions.
set -x
for ((i=0; i<${#MR_ID[*]}; i++)); do
    #this will create a directory named ${MR_ID[i]} and cd to it.
    set_context ${Subject[i]} ${MR_ID[i]}
    #load the specified scan from XNAT.
    load_type "${Scans[i]}"
    #find the scan directory that was created by load_type.
    in=`ls -d study*`
    #compute brain volume, as in brain_volume.sh
    dcm2nii $in
    #select the converted NIfTI file.
    cp $in/[0-9]*[0-9].nii head.nii
    if [ ! -f "head.nii" ]; then exit 1; fi
    bet head brain -m
    vol=(`fslstats brain_mask -V`); vol=${vol[1]}
    #except now we use a special command to save a variable with this offline session.
    save_vars vol
    #process only the first session while testing; comment this out for the full run.
    break
done
#return to the starting directory.
set_context null
#this will go through all sessions and generate a spreadsheet summarizing variables saved with "save_vars" command.
summary mpr_vol
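For orientation, the test.params file that rcsv generates is just a bash source file assigning one array per CSV column, so the loop above walks the CSV rows by indexing all arrays in parallel. A self-contained sketch (array contents taken from the comments above; rcsv itself is not needed to run it):

```shell
#!/bin/bash
#what a generated test.params would contain (one array per column):
MR_ID=("001_obscured" "002_obscured" "003_obscured")
Subject=("001" "002" "003")
Scans=("SPGR" "SPGR" "SPGR")

#row i of the CSV is (${MR_ID[i]}, ${Subject[i]}, ${Scans[i]}).
for ((i=0; i<${#MR_ID[*]}; i++)); do
    echo "session=${MR_ID[i]} subject=${Subject[i]} scan=${Scans[i]}"
done
```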

  

4. Test the batch processing.

We will run the developed script on a single session to see if it works. For that, make sure the break statement at the end of the loop is uncommented. Now we are ready to run the script:

running brain_volume.xt
xtolm -sr https://central.xnat.org -u your_xnat_central_user -pr surfmask_smpl -o brain_volume.xt
cat mpr_vol.csv

After making sure that mpr_vol.csv contains correct output, comment out the break statement and rerun brain_volume.xt. Now the BET calculations for all sessions are saved in mpr_vol.csv, ready for analysis (note that Excel will strip the leading zeros from the subject labels again if you open and resave the file).


Xtolm Commands:

Online: mostly commands to load/save scans, resource files/dirs, and metadata (stored as resources).

        load_scans <id1>[,...]  load scan(s) to DICOM by ID to the processing dir

        load_type <type>        load DICOM scan(s) of a given type to the processing dir

        load_dir <dir1>[,...]           load resource dir(s) to the processing dir

        load_file <fil1> [fil2..]       load resource file(s) to the processing dir

        load_workspace                  load context variables from XNAT session (stored under unique analysis ID)

        save_workspace                  save context variables to XNAT session (stored under unique analysis ID)

        save_dir <dir>                  write resource dir to XNAT (overwrites existing)

Offline: the purpose of these commands is to enable repeated analysis. Each study is loaded into a separate directory and processed within a 'processing context': the XNAT subject and experiment label. Each study directory can contain DICOM files, resource dirs, and configuration files that store the context information. The workspace is defined as all user-defined variables (key-value pairs) that should be saved.

        set_context <subject> <session> set current context of subject and session

        save_vars [var1] [,..]          save one or more variables to current context

        summary <label>                 generate a comma-separated summary of existing contexts

        help                            list console commands

        quit                            quit interactive console

Command Details

set_context: move to another processing dir; load/initialize workspace variables
save_vars: save local variables (name+value) to the current context (an offline file within the processing dir, or online in the XNAT experiment)
load/save_workspace: sync context variables with the XNAT session (stored in a designated resource)
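The on-disk format xtolm uses is not specified here, but the save_vars/summary idea — persist name=value pairs in each processing dir, then aggregate one variable across all dirs into a CSV — can be sketched generically (the function and file names below are hypothetical, not xtolm's):

```shell
#!/bin/bash
#hypothetical sketch: each session dir gets a vars file; the summary
#step aggregates a named variable across all session dirs into a CSV.
save_vars_demo() {  # usage: save_vars_demo <dir> <name> <value>
    mkdir -p "$1" && echo "$2=$3" >> "$1/.vars"
}
summary_demo() {    # usage: summary_demo <name>  -> CSV on stdout
    echo "session,$1"
    for d in */; do
        v=$(grep "^$1=" "$d/.vars" 2>/dev/null | cut -d= -f2)
        echo "${d%/},$v"
    done
}

cd "$(mktemp -d)"
save_vars_demo 001_obscured vol 1550000
save_vars_demo 002_obscured vol 1487000
summary_demo vol
```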

Script execution mode: the user writes a script, which xtolm interprets, converts to a bash script, and executes.

xtolm -r <script file> [...]

Interactive mode: line-by-line command input.
Offline mode: all code that interacts with XNAT is ignored.

Useful capabilities

rcsv: converts a csv file into a set of bash arrays; can be used in xtolm to change the session context automatically, load specific scans/dirs, etc.
summary: creates a summary spreadsheet that aggregates all contexts across the batch processing session.
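The exact conversion rcsv performs is not documented here, but the idea can be sketched in a few lines of bash/awk: read the header row for array names, then append one quoted element per data row (a simplification that assumes no commas inside fields; real rcsv may handle more):

```shell
#!/bin/bash
#minimal rcsv-like converter: csv -> bash array assignments
#(assumes no quoted/embedded commas in fields).
csv_to_params() {
    awk -F, 'NR==1 { n=NF; for (i=1; i<=n; i++) name[i]=$i; next }
             { for (i=1; i<=n; i++) val[i]=val[i] " \"" $i "\"" }
             END { for (i=1; i<=n; i++) print name[i] "=(" val[i] ")" }' "$1"
}

printf 'MR_ID,Subject,Scans\n001_obscured,001,SPGR\n002_obscured,002,SPGR\n' > /tmp/test.csv
csv_to_params /tmp/test.csv > /tmp/test.params
source /tmp/test.params
echo "${#MR_ID[@]} sessions, first subject ${Subject[0]}"   # prints: 2 sessions, first subject 001
```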
xtolm design features:

  • flexible coupling with an online XNAT server. An analysis can be saved to / loaded from XNAT using a unique analysis ID, but completely offline processing is also possible (-o mode) when repeated or advanced analyses are required; context is still saved in session configuration files. The same script runs unchanged in offline mode, with all local<->XNAT transactions ignored.
  • repeatability of analysis, helped by re-using the saved session context. Modifying the script and re-running is easy.
  • minimal local data structure requirements, and ideally no knowledge of XNAT programming is required.
  • advantages: those of bash. It is widely used, and an existing bash script can be modified to work with xtolm.
  • disadvantages: those of bash, e.g. debugging takes getting used to; associative arrays require bash 4.3+.
