XW2021 Town Hall: XNAT Container Services

Time: Day 2, 9:30 – 10:00 am CDT

Host: Matt Kelsey, Washington University School of Medicine

Panelists: Sarah Keefe (Washington University School of Medicine), Kate Alpert (Radiologics), Baxter Rogers (Vanderbilt University), and Ryan Sullivan (University of Sydney)

Other Participants: Pradeep Reddy Raamana (University of Pittsburgh), Isabel Restrepo (Brown University), David Cash (University College London), Simon Doran (Institute for Cancer Research, London), and Alexander Bartnik (SUNY Buffalo)


Questions and Answers

When possible and as time permitted, questions that were brought up in the Q&A module during each talk were addressed in real time by the presenter. Other responses were entered in the Q&A interface itself. Those written responses are included below.

Are there any training resources ("Hello World" examples, FAQs, etc.) on using XNAT container service?
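As a rough illustration of what a "Hello World" for the Container Service involves: jobs are described by a command definition in JSON. The sketch below builds one in Python; the field names follow the command schema as best I recall it, so treat them as assumptions and verify against the Container Service documentation for your XNAT version.

```python
import json

# A minimal "Hello World" command definition for the XNAT Container
# Service, expressed as a Python dict and serialized to JSON.
# Field names are assumptions based on the command schema; verify
# against your XNAT version's Container Service documentation.
hello_command = {
    "name": "hello-world",
    "description": "Echo a greeting from inside a container",
    "version": "1.0",
    "type": "docker",
    "image": "busybox:latest",
    "command-line": "echo Hello, #NAME#",
    "inputs": [
        {"name": "NAME", "description": "Who to greet", "type": "string",
         "default-value": "World", "replacement-key": "#NAME#"},
    ],
    "outputs": [],
    "xnat": [],  # site/project wrapper definitions would go here
}

command_json = json.dumps(hello_command, indent=2)
print(command_json)
```

A definition like this would be added and enabled on the site (and relevant projects) before it appears as a launchable action.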
Are containers required for running pipelines? I hope not! (Answered live in session)

The files are copied to the build directory - isn't this slow? (I know, it's probably more secure.) Could it be an option instead? Some of my pipelines manipulate the XML files as well as the image files, and I've not easily been able to convert these.

Also - any plans for custom execution forms?

  • Hi Dan - We will try to answer this later in the webinar, or perhaps in the Spatial Chat session
Does the FreeSurfer 7.1.1 container service work with the batch launch processing dashboard? (Answered live in session)
Containerization is the way forward for reproducible research. However, Docker's business model and support are changing, and Docker Swarm often isn't allowed to run on HPC. I've been discussing potential work to support Kubernetes with Kate and Tim, but I'm curious how many other institutions support Kubernetes on their HPC.
  • Thanks guys - I think there is going to be some further discussion around this area in the SpatialChat
  • From Adrian Versteeg: "@Ryan Sullivan we do not yet support K8s on our HPC, but we plan on doing so. I agree it's THE way forward in reproducible research!!"
How close would a Singularity/Slurm implementation be? We are using these at our HPC and are looking into how to leverage the current container service with this environment.
  • Please make sure you come by the breakout session on Spatial Chat and join that session as it will be easier to answer that question in that setting.
... Relation to BIDS-Apps?
I hope that if Slurm is supported, Sun Grid Engine will be too - the submission API is very similar.
  • Hopefully! Certainly designing in an extensible fashion would be our goal. One hurdle that comes to mind is that SLURM offers a REST API, and I'm not sure if SGE does the same (that said, I don't know if the REST API would be accessible given HPC clusters' institutional firewalls, etc., so I don't know if XNAT would end up depending on it).
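For context on the REST API mentioned above: Slurm's REST daemon (slurmrestd) accepts job submissions as JSON. The sketch below builds such a payload; the endpoint version and field names vary across Slurm releases, and the host and token in the commented-out POST are hypothetical, so treat the specifics as assumptions and check the slurmrestd documentation for your release.

```python
import json

def build_slurm_submission(name, script, partition="batch"):
    """Build a job-submission payload for slurmrestd.

    Field names follow the slurmrestd job-submit schema as of roughly
    Slurm 20.11 (API v0.0.36/37); verify against your release's docs.
    """
    return {
        "job": {
            "name": name,
            "partition": partition,
            "current_working_directory": "/tmp",
            # slurmrestd requires the job environment to be given explicitly
            "environment": {"PATH": "/usr/bin:/bin"},
        },
        "script": script,
    }

payload = build_slurm_submission("xnat-job", "#!/bin/bash\necho hello\n")
print(json.dumps(payload))

# Submission would then be an HTTP POST (hypothetical host and token):
# requests.post("https://slurm.example.org/slurm/v0.0.37/job/submit",
#               headers={"X-SLURM-USER-NAME": "xnat",
#                        "X-SLURM-USER-TOKEN": token},
#               json=payload)
```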
A lot of great work has been done creating dockerized BIDS apps, but this always needs conversion from the XNAT storage to the BIDS format. The xnat2bids command that I found on the NRG github is a few years old and I haven't had any success with it. Are there plans to update or improve how the container service works with BIDS apps? Open to unmuting myself to discuss more!
  • Sorry to hear that - would you check out https://github.com/radiologics/docker-images/tree/master/setup-commands/xnat2bids and let me know if that works better for you? I can PR it back to the main repo if so.
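Once data are in BIDS layout, BIDS Apps share a common command-line convention: positional arguments `<bids_dir> <output_dir> <analysis_level>`. As a sketch of what the container service ultimately has to compose, here is the equivalent `docker run` invocation built in Python; the image name and host paths are placeholders.

```python
def bids_app_argv(image, bids_dir, out_dir, level="participant"):
    """Compose a `docker run` argv for a BIDS App.

    BIDS Apps take positional args <bids_dir> <output_dir> <analysis_level>;
    the image name and host paths passed in here are placeholders.
    """
    return [
        "docker", "run", "--rm",
        "-v", f"{bids_dir}:/bids:ro",   # input BIDS dataset, read-only
        "-v", f"{out_dir}:/out",        # derivatives output directory
        image, "/bids", "/out", level,
    ]

argv = bids_app_argv("bids/example", "/data/study1/bids",
                     "/data/study1/derivatives")
print(" ".join(argv))
```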
When trying to use the batch plugin, it only shows the old pipeline options. Is there any extra setting needed to activate the connection between batch and container service?
  • I would recommend posting your issue to the XNAT discussion group. You will need to add & enable commands on your XNAT site (and in the relevant projects).
Is it possible to adapt the Container Service plugin to issue jobs to an IBM-LSF queue manager?
  • It's not on our current roadmap, but I imagine it would be similar to interfacing with Slurm or TORQUE. Perhaps a cluster-side process to pull queued jobs from CS/XNAT would work. A lighter-weight solution might be to launch these jobs via a CS-launched, IBM-LSF-aware container job.
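The "cluster-side process to pull queued jobs" idea can be sketched as a simple polling loop. To be clear, the Container Service does not currently expose such a queue API; the callables below stand in for a hypothetical queue fetch and an LSF submitter (e.g. something wrapping `bsub`), so every name here is illustrative.

```python
import time

def drain_queue(fetch_jobs, submit_to_lsf, interval_s=30.0, max_cycles=None):
    """Poll a hypothetical XNAT/CS job queue and hand jobs to LSF.

    fetch_jobs: callable returning a list of queued job dicts
    submit_to_lsf: callable taking one job dict (e.g. wraps `bsub`)
    interval_s: seconds to sleep between polling cycles
    max_cycles: stop after this many cycles (None = run forever)
    """
    cycles = 0
    while max_cycles is None or cycles < max_cycles:
        for job in fetch_jobs():
            submit_to_lsf(job)
        cycles += 1
        if max_cycles is None or cycles < max_cycles:
            time.sleep(interval_s)

# Demonstration with stub callables in place of real queue/LSF calls:
queued = [{"id": 1}, {"id": 2}]
submitted = []
drain_queue(lambda: queued, submitted.append, interval_s=0, max_cycles=1)
print(submitted)
```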
Did you consider some standardization on container interfacing in any way? It could be worthwhile to have a look at BioContainers, for instance.
  • Thanks Marcel - I've just taken a look at BioContainers.
For people who do processing using xnat containers, is this made possible because their XNAT is used just by one lab, or do lots of different labs have very similar processing pipelines? I ask because where I am we have maybe 40 different users of our XNAT and no two of them do the same thing to their data, so we haven't seen any advantage to doing processing on the XNAT server vs on the cluster as individual users.
  • I'm in the situation where I work with a few groups who have a lot of data in a few XNATs and all require overlapping processing workflows. When the groups I work with need to run more specific workflows on smaller amounts of data they will typically download data to local servers for processing. That is partially due to timing constraints and limited developer availability - I definitely agree that the container development process is a big undertaking for smaller, less frequently used processing workflows in a situation like that.
I'm new to containers. If I use a container on RHEL 8, is everything "bundled" (i.e., no configuration etc. needed)?
