The Developer Track is a two-day hackathon, with attendees working side by side with XNAT developers on projects of their choosing. You can work individually, but we encourage groups of like-minded attendees to band together and really put some muscle toward building something that will add to the XNAT ecosystem.
That said, what do you want to work on? Here's a list of topics so far (Note: if you added a project and don't see it listed here, give it a couple of minutes and it'll show up). When you've accomplished something, share it on the XNAT Extras NITRC project (https://www.nitrc.org/projects/xnat_extras).
- Custom Report: NIH Accrual Form — Build a custom Admin report page inside XNAT that can query subject data for semi-annual NIH Accrual reporting. Turn this into a module.
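The core of such a report page is a tabulation of enrolled subjects by sex, race, and ethnicity. Here is a minimal sketch of that aggregation step; the dictionary keys are illustrative stand-ins, not XNAT's actual subject schema.

```python
from collections import Counter

def accrual_table(subjects):
    """Tabulate enrollment counts by (sex, race, ethnicity), the kind of
    breakdown an NIH accrual report asks for. `subjects` is a list of
    dicts as they might come back from an XNAT subject-data query; the
    key names here are assumptions for the sketch."""
    return dict(Counter(
        (s.get("sex", "Unknown"),
         s.get("race", "Unknown"),
         s.get("ethnicity", "Unknown"))
        for s in subjects
    ))

subjects = [
    {"sex": "F", "race": "White", "ethnicity": "Not Hispanic"},
    {"sex": "F", "race": "White", "ethnicity": "Not Hispanic"},
    {"sex": "M", "race": "Asian"},  # missing fields count as "Unknown"
]
table = accrual_table(subjects)
```

A module would then render `table` into the accrual form's row layout.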
- Charting/Reports Integration — It'd be awfully slick to integrate various charting tools into XNAT, including Google Charts.
- XNAT to XNAT copy-sync — Enable a one-click copy of data from a project in one XNAT to a project in another XNAT. A nice use case: a lab has some fMRI data that they've just published a paper on. They'd like to share it via OpenFMRI. To do this, they go to the project in their XNAT, click Share -> OpenFMRI, and submit. Some complications that will need to be worked out: anonymization, authorization, syncing of data models.
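Behind the one-click share, the tool would pair up REST endpoints on the two servers. A minimal sketch of that planning step, using XNAT's `/data/projects/...` REST path layout; authentication, anonymization, and data-model syncing are deliberately left out here.

```python
def sync_plan(src_host, src_project, dst_host, dst_project, experiments):
    """Map each (subject, experiment) pair in the source project to the
    pair of REST calls a copy-sync might issue: a GET of the session
    from the source XNAT and a PUT to the destination XNAT."""
    plan = []
    for subject, experiment in experiments:
        src = (f"{src_host}/data/projects/{src_project}"
               f"/subjects/{subject}/experiments/{experiment}")
        dst = (f"{dst_host}/data/projects/{dst_project}"
               f"/subjects/{subject}/experiments/{experiment}")
        plan.append(("GET", src, "PUT", dst))
    return plan

# Hypothetical hosts and labels, for illustration only.
plan = sync_plan("https://lab.example.org", "fmri01",
                 "https://openfmri.example.org", "shared01",
                 [("S001", "S001_MR1")])
```

Executing the plan (and re-running it for incremental sync) is where the real work lives.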
- R-XNAT — A library to access XNAT from R.
- NBIA Interface — Build an XNAT interface to the NCI's NBIA system.
- Cloud-based XNAT — Discuss and begin implementing various approaches to hosting XNAT on the cloud, hosting XNAT data on the cloud, hosting XNAT pipelines on the cloud, hosting stratus clouds on cumulus clouds, etc.
- Statistics Engine — Build a new service in XNAT to execute R scripts against dynamically generated data sets. The scripts could come from a library of scripts preloaded on the server and/or uploaded by the user with the request. There are probably some security and compute-time complications that will need to be addressed.
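One piece of the security complication is making sure a request can only run scripts from the server's preloaded library, never an arbitrary path. A minimal sketch of that validation step; the library contents and naming rule are assumptions for illustration.

```python
import re

# Hypothetical preloaded script library; names are illustrative.
SCRIPT_LIBRARY = {"summary_stats.R", "mixed_model.R"}
# Allow only simple filenames: no slashes, no dots except the .R suffix.
NAME_RE = re.compile(r"^[A-Za-z0-9_\-]+\.R$")

def resolve_script(name):
    """Validate a requested R script name before execution: reject
    anything that isn't a well-formed bare filename, then require that
    it exists in the preloaded library. Returns the name or raises."""
    if not NAME_RE.match(name):
        raise ValueError(f"malformed script name: {name!r}")
    if name not in SCRIPT_LIBRARY:
        raise LookupError(f"script not in library: {name!r}")
    return name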
- Common Data Elements — Identify some commonly used data types amongst groups, harmonize them, and get them up on XNAT Marketplace.
- REDCap Integration — Lots of centers are using XNAT and REDCap. Wouldn't it be nice to have some integration between the two? Achievable through the web interfaces? Through PyXNAT/PyCap? What about some sort of single sign-on?
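Whatever the transport (web UI, PyXNAT/PyCap), the heart of the integration is translating REDCap records into XNAT subjects. A minimal sketch of that mapping; the field names on both sides are assumptions, and a real integration would map them per-study.

```python
def redcap_to_xnat_subjects(records, project, id_field="record_id"):
    """Translate REDCap records (shaped like a REDCap data export) into
    the label/field pairs one might send to XNAT's subject endpoint.
    Prefixes the REDCap record ID with the XNAT project to form a
    subject label; remaining fields ride along for custom variables."""
    subjects = []
    for rec in records:
        subjects.append({
            "label": f"{project}_{rec[id_field]}",
            "fields": {k: v for k, v in rec.items() if k != id_field},
        })
    return subjects

subjects = redcap_to_xnat_subjects(
    [{"record_id": "001", "dob": "1980-01-01"}], project="DEMO")
```

Single sign-on is a separate problem; this only covers the data-mapping direction from REDCap into XNAT.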
- INCF Dataspace Crawler — The INCF has developed a data file sharing system called Dataspace. The system can be used to share unstructured data between investigators and with the world. For neuroimaging data hosted on Dataspace, it would be valuable to have an XNAT frontend to organize and distribute the data to users, computing resources, etc. The project would develop a "crawler" that crawls through Dataspace to identify neuroimaging data sets, extracts relevant metadata, and then posts it to XNAT. It could be engineered to discover resources, or it could be directed by a user to execute on a specified location in Dataspace.
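The crawler's first pass — identifying neuroimaging data sets in an unstructured file share — can be sketched as grouping recognizable imaging files by directory. The extension list is a guess at common formats, not an official registry, and real Dataspace listings would arrive over its own API.

```python
import posixpath

# Common neuroimaging file extensions (an illustrative list, not exhaustive).
NEUROIMAGING_EXTS = (".nii", ".nii.gz", ".dcm", ".mnc", ".img")

def find_imaging_sets(listing):
    """Walk a flat listing of file paths from a share and group
    neuroimaging files by parent directory; each group is a candidate
    data set for the crawler to describe and post to XNAT."""
    sets = {}
    for path in listing:
        if path.endswith(NEUROIMAGING_EXTS):
            sets.setdefault(posixpath.dirname(path), []).append(path)
    return sets

sets = find_imaging_sets([
    "study1/anat/T1.nii.gz",
    "study1/notes.txt",
    "study2/dicom/IM0001.dcm",
])
```

The second pass — extracting metadata from each candidate set — would read the files themselves (e.g. DICOM headers).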
- Protocol Validation — Develop a tool that imports selected DICOM fields from a single scan (series) and uses that information to confirm that subsequent scanning sessions used the same scanning parameters for that series (e.g., TR, flip angle, TE, IR-prep). This would be a useful tool for confirming that an imaging site has maintained consistency in imaging parameters across scan sessions.
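The comparison step reduces to checking a handful of header fields against the reference scan. A minimal sketch, with each side expressed as a dict of field name to value (e.g., as read from DICOM headers with pydantic-free pydicom); the dict form keeps the sketch free of file I/O, and the field list is a small illustrative subset.

```python
# A small illustrative subset of the DICOM fields worth checking.
REFERENCE_FIELDS = ("RepetitionTime", "EchoTime", "FlipAngle")

def check_protocol(reference, session, fields=REFERENCE_FIELDS):
    """Compare DICOM-derived parameters for a series against the
    reference scan. Returns a dict of field -> (expected, observed)
    for every mismatch; empty means the session passed validation."""
    mismatches = {}
    for field in fields:
        ref, got = reference.get(field), session.get(field)
        if ref != got:
            mismatches[field] = (ref, got)
    return mismatches

reference = {"RepetitionTime": 2000.0, "EchoTime": 30.0, "FlipAngle": 90.0}
followup  = {"RepetitionTime": 2000.0, "EchoTime": 25.0, "FlipAngle": 90.0}
problems = check_protocol(reference, followup)
```

A production tool would also need per-field tolerances, since some scanners report floating-point values that drift slightly between sessions.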
- XNAT Resource Synchronizer — Access to files stored in XNAT resources can be complicated; one can download via the web interface, REST calls, WebDAV, or xnat_sync, but none is easy enough to use to make XNAT resources natural places to hold anything but archive data. This project would build a synchronizer that keeps a local directory in step with an XNAT resource.
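The decision logic of such a synchronizer can be sketched independently of transport: given checksum maps for both sides, decide what to upload, what to download, and what to flag. How the remote checksums are fetched (REST, WebDAV) is out of scope for this sketch.

```python
def sync_actions(local, remote):
    """Given checksum maps (relative path -> digest) for a local folder
    and an XNAT resource catalog, plan the sync: upload files missing
    remotely, download files missing locally, and flag conflicts where
    both sides have the path but the checksums differ."""
    upload = sorted(set(local) - set(remote))
    download = sorted(set(remote) - set(local))
    conflict = sorted(p for p in set(local) & set(remote)
                      if local[p] != remote[p])
    return {"upload": upload, "download": download, "conflict": conflict}

actions = sync_actions(
    {"a.nii": "111", "b.nii": "222"},   # local files
    {"b.nii": "999", "c.nii": "333"})   # remote resource catalog
```

Conflict resolution policy (newest wins, remote wins, prompt the user) is the part that makes or breaks usability.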
- PyXNAT ongoing development — PyXNAT has proven to be an extremely valuable resource to the XNAT community. This project will focus on how best to maintain PyXNAT.
- MATLAB interfaces — Given the wide use of MATLAB in image analysis, it'd be nice to have a set of MATLAB modules for getting and sending data from XNAT. Some early prototypes have been developed at Wash U but have never been used in production. A nice use case: pulling data into SPM for analysis.