Why a remote relay?
Many studies collect imaging data at multiple sites. Such studies often use a central XNAT system to aggregate the data in one secure place from which it can be reviewed, processed, and shared. However, getting the data from the remote sites to that central XNAT system can be challenging. XNAT's web-based upload tools provide a manual solution that is often a good choice. But for study sites with high-volume data collection, complex protocols, or large images, a more automated approach can reduce both the effort and the error rate.
Within a local network, this sort of automation is typically achieved by sending data directly from the scanner to XNAT using DICOM network protocols. However, DICOM is not a secure protocol and should not be used on wide-area unsecured networks.
We have implemented XNAT Remote Data Relays to enable DICOM data to be automatically and securely sent from a remote scanner to a central XNAT system.
What is a remote relay?
The XNAT Remote Data Relay receives data from the scanner over standard DICOM protocols and then forwards that data to the central XNAT over the secure, encrypted HTTPS protocol. What's the secret sauce? The relay actually runs a lightweight XNAT instance that includes the Xsync plugin. Xsync is quite flexible and can be configured at the individual project level to relay data to one or more remote XNAT systems. The XNAT relay server is very lightweight and can be run on very small footprint hardware such as an Intel NUC.
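As a concrete sketch, the scanner's DICOM send destination simply points at the relay, and Xsync on the relay points at the central XNAT. The AE title, hostnames, and schedule below are illustrative placeholders, not required values (8104 is XNAT's default DICOM receiver port):

```text
# On the scanner: add the relay as a DICOM send destination (example values)
AE Title:  RELAY
Host:      relay.site-a.example.org   # the relay's address on the scanner network
Port:      8104                       # XNAT's default DICOM receiver port

# On the relay: Xsync is configured per project in the relay's XNAT UI
# and forwards received sessions to the central system over HTTPS, e.g.
Remote XNAT URL: https://central-xnat.example.org
Sync schedule:   hourly (configurable per project)
```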
What hardware do I need?
The XNAT Remote Data Relay can be run on almost any hardware or virtual computing system. In multi-center studies where a central coordinating site is building and shipping the relays, a small-footprint system can be a cost-effective and usable approach. The sample systems illustrated below follow the small-footprint principle. They are relatively cheap (<$600), super portable, and have proven to be quite hardy.
Relay for DICOM-only
Very little compute power is needed; however, reliability is essential. We have had very good results from the Intel NUC computing platform with SSD storage. Here is a typical build we have used in 2016/2017:
- Intel NUC Kit NUC6i3SYK
- 16 GB RAM
- 250 - 500 GB SSD
(Choose your drive capacity based on the volume of data that will pass through. Typically, we use the Samsung 850 EVO line and expect them to last 5-7 years. For higher volume, the 850 PRO line is a good option).
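Choosing a drive size comes down to simple arithmetic on expected throughput and retention. Here is a rough sizing sketch; the session size, scan rate, retention window, and headroom factor are made-up example numbers to substitute with your study's own:

```python
# Rough SSD capacity estimate for a DICOM-only relay.
# All inputs are example values; substitute your study's numbers.
sessions_per_week = 20
gb_per_session = 0.5        # a typical MR session of DICOM is well under 1 GB
retention_weeks = 8         # how long data lingers on the relay before cleanup
headroom = 2.0              # safety factor for bursts and OS overhead

required_gb = sessions_per_week * gb_per_session * retention_weeks * headroom
print(f"Minimum usable capacity: {required_gb:.0f} GB")  # → 160 GB
```

By this estimate a 250 GB drive is ample; higher-volume sites can run the same arithmetic to decide between the 250 and 500 GB options.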
Relay for DICOM and Siemens Raw k-space Data
Siemens raw k-space data can be rather high volume and requires more storage. While a NUC outfitted entirely with SSDs could potentially hold 4 TB of data, the cost becomes very high. In this case we build a mini-server instead.
- Supermicro 5028A-TN4
- 16 GB RAM
- Two 120 GB SSDs, used for the mirrored boot/OS volume and as SSD cache for the disk pool
- Two 1 ft. SATA cables so the SSDs can be mounted internally
- Four 4-8 TB enterprise grade SATA disks
What are the network and security requirements?
- A single 1 Gb/s Ethernet connection on a network that can be reached by the scanner and has access to the internet
- Must be reachable by the scanner(s) that will send DICOM
- HTTP/HTTPS access to the Internet
- Internal users must be able to reach the relay via HTTP/HTTPS
- SMTP must be allowed to either a local relay or directly to the NRG mail relay
- Administrator access via SSH
- Firewall should restrict access to only required users and administrators
Siemens Raw k-space
In order to collect raw data, the relay must be connected directly to the scanner's back-end network.
- A scanner network tap at either the host computer or the MARS computer
- A 1 Gb/s Ethernet connection on a network that can be reached by the scanner and has access to the internet
- An Ethernet connection on a network that administrators can access
- Note that the relays have only been tested with Siemens Prisma MRI scanners. Similar configurations will likely work for other platforms, but these have not been tested.
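As an illustration of the firewall requirement, a host firewall on the relay can restrict each service to just the hosts that need it. This is an iptables-style sketch; every address, subnet, and port-to-role mapping below is a placeholder to adapt to your site:

```text
# iptables-style sketch; all IPs/subnets are placeholders
-A INPUT -s 10.0.1.5/32  -p tcp --dport 8104 -j ACCEPT   # DICOM from the scanner only
-A INPUT -s 10.0.0.0/24  -p tcp --dport 443  -j ACCEPT   # HTTPS for internal users
-A INPUT -s 10.0.9.0/24  -p tcp --dport 22   -j ACCEPT   # SSH for administrators
-A INPUT -j DROP                                         # everything else
```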