Multi-channel MR Image Reconstruction Challenge (MC-MRRec)

Summary

Magnetic resonance (MR) imaging is a sensitive diagnostic modality that allows specific investigation of the structure and function of the brain and body. One major drawback is the overall MR acquisition time, which can easily exceed 30 minutes per subject. Lengthy MR acquisitions are costly (~$300 USD per exam), prone to motion artifacts that degrade image quality, reduce patient throughput, and contribute to patient discomfort. Parallel imaging (PI) and compressed sensing (CS) are well-known approaches that speed up MR exams by collecting fewer k-space samples than required by the Nyquist sampling theorem. Deep learning (DL) methods have the potential to allow greater speed-ups than PI and CS: some studies in the literature report up to 10-fold acceleration with little-to-no loss in image quality. To put that into perspective, this challenge uses 1 mm isotropic 3D T1-weighted brain scans that took about six minutes to acquire; acquiring one tenth of the k-data would reduce the acquisition time to 35-40 seconds!

DL reconstruction algorithms can be divided into four groups: (1) k-space-domain learning, (2) image-domain learning, (3) domain-transform learning, and (4) hybrid k-space- and image-domain learning. Currently, there is no clear winner among the proposed models, partly due to the lack of benchmark datasets that allow fair comparisons. The fastMRI initiative is an important step towards benchmarking these approaches. Our challenge is a complementary initiative that provides 3D brain data. Working with volumetric data allows undersampling in both the phase-encode (ky) and slice-encode (kz) directions. Moreover, most studies so far have used models that are specific to receiver coils with a given number of channels. Our challenge tackles these issues by:

    • Comparing different DL-based MR reconstruction models on a large dataset (> 200 GB)

    • Assessing the generalizability of reconstruction models to datasets acquired with a different number of coil channels

The challenge has two separate tracks, and teams are free to submit to just one track or to both; we encourage teams to submit to both. Each track will have a separate ranking.

    • Track 01: sampling pattern masks will be provided for R = {5,10} (R is the acceleration factor) and submissions will be evaluated only on the 12-channel test data

    • Track 02: sampling pattern masks will be provided for R = {5,10} and submissions will be evaluated for both the 12-channel and 32-channel test data

In these two tracks, we will assess MR reconstruction quality, in particular any noticeable loss of high-resolution detail at the higher acceleration factor. By having two separate tracks, we hope to determine how much performance (if any) a generic reconstruction model applicable to various multi-channel coils loses compared to more coil-specific reconstruction models.

Dataset Description

We are providing 167 three-dimensional (3D), T1-weighted, gradient-recalled echo, 1 mm isotropic sagittal acquisitions collected on a clinical MR scanner (Discovery MR750; General Electric Healthcare, Waukesha, WI). The scans are from presumed healthy subjects (44.5 ± 15.5 years, range 20-80 years). The datasets were acquired using either a 12-channel coil (117 scans) or a 32-channel coil (50 scans). Acquisition parameters were TR/TE/TI = 6.3 ms / 2.6 ms / 650 ms (93 scans) or TR/TE/TI = 7.4 ms / 3.1 ms / 400 ms (74 scans), with 170 to 180 contiguous 1.0-mm slices and a field of view of 256 mm x 218 mm. The acquisition matrix size [Nx, Ny, Nz] for each channel was [256, 218, 170-180], where x, y, and z denote the readout, phase-encode, and slice-encode directions, respectively. In the slice-encode (kz) direction, only 85% of the k-data was collected; the remainder (15% of 170-180) should be zero-filled (see the 27 February 2020 update below). Because k-space undersampling only occurs in the phase-encode and slice-encode directions, we have taken the inverse Fourier transform (iFT) along kx and provide hybrid (x, ky, kz) datasets, effectively reducing the undersampling problem from 3D to 2D. The partial-Fourier reference data were reconstructed by taking the iFT of each individual channel and combining the channels using the conventional "square-root sum-of-squares" algorithm, as sketched below. The dataset train / validation / test split is summarized in the table below.
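
For concreteness, below is a minimal sketch of how a reference image can be formed from one of the provided hybrid (x, ky, kz) files. The HDF5 key name "kspace", the example file name, and the interleaved real/imaginary channel layout are assumptions about the file format; the getting-started notebooks in our repository contain the authoritative loading code.

    import h5py
    import numpy as np

    # Load one hybrid (x, ky, kz) volume. The "kspace" key and the
    # interleaved real/imaginary channel layout are assumptions -- check
    # the getting-started notebooks for the official loader.
    with h5py.File("train/example_volume.h5", "r") as f:  # hypothetical file
        data = f["kspace"][()]  # assumed shape: (Nx, Ny, Nz, 2 * channels)

    kspace = data[..., ::2] + 1j * data[..., 1::2]  # (Nx, Ny, Nz, channels)

    # The readout (kx) direction was already inverse-Fourier-transformed
    # by the organizers, so only ky and kz remain in the frequency domain.
    coil_imgs = np.fft.fftshift(
        np.fft.ifft2(np.fft.ifftshift(kspace, axes=(1, 2)), axes=(1, 2)),
        axes=(1, 2),
    )

    # Conventional square-root sum-of-squares channel combination.
    reference = np.sqrt((np.abs(coil_imgs) ** 2).sum(axis=-1))  # (Nx, Ny, Nz)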

Evaluation Metrics, Statistical Analysis and Ranking

For each submission, we will post in the results section the average ± standard deviation of the following commonly used metrics for assessing MR reconstruction quality:

    • Visual information fidelity (VIF)

    • Structural similarity (SSIM)

    • Peak signal to noise ratio (pSNR)

These metrics follow non-normal distributions, so we will use the non-parametric Friedman chi-squared test to evaluate the statistical significance of differences between submissions across the three metrics. Post-hoc pair-wise comparisons will be performed using Dunn's test with Bonferroni correction. A p-value < 0.05 will be used as the level of statistical significance. The final ranking will be obtained by sorting the weighted average of the per-metric rankings, with weights of 0.4 for VIF, 0.4 for SSIM, and 0.2 for pSNR. We give higher weights to VIF and SSIM because they correlate better with human perception of image quality; although pSNR is a commonly used metric, it correlates poorly with expert opinion. The first and last 50 planes in the x-direction (i.e., the superior and inferior portions of the head) will not be used to compute the metrics because they contain little anatomy.
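
To make the aggregation concrete, here is a minimal sketch of the weighted-rank computation described above; the team names and metric values are placeholders, not real results.

    import numpy as np

    # Placeholder per-team average metric values (higher is better for all three).
    teams = ["team_a", "team_b", "team_c"]
    vif = np.array([0.91, 0.88, 0.93])
    ssim = np.array([0.95, 0.94, 0.96])
    psnr = np.array([33.1, 34.0, 32.5])

    def rank(scores):
        """Rank 1 = best (highest score)."""
        order = np.argsort(-scores)
        r = np.empty(len(scores), dtype=int)
        r[order] = np.arange(1, len(scores) + 1)
        return r

    # Weighted average rank with weights 0.4 (VIF), 0.4 (SSIM), 0.2 (pSNR);
    # the final ranking sorts this value in ascending order.
    combined = 0.4 * rank(vif) + 0.4 * rank(ssim) + 0.2 * rank(psnr)
    for team, score in sorted(zip(teams, combined), key=lambda pair: pair[1]):
        print(f"{team}: weighted rank {score:.1f}")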

Sample Code

We provide a GitHub repository that includes useful scripts for loading the data and computing the metrics, as well as a template for creating your submission file. It also contains the sampling patterns for R = 5 and R = 10 for the different matrix sizes. The repository is structured as follows:

    • Data - Contains the sampling pattern masks for R = 5 and R = 10

    • Modules - Python modules to help load and process the data

    • JNotebooks

      • getting-started: scripts illustrating how to load, visualize, and apply undersampling masks to the data (see the sketch after this list); it also illustrates a simple image generator

      • evaluation-system: scripts for metrics computation and ranking of different submissions; statistical analysis script will be included soon

      • reference: sample code illustrating how the test set references are computed; references are available only to the challenge organizers

      • zero-filled-baseline: zero-filled reconstruction baseline for the challenge; files are saved in the challenge submission format

      • unet-baseline (pending): U-net reconstruction baseline
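
As referenced in the getting-started item above, here is a minimal sketch of applying one of the provided sampling-pattern masks to a hybrid k-space volume. The mask file name, the .npy format, and the (Ny, Nz) mask shape are assumptions; the getting-started notebook shows the official procedure.

    import numpy as np

    # Load a provided R = 5 sampling pattern (file name and .npy format
    # are assumptions -- see the Data folder of the repository).
    mask = np.load("Data/R5_218x170.npy")  # assumed shape: (Ny, Nz)

    # Placeholder for a complex hybrid (x, ky, kz) volume, e.g. loaded as
    # in the dataset sketch above: shape (Nx, Ny, Nz, channels).
    kspace = np.zeros((256, 218, 170, 12), dtype=complex)

    # Broadcasting zeroes out the non-sampled (ky, kz) positions in every
    # readout plane and channel.
    undersampled = kspace * mask[np.newaxis, :, :, np.newaxis]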

Submit your algorithm

  • A 2-page description of your method. If the method has already been published, please send us the reference

  • Teams must submit reconstructions of the test sets by sending a compressed file (.zip) via a download link (Dropbox, GDrive, ...) to roberto.medeirosdeso@ucalgary.ca. The file format should be .h5 (HDF5) with a key named "reconstruction". The folder structure convention is the following:

    • <team_name>.zip

      • Track01

        • 12-channel-R=5

        • 12-channel-R=10

      • Track02

        • 12-channel-R=5

        • 12-channel-R=10

        • 32-channel-R=5

        • 32-channel-R=10

Please name the reconstruction files using the same names as their corresponding k-space files. Do not save the first and last 50 slices in the frequency-encode (i.e., x) direction; this makes the submission file smaller, and, as noted above, we will not compute the metrics on those slices. A submission example is provided here.
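
Below is a minimal sketch of writing one reconstruction in the expected format. The volume shape is an assumption, while the "reconstruction" key, the file naming, and the 50-slice cropping follow the rules above.

    import h5py
    import numpy as np

    # Placeholder for a reconstructed magnitude volume of assumed shape
    # (Nx, Ny, Nz) = (256, 218, 170); your model's output goes here.
    rec = np.zeros((256, 218, 170), dtype=np.float32)

    # Drop the first and last 50 slices along x: metrics are not computed
    # there, and the submission file becomes smaller.
    rec_cropped = rec[50:-50]

    # Use the same file name as the corresponding k-space file, and store
    # the volume under the required "reconstruction" key.
    with h5py.File("Track01/12-channel-R=5/e14939s3_P44032.7.h5", "w") as f:
        f.create_dataset("reconstruction", data=rec_cropped)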

  • We will compute the metrics and post them on the results page. The initial post will be during the 2020 Medical Imaging with Deep Learning conference

  • Teams are required to make their code publicly available

  • No additional data other than the 67 volumetric datasets provided can be used to train the submitted models

  • Pre-evaluation: Teams are allowed to submit up to 3 solutions before the challenge deadline. We will provide feedback using a small subset (10%) of the test set to compute the evaluation metrics; the metrics on the full test set will be revealed during MIDL. For the pre-evaluation, submit reconstructions for the following samples:

    • Track 01:

      • 12-channel: e14939s3_P44032.7.h5, e15790s3_P01536.7.h5, e13991s3_P01536.7.h5, e15046s3_P59392.7.h5, e15791s4_P11264.7.h5

    • Track 02:

      • 12-channel: e14939s3_P44032.7.h5, e15790s3_P01536.7.h5, e13991s3_P01536.7.h5, e15046s3_P59392.7.h5, e15791s4_P11264.7.h5

      • 32-channel: e16578s3_P19968.7.h5, e16842s3_P55296.7.h5, e16214s13_P25088.7.h5, e16587s3_P20992.7.h5, e16869s3_P03584.7.h5

Important dates

Train and validation set release: already available (download here)

Test set release (only undersampled k-data): already available (same download link as above)

Team registration deadline: Open. Teams can register at any time, but must register before 01 July 2020 if planning to participate in the MIDL session - register here

Submission deadline: 01 July 2020 (extended from 20 June 2020)

Results: The results will first be released during the 2020 Medical Imaging with Deep Learning conference. Thereafter, the challenge will continue as an online challenge and the results page will be updated accordingly.

Note that these dates pertain to the initial results of the challenge, which will be presented during 2020 Medical Imaging with Deep Learning. After the conference, the challenge will remain open as an online challenge, and we will continue to accept new submissions.

Challenge Code of Conduct

Participating team members agree to uphold the following code of conduct:

Rule 1: All team members shall act honestly, with integrity and competence, to achieve the goals of the challenge.

Rule 2: Team members shall not engage in any conduct involving dishonesty, fraud, deceit, or misrepresentation intended to influence the results of the challenge.

Rule 3: All team members will act with courtesy and professional respect towards the challenge organizers and the members of other participating teams.

Publication Plans

We intend to invite the members of at least the top 5 teams in each challenge track that actively participated in the challenge by the 2020 Medical Imaging with Deep Learning conference to write and submit a paper to a scientific journal (still TBD) summarizing the results of the challenge.

Ethics

The data used in this challenge were acquired as part of the Calgary Normative Study (CNS), a longitudinal project that investigates normal human brain ageing by acquiring quantitative MR imaging data. The study was approved by the Conjoint Health Research Ethics Board (CHREB), which reviews applications from researchers affiliated with the Faculties of Kinesiology, Medicine and Nursing at the University of Calgary (approval number REB 15-1285). The data provided for this challenge were anonymized.

Organizers

This challenge is being organized by the University of Calgary members of the Calgary-Campinas dataset team:

  • Dr. Roberto Souza

  • Dr. Richard Frayne

  • Dr. Wallace Loos

  • Youssef Beauferris

They are the only people with access to the fully sampled test set.

Questions or Feedback?

If you have any questions, issues, or suggestions, or wish to provide constructive feedback, please contact Dr. Roberto Souza (roberto.medeirosdeso@ucalgary.ca). He will answer your queries and post them to the FAQ page to help others.

Updates

05 July 2020

  • Included challenge Zoom session information

10 June 2020

  • Our session during MIDL is scheduled for 09 July 2020 between 2 pm and 5 pm Montréal time

  • Submissions received by 01 July 2020 are eligible for the MIDL challenge session

  • Presentation details were added

20 April 2020

  • Our dataset is now available for download from OneDrive, GDrive and Amazon Simple Storage Service. If your old download links expired, just submit the download form again.

  • Team registration and submission will remain open for as long as we have funds to keep the challenge running.

27 February 2020

  • Because some people had difficulty downloading the dataset, we have split it into 3 separate zip files that users can download based on which tracks of the challenge they intend to enter. Downloading the files should be much easier now, and you can access them using the same download link. The zip files are:

    • Raw-data/Multi-channel/12-channel/train_val_12_channel.zip -> data used to develop models

    • Raw-data/Multi-channel/12-channel/test_12_channel.zip -> 12-channel undersampled test set for R = {5,10}

    • Raw-data/Multi-channel/32-channel/test_32_channel.zip -> 32-channel undersampled test set for R = {5,10}

  • We noticed a small offset error in the test sets we provided to you. The fixed test sets should be available for download by Monday (02 March 2020). It is very important to use the fixed test sets in your submission!

  • There are three different matrix sizes [Nx, Ny, Nz] in the dataset: 256 x 218 x [170 or 174 or 180]. We updated our github page to provide sampling patterns for these different matrix sizes (https://github.com/rmsouza01/MC-MRRec-challenge).

  • In the slice-encode (kz) direction, only 85% of the k-data was collected. This effectively means that for Nz = 170, k-space was collected up to index 145; for Nz = 174, up to index 148; and for Nz = 180, up to index 153. The remaining slices were zero-filled to the proper dimension. The zero-filling was done to achieve 1 mm isotropic resolution, which is not relevant at this point of the challenge.

  • After exporting the raw data using the Orchestra Toolbox (GE Healthcare), you may note some residual values in the regions that should be zero. This should not have a noticeable effect on your results, but it is good practice to explicitly do the zero-filling when loading the data (see the sketch at the end of this list).

  • We have contacted the authors of the paper "Mason et al. Comparison of Objective Image Quality Metrics to Expert Radiologists' Scoring of Diagnostic Quality of MR Images. IEEE Transactions on Medical Imaging." They advised us to normalize the images to unit intensity and to use sigma_nsq = 0.4 when computing the visual information fidelity metric (see the normalization sketch below). The challenge GitHub page will soon be updated to reflect this change.
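
As mentioned in the zero-filling note above, here is a minimal sketch of explicitly zeroing the uncollected kz region when loading a volume. The axis layout is an assumption, and the boundary indices assume the values quoted above (145/148/153) are the last collected samples.

    import numpy as np

    # First uncollected kz index per matrix size, assuming indices
    # 145/148/153 quoted above are the last collected samples.
    ZERO_START = {170: 146, 174: 149, 180: 154}

    def zero_fill_kz(kspace):
        """Zero residual values beyond the collected kz region.

        kspace: hybrid volume with the slice-encode direction on axis 2,
        an assumed layout of (Nx, Ny, Nz, channels).
        """
        nz = kspace.shape[2]
        kspace[:, :, ZERO_START[nz]:] = 0
        return kspace

And here is a minimal sketch of the recommended VIF preprocessing. The sewar package and its vifp signature are an assumption, not the challenge's official metric code; verify against the metric scripts on our GitHub page.

    import numpy as np

    def normalize_to_unit_intensity(volume):
        """Scale a magnitude volume so its maximum intensity is 1."""
        return volume / np.abs(volume).max()

    # With the volumes normalized, pass sigma_nsq = 0.4 to the VIF
    # implementation, e.g. (assumption -- verify against the challenge
    # metric scripts):
    # from sewar.full_ref import vifp
    # score = vifp(reference_slice, reconstruction_slice, sigma_nsq=0.4)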

26 February 2020

  • Included the names of the challenge organizers on this page.

  • The Multi-channel.zip (~93 GB) was added to the dataset download link. This zip file contains all the challenge data and is the easiest way to download it. (discontinued - there are 3 zip files now; see more recent updates)

  • Included an explanation about having to explicitly zero-fill beyond the 85% position in the slice-encode direction.