Call for Participation:
Lumen + External Elastic Laminae (Vessel Inner and Outer Wall) Border Detection in IVUS Challenge
We are pleased to announce the Lumen + External Elastic Laminae (Vessel Inner and Outer Wall) Border Detection in IVUS Challenge, associated with the MICCAI 2011 “Computing and Visualization for (Intra)Vascular Imaging” workshop (to be held on September 18, 2011 in Toronto, Canada). Participants will be provided with several IVUS pullback datasets acquired at different frequencies and with various degrees of difficulty. Manual annotations will be available for evaluating the results (comparison and scoring). The aim of this challenge is to give participants an opportunity to present and evaluate their segmentation methods on challenging datasets.
B. The Challenge
The segmentation challenge is divided into two categories:
- Segmentation of the inner wall (lumen segmentation)
- Segmentation of the outer wall (media/adventitia segmentation)
A team can choose to participate in one or both categories. Clinical recordings from 20 MHz and 40 MHz probes will be provided. The data types are:
- Single-frame datasets with adjacent frames in DICOM & RF (40 MHz) format chosen at random instants of the cardiac cycle (not gated).
- Multi-frame datasets in DICOM (20 MHz) format (sequences, ECG or image-based gated). Even though the 3D sequences consist of 20-50 gated frames to provide sufficient spatial context, in this challenge, only a specified subset of up to 5 frames will be used for evaluation.
The results of all the submitted methods will be evaluated and discussed during a designated session of the workshop.
C. The Datasets
The datasets that will be provided for the challenge will be separated into:
- A training set: A first group of datasets to be used for training will be provided on August 26, 2011, along with manual annotations (representing 25% of the available data).
- A test set: A second group of datasets will be provided on August 26, 2011. However, the annotations will ONLY be released on the day of the workshop.
- A validation set: A small subset of about 10 frames, randomly chosen from the test set. This set will be provided on the day of the challenge.
Participants can use the training set provided by the organizers to tune their algorithm(s) to IVUS formats they have not encountered before. However, if the datasets provided by the organizers do not contain enough frames to train a classifier, participants are allowed to use their own training sets.
The data will have the following attributes:
- Number of frames in the sequence: 1-5 (2D) / 20-50 (3D)
- Guidewire artifacts (not applicable for Volcano): No/Yes
- Catheter near vessel wall: No/Yes
A MATLAB script (evaluation script) will be provided with the training set to evaluate the results in a unified way. A strict data format will be defined in order to run automatic error computations. Participants will run their segmentation algorithm before the workshop and provide the contours obtained on the test set to the organizers before September 9, 2011 (HARD DEADLINE, no extension possible). The segmentation results will be evaluated using the evaluation script. For methods that require initialization before segmentation, such interaction is allowed as long as a detailed description of the initialization process is submitted with the algorithm description. No manual edits to the results of the algorithm are allowed.
To assess the execution time of the methods, the participants are required to run their algorithm over the validation set on the day of the challenge and provide the contours to the organizers. The validation set will be distributed at the beginning of the workshop, and the participants will have the full day to provide the results.
D. Data set description
Four heterogeneous datasets have been prepared for the challenge. Each dataset includes a group of five contiguous frames chosen at specific vessel locations, so that both temporal and spatial information are exploited. Each frame is labeled as frame_XX_YYYY_ZZZ, where XX is the patient number, YYYY is the frame number in the pullback, and ZZZ is the position of the frame within the group of five.
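As an illustration (not part of the official distribution), the naming convention above can be parsed with a short script; the field widths are assumed from the examples in this document:

```python
import re

def parse_frame_name(name):
    """Split a frame label of the form frame_XX_YYYY_ZZZ into its fields.

    Returns (patient, frame_in_pullback, position_in_group) as strings,
    or None if the name does not match the convention.
    """
    m = re.match(r"^frame_(\d{2})_(\d{4})_(\d{3})$", name)
    if m is None:
        return None
    return m.groups()

# Example: patient 01, pullback frame 0032, middle frame of the group
print(parse_frame_name("frame_01_0032_003"))  # ('01', '0032', '003')
```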
A description of the four datasets is provided below.
- Dataset_A is composed of 77 groups of five consecutive frames, obtained from a digital 40 MHz IVUS scanner, acquired from different patients. The middle frame (frame_XX_YYYY_003) is provided in both the DICOM and RF formats, while the frames (...._001 and ...._002) and (...._004 and ...._005) are provided only in the DICOM format (.png images). Each RF file is provided in a .dat file built as a concatenation of 1024 samples for 256 lines. Each line is obtained by sampling the RF signal at 12 bits with a sample rate of 200 Msamples/sec.
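As a sketch of how the RF .dat layout above could be read (this is not an official loader: the 12-bit samples are assumed here to be stored as 16-bit little-endian integers, which should be verified against the actual files):

```python
import numpy as np

N_LINES, N_SAMPLES = 256, 1024  # from the dataset description

def read_rf(path):
    """Read an RF .dat file into a (lines, samples) array.

    The file is a concatenation of 1024 samples for each of 256 lines;
    the 12-bit samples are ASSUMED to be stored as 16-bit little-endian
    integers -- check this assumption against the distributed data.
    """
    raw = np.fromfile(path, dtype="<i2")
    return raw.reshape(N_LINES, N_SAMPLES)

# Round-trip demo on synthetic data standing in for a real .dat file
demo = (np.arange(N_LINES * N_SAMPLES) % 2048).astype("<i2")
demo.tofile("demo_rf.dat")
rf = read_rf("demo_rf.dat")
print(rf.shape)  # (256, 1024)
```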
- Dataset_B is composed of 435 groups of five consecutive frames, obtained from a 20 MHz IVUS scanner, acquired from 10 patients. All the frames are provided in DICOM format (.png images). All the frames having the same XX number belong to the same pullback and the central frame “_003” has been chosen at subsequent gating positions.
For all four datasets, only the middle frame (…_003) is manually labeled and should be reported (as the result of the segmentation algorithm).
Two separate annotation files (for lumen and media) are available in the directory “LABELS” as a sequence of Cartesian coordinates. The corresponding files are named “lum_frame_XX_YYYY_003.txt” and “med_frame_XX_YYYY_003.txt”, for lumen and media, respectively.
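An annotation file of Cartesian coordinates can be loaded as an N×2 array and used, for instance, to compute the enclosed area. This is an illustrative sketch only; the exact column layout of the real LABELS files should be verified against the distributed data:

```python
import numpy as np

# A tiny stand-in annotation file, one "x y" pair per line; the exact
# layout of the real LABELS files should be checked against the data.
with open("lum_demo.txt", "w") as f:
    f.write("0.0 0.0\n10.0 0.0\n10.0 10.0\n0.0 10.0\n")

contour = np.loadtxt("lum_demo.txt")          # (N, 2) Cartesian points

def polygon_area(pts):
    """Shoelace formula for the area enclosed by a closed contour."""
    x, y = pts[:, 0], pts[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

print(polygon_area(contour))  # 100.0 for the 10x10 square
```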
The data folder distributed to the participants contains the training set (about 25% of the whole data set), while the DCM/RF folders contain both the training set and the test set (the remaining 75% of the data, not annotated, that should be segmented by each participant).
The segmentation results of each method will be compared with manual segmentations provided by one or two expert observers. The score for each method will be computed as the average Jaccard similarity measurement over all frames. Contours will be evaluated based on the Hausdorff maximum and mean error distances. An additional evaluation score may be used depending on the degree of independence of the method (initialization, user intervention). The average segmentation time per frame will also be taken into account. Finally, the capability of the method to run on RF or DICOM data (as applicable), as well as its flexibility (ability to segment both 20 MHz and 40 MHz data), will be discussed.
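The evaluation measures named above can be sketched in a few lines. This is an illustrative re-implementation in Python with plain NumPy; the official scoring is performed by the organizers' MATLAB script:

```python
import numpy as np

def jaccard(mask_a, mask_b):
    """Jaccard index |A intersect B| / |A union B| of two binary masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union

def contour_distances(pts_a, pts_b):
    """Hausdorff (maximum) and mean error distances between point sets."""
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=2)
    a_to_b, b_to_a = d.min(axis=1), d.min(axis=0)
    hausdorff = max(a_to_b.max(), b_to_a.max())
    mean_err = (a_to_b.mean() + b_to_a.mean()) / 2.0
    return hausdorff, mean_err

# Demo: two concentric circles of radius 10 and 12
t = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
c1 = np.stack([10 * np.cos(t), 10 * np.sin(t)], axis=1)
c2 = np.stack([12 * np.cos(t), 12 * np.sin(t)], axis=1)
h, m = contour_distances(c1, c2)
print(round(h, 3), round(m, 3))  # 2.0 2.0

# Jaccard on the corresponding rasterized disks (close to 100/144)
yy, xx = np.ogrid[-20:21, -20:21]
m1 = xx**2 + yy**2 <= 10**2
m2 = xx**2 + yy**2 <= 12**2
print(round(jaccard(m1, m2), 2))
```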
G. Data Reporting
Participants must strictly adhere to the format of the dataset provided by the organizers (including case-sensitive names) when providing the segmentation results. Each participant must submit a text file (no other formats will be accepted) containing the Cartesian coordinates of the lumen and media (a maximum of one decimal digit is allowed).
The format in which the data should be reported is given below:
- Each participant will be assigned a number.
- If Participant 1 wants to submit results for Dataset_A and Dataset_B, a zip file with the following directory structure (please pay attention to the letter casing) must be emailed to IVUSChallenge@172.27.141.4:
- Each “LABELS” directory may contain lumen and/or media files (lum_frame_XX_YYYY_003.txt and/or med_frame_XX_YYYY_003.txt). However, it is important that the above-specified format is strictly followed.
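As a sketch of the one-decimal-digit requirement, results can be written with a fixed numeric format; the file name below follows the convention quoted above, and the coordinate values are hypothetical:

```python
import numpy as np

# Hypothetical contour: an (N, 2) array of Cartesian coordinates
contour = np.array([[123.456, 200.0], [124.0, 201.789]])

# One coordinate pair per line, at most one decimal digit, as required
np.savetxt("lum_frame_01_0032_003.txt", contour, fmt="%.1f")

print(open("lum_frame_01_0032_003.txt").read())
```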
H. Submission procedure and guidelines
- Send an email to IVUSChallenge@172.27.141.4 stating your interest in participating in this challenge. The details of how to download the data will be provided upon the receipt of the email. The data will be available for download on August 26, 2011. The deadline for confirming participation in the challenge is September 2, 2011. Emails received after this date will not be considered.
- Participants will send their results via email to IVUSChallenge@172.27.141.4. In addition to the results, participants are required to send a one-page summary of their method(s) and the corresponding list of authors. The method may or may not have been published.
- Participants are required to present their methodology during a teaser session that will be held at the workshop.
- General scores will be discussed by the organizers at a designated session at the workshop.
A journal paper summarizing the challenge results will be co-authored by the challenge participants and the challenge organizers. The paper will follow the format of the following paper: http://www.ncbi.nlm.nih.gov. From each team, the first and last authors of each method will be listed as co-authors. Prof. Kakadiaris will be the senior author of the paper.
I. A summary of the important dates:
- Availability of data (training set & test set) for download: August 26, 2011
- Last date for confirming participation: September 2, 2011
- Last date to submit results of segmentation on the test set to organizers: September 9, 2011
J. Material to submit
Template for the teaser session
Ioannis A. Kakadiaris, University of Houston (email@example.com)
Simone Balocco, University of Barcelona (firstname.lastname@example.org)
Petia Radeva, University of Barcelona (email@example.com)
Gozde Unal, Sabanci University (firstname.lastname@example.org)
Manolis Vavuranakis, University of Athens (email@example.com)
Andreas Wahle, University of Iowa (firstname.lastname@example.org)
Tomas Kovarnik, Charles University, Prague (email@example.com)
John J. Lopez, Loyola University Medical Center (firstname.lastname@example.org)
Stephane Carlier, UZ Brussel (email@example.com)
Questions: participant X
Answers: Simone Balocco, on behalf of the organizing committee
Q: Our algorithm extracts 50 points for each IVUS frame for the segmentation (50 for the lumen and 50 for the outer wall). The results of the training set you provide show 360 points. Should we perform some kind of interpolation so that our method also extracts 360 points for each IVUS frame?
A: The segmentation will be evaluated using the MATLAB script distributed with the datasets. In principle, the script performs a spline interpolation of the contours, so I believe there will be no problem in providing a smaller number of points. However, please check, using the MATLAB script, that the results with and without the interpolation are similar.
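The resampling discussed in this answer can be sketched as follows. Note that this sketch uses simple periodic linear interpolation over arc length, whereas the official MATLAB script is said to use splines:

```python
import numpy as np

def resample_closed_contour(pts, n_out):
    """Resample a closed (N, 2) contour to n_out points, uniformly
    spaced along its arc length, via periodic linear interpolation."""
    closed = np.vstack([pts, pts[:1]])              # close the loop
    seg = np.linalg.norm(np.diff(closed, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])     # arc length at vertices
    s_new = np.linspace(0.0, s[-1], n_out, endpoint=False)
    x = np.interp(s_new, s, closed[:, 0])
    y = np.interp(s_new, s, closed[:, 1])
    return np.stack([x, y], axis=1)

# Demo: a 50-point circle resampled to 360 points
t = np.linspace(0, 2 * np.pi, 50, endpoint=False)
circle50 = np.stack([np.cos(t), np.sin(t)], axis=1)
circle360 = resample_closed_contour(circle50, 360)
print(circle360.shape)  # (360, 2)
```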
Q: For the comparison between our results and the annotations: is the comparison performed point by point? In that case, should the first point we detect in the lumen be compared with the annotation's first point, and the same order then be followed for all successive points (clockwise or counter-clockwise)?
A: The contour performance will be computed using the Hausdorff distance, so in principle you do not need to guarantee the same order of successive points (according to my experiments). Please double-check this issue as well. If you have any problem, please do not hesitate to write to us.
Q: I do not have a script to convert polar coordinates to Cartesian. Can you please provide the participants with a script to convert a binary image to these coordinates? If not, can you please provide detailed instructions on how we should convert our data to coordinates?
A: If you use MATLAB, you may want to use the pol2cart function, combined with a spline interpolation. If you write such a function, please send the code to us so that it can be shared among the participants. You can add credits in the header of the MATLAB file (as I did for the performance evaluation script). Please be consistent with the provided manual annotations of the training set. For instance, verify that the contours you submit are correctly oriented (you can check this by superimposing the contours on the images; please use the provided annotations as examples).
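For participants not using MATLAB, the conversion asked about here can be sketched in Python with plain NumPy: extract the boundary pixels of a binary mask, then order them by angle around the centroid. The angular ordering is an assumption of this sketch; verify the resulting orientation against the provided annotations, as suggested above:

```python
import numpy as np

def mask_to_contour(mask):
    """Return the boundary pixels of a binary mask as an (N, 2) array
    of (x, y) coordinates, ordered by angle around the centroid."""
    # Boundary = mask pixels with at least one 4-neighbour outside the mask
    interior = (mask
                & np.roll(mask, 1, 0) & np.roll(mask, -1, 0)
                & np.roll(mask, 1, 1) & np.roll(mask, -1, 1))
    ys, xs = np.nonzero(mask & ~interior)
    cx, cy = xs.mean(), ys.mean()
    order = np.argsort(np.arctan2(ys - cy, xs - cx))
    return np.stack([xs[order], ys[order]], axis=1)

# Demo: a filled disk of radius 15 in a 64x64 image
yy, xx = np.ogrid[:64, :64]
disk = (xx - 32) ** 2 + (yy - 32) ** 2 <= 15 ** 2
contour = mask_to_contour(disk)
r = np.hypot(contour[:, 0] - 32, contour[:, 1] - 32)
print(r.min() > 14, r.max() <= 15)  # boundary pixels lie near radius 15
```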
Q: Can you please give more information on how to use the validation script? What kind of input does it need? The function has no comments.
A: The validation script simply computes a series of error measures given two contours. Each contour is a text file containing an [N, 2] column matrix of coordinates. The third parameter is the size of the image, which is different in each dataset.