Segmentation: The Real Struggles Behind Converting DICOM to Patient-specific 3D Printable Models


Anatomy: disarticulating congenital heart; Purpose: example of a Mustard switch procedure for Transposition of the Great Arteries; Print technique: Fused Deposition Modeling (FDM); Image source: computed tomography of the chest, 1 mm voxel resolution; Segmentation difficulty: very difficult; ensuring no overlap of structures across the many separate models was challenging; Credit: Chris Letrong, Shannon Walters, Stanford University Department of Radiology, 3D and Quantitative Imaging Laboratory.

In a future healthcare world, physicians may be able to 3D print any part of a patient's anatomy at the press of a button. At present, however, precise DICOM image segmentation is more complex than many articles and presentations suggest. I use medical 3D software daily for 3D replication, visualization, and quantification. Even before 3D printing, I observed that most automatic and manual segmentation tools could use significant improvement; having since attempted more than 50 patient-specific 3D anatomic models, I find that observation reinforced. Perhaps the repeatability and usability of many segmentation tools suffer from a lack of user input during algorithm development. Medical 3D software developers may have an opportunity to improve segmentation algorithms by leveraging user knowledge and preferences.
I perceive a disconnect between those who spend thousands of hours performing segmentation and those who design the software that segments DICOM data. Many anatomic structures that need 3D printing (and thus segmentation) are not clearly delineated, homogeneous, isolated, or uniform, due to pathology or anomaly. Image quality adds further variables: graininess, artifact, slice thickness, and anatomic coverage. These variations in image data from CT and MR scanners make automatic segmentation harder to implement successfully. Perhaps we can move toward semi-automatic segmentation, with software vendors accepting various forms of user-supplied logic to improve segmentation time and accuracy. Other limitations exist as well, such as user familiarity with software tools, understanding of anatomy, and understanding of why a 3D printed model is needed. This post will focus on segmentation tools, provide a perspective on current limitations rooted in image quality, and propose some actions to help us arrive at the distant future of truly automatic segmentation.

What is segmentation?

Each vendor has its own terminology, but the essence of segmentation is to identify and isolate the voxels that represent the anatomy of interest. Two common implementations are: a) assigning a mask to the dataset that flags the active voxels, or b) deleting or removing the voxels not included in the segmentation. How models and masks are managed also varies in methodology and terminology from vendor to vendor.
Methods of segmentation are both automatic and manual. Automatic segmentation can be threshold-based or atlas-based. Threshold-based segmentation uses voxel brightness and patterns throughout the DICOM data to isolate or remove structures. Atlas-based segmentation uses a database of anatomic structure shapes and attempts to find similar patterns in the current DICOM dataset. Many vendors offer a threshold-based automatic segmentation method and a "freehand" manual segmentation method.
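To make the two mask-versus-removal implementations concrete, here is a minimal sketch in Python. It assumes a CT volume already loaded as a NumPy array in Hounsfield units (the next section covers that conversion); the random volume and the 300 HU bone threshold are purely illustrative, not universal values.

```python
import numpy as np

# Stand-in for a CT dataset already converted to Hounsfield units (HU).
volume = np.random.default_rng(0).integers(-1000, 2000, size=(60, 256, 256)).astype(np.int16)

# (a) Mask-based: keep the full dataset and flag the "active" voxels.
bone_mask = volume >= 300                                 # boolean mask over the volume

# (b) Removal-based: copy the volume with non-segmented voxels cleared.
bone_only = np.where(bone_mask, volume, np.int16(-1000))  # -1000 HU = air background

print(f"segmented {bone_mask.sum()} of {volume.size} voxels")
```

The mask approach preserves the original data for later re-segmentation; the removal approach yields a smaller working dataset but discards the surrounding context.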

What should be segmented?

The segmentation in this post refers to identifying and isolating anatomic structures within DICOM datasets. Structures are typically differentiated in the datasets by either discrete pixel intensity values or relational differences in signal intensity. The context of any given DICOM acquisition must be taken into account to understand which intensity values represent which anatomic structures; contrast, dose, timing, and patient status can all impact the pixel intensity of any given organ.
For CT scans, brightness is measured on a standardized scale, Hounsfield Units (HU), that holds across most scanners; but many factors affect whether an image accurately reflects the expected brightness and patterns. MRI signal intensity depends on body habitus, coil selection, magnetic fields, distance to the coil, and much more. With more variables, MRI has a higher likelihood of signal variation when multiple factors combine. Phased-array MRI coils can produce gradients of signal intensity across a single structure; the posterior surface of a kidney may measure double the signal intensity of the anterior surface.
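One practical CT detail worth noting: the pixel values stored in DICOM files are scanner-specific and must be rescaled before standard HU thresholds mean anything. A minimal sketch using pydicom's standard RescaleSlope and RescaleIntercept header attributes, with an invented file name:

```python
import pydicom  # hypothetical single-slice example; the file name is invented

ds = pydicom.dcmread("slice_0001.dcm")

# Stored pixel values are scanner-specific; RescaleSlope and
# RescaleIntercept (from the DICOM header) map them to HU.
hu = ds.pixel_array * float(ds.RescaleSlope) + float(ds.RescaleIntercept)

print(f"HU range in this slice: {hu.min()} to {hu.max()}")
```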
Ultimately, segmentation of any anatomic structure rests largely on identifying the voxel intensity values that represent it, which is likely why most automatic segmentation tools are threshold-based. Unfortunately, many factors hinder optimal imaging and keep such segmentation from being a simple task.

Issues with segmentation

Several issues with threshold-based segmentation are:
Heterogeneous structures: osteoporosis is an example; a patchy-looking bony structure rather than the cleanly delineated bone of a healthy young person.
Image noise: mottles the appearance of the entire dataset, making homogeneous structures appear heterogeneous. This impairs the ability to identify entire organs and shapes (a toy illustration of this effect follows these lists).
Artifacts: metal implants, various types of motion, and other artifacts produce inaccurate representations of anatomic structures. To date, I have not seen any threshold-based segmentation able to correct for artifacts.
Several issues with atlas-based segmentation are:
Non-standard anatomic representations: many 3D prints are likely to be of non-standard anatomy, while atlas databases are typically built from normal anatomy.
Image noise: mottles the appearance of the entire dataset, making borders much harder for the algorithms to detect.
Artifacts: metal implants, various types of motion, and other artifacts produce inaccurate representations of anatomic structures. Borders of affected structures will not conform to atlas models because their signal characteristics will not match anything in the database.
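As the toy illustration promised above, the following Python sketch shows the image noise issue in miniature: a homogeneous synthetic "organ" segments as a single clean region, but the same phantom with noise added fragments into many pieces, and morphological cleanup recovers only part of it. The phantom geometry and noise level are invented for illustration.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)

# Toy phantom: a homogeneous spherical "organ" (value 100) on background 0.
z, y, x = np.ogrid[:64, :64, :64]
organ = ((z - 32)**2 + (y - 32)**2 + (x - 32)**2) < 20**2
phantom = np.where(organ, 100.0, 0.0)

clean_mask = phantom > 50                                 # one clean component
noisy = phantom + rng.normal(0.0, 40.0, phantom.shape)    # heavy illustrative noise
noisy_mask = noisy > 50                                   # mottled, fragmented mask

# Morphological opening suppresses speckle, at the cost of eroding true structure.
opened = ndimage.binary_opening(noisy_mask, iterations=2)

for name, mask in (("clean", clean_mask), ("noisy", noisy_mask), ("opened", opened)):
    print(name, ndimage.label(mask)[1], "connected components")
```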

What can be done?

First and foremost, all software must retain a robust manual segmentation tool as a backup to any automatic or semi-automatic method. Even if vendors adopt my suggestions or others', it is unlikely in the near term that every patient condition and image type can be accommodated by automatic segmentation. That said, I believe three steps can help vendors deliver automatic segmentation that works despite the many image quality issues that arise:

  1. User-driven development of segmentation algorithms
  2. Semi-automatic approach, allowing logic to drive the segmentation approach
  3. Validation of segmentation algorithms on standardized datasets

1. User-Driven Development

Developers should seek feedback from, and involvement of, users to improve segmentation algorithms. Much current medical 3D software is likely designed around radiologist workflows, largely because radiologists are the most obvious users of 3D software and have a role in purchasing decisions. However, there is a growing cohort of 3D imaging laboratories that rely on non-radiologists (technologists and others) to perform advanced functions on patient DICOM datasets, and this population will likely grow as 3D printing and other forms of visualization and quantification proliferate. Vendors that accommodate only radiologists' concerns may not meet the needs of other users.

2. Semi-Automatic Segmentation

Given the myriad image quality issues that will not dissipate soon, developers should acknowledge the need to overcome heterogeneity, artifact, and image noise. This could manifest as an optional questionnaire that appears when segmentation begins: it might ask the user about image quality factors and whether and where artifacts exist, prompt the user to set a bounding box, and perhaps ask the user to identify each structure of interest with a click. With such logic, future atlas or threshold tools would have far more information on which to base the segmentation.
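As a sketch of what such questionnaire-driven logic might look like, the following uses SimpleITK's existing ConnectedThreshold region-growing filter. The seed click, intensity bounds, bounding box, and file names are all hypothetical stand-ins for user answers, not any vendor's actual interface.

```python
import SimpleITK as sitk

# Hypothetical questionnaire answers from the user:
seed = (256, 300, 60)                     # one click inside the structure (x, y, z)
lower_hu, upper_hu = 150.0, 600.0         # expected intensity range (illustrative)
box_index, box_size = (100, 100, 20), (312, 312, 80)  # bounding box around anatomy

image = sitk.ReadImage("ct_volume.nrrd")  # file name is hypothetical

# Restrict the search to the user's bounding box, then grow a region
# outward from the seed click, constrained to the stated intensity range.
roi = sitk.RegionOfInterest(image, size=box_size, index=box_index)
seed_in_roi = tuple(s - i for s, i in zip(seed, box_index))
mask = sitk.ConnectedThreshold(roi, seedList=[seed_in_roi],
                               lower=lower_hu, upper=upper_hu)

sitk.WriteImage(mask, "segmentation_mask.nrrd")
```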

3. Validation of Segmentation

This may be far-fetched, but it would be valuable to have an independent set of DICOM data to which 3D software could be applied and scored. Segmentation could be one category of analysis, with subcategories for vendor, MR, and CT, and further subcategories for artifact, image noise, and so on. If all developers were required to test on the same data, users could more effectively evaluate which tools best fit their specific site and requirements. To apply such a benchmark fairly, anonymized data from each CT/MR vendor, spanning the different kinds of equipment and image quality variables, would have to be collected and prepared for analysis.
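Scoring itself could rest on standard overlap metrics such as the Dice coefficient, which is widely used for comparing a candidate segmentation against reference ground truth. A minimal sketch, with invented file names standing in for benchmark data:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice overlap between two boolean masks: 2|A and B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    total = pred.sum() + truth.sum()
    return 1.0 if total == 0 else 2.0 * np.logical_and(pred, truth).sum() / total

# Hypothetical benchmark use: score a vendor's mask against reference data.
vendor_mask = np.load("vendor_segmentation.npy")   # file names are invented
reference = np.load("reference_segmentation.npy")
print(f"Dice score: {dice_coefficient(vendor_mask, reference):.3f}")
```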

Where to begin?

Just as the medical community had to show 3D printer vendors the value of addressing its needs, we will need to show software vendors the return on investing resources in our segmentation problems. As this community grows, these issues will carry more weight. Radiologists involved with 3D printing today are in a position of leverage and should begin demanding that segmentation tools accommodate the myriad needs that 3D printing will ultimately present; the semi-automated approach suggested here may well benefit radiologists in workflows beyond 3D printing. All of us can track issues, articulate them carefully, and think about ways to rate software on objective measures such as the validation method suggested above. Lastly, without direct involvement between developers and users, it is difficult to see how segmentation tools will ever truly meet the needs of users and the realities of the data they must work with.
Shannon Walters, MS RT(MR), Stanford University Department of Radiology, 3D and Quantitative Imaging Laboratory
Shannon has been a radiologic technologist since 1998 and completed a Master's in Information Systems in 2014. He has worked in the Stanford 3D and Quantitative Imaging Laboratory since 2008, assuming the role of Manager in 2013. The field of advanced visualization is a perfect fit for Shannon's intense interests in computers and healthcare. Shannon has been involved with 3D printing since 2013 and has generated more than 50 patient-specific models as of mid-2016.

Related Articles:

Part 1: Considerations for Implementing a 3D Printing Core Service in Your Hospital: A Technical Analysis

Part 2: Considerations for Implementing a 3D Printing Core Service in Your Hospital: A Technical Analysis

Interview: Jeffrey Sorenson, President and Chief Executive Officer of TeraRecon
