Clinical Applications of Medical Modelling (Part 2): 3D Scanning


There’s no question that computer modeling, simulation, and additive manufacturing have transformed clinical medicine around the world. What’s always fascinated me, though, is the variety of ways these technologies have been implemented in different hospitals and even different departments within the same hospital. As a prototyping fellow at Sinai BioDesign, a design and prototyping group within the Mount Sinai Hospital in New York, I’ve seen firsthand almost all of the ways both 3D printing and scanning can be leveraged within a health system. As with many other tools and data streams, there’s no single way 3D medical data is acquired or used within the health system. In each case, though, these printing, scanning, and rendering applications are crucial to clinical care and research. Last week, in part 1 of this Expert Corner blog, I focused on use cases for 3D modeling and printing. For part 2 this week, I’ll be discussing clinical applications of 3D scanning technologies, the other side of the medical 3D coin.

Unlike medical modeling – the 3D rendering of anatomy (often patient-specific anatomy) – 3D scanning always resides in the digital domain. Nevertheless, there are numerous technologies employed to capture 3D scan data, each with its own set of pros and cons that, much like the various life cycles of anatomical data covered in last week’s Expert Corner, lend themselves to different clinical practices. To begin with, it is worth noting the three major groups of 3D scanning technologies: LIDAR, textured light, and photogrammetry.

LIDAR


Figure 1. An example of LIDAR scanning from Faro, a manufacturer of scanners, showing Jay Leno getting his face scanned by a hand-operated laser scanner, as well as the resulting 3D surface (inset)

LIDAR stands for Light Detection And Ranging and is essentially radar or sonar but with infrared light as the signal source instead of radio waves or sound waves. A scan is acquired by rastering an infrared laser light source over an object, capturing its contours and textures. Either the scanning source or the object being scanned must be spatially fixed, though, in order to establish a fiducial reference and coordinate system by which the cloud of 3D points can be plotted as the scan is taken. Together, these points are interpolated to form the surfaces of the scanned object. Depending on the scanning hardware being used, the resolution of LIDAR scanning ranges from several millimeters to a few microns. It’s also worth noting that LIDAR scanning is line-of-sight (LoS), meaning anything the light source can’t reach can’t be scanned. Sometimes a workaround can be achieved by changing the positioning of the scanner or the subject. The highest resolution LIDAR scanners are often employed during the quality assurance portion of the medical device development process to ensure manufacturing is within required tolerances. At lower resolutions, LIDAR scanning is sometimes used to capture complex surfaces of the human body, such as the face.
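To make the point-cloud step concrete, here is a minimal Python sketch of how raw LIDAR returns – a range plus the beam’s angles, all reported in a scanner-fixed coordinate frame – become the cloud of 3D points described above. The function name and inputs are purely illustrative and do not correspond to any particular scanner’s API.

```python
import numpy as np

def lidar_points_to_cloud(ranges, azimuths, elevations):
    """Convert raw LIDAR measurements (range + beam angles, all expressed in a
    scanner-fixed coordinate system) into Cartesian 3D points.

    ranges     : distances returned by the time-of-flight measurement (meters)
    azimuths   : horizontal beam angles (radians)
    elevations : vertical beam angles (radians)
    """
    r = np.asarray(ranges, dtype=float)
    az = np.asarray(azimuths, dtype=float)
    el = np.asarray(elevations, dtype=float)
    # Spherical -> Cartesian conversion; the scanner (or the subject) must stay
    # fixed so every point lands in the same reference frame as the scan rasters.
    x = r * np.cos(el) * np.cos(az)
    y = r * np.cos(el) * np.sin(az)
    z = r * np.sin(el)
    return np.column_stack([x, y, z])

# Example: three simulated returns from a single sweep
cloud = lidar_points_to_cloud(
    ranges=[1.02, 1.01, 0.99], azimuths=[0.00, 0.01, 0.02], elevations=[0.0, 0.0, 0.0]
)
print(cloud.shape)  # (3, 3) -> three XYZ points, ready for surface interpolation
```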

Textured Light


Figure 2. An example of textured light scanning, showing a pattern projected onto an object and the resulting 3D cloud of points reproduced through the algorithmic analysis of the distortion of the pattern (courtesy 3ders.org)

Textured light scanning works by projecting a series of differently sized grids, either in the visible or infrared spectrum, onto the surface of an object. A camera linked to the projection system detects the projected grid, and the distortions the scanned object introduces into the pattern are recorded. Based on how the pattern is distorted, an algorithm extrapolates the surface of the object, rendering it in 3D. As with LIDAR scanning, either the scanning unit or the object can be rotated to capture all angles, since this approach is also LoS. Textured light is typically regarded as the second-highest resolution 3D scanning technology, after LIDAR. The ultimate determinant of textured light scanning resolution is the combination of the resolution of the projected grid and the resolution of the camera detecting it – usually on the order of a millimeter.
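As a rough illustration of how pattern distortion encodes depth, the sketch below uses the standard triangulation relation for a projector–camera pair separated by a known baseline. The function name, baseline, and pixel values are hypothetical and heavily simplified; a real textured light system solves this for every point of the projected grid, at a density set by the grid and camera resolutions mentioned above.

```python
import numpy as np

def depth_from_pattern_shift(shift_px, baseline_m, focal_px):
    """Estimate depth from the lateral shift of a projected pattern feature.

    shift_px   : how far (in pixels) a grid line appears displaced in the camera
                 image relative to where it would fall on a flat reference plane
    baseline_m : separation between projector and camera (meters)
    focal_px   : camera focal length expressed in pixels

    Uses the standard triangulation relation  depth = focal * baseline / shift.
    """
    shift = np.asarray(shift_px, dtype=float)
    return focal_px * baseline_m / shift

# A grid feature shifted by 240 px, with a 10 cm baseline and a 1200 px focal length
print(depth_from_pattern_shift(240.0, 0.10, 1200.0))  # ~0.5 meters to that surface point
```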

It’s also worth pointing out that, for both LIDAR and textured light scanning, there are many materials that are “un-scannable” because of their interaction with the visible or infrared light being employed in the scanning process. Many metallic objects are extremely difficult to scan with LIDAR or textured light because of the way their surface reflects the incident scanning light. In these cases, a temporary matte coating can be applied to improve the optical properties of the material.

Photogrammetry

I often tell people that if there’s a method of 3D scanning that they’ve seen before, it’s photogrammetry. This approach captures a 3D rendering of an object by taking 2D pictures from multiple positions, either by using one camera and capturing one angle at a time or, more commonly, through the use of large camera arrays. Photogrammetry is particularly appealing in many fields because (at least in the case of multi-camera arrays) it is an extremely quick method of 3D scanning. In a 360-degree photogrammetry array – on the order of 80 cameras oriented around a platform at many different angles – capturing a 3D scan takes only as long as snapping a picture. Compare this to LIDAR and textured light, which can take several minutes even at their fastest.
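For a sense of what a photogrammetry pipeline is doing under the hood, here is a minimal sketch of its core step: triangulating a single 3D point from the same feature spotted in two calibrated camera views. The camera matrices and coordinates below are made up for illustration; production photogrammetry software repeats this across thousands of matched features and dozens of cameras.

```python
import numpy as np

def triangulate_point(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one 3D point seen in two calibrated views.

    P1, P2   : 3x4 camera projection matrices
    uv1, uv2 : (u, v) image coordinates of the same feature in each view
    Returns the estimated 3D point in world coordinates.
    """
    u1, v1 = uv1
    u2, v2 = uv2
    # Each view contributes two linear constraints on the homogeneous point X
    A = np.vstack([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    # Solve A X = 0 via SVD; the last right-singular vector is the solution
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two normalized cameras separated by a 1 m baseline along x (illustrative values)
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
# The same feature observed in both images
print(triangulate_point(P1, P2, (0.25, 0.1), (-0.25, 0.1)))  # ~[0.5, 0.2, 2.0]
```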

Figure 4. An example of a photogrammetry scan with (left) and without (right) the 2D texture mapped to the 3D surface. While some features, such as the eyebrow, are still relatively easy to distinguish, details around the eye become much harder to pinpoint.

As is often the case, though, the advantages of photogrammetry come with tradeoffs. Chief among them is the poor resolution of the technique. This often comes as a surprise to many because, at first glance, the 3D models rendered appear very detailed. This detail, though, does not reflect the geometry of the 3D object produced through the scanning technique but rather the 2D texture of the scanned object mapped onto a low-resolution surface. It is this 2D texture – composed of the individual images of several dozen cameras, often DSLRs – that creates the illusion of a detailed 3D model. Remove that texture (as would be the case for any single-color method of 3D printing) and all you’re left with is a blob generally shaped like the object you were scanning. Don’t get me wrong, there are certainly times when the geometry captured by photogrammetry is sufficient for the end application of the 3D model, and some geometries capture better than others (think about what a person with their arms raised might look like, versus someone with their arms crossed), but on the whole, this method has a resolution on the order of centimeters, or worse.

Clinical Applications


Figure 5. The Fit3D scanner and an example of the resulting scan (source: Fit3D)

So why did I go to all that effort to explain these three major groups of scanning technologies? Because different ones are employed by different clinical fields, and for different reasons. At Mount Sinai alone, two or more of these 3D scanning technologies are being used almost daily.

The first example I’d like to highlight is the Fit3D scanner used by Sinai’s Institute for Next Generation Healthcare. This device is part scanner, part scale, and uses a LIDAR scanner and rotating platform to measure a patient’s weight, body fat percentage, and water content, among other metrics (similar to any other high-tech digital scale today) while also capturing a full-body 3D scan. The end result, on the scanning side, is a marble statue-like rendering of the subject, which can subsequently be measured according to metrics like waist-to-hip ratio, height/waist circumference, and other emerging metrics that look at physiological health beyond BMI. For these metrics, a reasonably high-fidelity 3D reproduction of the patient is important in order to derive accurate measurements from the 3D scan. The texture of the subject (e.g., the color/pattern of their shirt) has no bearing on these metrics. As such, the LIDAR scanning approach is far superior to photogrammetry for the data it provides, even though the scanning process takes about a minute to complete, versus a few seconds.
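As a purely illustrative sketch (not Fit3D’s actual algorithm), one way to derive a measurement like waist circumference from a body-scan point cloud is to slice the cloud at a given height and sum the distances around the resulting ring. The function and slice heights below are hypothetical.

```python
import numpy as np

def slice_circumference(points, height_m, tolerance_m=0.005):
    """Approximate the circumference of a horizontal body cross-section.

    points      : Nx3 body-scan point cloud (x, y, z), with z as height in meters
    height_m    : height of the slice to measure (e.g., at the waist)
    tolerance_m : how far above/below the slice a point may sit and still count
    """
    ring = points[np.abs(points[:, 2] - height_m) < tolerance_m]
    # Order the slice points by angle around their centroid, then sum the
    # distances between consecutive points to approximate the perimeter.
    center = ring[:, :2].mean(axis=0)
    angles = np.arctan2(ring[:, 1] - center[1], ring[:, 0] - center[0])
    ordered = ring[np.argsort(angles), :2]
    closed = np.vstack([ordered, ordered[:1]])  # close the loop
    return float(np.sum(np.linalg.norm(np.diff(closed, axis=0), axis=1)))

# Waist-to-hip ratio from two slices of a (hypothetical) scan point cloud:
# waist = slice_circumference(scan_points, height_m=1.00)
# hips  = slice_circumference(scan_points, height_m=0.85)
# print(waist / hips)
```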

On the other hand, the lab of Dr. Ethylin Jabs, in Mount Sinai’s Genetics Department, uses a scanning system called 3dMD, which uses photogrammetry techniques to capture the faces of patients. Many genetic conditions manifest in the form of facial asymmetries and malformations, which can be captured by the scanning system and subsequently detected by clinicians reading the scans.


Figure 6. The 3dMD photogrammetry scanner (courtesy: 3dMD)

Given what I’ve just explained about the various scanning technologies, though, you might be wondering why the Jabs lab uses photogrammetry when this is the lowest resolution form of scanning. The situation only becomes more perplexing when you see that the 3dMD system is a fairly large, elaborate setup that’s not very easy to move. Couldn’t a LIDAR or textured light technique capture a better rendering of the face? While the answer to that question is “yes”, it doesn’t fully capture the way these scans are being used. It turns out, as I found out several years ago when I showed some different LIDAR and textured light scanners to Dr. Jabs, that there are some very good reasons why her lab uses photogrammetry to capture these 3D scans. First and foremost, many of her patients are infants – a group of people infamous for their inability to hold still. While it might technically be possible to use LIDAR or textured light to capture a scan of an infant, it’s certainly a much longer process than the 1-2 seconds it takes to snap a photo. Photogrammetry lets Dr. Jabs’ team see and process the data for many more patients than any other scanning technique would. Second, the facial metrics being extracted from these scans rely on the accurate identification of facial features (e.g., the outside corners of the eyes), which are rendered much more clearly through the texture mapped onto the 3D surface captured by photogrammetry. Thus, even if there is some inaccuracy incurred due to the lower spatial resolution, it’s more than made up for by the precision with which clinicians can identify facial features when extracting anatomical measurements to correlate with genetic data.
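To illustrate the kind of measurement being extracted, here is a tiny sketch computing the distance between two landmarks (e.g., the outer corners of the eyes) once they have been identified on the texture-mapped scan. The landmark names and coordinates are hypothetical.

```python
import numpy as np

def landmark_distance(landmarks, a, b):
    """Euclidean distance between two named 3D landmarks on a face scan.

    landmarks : dict mapping landmark names to (x, y, z) coordinates (mm)
    a, b      : names of the two landmarks to measure between
    """
    return float(np.linalg.norm(np.asarray(landmarks[a]) - np.asarray(landmarks[b])))

# Outer-canthal distance from two (hypothetical) points picked on the scan
face = {
    "right_exocanthion": (42.1, 31.5, 88.0),
    "left_exocanthion": (-41.8, 31.2, 87.6),
}
print(landmark_distance(face, "right_exocanthion", "left_exocanthion"))  # ~84 mm
```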

Conclusions

Be it 3D printing or 3D scanning, the adoption of 3D visualization and fabrication techniques in hospitals has been transformative in the delivery of patient care and the expansion of personalized medicine. By digitizing and systematizing this information, it has become possible to better prepare for surgery, improve the detection of genetic defects, and even improve the accuracy of measurements as basic as height and weight. We’re still a little ways off from everyone getting a full 3D scan workup as part of their yearly physical, but I think these examples and the spread of scanning technology overall show that we’re well on our way.

About the Author:

Joseph Borrello is currently a biomedical engineer and Ph.D. Candidate at Mount Sinai, working in the labs of Drs. Kevin Costa and Junqian Xu, in addition to managing digital fabrication operations within the Sinai BioDesign innovation team. Previously, he worked at 3D Systems on technical development in the consumer marketing department and as a liaison with engineering project management teams.

He received his bachelor’s in Biomedical Engineering from Macaulay Honors College at The City College of New York, where he remains active in the Zahn Innovation Center, an on-campus tech startup incubator.

Joseph is also an active member of the New York City startup ecosystem. He is the founder of Proto-Sauce, which is developing new materials for resin-based 3D printing, as well as the CTO of Biosapien, leveraging 3D printing to produce personalized therapeutics. He also tries to summarize as many of the local happenings as he can in his newsletter Magnitude and Direction.

Finally, Joseph is also the editorial assistant for the 3DHEALS Lattice newsletter, where he tirelessly curates the best content for the healthcare 3D printing and bioprinting community with the 3DHEALS team.

Related Articles:

Clinical Applications of Medical Modeling- Part One

3D Scanning and 3D Printing for Creating Affordable Prostheses

3D Scanning for Prostheses

What Matters When Enabling Your Dental Practice with 3D Printing and 3D Scanning
