All Posts By optinav

Why LabVIEW? Part 3


Code clarity and hardware support in LabVIEW

 

In this article I will continue with the programming aspects of our tool of choice – this time code clarity – and cover hardware support. I will also highlight some non-programming benefits of using LabVIEW. This is the last part of the series.


Why LabVIEW? Part 2


Fast LabVIEW programming

 

Welcome to the second part of the article in which I explain why we use LabVIEW in our company. This part covers a purely programming aspect: the speed of coding.

Coding in LabVIEW is fast, which makes it a great environment for prototyping, testing ideas, or just quick development. Here are some reasons why.

Clicking vs. typing

LabVIEW is a "clicking" language, and clicking code together is faster than typing it. To speed things up further you can use Quick Drop – that is typing, but only to search for functions, so it does not really break the convention of graphical programming and graphical code. Just imagine coding a for loop in C++ versus LabVIEW: in C++ there are plenty of non-letter characters and newlines to write, while in LabVIEW I just drop the loop. The amount of typing matters.

Palettes vs. libraries

Now something about programming without being familiar with the framework you are using. In brief, the functions you can use in an IDE are stored in libraries. Because I often work with libraries that are new to me, I frequently search through them to see what I can use to achieve a particular effect in my program. In, let's say, .NET I would have to scan through a pop-up listing all of a class's members or read the documentation – it feels like visiting a library to find the information you need before the Internet existed. In LabVIEW we get palettes. Like a painter keeping the currently used colors on a palette, LabVIEW shows its functions categorized in palettes, one category at a time (technically, entries in a multi-level pop-up menu). Need more details on a particular function? There's context help. Combined with palette search, this lets us quickly learn what our environment is capable of.

LabVIEW palette of structures and context help

LabVIEW is very high-level

Another thing is that LabVIEW is a very high-level programming language made for engineers. Beyond advanced operations on basic data types, it ships with ready-to-use solutions for many technical and scientific problems – field-specific operations such as implementations of signal filters or curve fitting. The IDE also lets us build applications with simple, neat graphical user interfaces out of the box. There is no need to install extra packages or write your own libraries.

LabVIEW filters palette

Clarity

The last thing is speed through transparency: you are faster when you understand a project, and you understand it when its code is clear. This matters especially when joining an existing project or forking an application. Most of the time, one glance through the main VI's code is enough to understand a well-written application's architecture and general idea (some experience is needed too, of course). It is extremely easy and fast to get fully involved in a project that uses a well-known architectural framework, such as the JKI State Machine – that is why companies tend to reuse the same architectures, developed in-house or by third parties. Getting familiar with code is not so easy in text-based languages, where you may have to scan through the project structure and many source files.

Multi-threading

Developing multi-threaded applications is quick too. All loops and any other operations that are not forced to run consecutively execute in parallel, as long as the system has free resources. LabVIEW's two-dimensional graphical code supports this concept naturally: looking at two loops drawn one under the other, you intuitively expect them to run in parallel, especially since execution is dataflow-driven, from left to right. Multi-threading by default, spread across separate cores, is simply LabVIEW's normal way of working – speed gained through simplicity. Unfortunately, not everything is rosy: the problem of communication between threads remains the same as in other languages.

Two for loops executed in parallel
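For contrast, here is a rough sketch (in Python, not LabVIEW) of what the same "two loops running side by side" pattern takes in a text-based language: each loop has to be wrapped in a thread explicitly, and the communication between the loops – here a queue – is exactly the part that stays as hard as anywhere else.

    import queue
    import threading
    import time

    data = queue.Queue()          # the channel the two loops use to talk to each other

    def producer_loop():
        for i in range(5):
            data.put(i)           # hand a value over to the other loop
            time.sleep(0.1)
        data.put(None)            # sentinel: tell the consumer loop to stop

    def consumer_loop():
        while True:
            item = data.get()
            if item is None:
                break
            print("consumed", item)

    # In LabVIEW, dropping two loops on the diagram is enough; here the parallelism
    # must be spelled out (and CPU-bound work would even need processes in CPython).
    t1 = threading.Thread(target=producer_loop)
    t2 = threading.Thread(target=consumer_loop)
    t1.start(); t2.start()
    t1.join(); t2.join()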

Conclusion

To sum up, LabVIEW supports what we do at OptiNav with its speed of coding, the concept of palettes, the clarity of its code, functions specific to our domain, and the way it handles multithreading.


Sławek

 

Image processing and subpixel edge detection


One of the most useful tools that allow engineers to design vision systems for detecting or recognizing objects in images is subpixel edge detection. This article explains the concept and the techniques that lay its foundation.

Image representation

The basis for many measurement applications using optical methods is the intensity image. The intensity, perceived as brightness in the image, is mapped to a digital gray-scale image; such images are therefore called grayscale images. The image is a grid composed of individual picture elements, so-called pixels, and each pixel holds a numerical value representing its gray value. In a camera with a resolution of 8 bits, the gray scale runs from 0 for black to 255 for white; with 12-bit resolution there are 4096 gray levels. For processing and storage in software, a grayscale image can be represented as a matrix (Figure 1).


Figure 1. Computer-based representation of a grayscale image as a matrix
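As a tiny illustration (the values below are made up), this is all a grayscale image is from the software's point of view – a matrix of gray values, here written in Python/NumPy:

    import numpy as np

    # 8-bit grayscale image: gray values from 0 (black) to 255 (white)
    img = np.array([
        [ 12,  15, 200, 210],
        [ 10,  18, 205, 215],
        [  9,  20, 198, 220],
    ], dtype=np.uint8)

    print(img.shape)              # (rows, columns) = (3, 4)
    print(img.min(), img.max())   # darkest and brightest gray value in the image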

There are different formats for storing digital images. For use in metrology, only image formats suitable for lossless transfer of the image data can be used. A lossy transfer, as used for example in image compression to reduce file size, changes the image and may affect the location of edges and thus the measurement result. Suitable lossless formats include BMP (Windows bitmap), PNG (Portable Network Graphics [1]) and TIFF [2].

Image processing operators

There are different so-called “operators” for digital image processing. A distinction is made between point, local, global, and morphological operators.

Image processing operations that change a pixel based only on its value and its position in the image, without considering the pixel's neighborhood, are called point operations. Examples of point operators are brightness correction and the inversion of a grayscale image. The “gamma correction” commonly used to adapt images to human visual perception is also a point operator; it applies a power function with an exponent called gamma. Raising the normalized gray values to this power stretches one part of the gray-value range non-linearly and compresses another. With gamma values larger than one the image becomes darker, with values less than one it becomes brighter.
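A minimal sketch of two point operators in Python/NumPy – inversion and gamma correction of an 8-bit image – to make the idea concrete:

    import numpy as np

    def invert(img):
        """Point operator: inversion of an 8-bit grayscale image."""
        return 255 - img

    def gamma_correct(img, gamma):
        """Point operator: gamma correction by raising the normalized gray
        values to the power gamma (> 1 darkens, < 1 brightens the image)."""
        normalized = img.astype(np.float64) / 255.0
        corrected = normalized ** gamma
        return np.clip(corrected * 255.0, 0, 255).astype(np.uint8)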

Figure 2 shows the use of two other point operators. For contrast enhancement, also called histogram stretching, the gray values are rescaled so that the entire available gray scale is used. For image segmentation, global thresholding is often used: a binary (black-and-white) image is created by displaying pixels below the threshold as black and those above it as white. This method is also known as binarization. A suitable threshold value can be determined from the histogram of the gray values when their distribution is bimodal. A well-known computational method for choosing the threshold is presented in [3].


Figure 2. Contrast enhancement by histogram stretching, binary image with threshold from a bimodal histogram, and edge image derived from the binary image
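The two operators of Figure 2 can be sketched in a few lines of Python/NumPy; the threshold is assumed to be given, e.g. read from a bimodal histogram or computed with Otsu's method [3]:

    import numpy as np

    def stretch_contrast(img):
        """Histogram stretching: rescale the gray values so that the
        entire available gray scale (0..255) is used."""
        lo, hi = int(img.min()), int(img.max())
        stretched = (img.astype(np.float64) - lo) / max(hi - lo, 1) * 255.0
        return stretched.astype(np.uint8)

    def binarize(img, threshold):
        """Global thresholding (binarization): pixels below the threshold
        become black (0), pixels at or above it become white (255)."""
        return np.where(img < threshold, 0, 255).astype(np.uint8)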

For local operators, the new gray value of a pixel depends not only on its previous value but also on the gray values of the pixels in its environment. The environment is defined by a so-called neighborhood; a typical one is the 8-neighborhood (3 x 3 pixels). Figure 3 shows two operators that consider the pixel itself and its eight neighbors – in this context they act as filters for eliminating image distortions.


Figure 3. Local operators for eliminating image distortion: mean and median filter

Local filters in which each pixel of the filtered image is calculated as a weighted sum of the pixels around the pixel of interest are referred to as linear filters; the underlying mathematical procedure is a convolution. There are many different linear filters [4]. Filters such as the average (mean) filter described above, or the Gaussian filter, whose weighting factors decrease with the distance to the central pixel according to the shape of the Gaussian curve, are used to smooth the image – they act as low-pass filters. The median filter, in which the median of the surrounding pixels determines the filtered pixel, is also a low-pass filter, although it is not linear.
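A straightforward (deliberately unoptimized) Python/NumPy sketch of the two filters from Figure 3, both operating on the 3 x 3 neighborhood:

    import numpy as np

    def filter_3x3(img, reducer):
        """Local operator with a 3 x 3 neighborhood (the pixel plus its 8 neighbors).
        `reducer` maps each 3 x 3 block to the new gray value; border pixels are
        left unchanged to keep the sketch short."""
        out = img.astype(np.float64).copy()
        for r in range(1, img.shape[0] - 1):
            for c in range(1, img.shape[1] - 1):
                block = img[r - 1:r + 2, c - 1:c + 2].astype(np.float64)
                out[r, c] = reducer(block)
        return np.clip(out, 0, 255).astype(np.uint8)

    # Example (assuming `noisy_image` is an 8-bit grayscale NumPy array):
    # mean_filtered = filter_3x3(noisy_image, np.mean)      # linear low-pass (average) filter
    # median_filtered = filter_3x3(noisy_image, np.median)  # non-linear low-pass (median) filter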

Edge detection

In contrast to the low-pass filters, high-pass filters are used for highlighting edges.

Figure 4 shows an edge image generated with the so-called “Sobel filter”. Starting from the image captured by the camera, preprocessing is first done to remove distortion with the low-pass filters described above. Then edges are highlighted in two directions by the two filter masks of the Sobel filter, and the superposition of the two results yields the edge image. This type of edge filter is based on discrete differentiation of the image and is therefore also referred to as a gradient filter.

Gradient filters have high-pass properties and therefore amplify image noise, so they are designed to average the result over multiple rows or columns. Another representative of this kind of edge filter is the Prewitt filter [4, 5]. The edge positions can also be determined from the zero crossings of the second derivative, as the Laplacian filter does [4, 5]. There are, moreover, edge detectors that combine several filters, such as the Canny edge detector [6].


Figure 4. Edge detection using Sobel filter
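A hedged Python/NumPy sketch of the Sobel step described above – two directional filter masks applied by discrete convolution and then superposed into one edge image:

    import numpy as np

    SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
    SOBEL_Y = SOBEL_X.T      # the same mask rotated for the second direction

    def convolve_3x3(img, kernel):
        """Discrete convolution with a 3 x 3 kernel (borders skipped for brevity)."""
        out = np.zeros(img.shape, dtype=np.float64)
        flipped = kernel[::-1, ::-1]   # convolution = correlation with the flipped kernel
        for r in range(1, img.shape[0] - 1):
            for c in range(1, img.shape[1] - 1):
                block = img[r - 1:r + 2, c - 1:c + 2].astype(np.float64)
                out[r, c] = np.sum(block * flipped)
        return out

    def sobel_edge_image(img):
        """Superpose the two directional gradient images into one edge image."""
        gx = convolve_3x3(img, SOBEL_X)   # edges highlighted in the x direction
        gy = convolve_3x3(img, SOBEL_Y)   # edges highlighted in the y direction
        magnitude = np.hypot(gx, gy)
        return np.clip(magnitude / max(magnitude.max(), 1e-12) * 255.0, 0, 255).astype(np.uint8)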

A binary image (Figure 2) is also suitable for edge determination. Here the global threshold value used to segment the image into foreground and background determines the edge position. This approach is beneficial when only one edge must be identified in an image containing several edges (e.g. shadow edges) or when the edge has low smoothness (a “fringed” or “pixelated” edge).

In images from camera sensors on coordinate measuring machines (CMMs), edges are determined along search paths that run perpendicular to the edges of the measurement object's nominal shape (Figure 5). For this purpose, a region around the edge (ROI – region of interest, or AOI – area of interest) is selected within the camera image (FOV – field of view); the region follows the shape of the edge (e.g. a ring or ring segment for a circle). In this area the search beams are generated, and along each search beam an edge point is determined. The edge criterion is either the maximum of the first derivative along the search path or a threshold value. The first criterion corresponds to the gradient-based edge detection described previously; the second corresponds to edge detection based on a binary image, as shown above. When the threshold criterion is used, a distinction is made between a global threshold, which applies to the entire edge region, and a local threshold, which is determined individually for each search area or search path.


Figure 5. Determination of edges along search paths using different criteria
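Assuming the gray values along one search path have already been sampled into a 1-D array `profile`, the two edge criteria can be sketched like this in Python/NumPy:

    import numpy as np

    def edge_index_threshold(profile, threshold):
        """Criterion 1: first position along the search path where the gray
        values cross a (global or local) threshold."""
        above = np.asarray(profile) >= threshold
        crossings = np.flatnonzero(above[1:] != above[:-1])
        return None if crossings.size == 0 else int(crossings[0]) + 1

    def edge_index_gradient(profile):
        """Criterion 2: position of the maximum of the first derivative,
        i.e. the steepest gray-value change along the search path."""
        derivative = np.diff(np.asarray(profile, dtype=np.float64))
        return int(np.argmax(np.abs(derivative)))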

Subpixel edge detection

For a more precise determination of the edge position below the pixel resolution, an interpolation between the pixels is used, which is called a sub-pixel interpolation (Figure 6) [7].


Figure 6. Subpixel interpolation [8]


Figure 7. Grey value line and its 1st derivative along a search path
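One common way to push the gradient criterion below pixel resolution is to fit a parabola through the peak of the first derivative and its two neighbors; the vertex of the parabola gives a sub-pixel offset. This is only one possible interpolation scheme (see [7, 8] for others), sketched here in Python/NumPy:

    import numpy as np

    def subpixel_edge_position(profile):
        """Return the edge position along the search path with sub-pixel precision:
        locate the peak of the first derivative of the gray-value profile and refine
        it by parabolic interpolation over the peak and its two neighbors."""
        d = np.abs(np.diff(np.asarray(profile, dtype=np.float64)))
        k = int(np.argmax(d))
        if k == 0 or k == len(d) - 1:
            return k + 0.5                  # peak at the border: nothing to interpolate
        y0, y1, y2 = d[k - 1], d[k], d[k + 1]
        denom = y0 - 2.0 * y1 + y2
        offset = 0.0 if denom == 0 else 0.5 * (y0 - y2) / denom   # vertex of the parabola
        # +0.5 because derivative sample k lies between profile pixels k and k + 1
        return k + 0.5 + offset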

A correct determination of the edge position requires that the light intensity always stays below the signal saturation of the camera, because saturation can shift the apparent edge position.

To calculate the features of the product's shape, sequences of pixels are formed from the detected edge points by contour tracing [9]. These contour points are then transformed into coordinates, taking into account the image scale and the position of the camera sensor in the CMM's coordinate system (Figure 8).


Figure 8. Simplified presentation of image processing for determination of circle’s parameters without subpixel interpolation
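To give an idea of the last step – turning contour points into a feature such as a circle's center and radius – here is a simple algebraic least-squares circle fit in Python/NumPy. It is only an illustrative sketch, not necessarily the fitting method behind Figure 8; `xs` and `ys` are assumed to be the contour point coordinates already transformed into the CMM's coordinate system:

    import numpy as np

    def fit_circle(xs, ys):
        """Algebraic least-squares circle fit: solve x^2 + y^2 + a*x + b*y + c = 0
        for a, b, c in the least-squares sense, then recover center and radius."""
        xs = np.asarray(xs, dtype=np.float64)
        ys = np.asarray(ys, dtype=np.float64)
        design = np.column_stack([xs, ys, np.ones_like(xs)])
        rhs = -(xs ** 2 + ys ** 2)
        (a, b, c), *_ = np.linalg.lstsq(design, rhs, rcond=None)
        cx, cy = -a / 2.0, -b / 2.0
        radius = np.sqrt(cx ** 2 + cy ** 2 - c)
        return cx, cy, radius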

More information can be found in the literature on image processing [4, 5, 10, 11].

Bibliography

  1. ISO/IEC 15948 Informationstechnik – Computergrafik und Bildverarbeitung – Portable Netzwerkgrafik (PNG): Funktionelle Spezifikation (English: Information technology – Computer graphics and image processing – Portable Network Graphics (PNG): Functional specification) 2004-03.
  2. TIFF, Revision 6.0, Adobe Systems Incorporated, USA 1992. (Internet, 14.04.2016: http://www.adobe.com/Support/TechNotes.html).
  3. Otsu, N.: A threshold selection method from gray-level histograms, IEEE Transactions on Systems, Man, and Cybernetics, Vol. 9, No. 1, 1979, pp. 62-66 (1979).
  4. Jähne, B.: Digitale Bildverarbeitung und Bildgewinnung, Springer-Verlag Berlin 2012, ISBN-13: 978-3642049514 (English: Jähne, B.: Digital Image Processing and Image Formation, Springer-Verlag Berlin 2016, ISBN-13: 978-3642049491).
  5. Demant, C., Streicher-Abel, B., Waszkewitz, P.: Industrielle Bildverarbeitung: wie optische Qualitätskontrolle wirklich funktioniert, Springer Verlag, Berlin 2011, ISBN: 978-3-642-13096-0 (English: Demant, C., Streicher-Abel, B., Waszkewitz, P.: Industrial Image Processing, Visual Quality Control in Manufacturing, Springer Verlag, Berlin 2013, ISBN 978-3-642-33904-2).
  6. Canny, J.: A computational approach to edge detection, IEEE Transactions on Pattern Analysis and Machine Intelligence, IEEE Computer Society Washington, DC, USA, vol. 8, 1986, pp. 679-698.
  7. Töpfer, S.: Automatisierte Antastung für die hochauflösende Geometriemessung mit CCD-Bildsensoren, Dissertation, Technische Universität Ilmenau 2008.
  8. Imkamp, D.: Multisensorsysteme zur dimensionellen Qualitätsprüfung, in: PHOTONIK Fachzeitschrift für optische Technologien, AT-Fachverlag GmbH Fellbach, Ausgabe 06/2015 (Internet, 14.02.2016: www.photonik.de/multisensorsysteme-zur-dimensionellen-qualitaetspruefung/150/21002/317557). (English: Imkamp, D.: Multi sensor systems for dimensional quality inspection, in: LASER+PHOTONICS 01/2016, AT-Fachverlag GmbH Fellbach (Internet, 14.02.2016: http://www.photonik.de/multi-sensor-systems-for-dimensional-quality-inspection/150/21404/321005)).
  9. Pavlidis, T.: Algorithms for Graphics and Image Processing, Rockville, MD: Computer Science Press, USA 1982.
  10. Sackewitz, M. (Hrsg.): Leitfaden zur industriellen Bildverarbeitung, Vision Leitfaden 13 (1. Auflage Vision Leitfaden 1, English: Bauer, N. (Hrsg.): Guideline for industrial image processing), Fraunhofer Allianz Vision, Erlangen 2012, ISBN 978-3-8396-0447-2.
  11. VDI/VDE-Richtlinie 2632 Blatt 1 (part 1) Industrielle Bildverarbeitung – Grundlagen und Begriffe (English: Machine vision – Basics, terms, and definitions), April 2010.

Why LabVIEW? Part 1


Add-ons and 3rd party software

 

In this series of articles I will describe some of the positive aspects of LabVIEW that let us use it as our primary programming environment and get things done. The list is totally incomplete and subjective, but I hope it is still an entertaining read. In this part I will show how easy it is to extend LabVIEW's functionality.

Let's start with some matters which matter before we actually begin any application. We hardly ever use only LabVIEW's base capabilities; we usually need extra hardware support and libraries related to a specific field of engineering or programming, so we install LabVIEW modules and toolkits. Since the beginning of my LabVIEW career I had wondered what exactly distinguishes a module from a toolkit, because NI's sites draw no clear line between them and call them all add-ons. From my observations I can now say, in short, that a module is a big add-on made and licensed only by NI and not available in JKI's VI Package Manager, while a toolkit is a smaller one that can be made and licensed by anyone and is available in VIPM.

LabVIEW Modules

I usually install modules together with LabVIEW, because the set of modules we use is fixed, unlike our workstations. For everyone on our developer team the machine vision modules – Vision Acquisition Software and the Vision Development Module – are a must-install. They give us the ability to acquire and process images from any camera quickly and conveniently; vision is our main domain, so they turn out to be quite useful. There is also a decent set of modules for other modern engineering fields, including the Real-Time Module and the Robotics Module.

Toolkits

There are also add-ons that can be made by anyone. In other programming languages, third-party software usually means putting some effort into getting it to work – enough effort that people write tutorials on how to install it, as in the case of OpenCV. Quite easy, but someone may still need a tutorial. In LabVIEW we don't need that, because there is only one IDE in which you can use LabVIEW third-party software, and it is LabVIEW itself. There is also only one proper source of such extensions, the LabVIEW Tools Network, accessible through VIPM (which is by the way the best, if not the only, way of installing toolkits). So we just open VI Package Manager, select an extension, click install, and it works. Out of the box, you might say – packages made by anyone are no problem. Another thing is that we have a good selection of toolkits – don't reinvent the wheel, look for it first. One of our must-install tools in the toolkit category is OpenG, a great community-developed extension for basic programming tasks – arrays, strings, and so on. It is like Boost for C++: independent and acclaimed.

Someone is about to install a database toolkit with VI Package Manager

Other 3rd party code

During our online search for solutions we can also come across pieces of code that are not distributed as toolkits (with palette menus, documentation, etc.) – just VIs. No problem: if we have all the needed dependencies and a LabVIEW version that is not lower than the one the code was written in, it will just work after selecting the VI through the Functions palette.

3rd party binaries

You can also run dynamic link libraries and executables from LabVIEW. To run exe files you just use the System Exec VI – the first few uses require some struggling with its inputs, but then it becomes pleasant. For C-style DLLs, we can call functions by selecting them manually and defining their inputs and outputs with the Call Library Function Node, or try automatic wrapper creation with the Import Shared Library option. Finally, there are .NET libraries, and they are quite magical: to use one, all you have to do is select the library file and a method using the .NET palette VIs.

Call Library Function Node – call configuration

Conclusion

LabVIEW lets us overcome almost all of our engineering challenges that involve programming. It offers good hardware support and a wide enough choice of extensions. Everything just works without any environment configuration, which saves us time, and it gives us a convenient way of reusing external binary code.


Sławek

3D printing in orthopedics. Part 2


So far we have discussed only the "out of the OR" uses of 3D printing; now it is time to get our hands dirty.

Surgical instruments

The most common application of rapid prototyping for surgical instruments is the creation of guides. A surgeon facing a complex procedure, or one that requires high precision, first accurately measures the patient's anatomy and designs a tool fitted to it based on a 3D reconstruction. Such an instrument is patient-specific and easy to position; it guides other surgical instruments, like drills or saws, in a way that ensures the surgeon sticks to the plan. There are a few demonstrations of such procedures online. One of them is the story of a woman with a complex fracture of her forearm who regained full mobility after her doctor performed the surgery with customized guides – something that would otherwise not have been possible. OptiNav has taken up the idea of employing rapid prototyping in guide creation. Our software, Bone Extractor and Guider Creator, is dedicated to orthopedists planning hip resurfacing. The first module allows extracting bone tissue from a CT scan, with the possibility of separating the femur from the pelvis. The obtained model can be printed for procedure planning or loaded into the second module for digital estimation of the correct drill axis. A guide is then generated automatically and printed; it ensures a proper drill path through the femoral neck. The first module is already available for testing.

In maxillofacial surgery, such guides are often used to constrain the operative placement to a particular area. They are already a typical approach in mandibular reconstruction, where they ensure correct fibula resection and positioning [1].

The use of 3D-printed guides, compared to conventional instruments, has resulted in better-positioned implants. This, in turn, can lead to increased implant longevity and fewer side effects for patients.

Another approach is to print single-use standard surgical instruments. The goal here is to provide access to cheap tools that do not require sterilization and could be used in places where supplying them is restricted. A good example is recent research on how to provide surgical equipment on long-duration space missions; another studied purpose is supplying surgical instruments to underserved or less developed parts of the world. Important factors examined were per-tool price, sterilization requirements and durability. As the authors found, such an approach may reduce the cost to a tenth of the standard price, and the production conditions ensure that the created tools are sterile [2].

Implants and prostheses

Now, we have reached the most promising application of 3D printing in medicine.

Nowadays the market offers a wide range of prosthetic and implanted devices whose purpose is to replace, support or enhance the functionality of biological structures. Most treatments can be conducted with standard implants or prostheses, but there are cases when such an approach is impossible. For those cases, patient-specific solutions are required; for example, a patient in Hampshire was given a customized 3D-printed hip implant. In cranioplastic surgery, patient-specific plates for facial reconstruction are created based on CT imaging, which provides well-fitting, more aesthetic implants that preserve the patient's anatomy. In consultation with the patient, the shape of the prosthesis can be digitally manipulated, printed and fitted so that the final result meets expectations [3].

Rapid prototyping may also assist surgeons in treating conditions that have no standard treatment. Such a case occurred when a few-month-old baby with a rare condition (tracheomalacia) got a chance of survival thanks to the outstanding invention of his doctor, who designed and printed a bioresorbable implant specific to him, allowing the baby to breathe freely.

A more futuristic vision is printing entire organs with real living cells. Although this technology is still under development, some amazing use cases have already been published, including:

  • 3D cell printing achieved by modifying a standard HP inkjet printer [4],

  • a 3D-printed transplantable kidney [5],

  • a successfully implanted 3D-printed bladder [6],

  • skin grafts [7].

Some of these results were generated by actual deposition of bioink (stem cells) by a printer onto thermosensitive, biodegradable matrices (inkjet printing) [8]. Others require laser assistance or extrusion methods, which are described in detail by Christian Mandrycky et al. [4]. There is also a method of organ "printing" in which cells are applied to 3D-printed biodegradable scaffolds [9]. Although no successful organ print concerning orthopedics has been reported, these methods have applications beyond the most exciting products mentioned above – from vessels, cardiac valves and neuronal tissues to bone, cartilage and muscle, which potentially have an impact on orthopedics [4]. There is also scope for using the technology to build scaffolds impregnated with antibiotics that function as a drug delivery system (DDS), e.g. in spinal tuberculosis [10]. Such solutions can increase efficacy and decrease the risk of adverse reactions by making drug delivery patient-customized.

Conclusion

Rapid prototyping offers a broad range of applications in orthopedics. As the subject is broad, it is impossible to discuss in depth every aspect of this technology used in medicine. For everyone who has already caught a spark of interest in the subject, there are a few videos to start your personal search.

Bibliography

  1. Levine, J.P. et al., (2012), “Computer-Aided Design and Manufacturing in Craniomaxillofacial Surgery: The New State of the Art”, Journal of Craniofacial Surgery, 23(1), pp. 288-293

  2. Rankin T.M. et al., (2015), “Three-Dimensional Printing Surgical Instruments: Are We There yet?”, Journal of Surgical Research, 189(2), pp.193-197

  3. Bum-Joon, K. et al., (2012), “Customized Cranioplasty Implants Using Three-Dimensional Printers and Polymethyl-Methacrylate Casting.” Journal of Korean Neurosurgical Society 52(6), pp. 541–546

  4. Mandrycky, Ch. et al., (2015), “3D Bioprinting for Engineering Complex Tissues”, Biotechnology Advances

  5. Atala, A., (2011), Printing A Human Kidney , Available from: https://www.ted.com/talks/anthony_atala_printing_a_human_kidney?language=en, (accessed: 27/04/16)

  6. 3D Printer and 3D Printing News, (2012), Future of Medicine: 3D-Printing New Organs, Available from: http://www.3ders.org/articles/20120629-future-of-medicine-3d-printing-new-organs.html, (accessed: 27/04/16)

  7. Maynard, J., (2016), 3D-Printed Human Skin Could Revolutionize Medicine and Cosmetics, Available from: http://www.techtimes.com/articles/63678/20150625/human-skin-produced-3d-printers-revolutionize-medicine-cosmetics.htm, (accessed: 27/04/16)

  8. Boland, T. et al., (2003), “Cell and Organ Printing 2: Fusion of Cell Aggregates in Three-Dimensional Gels”, Anat Rec A Discov Mol Cell Evol Biol, 272(2), pp. 497-502

  9. Cox, S.C. et al., (2015), “3D Printing of Porous Hydroxyapatite Scaffolds Intended for Use in Bone Tissue Engineering Applications”, Materials Science and Engineering, 47(1), pp. 237–247

  10. Dong, J. et al, (2014), “Novel Alternative Therapy for Spinal Tuberculosis During Surgery: Reconstructing with Anti-Tuberculosis Bioactivity Implants”, Expert Opinion on Drug Delivery, 11(3), pp. 299-305


Zuzanna

 

3D printing in orthopedics. Part 1


William Osler, a Canadian physician and one of the four founding professors of Johns Hopkins Hospital, once said:

“The good physician treats the disease; the great physician treats the patient who has the disease.”

With the appearance of modern technologies in healthcare, this task is becoming easier to achieve, as they all allow doctors to understand their patients better.

Nowadays orthopedists have access to a variety of diagnostic tests: arthrography, dual-energy X-ray absorptiometry, CT scans, ultrasound, nerve conduction studies, MRI and electromyography, to name just a few. All of them widen doctors' perception and help them better understand a patient's condition before starting treatment. As people vary a lot in their anatomy, overall fitness and even in characteristics as basic as normal body temperature, an individual approach is essential. How can 3D printing help doctors respond better to William Osler's call? Let's find out.

3D printing in preoperative planning

Recognition of complex anatomical structures can sometimes be difficult from simple 2D radiographic views. 3D models of a patient's anatomy facilitate this task and allow doctors to familiarize themselves with the specific patient. This approach has been shown to drastically reduce OR time, especially in complex cases [1]. Getting to know the patient's anatomy before entering the OR allows the exact approach to be planned, helps predict bottlenecks and even makes it possible to test procedures beforehand. This is common in neurosurgical applications, where maneuvering around delicate nerves and vessels is routine, but it can also be beneficial in orthopedics [2].

An amazing example of 3D models created for surgery planning is the separation of conjoined twins. Individual variances and the complexity of their anatomy make estimating and planning the surgery very challenging: surgeons have to agree beforehand on how organs will be distributed between the two patients. Relying only on limited information from 2D imaging and on one's own experience might not be good enough, as every case is unique, and any mistake or oversight might lead to severe complications that would be very hard to control in the OR, where the procedure takes place on two patients simultaneously. One such case was described in the BBC's article on the separation of two Chinese twins, and another in Imaging Technology News, about a procedure conducted at Texas Children's Hospital. In each of them, accurate 3D models were created to evaluate the complexity and validate the surgical approach.

Orthopedics can also benefit from planning on 3D models. Some of the most typical use cases are scoliosis and kyphosis surgeries, and quite often the evaluation of craniosynostosis cases; some severe bone fractures may also be better assessed with 3D models. Complex maxillofacial surgical procedures are another major application. Facial reconstruction is a complex procedure that often requires significant time for contouring the titanium plates used to link adjacent bones together. The procedure is performed with the patient under anesthesia, and the plates are formed intraoperatively; increased OR time can increase trauma to the patient. Having a 3D model of the patient's bony structure allows the plates to be shaped beforehand and thus reduces time spent in surgery [3]. There are many scientific and popular articles showing the usefulness of this technology in facial reconstruction.

Some other applications of 3D-printed models for evaluating a patient's condition and planning procedures are presented on Boston Children's Hospital's website.

This technology undoubtedly boosts surgeons' confidence, as it gives them the opportunity to evaluate all aspects of a patient's anatomy without losing time doing so in the OR. Neither standard models nor 2D images can replace 3D printing: the former do not represent the specific case in question, and the latter may hide important details, especially in the spatial relationships between structures [3]. Before 3D printing was invented, looking inside a patient's body had never been so detailed and clear.

3D printing in education

Three-dimensional printed models can improve the understanding of anatomy and pathology for both the surgeon and the patient. They supplement images displayed on a computer screen by providing a tactile and visual experience. Such models may also be created as a reference for complex deformities to be shared among specialists.

The study of anatomy is conducted on the cadavers of people who decided to donate their bodies to science or were not claimed by their families after passing away. It is no secret that medical education facilities often lack cadavers for their classes – a serious situation, as bodies are indispensable study tools for students of medicine. 3D printing might offer a solution here, as complex anatomical structures of real patients can easily be reconstructed from CT or MRI data and saved for presentation to students. Of course, it is not a replacement for studying anatomy on cadavers, but it may supplement this type of teaching by providing accurate replicas of real body parts. It is especially valuable in countries that do not allow cadaver studies or for people who find such practice unethical. Monash University has developed a few such educational tools [4]. It has also been suggested that these models could be kept to build a library or catalogue of pathology for future educational purposes [5]. What limits their scope of application is that single-material printing cannot mimic the mechanical characteristics of real organs. This can, however, be overcome by using new multi-material printing, as reported by Waran et al. [6]: all tissue types were segmented sequentially, resulting in 3D digital models of skin, bone, dura and tumor; each model was assigned specific material characteristics, and after merging them into one, a 3D multi-material printer was used to create the physical model. This gives hope for the future creation of more adequate training tools that may supersede cadavers.

3D prints may also be used by a doctor to explain a condition to a patient, and such practice is already in use: Nicolla Bizotto, MD, reassures his patients before performing complex bone fracture surgeries, and doctors at the Radboud UMC hospital print brain tumors to explain the treatment to their patients. According to these articles, such tools show high potential not only for assessing a condition but also for helping a surgeon explain the procedure to the patient. An article published at Medscape underlines the importance of using 3D models to explain ocular pathologies to patients, as there are no standard models accurate enough to explain some of these conditions [7]. Offering the patient the possibility to understand his or her case and procedure can be reassuring and produce a better treatment outcome by reducing stress and insecurity.

Bibliography

  1. Hammad H. M. et al., (2015), “Three-Dimensional Printing in Surgery: A Review of Current Surgical Applications”, Journal of Surgical Research, 199(2), pp. 512-522
  2. Schubert, C. et al., (2014), “Review Innovations in 3D Printing: a 3D Overview from Optics to Organs”, Br J Ophthalmol, 98(2), pp.159-61
  3. Marro, A. et al., (2016), “Three-Dimensional Printing and Medical Imaging: A Review of the Methods and Applications”, Current Problems in Diagnostic Radiology, 45(1), pp. 2-9
  4. McMenamin, P. G. et al., (2014), “The Production of Anatomical Teaching Resources Using Three-Dimensional (3D) Printing Technology”, Anatomical Sciences Education, Volume 7(6), pp. 479–486

  5. Niikura, T. et al., (2014), “Surgical Navigation System for Complex Acetabular Fracture Surgery” Orthopedics, 37, pp. 237-242

  6. Waran, W. et al., (2014), “Utility of Multimaterial 3D Printers in Creating Models with Pathological Entities to Enhance the Training Experience of Neurosurgeons”, Journal of Neurosurgery, 120(2), pp. 489-492
  7. Hobbs, B. N., (2016), Changing Medical Education With 3D Printing,  Available from: http://www.medscape.com/viewarticle/857065 (accessed 30/04/16)

Zuzanna