Fig. 31.1
Direct volume rendering of three different clinical cases from their DICOM image, here CT scan of the neck (a), thorax (b), and abdomen (c)
This technique, available on all current imaging systems (MRI or CT), can be sufficient for good 3D visualization of anatomical and pathological structures and can thus be a useful tool for preoperative planning [3, 4]. It consists of replacing the standard slice view with a simultaneous 3D visualization of all slices. In order to reveal internal structures, each voxel's initial gray level is replaced by an associated color and transparency. This transparency makes it possible to see organ borders even though they are not explicitly delineated. With VR-Render, the user simply selects automatically computed 3D renderings from an explicit list. The volume can also be cut along the three main axes (axial, frontal, or sagittal) or with an oblique, mouse-controlled plane. In clinical routine, direct volume rendering can be of great preoperative interest. This is the case for all malformation pathologies, in particular vascular or bone malformations, but also for thoracic and digestive pathologies.
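The mapping from a voxel's gray level to a color and a transparency is commonly called a transfer function. The sketch below is a simplified illustration of that general principle, not code from VR-Render; the function name and intensity stops are hypothetical.

```python
import numpy as np

def apply_transfer_function(volume_hu, stops):
    """Map each voxel's gray level (e.g., CT Hounsfield units) to an
    RGBA value by linear interpolation between user-defined stops.

    stops: list of (intensity, (r, g, b, alpha)) sorted by intensity.
    Returns an array of shape volume_hu.shape + (4,), values in [0, 1].
    """
    xs = np.array([s[0] for s in stops], dtype=float)
    cols = np.array([s[1] for s in stops], dtype=float)  # (n, 4)
    rgba = np.empty(volume_hu.shape + (4,))
    for c in range(4):  # interpolate each RGBA channel separately
        rgba[..., c] = np.interp(volume_hu, xs, cols[:, c])
    return rgba

# Illustrative stops: air fully transparent, soft tissue faintly red,
# bone opaque white (threshold values are only plausible examples).
stops = [(-1000, (0.0, 0.0, 0.0, 0.0)),
         (40,    (1.0, 0.2, 0.2, 0.1)),
         (400,   (1.0, 1.0, 1.0, 1.0))]
```

During rendering, rays accumulate these RGBA samples front to back, which is what lets semi-transparent tissue reveal the structures behind it.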
Direct volume rendering is a very useful tool as it requires no preprocessing; however, it has some limitations. It can provide neither the volumes nor the dimensions of organs, since these organs are not delineated. For the same reason, it cannot compute a post-resection volume or cut a section of these structures without cutting neighboring structures. To overcome this limitation, each anatomical and pathological structure in the medical image has to be delineated. Several software packages are available on the market for this purpose, mainly for the liver (Myrian© from Intrasense, Ziostation© from Ziosoft, Iqqa® Liver from Edda Technology, Scout™ Liver from Pathfinder) and more rarely for the whole digestive area (Synapse© Vincent from Fujinon). Another solution consists of using remote 3D modelling services (MeVis Distant Service, Visible Patient Service from Visible Patient), which do not require the purchase and use of expensive modelling workstations, the modelling being performed remotely by image-processing experts. While MeVis Distant Service is limited to the liver, Visible Patient Service is today the only service available for any part of the body, from infant to adult. Results of the 3D modelling process can be visualized in the Visible Patient Planning software through surface rendering or a fusion of surface and volume rendering (Fig. 31.2).
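Once a structure has been delineated, its volume follows directly from the segmentation mask: the number of labeled voxels multiplied by the physical voxel size. A minimal sketch of that computation (the function name is hypothetical; voxel spacing is assumed to come from the DICOM header):

```python
import numpy as np

def organ_volume_ml(mask, voxel_spacing_mm):
    """Volume of a delineated structure in millilitres.

    mask: boolean 3D array, True where the structure was segmented.
    voxel_spacing_mm: (dz, dy, dx) voxel dimensions in millimetres,
    typically read from the DICOM image header.
    """
    voxel_mm3 = float(np.prod(voxel_spacing_mm))  # one voxel's volume
    return mask.sum() * voxel_mm3 / 1000.0        # mm^3 -> mL
```

This is exactly the information direct volume rendering alone cannot provide, because without a mask there is no defined set of voxels to count.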
Fig. 31.2
Visible Patient 3D modelling of the patients of Fig. 31.1, with a fusion of direct volume rendering and surface rendering provided by the Visible Patient Planning software
Besides the 3D visualization of delineated and modelled structures, Visible Patient Planning also allows the user to interactively change the transparency of any structure, to interact with these structures, to navigate anywhere, and therefore to simulate any kind of endoscopy, such as laparoscopy, fibroscopy, gastroscopy, or colonoscopy (Fig. 31.3).
Fig. 31.3
Visible Patient Planning used for a virtual fibroscopy with transparency of the bronchial tree revealing a tumor in green (a), a virtual gastroscopy with a GIST detected in red (b), and a virtual colonoscopy (c), all from patient-specific 3D modelling
In liver surgery, a simple 3D visualization is frequently not sufficient to efficiently plan surgery. Virtual resection and computation of the post-resection volume are then usually mandatory and requested by surgeons. The Visible Patient Planning software provides this capability through virtual clip application, which computes in real time the vascular territory of the clipped portal subtree, thereby defining the anatomical segment. It allows multi-segmentectomy and automatically computes the future liver remnant rate (see Fig. 31.4). Such computation improves preoperative surgical planning [5–7] and sometimes also surgical eligibility in liver surgery, thanks to better patient-specific anatomical knowledge and a better definition of the postoperative volume.
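Once the vascular territories have been computed, the future liver remnant rate itself is a simple ratio of volumes. The sketch below illustrates one common convention (excluding tumor volume from the functional parenchyma); the function name and formula are an assumption for illustration, not Visible Patient's documented computation.

```python
def future_liver_remnant_rate(total_ml, resected_ml, tumor_ml=0.0):
    """Future liver remnant (FLR) rate as a percentage.

    total_ml:    whole-liver volume from the segmentation.
    resected_ml: volume of the clipped vascular territories
                 (assumed here to include the tumor).
    tumor_ml:    tumor volume, excluded from functional parenchyma.
    """
    functional = total_ml - tumor_ml          # functional parenchyma
    remnant = total_ml - resected_ml          # parenchyma left in place
    return 100.0 * remnant / functional
```

For example, a planned resection of 900 mL out of a 1500 mL tumor-free liver leaves a 40% remnant, a figure surgeons weigh against eligibility thresholds.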
Fig. 31.4
Virtual left hepatectomy extended to segment 8 (left) and virtual right hepatectomy in a patient who underwent several thermal ablations and a right embolization (right), using the clip application function of the Visible Patient Planning software
31.3 Interactive Augmented Reality
Preoperative surgical planning and simulation can significantly improve the efficiency of a surgical procedure, thanks to better preoperative knowledge of the patient's anatomy. However, preoperative use of such systems is not sufficient to ensure safety during the surgical procedure itself. That improvement can be provided by the intraoperative use of virtual reality through the augmented reality concept. Augmented reality consists of superimposing the preoperative 3D patient model onto the live intraoperative video view of the patient, thus providing a transparent view of the patient. Several kinds of augmented reality have been developed: interactive (IAR), semiautomatic (SemIAR), and fully automated (AAR).
Interactive augmented reality (IAR) is based on a registration performed interactively by an operator through an image overlay of the patient's model superimposed onto the patient's view, which can be direct (no camera) or indirect (with a camera). The patient's view can be external, to visualize the skin, or internal (e.g., in laparoscopic surgery), to display an organ. The registration is then guided by anatomical landmarks visible in both the patient's view and the patient's model through the image overlay. Four main image overlay techniques are available: direct projection of the patient model onto the patient's skin through a video projector [8, 9], direct visualization through a transparent screen placed between the surgeon and the patient [10], indirect visualization using a camera to provide a patient view on a screen, onto which the virtual patient model is overlaid [11], and specific displays such as the robotic 3D viewer of the da Vinci robot [12]. Indirect visualization using a camera is today the best available solution: it provides the camera's point of view regardless of the surgeon's position or movement, which avoids the errors usually linked to differing points of view.
We have developed a two-step interactive augmented reality method [11, 12]. The first step consists of registering an external view of the real patient with a similar external view of the virtual patient. The second step consists of positioning and correcting the virtual camera in real time so that it is oriented like the laparoscopic camera. Real video views are provided by two cameras inside the OR: the external view of the patient is provided by the camera of the shadowless lamp or by an external camera, and the laparoscopic camera provides the internal image (see Fig. 31.5).
Fig. 31.5
Interactive augmented reality realized on the da Vinci robot providing an internal AR view in the Master Vision system (left) and an external AR view of the patient (right)
Both images are sent via a fiber-optic network and are visualized on two different screens by an independent operator in the video room. A third screen displays the view of the 3D patient rendering software, running on a laptop equipped with a powerful 3D graphics card and controlled by the operator. The augmented reality view is then obtained with a Panasonic MX 70 video mixer, which merges the two interactively selected screens. This system also makes it possible to restrict the augmented reality effect to a limited part of the image, as illustrated in Fig. 31.6. The Visible Patient Planning software can produce any virtual view of the patient, from an internal or external point of view, as illustrated in Chap. 2. It is thus used to reproduce, in the virtual world, the point of view of the real camera.
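Restricting the overlay to part of the image amounts to alpha-blending the virtual render over the real video only inside a chosen region, which the video mixer performs in hardware. A software sketch of the same idea (function name and the rectangular-region convention are purely illustrative):

```python
import numpy as np

def blend_region(real, virtual, alpha, region):
    """Overlay the virtual render onto the real video frame, but only
    inside `region` = (y0, y1, x0, x1); the rest stays untouched.

    real, virtual: float image arrays of identical shape (H, W, 3).
    alpha: opacity of the virtual layer in [0, 1].
    """
    y0, y1, x0, x1 = region
    out = real.copy()
    out[y0:y1, x0:x1] = ((1.0 - alpha) * real[y0:y1, x0:x1]
                         + alpha * virtual[y0:y1, x0:x1])
    return out
```

Limiting the blend to a window keeps the rest of the surgical scene unmodified, which helps the operator judge the registration against untouched video.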
Fig. 31.6
External (top) and internal (bottom) interactive augmented reality views
The technique uses several anatomical landmarks chosen on the skin (such as the ribs, xiphoid process, iliac crests, and the umbilicus) and inside the abdomen (inferior vena cava and two laparoscopic tools). The resulting registration accuracy can be checked immediately by verifying that the virtual organs superimpose properly onto the real visible ones. This method takes some time to perform, and the adjustment of the various external landmarks was sometimes imperfect. Nevertheless, evaluated in more than 50 procedures, it has always provided good results.
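Landmark-based alignment of this kind is classically formulated as a least-squares rigid registration of paired points (the Kabsch/Procrustes method). The sketch below is a generic illustration of that technique, not the implementation used in [11, 12]; function names are hypothetical.

```python
import numpy as np

def rigid_landmark_registration(model_pts, patient_pts):
    """Least-squares rigid transform (rotation R, translation t) that
    maps model landmarks onto corresponding patient landmarks (Kabsch).

    model_pts, patient_pts: (n, 3) arrays of paired 3D points.
    """
    cm, cp = model_pts.mean(axis=0), patient_pts.mean(axis=0)
    H = (model_pts - cm).T @ (patient_pts - cp)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # avoid a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cp - R @ cm
    return R, t

def registration_error_mm(R, t, model_pts, patient_pts):
    """Mean residual landmark distance after registration: the kind of
    immediate accuracy check described above."""
    moved = model_pts @ R.T + t
    return float(np.linalg.norm(moved - patient_pts, axis=1).mean())
```

A large residual error signals that one of the landmarks (e.g., the umbilicus on mobile skin) was poorly placed and should be adjusted, mirroring the manual refinement described above.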