Fusing real-time procedural images with preoperative cross-sectional imaging is fundamental to the adoption of interventional robotics, because diagnostic-quality image fusion provides the foundation for guiding robotic procedures just as it does for human-led procedures. Fusion is the superimposition of data or imaging acquisitions in a common coordinate space to provide key information at the point of care. It is frequently performed with complementary imaging modalities, including ultrasound (US), computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET), to leverage the strengths and offset the shortcomings of each modality. Fusion can also combine the same imaging modality at different time points (such as with and without contrast) or pair metabolic with morphologic imaging to enable molecular interventions, that is, precision interventions guided by targeted agents (such as Prostate-Specific Membrane Antigen PET, DOTATATE PET, or other nuclear medicine agents). The choice of fusion approach depends on the clinical context.
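The core of "superimposition in a common coordinate space" is a spatial transform that maps coordinates from one acquisition into another. The following is a minimal sketch, not any vendor's registration algorithm: it applies a rigid (rotation plus translation) transform to map a lesion coordinate from a hypothetical preoperative CT frame into an intraprocedural ultrasound frame. The rotation angle, translation, and lesion position are all illustrative values.

```python
import numpy as np

def rigid_transform(points, rotation, translation):
    """Apply a rigid (rotation + translation) transform to an Nx3 array of points."""
    return points @ rotation.T + translation

# Hypothetical registration result: a 10-degree rotation about the z-axis
# plus a small translation (e.g., from patient repositioning between scans).
theta = np.deg2rad(10.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([5.0, -2.0, 1.5])  # translation in mm

lesion_ct = np.array([[42.0, 30.0, 88.0]])    # lesion centroid in CT space (mm)
lesion_us = rigid_transform(lesion_ct, R, t)  # same lesion in US space
print(lesion_us.round(2))
```

In practice the transform comes from a registration step (rigid for bone-anchored anatomy, deformable for soft tissue such as liver), but the downstream use is the same: every voxel or target point from the prior study is carried into the live coordinate frame.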
Robotics are physical, information-driven tools that assist the interventionalist and lead to more standardized and automated approaches.1 The roles of robotics and fusion span preprocedural planning, intraprocedural iterative device guidance, and postprocedural treatment confirmation. For example, overlaying a patient's PET-CT onto a noncontrast early intraprocedural planning CT of a liver mass highlights fluorodeoxyglucose (FDG)-avid regions, aiding target selection for biopsy to decipher tumor heterogeneity and provide optimized biomarkers. During procedures, fusing target images with real-time imaging feedback enables accurate visualization of device positioning in a repeatable, iterative fashion, in which each subsequent needle placement is adjusted based on the aggregate of all previously placed needles. The treatment plan thus evolves according to procedural feedback and data. Postprocedurally, fused pre- and post-ablation images (Fig. 1) enable direct comparison and quantitative assessment of therapeutic outcomes, such as the minimum ablation margin, which correlates directly with local control and survival.2
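The minimum ablation margin from fused pre- and post-ablation images can be thought of as the smallest distance from any point on the tumor surface to the ablation-zone boundary. The toy computation below, with entirely synthetic geometry (a spherical tumor and a spherical ablation zone), is a sketch of that idea only; real margin software operates on segmented, deformably registered volumes.

```python
import numpy as np

def min_ablation_margin(tumor_surface, ablation_center, ablation_radius):
    """Minimum margin (mm) for a spherical ablation zone: the smallest distance
    from any tumor-surface point to the ablation boundary. Negative values
    indicate tumor outside the ablation zone (residual untreated tissue)."""
    d = np.linalg.norm(tumor_surface - ablation_center, axis=1)
    return float(np.min(ablation_radius - d))

# Hypothetical 10-mm-radius tumor at the origin, sampled on its surface.
rng = np.random.default_rng(0)
dirs = rng.normal(size=(500, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
tumor_surface = 10.0 * dirs  # surface points in mm

# A 15-mm ablation sphere placed slightly off-center: the margin
# shrinks on the far side of the tumor, away from the offset.
margin = min_ablation_margin(tumor_surface, np.array([2.0, 0.0, 0.0]), 15.0)
print(f"minimum margin: {margin:.1f} mm")
```

The same comparison with the ablation sphere shrunk to a 9-mm radius yields a negative minimum margin, which is exactly the "tissue at risk for undertreatment" signal that iterative fusion guidance is meant to surface during the procedure.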
The use of fusion further evolves interventionalists' current practice of relying on cognitive registration, spatial triangulation, and landmarks to navigate toward targets using information memorized from preprocedural imaging. Image fusion can enhance visualization and improve accuracy when targeting lesions that are inconspicuous on ultrasound or noncontrast CT, or difficult to target with a single modality. Looking across the room at the PACS monitor during a case to mentally reconstruct 3-dimensional (3D) tomographic anatomy adds a layer of complexity that fusion with robotics can mitigate, standardizing and expanding patient care. Leveraging fusion techniques could also reduce radiation exposure by limiting the use of intraprocedural CT or fluoroscopic guidance.1 The ability to fuse prior contrast-enhanced CT, MRI, or PET/CT with intraprocedural US, CT, or fluoroscopy facilitates precise, serial navigation of needles, catheters, and instruments to an accurate target, with real-time tracking of serial robotic devices against an evolving, iterative treatment plan that supports sequential needle placements and, with each placement, updates the plan with regard to tissue at risk for undertreatment. This capability becomes critical to outcomes with larger tumors (>3 cm diameter) that rely on composite ablation from multiple needles. Knowing when an endpoint has been reached also relies on accurate estimates of ablation margins, which in turn must be based on accurate target definition and superimposed planned and actual treatment volumes.
The growth of automation in interventional procedures highlights that image fusion and robotics are synergistic technologies. Fusion enables multimodal navigation for treatment planning, while robotics facilitates automation and standardization, allowing inexperienced operators to implement plans with accuracy similar to that of experienced interventionalists.3 This leads to more consistent and improved outcomes. Downstream, standardization translates to enhanced opportunities for uniform, hypothesis-driven scientific studies, as well as coordinated cooperative groups for multicenter, multioperator analyses with less of the inherent variability of human operators, who without robotics are more dependent on hand-eye coordination.
The development and penetration of robotics into clinical practice will naturally lead to a union of robotic automation (and hardware) with fusion (and navigation software). Automated, standardized robotic treatment plans and precision interventions will rely on interfaces and integration with software for treatment planning, registration, multimodality fusion, and post-ablation margin verification. A review of fusion tools is therefore relevant for a robotics community that is rapidly evolving to interface with and incorporate fusion guidance.