Medical Ultrasound Simulator for Education
Ultrasound (US) imaging is receiving increased attention despite the fast
progress in other, higher-resolution imaging modalities. This is due to
several factors: US is the safest, most cost-efficient, and most portable
medical imaging modality. The challenge in US imaging is that the images
are noisy, free-hand slices (2D cross-sections) of the 3D body;
furthermore, the images depend not only on the US device parameters
(such as transducer type, operating frequency, gain, etc.) but also
on user actions (such as the pressure applied to the body surface, the
amount of gel used, etc.). Hence, even before the diagnostic reading of
US scans, ultrasonographers must be trained in 3D navigation within the
body so that they can recognize the anatomical structures in these 2D slices.
Conventional US training is done on volunteers, which is neither convenient (and in some cases not even possible) nor cost-efficient. As a solution, US training simulators have been developed and are being used in increasing numbers. All of these commercial systems are based on pre-recorded US scans and differ in the number and variety of US scans offered as well as in their user interfaces. Since none of these systems actually performs US simulation, they are incapable of simulating the effects of the US device parameters and the user actions that affect the acquired images.
The MUSE project aims at developing a
real-time, true simulation of US images from 3D virtual patient models
built using real volumetric medical image data. The current project
consists of the components described below.
|CT-Based Deformable Virtual Patient|
The VPC (Virtual Patient Creator) application allows interactive virtual patient model building from an input abdominal CT volume. VPC provides interactive segmentation (air, soft tissue, bone), as well as virtual patient surface mesh and volume lattice model building. The exported model is used by the MUSE platform for 3D deformation simulations and 2D arbitrary fan-shaped slicing of the input CT volume.
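As a rough illustration of the fan-shaped slicing step, the sketch below samples a 2D fan from a CT volume along rays emanating from a virtual probe apex. This is not VPC's actual interface or implementation; the function name, probe pose parameters, and default values are hypothetical assumptions for illustration only.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def fan_slice(volume, apex, axis_dir, lateral_dir,
              fov_deg=60.0, n_rays=128, n_samples=256, depth_vox=200.0):
    """Sample a fan-shaped 2D slice from a 3D CT volume (illustrative sketch).

    volume      : 3D numpy array of CT intensities
    apex        : (z, y, x) virtual probe apex in voxel coordinates
    axis_dir    : unit vector of the central ray (pointing into the body)
    lateral_dir : unit vector spanning the fan plane together with axis_dir
    Returns an (n_samples, n_rays) image: rows = depth, columns = rays.
    """
    apex = np.asarray(apex, dtype=float)
    axis_dir = np.asarray(axis_dir, dtype=float)
    lateral_dir = np.asarray(lateral_dir, dtype=float)

    angles = np.deg2rad(np.linspace(-fov_deg / 2, fov_deg / 2, n_rays))
    depths = np.linspace(0.0, depth_vox, n_samples)

    # One ray direction per beam angle, all lying in the fan plane.
    dirs = (np.cos(angles)[:, None] * axis_dir[None, :] +
            np.sin(angles)[:, None] * lateral_dir[None, :])        # (n_rays, 3)

    # Sample points: apex + depth * direction, for every (depth, ray) pair.
    pts = apex[None, None, :] + depths[:, None, None] * dirs[None, :, :]
    coords = pts.reshape(-1, 3).T                                   # (3, N)

    # Trilinear interpolation of the CT volume at the fan sample points.
    samples = map_coordinates(volume, coords, order=1, mode='nearest')
    return samples.reshape(n_samples, n_rays)
```

A call such as `fan_slice(ct, apex=(0, 256, 256), axis_dir=(1, 0, 0), lateral_dir=(0, 1, 0))` would then yield the fan-shaped CT slice that serves as input to the CT-to-US conversion stages below.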
Figure: (In clockwise order) The
virtual patient surface model and the bone mask; the virtual patient
and virtual US probe interaction for arbitrary slicing; the fan-shaped
CT slice (used as input for CT-to-US conversion).
|Real-time Single-ray CT-to-US Conversion with Speckle Model|
Real-time CT-to-US conversion is achieved via
a ray-based approach. The CT image is used to estimate the acoustic
properties of the domain, which are then used to compute a reflection
image. The texture of the input CT image drives a novel scatterer
distribution model, which in turn defines the parameters of a speckle
image component model. The model follows the basic physics of acoustic
reflection and attenuation. The shadowing effect, which is generated
automatically by the model, can be removed by using an underlying image
segmentation mask.
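The sketch below is a minimal, single-scanline illustration of the ideas described above: acoustic impedance is estimated from the CT intensities, reflections are computed at impedance jumps, a cumulative shadowing term attenuates deeper samples, and a CT-texture-driven speckle term is added. The impedance mapping, constants, and speckle parameterization are illustrative placeholders, not the project's actual model.

```python
import numpy as np

def ct_ray_to_us(ct_line, rng=None, attn_per_sample=0.004, speckle_gain=0.15):
    """Convert one CT scanline (1D array along the beam) into a simulated
    US scanline: reflection at impedance jumps, cumulative shadowing,
    and CT-texture-driven speckle. All mappings are illustrative."""
    rng = np.random.default_rng() if rng is None else rng
    ct_line = np.asarray(ct_line, dtype=float)

    # Crude acoustic impedance estimate from CT (Hounsfield) values:
    # map HU roughly to density and let impedance grow with density.
    density = 1.0 + ct_line / 1000.0           # ~1.0 for water, higher for bone
    impedance = np.clip(density, 0.1, None)

    # Reflection coefficient at each interface between consecutive samples.
    z1, z2 = impedance[:-1], impedance[1:]
    refl = ((z2 - z1) / (z2 + z1)) ** 2
    refl = np.concatenate([[0.0], refl])

    # Shadowing: energy remaining after cumulative attenuation and the
    # cumulative losses at reflecting interfaces along the ray.
    transmitted = np.cumprod((1.0 - refl) * np.exp(-attn_per_sample * impedance))

    # Speckle: scatterer strength driven by local CT texture (gradient
    # magnitude as a stand-in), modulated by Rayleigh-distributed noise.
    texture = np.abs(np.gradient(ct_line))
    scatter = speckle_gain * texture / (texture.max() + 1e-6)
    speckle = scatter * rng.rayleigh(scale=1.0, size=ct_line.shape)

    # Combine reflection and speckle, both dimmed by the shadowing term.
    return transmitted * (refl + speckle)
```

In this simplified picture, suppressing the `transmitted` factor (or masking it with a segmentation-based mask, as described above) removes the shadowing effect.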
Figure: The underlying CT slice/fan
and the simulated US images with/without shadowing and with suppressed
exponential attenuation.
|Real-time Multi-ray CT-to-US Conversion without Explicit Speckle Model|
CT-to-US conversion is based on simulations with multiple acoustic
rays per transducer. The image is reconstructed by a weighted and
delayed combination of each ray. The delays are related to the
transducer-to-point distance. Currently, the model excludes any
explicit speckle modeling. Each ray behaves independently, which allows
parallel processing with virtually no extra computational cost.
The method approximates a spherical acoustic wave that propagates radially
with constant speed, combined with a phased-array reconstruction approach.
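The sketch below illustrates the weighted-and-delayed combination step under simple assumptions: each simulated ray contributes an echo signal, the per-ray delay follows from the transducer-to-point distance at a constant speed of sound, and a Gaussian angular weight (the sigma parameter mentioned in the figure caption) controls each ray's contribution. The function name, signature, and the exact weighting scheme are assumptions for illustration, not the project's actual reconstruction code.

```python
import numpy as np

def combine_rays(ray_signals, ray_to_point_dist, ray_angles_deg,
                 point_angle_deg, fs, sigma_deg=1.32, c=1540.0):
    """Reconstruct one image sample as a weighted, delayed sum of per-ray
    echo signals (phased-array-style combination; illustrative sketch).

    ray_signals       : (n_rays, n_time) array of simulated echoes, one per ray
    ray_to_point_dist : distance from each ray's transducer element to the point [m]
    ray_angles_deg    : beam angle of each ray
    point_angle_deg   : angular position of the reconstructed point in the fan
    fs                : temporal sampling rate of the echo signals [Hz]
    sigma_deg         : Gaussian angular weighting parameter
    c                 : assumed constant speed of sound [m/s]
    """
    n_rays, n_time = ray_signals.shape

    # Two-way propagation delay of each ray (element -> point -> element),
    # converted to a sample index at constant speed of sound.
    delays = np.minimum(
        np.round(2.0 * np.asarray(ray_to_point_dist) / c * fs).astype(int),
        n_time - 1)

    # Gaussian angular weight: rays aimed close to the point count more.
    dtheta = np.asarray(ray_angles_deg) - point_angle_deg
    weights = np.exp(-0.5 * (dtheta / sigma_deg) ** 2)
    weights /= weights.sum()

    # Weighted sum of each ray's echo evaluated at its own delay.
    # Rays are independent, so this combination parallelizes trivially.
    return float(np.sum(weights * ray_signals[np.arange(n_rays), delays]))
```

Because each ray is processed independently and only combined at the end, the per-ray simulations map naturally onto parallel hardware, consistent with the near-zero extra cost noted above.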
Figure: The multi-ray simulated US image, with 25 rays per transducer and sigma = 1.32 degrees (weighting parameter). Shadowing has been suppressed.