Beyond pixel number: Where next for endoscopic imaging capabilities?
Relentless progress in imaging sensors, driven by the smartphone camera revolution, is beginning to transform the optical capabilities of endoscopes. Beyond sheer pixel number, it also raises the prospect of challenger business models built on altogether new imaging capabilities, or on disposable endoscopes.
As the name suggests, the endoscope functions as the eye of the surgeon, transforming their vantage point from peering into the patient’s body from a foot above, to being inside it an inch or two away from the tissue of interest. Imaging quality is therefore an important endoscopic capability.
One recent technology that enhances the optical capabilities of endoscopes is called chip-on-tip: an image sensor sits at the tip with a high quality lens, and the electronic image signal travels through the flexible section of the endoscope to an external screen, headset or viewer.
The key advantage of chip-on-tip technology is that image quality isn’t degraded as the signal travels along the insertion tube. The obvious drawback is that the usable diameter of the endoscope limits the number of pixels that can be crammed into the available area.
For example, endoscopes with full 1080p HD imaging at 60Hz frame rate are available utilising the OV2741 sensor, which uses 1.4µm pixels and has a footprint of 3.9x2.9mm. At the other end of the scale, the smaller OV6930 chip provides a 400x400 pixel endoscopic camera with a 1.6mm diameter, including optics, making this endoscope suitable for very tight spaces.
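The relationship between pixel count, pixel pitch and sensor size can be checked with some rough arithmetic, sketched below using the figures quoted above. The gap between the computed active area and the quoted footprint is assumed to be packaging, readout circuitry and bezel.

```python
# Rough sensor-geometry arithmetic for the figures quoted above.
# Active-area size follows directly from pixel pitch x pixel count;
# the remainder of the quoted footprint is assumed to be packaging.

def active_area_mm(pixels_w, pixels_h, pitch_um):
    """Active imaging area (width, height) in mm for a given pixel pitch."""
    return pixels_w * pitch_um / 1000.0, pixels_h * pitch_um / 1000.0

# OV2741-class chip-on-tip sensor: 1080p at a 1.4 micron pitch
w, h = active_area_mm(1920, 1080, 1.4)
print(f"1080p @ 1.4 um pitch: {w:.2f} x {h:.2f} mm active area")
# Roughly 2.69 x 1.51 mm, sitting inside the quoted 3.9 x 2.9 mm footprint
```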
Filtering multiple wavelengths
Multi-spectral imaging for new diagnostic capabilities is another area of innovation. Most visible-light colour cameras use a Bayer filter (a repeating 2x2 mosaic of one red, two green and one blue filter) sitting in front of the colour-blind silicon sensor, and ‘demosaicing’ algorithms estimate the actual colour of the image at each pixel.
But more information is available in the UV and infra-red parts of the spectrum, which can be used to highlight blood vessels, polyps and cancerous tissue depending on the wavelengths used. Images can be enhanced by means of specific dyes that improve the wavelength response, for example to detect cancer or organ blood perfusion.
A simple way to access this information is to replace one of the green Bayer filters in the 2x2 array with a different filter, say an IR-pass filter. This approach works, but limits the number of additional wavelengths: more colours improve tissue identification, but each extra filter lowers the spatial resolution available to every band.
Strobing on multiple useful wavelengths
To get around this limitation, one can use a greyscale camera and capture sequential images of the scene while strobing red-only light, green-only light, blue-only light, and indeed any other useful wavelength of light. Software then combines the frames to produce a multispectral colour image.
This approach is particularly apt for endoscopic imaging because the only light source inside the body is the endoscope itself. Moreover, any number of illumination wavelengths and the full resolution of the sensor can be used because there is no Bayer-like filter that would reduce the resolution.
The main drawback of sequential imaging is the need for a frame rate high enough that movement of the endoscope and organs between frames is negligible. Alternatively, low-latency software can be used to reconstruct the view when movement between frames is too high.
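The frame-sequential approach reduces to stacking per-wavelength greyscale frames into a multi-channel image in software, as in this minimal sketch. The frame data here is synthetic, and a real system would also compensate for motion between the sequentially captured frames:

```python
import numpy as np

# Frame-sequential colour reconstruction: one greyscale frame per
# illumination wavelength, stacked into a multi-channel cube in software.
# No mosaic filter is involved, so every band keeps full sensor resolution.

def stack_frames(frames):
    """Combine per-wavelength greyscale frames into an (H, W, N) cube."""
    return np.stack(frames, axis=-1)

h, w = 4, 4
red_frame   = np.full((h, w), 0.8)  # scene lit with red-only light
green_frame = np.full((h, w), 0.5)  # green-only
blue_frame  = np.full((h, w), 0.2)  # blue-only
ir_frame    = np.full((h, w), 0.9)  # any extra wavelength, e.g. near-IR

cube = stack_frames([red_frame, green_frame, blue_frame, ir_frame])
print(cube.shape)  # (4, 4, 4): four bands, each at full resolution
```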
The resolution Catch-22
If the full resolution of the sensor can now be used, can the optics keep up? The first limitation is diffraction: the detail of the image on the sensor depends on the aperture diameter of the optics. A smaller aperture produces a larger spot size on the detector and hence less detail, which can negate increases in pixel density.
Second, and just as important, the best image quality is obtained when the object of interest is exactly in focus on the sensor. Objects such as tissue and surgical tools tend to be at different distances and are rarely all in focus simultaneously. This means that one or the other will be blurred to some extent, again reducing the effect of increasing the pixel density.
This depth-of-field limitation can be mitigated by reducing the optical aperture diameter, but that also reduces the light reaching the sensor and can allow diffraction to curtail resolution. This optical Catch-22 is a real issue when assessing the performance of an endoscopic system: it is easy to market pixel number as the main determiner of image quality, but in reality it is more complicated.
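The diffraction side of this trade-off can be estimated with the standard Airy-disk approximation (spot diameter of roughly 2.44 x wavelength x f-number); the 1.4 micron pixel pitch is taken from the sensor example earlier, and the f-numbers are purely illustrative:

```python
# Illustrative diffraction arithmetic behind the Catch-22. Stopping the
# aperture down (higher f-number) deepens the depth of field, but the
# diffraction spot on the sensor grows in proportion, so beyond some
# point extra pixel density adds no real detail.

def airy_disk_um(wavelength_um, f_number):
    """Approximate Airy-disk diameter on the sensor, in microns."""
    return 2.44 * wavelength_um * f_number

GREEN = 0.55  # mid-visible wavelength in microns
for f in (2, 4, 8):
    spot = airy_disk_um(GREEN, f)
    print(f"f/{f}: spot ~{spot:.1f} um vs a 1.4 um pixel")
# At f/8 the spot spans several 1.4 um pixels: more pixels, no more detail.
```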
Rather than relying on the optical hardware to push resolution, other techniques such as software and artificial intelligence could potentially come to the rescue. The limits of resolution described above assume no prior knowledge of the scene, and that every frame is independent of any other frame. In other words, the system has no memory.
But in real life, organs and tools remain much the same from frame to frame, undergoing only minor changes like translation and distortion. Multiple images from the sensor allow for more information regarding the object to be built up, which could enable higher effective resolution, or super-resolution images to be inferred and displayed, or even objects to be displayed as in focus when they are currently out of focus on the sensor itself.
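The simplest form of this frame-to-frame memory is averaging aligned frames of a near-static scene: noise falls roughly as the square root of the number of frames while the scene content is preserved, which is one building block behind multi-frame super-resolution. The scene and noise below are synthetic, for illustration only:

```python
import numpy as np

# Multi-frame averaging as the simplest case of "the system has memory":
# stacking aligned noisy captures of a static scene recovers it far more
# accurately than any single frame. Scene and noise here are synthetic.

rng = np.random.default_rng(0)
scene = rng.uniform(0.0, 1.0, size=(32, 32))       # static "organ" scene
frames = [scene + rng.normal(0, 0.2, scene.shape)  # noisy captures
          for _ in range(64)]

single_err = np.abs(frames[0] - scene).mean()
stacked_err = np.abs(np.mean(frames, axis=0) - scene).mean()
print(f"single-frame error {single_err:.3f} vs stacked {stacked_err:.3f}")
# Stacking 64 frames cuts the mean error by roughly a factor of eight.
```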
Finally, additional imaging techniques like optical coherence tomography (OCT) and ultrasound can be integrated into the tip to see under the surface: to locate tumours, blood vessels and other objects deeper inside the tissue that are not visible with conventional video endoscopy.
Within the constraints of endoscope diameter, enhanced imaging capability is thus becoming an important technological battleground for endoscope manufacturers. Another parallel trend is the rise of disposable endoscopes, which avoid laborious and sometimes imperfect sterilisation between deployments.
Some companies have come to market with completely disposable endoscopes using chip-on-tip imaging, available for around £200 in bulk volumes, albeit with lower image quality because the business model allows only a modest bill-of-materials cost for the optics and sensor.
How these trends play out, and which new and existing business models prove viable, remains to be seen.
But one thing is clear: The democratisation of medical imaging chips will enable agile companies to develop new products that challenge the current industry order, either with endoscopes that offer new functionalities or ones that are sufficiently low in cost to be disposable.
And of course, beyond imaging quality and capability, there are always the surgeon’s preferences for quality, touch/feel, familiarity and other intangibles to consider.