New tech allows surgeons to manipulate MRI images with hand gestures

Researchers are developing a system that lets surgeons issue commands to a computer with hand gestures, allowing them to browse through and display medical images in real time.

The system, which uses depth-sensing cameras and gesture-recognition algorithms to interpret hand gestures as commands for manipulating MRI images on a display, was described in a paper recently published in the Journal of the American Medical Informatics Association.
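
At a high level, the processing loop might resemble the sketch below. This is only an illustration: the camera and classifier objects are stand-ins for whatever depth sensor and recognition model the researchers actually used, and none of the names come from the study.

    import random

    class MockDepthCamera:
        """Stand-in for a depth-sensing camera; returns a fake 4x4 depth frame."""
        def read_frame(self):
            return [[random.random() for _ in range(4)] for _ in range(4)]

    class MockGestureClassifier:
        """Stand-in for the recognition model; returns a gesture label or None."""
        LABELS = ["rotate_cw", "zoom_in", None]
        def predict(self, frame):
            return random.choice(self.LABELS)

    def viewer_loop(camera, classifier, max_frames=10):
        """Capture depth frames, classify each one, and emit any recognized command."""
        for _ in range(max_frames):
            frame = camera.read_frame()
            gesture = classifier.predict(frame)
            if gesture is not None:
                # A real system would update the MRI display here.
                print("command:", gesture)

    if __name__ == "__main__":
        viewer_loop(MockDepthCamera(), MockGestureClassifier())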

"One of the most ubiquitous pieces of equipment in U.S. surgical units is the computer workstation, which allows access to medical images before and during surgery," lead author Juan Pablo Wachs, assistant professor of industrial engineering at Purdue University, said in an article in Purdue News. "However, computers and their peripherals are difficult to sterilize, and keyboards and mice have been found to be a source of contamination. Also, when nurses or assistants operate the keyboard for the surgeon, the process of conveying information accurately has proven cumbersome and inefficient since spoken dialogue can be time-consuming and leads to frustration and delays in the surgery."

The researchers validated their system by working with veterinary surgeons to select a variety of gestures that come naturally to physicians and clinicians. In addition, they asked the surgeons to identify functions they perform with MRI images during surgeries.

They also were asked to suggest gestures they would associate with commands for manipulating the images. The gestures chosen included rotate clockwise and counterclockwise; browse left and right; up and down; increase and decrease brightness; and zoom in and out. A rough sketch of how such a vocabulary might be wired to viewer commands follows.
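
In the sketch below, gesture labels are mapped to updates of a simple viewer state; the label strings and the ImageState fields are assumptions made for illustration, not details from the paper.

    from dataclasses import dataclass

    @dataclass
    class ImageState:
        """Viewing state for the currently displayed MRI slice (illustrative fields)."""
        slice_index: int = 0
        rotation_deg: int = 0
        brightness: float = 1.0
        zoom: float = 1.0

    # Gesture vocabulary reported in the article, mapped to state updates.
    COMMANDS = {
        "rotate_cw":       lambda s: setattr(s, "rotation_deg", (s.rotation_deg + 90) % 360),
        "rotate_ccw":      lambda s: setattr(s, "rotation_deg", (s.rotation_deg - 90) % 360),
        "browse_left":     lambda s: setattr(s, "slice_index", s.slice_index - 1),
        "browse_right":    lambda s: setattr(s, "slice_index", s.slice_index + 1),
        "brightness_up":   lambda s: setattr(s, "brightness", s.brightness + 0.1),
        "brightness_down": lambda s: setattr(s, "brightness", s.brightness - 0.1),
        "zoom_in":         lambda s: setattr(s, "zoom", s.zoom * 1.25),
        "zoom_out":        lambda s: setattr(s, "zoom", s.zoom / 1.25),
    }

    def apply_gesture(state: ImageState, gesture: str) -> None:
        """Apply a recognized gesture label to the viewer state, ignoring unknown labels."""
        command = COMMANDS.get(gesture)
        if command is not None:
            command(state)

    # Example: a short gesture sequence during image review.
    state = ImageState()
    for g in ["zoom_in", "rotate_cw", "browse_right"]:
        apply_gesture(state, g)
    print(state)  # ImageState(slice_index=1, rotation_deg=90, brightness=1.0, zoom=1.25)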

"A major challenge is to endow computers with the ability to understand the context in which gestures are made and to discriminate between intended gestures versus unintended gestures," Wachs said, according to Purdue News. "Surgeons will make many gestures during the course of a surgery to communicate with other doctors and nurses. The main challenge is to create algorithms capable of understanding the difference between these gestures and those specifically intended as commands to browse the image-viewing system. We can determine context by looking at the position of the torso and the orientation of the surgeon's gaze. Based on the direction of the gaze and the torso position we can assess whether the surgeon wants to access medical images."

According to the authors, the system has a mean accuracy of 93 percent in translating gestures into specific commands.

For more:
- see the study abstract
- read the article in Purdue News