Crowdsourcing proves effective for labeling medical terms

Crowdsourcing can be an effective way to label medically relevant terms in patient-authored text, and those labels can then feed statistical tools that interpret the terms in their sentence-level context, a study from Stanford University found.

Though many studies in natural language processing (NLP) are aimed at parsing content from doctors' free-text notes, this new research employs non-expert humans to label content that patients post online. The study is published in the Journal of the American Medical Informatics Association.

The researchers farmed out the task of labeling medically relevant terms in text to Amazon's Mechanical Turk service, where workers perform small tasks for a fee. The Turk workers' performance was compared with that of registered nurses contracted through the freelancing site oDesk. The two sets of labels proved acceptably similar, with the crowdsourcing costing far less than the nurses.
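The paper details its own comparison methodology; purely as an illustration of how agreement between crowd workers and nurse annotators can be quantified, the sketch below uses scikit-learn's Cohen's kappa on invented token-level labels.

```python
# Illustrative sketch only -- not the study's actual pipeline.
# Assumes scikit-learn is installed; the labels below are invented.
from sklearn.metrics import cohen_kappa_score

# Token-level judgments for the same sentence: 1 = medically relevant, 0 = not.
# In practice each token would be judged by several Turk workers and a
# majority vote taken before comparing against the nurse annotations.
turk_labels  = [1, 0, 0, 1, 1, 0, 1, 0, 0, 1]
nurse_labels = [1, 0, 0, 1, 0, 0, 1, 0, 0, 1]

kappa = cohen_kappa_score(turk_labels, nurse_labels)
print(f"Agreement (Cohen's kappa): {kappa:.2f}")
```

A high agreement score at a fraction of the nurses' cost is exactly the trade-off the study set out to measure.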

They then compiled 10,000 Turk-labeled sentences drawn from the online health community MedHelp, along with an expert-labeled dataset of 10,000 sentences from the online health community CureTogether. The sentences were used to evaluate toolkits such as MetaMap and the Open Biomedical Annotator (OBA), which have largely focused on mapping words in text written by medical experts to concepts in biomedical ontologies.

Using the crowdsourced dataset, the authors built a model called ADEPT (Automatic Detection of Patient Terminology) that reached 78 percent accuracy on the expert-labeled CureTogether data, outperforming OBA (47 percent), TerMine (43 percent) and MetaMap (39 percent).
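The accuracy figures above come from the paper itself; as a rough illustration of how an extractor's output can be scored against gold-standard labels, the snippet below computes precision, recall and F1 for one sentence. The terms and numbers are invented for this example.

```python
# Hypothetical scoring of a term extractor against expert labels;
# the gold terms and extractor output below are made up for illustration.

def score(predicted_terms, gold_terms):
    """Precision, recall and F1 over sets of extracted term strings."""
    predicted, gold = set(predicted_terms), set(gold_terms)
    true_positives = len(predicted & gold)
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

gold = ["shooting pain", "numbness", "ibuprofen"]      # expert labels
extractor_output = ["pain", "numbness", "ibuprofen"]   # tool output
print(score(extractor_output, gold))  # roughly (0.67, 0.67, 0.67)
```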

Though the authors concede that much work remains before ADEPT can be applied effectively, they have made it publicly available online.

Massachusetts Institute of Technology researchers are working on algorithms to better distinguish words that can have multiple meanings, such as "discharge," which could refer to a bodily secretion or release from a hospital. They report a 75 percent accuracy rate.
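Neither the article nor the figure above specifies the MIT team's algorithm; the sketch below shows just one common way to approach the "discharge" ambiguity, training a simple bag-of-words classifier on hand-made example sentences. All sentences and labels here are assumptions for illustration, using scikit-learn.

```python
# Toy word-sense disambiguation sketch -- not the MIT researchers' method.
# Training sentences are invented; a real system would need far more data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

sentences = [
    "patient noted purulent discharge from the wound",
    "yellow discharge and swelling around the incision",
    "discharge planned for tomorrow after final labs",
    "patient ready for discharge home with follow-up",
]
senses = ["secretion", "secretion", "release", "release"]

# Bag-of-words features from the surrounding sentence feed a Naive Bayes model,
# which learns which context words signal each sense of "discharge".
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(sentences, senses)

print(model.predict(["discharge home planned after morning labs"]))  # ['release']
```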

Stanford University researchers, meanwhile, have found some success in analyzing doctors' notes in electronic health records to monitor drug interactions in near real time.

Researchers in the Netherlands have used NLP to add links in biomedical text to sources containing further information about relevant concepts.

To learn more:
- here's the research
- find ADEPT here