1upHealth spinoff GenHealth AI grabs $13M to fuel large medical model

Generative artificial intelligence companies continue to rake in funding despite a chilly market. GenHealth AI brought in $13 million in early-stage funding last week, money that will help the company continue work on what it calls a large medical model, or LMM.

The Boston-based company says its LMM follows principles similar to those of a large language model (LLM), but instead of training the transformer neural network on text, GenHealth AI trains its LMM on medical event data. The patent-pending approach is less prone to hallucinations and bias from human input, and better able to address a range of tasks across healthcare, according to the company, which spun off from 1upHealth three months ago.
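To make the analogy concrete, here is a minimal, illustrative sketch in PyTorch of the general technique the company describes: coded medical events (diagnoses, prescriptions, lab orders) are treated as tokens, and a transformer is trained to predict a patient's next event the same way an LLM predicts the next word. The vocabulary, events and architecture below are invented for illustration and are not GenHealth AI's actual model or data.

```python
# Toy "next medical event" model, analogous to next-token prediction in an LLM.
# Everything here (vocabulary, codes, sizes) is hypothetical.
import torch
import torch.nn as nn

# Hypothetical event vocabulary: each medical code (diagnosis, drug, lab test)
# gets an integer ID, just as words or subwords do in a language model.
EVENT_VOCAB = {"<pad>": 0, "ICD10:E11.9": 1, "RX:metformin": 2,
               "CPT:83036": 3, "ICD10:I10": 4, "RX:lisinopril": 5}

class ToyMedicalEventModel(nn.Module):
    def __init__(self, vocab_size, d_model=64, nhead=4, num_layers=2, max_len=128):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, d_model)   # event-code embeddings
        self.pos = nn.Embedding(max_len, d_model)      # position in the history
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, vocab_size)     # scores for the next event

    def forward(self, event_ids):
        # event_ids: (batch, seq_len) integer codes for a patient's history
        seq_len = event_ids.size(1)
        pos = torch.arange(seq_len, device=event_ids.device)
        x = self.tok(event_ids) + self.pos(pos)
        # Causal mask so each position attends only to earlier events
        mask = nn.Transformer.generate_square_subsequent_mask(seq_len).to(event_ids.device)
        return self.head(self.encoder(x, mask=mask))

# One synthetic patient history; training shifts the sequence by one step,
# exactly like next-token prediction on text.
history = torch.tensor([[1, 2, 3, 4, 5]])
model = ToyMedicalEventModel(len(EVENT_VOCAB))
logits = model(history[:, :-1])
loss = nn.functional.cross_entropy(logits.reshape(-1, logits.size(-1)),
                                   history[:, 1:].reshape(-1))
```

In this framing, a "hallucination" is constrained by construction: the model can only emit codes from a fixed clinical vocabulary rather than free-form text, which is one plausible reading of the company's claim about reduced hallucination risk.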

"Interest in practical applications of AI for healthcare has been intense and growing," said Ricky Sahu, GenHealth AI founder and CEO, in a press release. "Generative AI is like nothing else out there, and our mission is to bring that force to healthcare, impacting the daily lives of billions. Soon, most major health decisions will take guidance of an AI and GenHealth will be that AI."

The company is led by 1upHealth founder Sahu along with Eric Marriott and Ethan Siegel, engineers who joined the 1upHealth team in its early stages of growth.

Craft Ventures and Obvious Ventures co-led the funding round. GenHealth AI also added two members to its advisory board: former National Coordinator for Health IT Don Rucker, M.D., and inaugural chief technology officer of the United States Aneesh Chopra. Rucker is currently chief strategy officer at 1upHealth, while Chopra is president of health IT company CareJourney.

Alongside the funding announcement, GenHealth AI said it is launching use cases for payers, pharmaceutical organizations and providers. For payers and providers, the LMM is meant to aid in risk adjustment, care management and financial benchmarking, with the tool targeted at Medicare Advantage plans, Medicaid plans and accountable care organizations.

GenHealth AI said in a press release that it expects to pilot the model with pharmaceutical and life sciences organizations to accelerate synthetic data generation and clinical trial simulation workflows.

“The main goal of LMM is to help the healthcare industry automate decisions based on individual patient histories, rather than rely on rules-based solutions that prevail today,” Sahu wrote in a blog post on the GenHealth AI website. “There are many use cases and markets that can benefit from using a large medical model to automate the billions of transactions that run healthcare behind the scenes. We are already seeing numerous companies in the industry take advantage of all the codified data to predict and manage patient futures via our LMM.”

According to the company, its generative AI models hold an advantage over well-known LLMs because of their superior ability to assess patient care, workflow, financial claims and operational task datasets that follow common standards, including HL7, FHIR and EDI transactions such as 837 claims.
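As a rough illustration of what such codified data looks like in practice, the toy snippet below flattens a FHIR-style Condition resource into the kind of discrete event token used in the sketch above. The field names follow FHIR conventions (code.coding with a system URI and code), but the mapping function itself is a hypothetical sketch, not GenHealth AI's pipeline.

```python
# Hypothetical mapping from a FHIR-style resource to an event token.
fhir_condition = {
    "resourceType": "Condition",
    "code": {"coding": [{"system": "http://hl7.org/fhir/sid/icd-10-cm",
                         "code": "E11.9"}]},
    "onsetDateTime": "2023-01-15",
}

def to_event_token(resource):
    """Map a coded FHIR resource to a single 'ICD10:<code>'-style token."""
    coding = resource["code"]["coding"][0]
    prefix = "ICD10" if "icd-10" in coding["system"] else "OTHER"
    return f'{prefix}:{coding["code"]}'

print(to_event_token(fhir_condition))  # -> "ICD10:E11.9"
```

Because standards like FHIR and EDI 837 already encode care as discrete, structured codes, this kind of flattening is far more direct than extracting events from free-text clinical notes, which is presumably the advantage the company is pointing to.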

Generative AI has been the tech buzzword of the year following the March release of OpenAI's GPT-4, the latest LLM behind ChatGPT. Despite widespread excitement about the technology's promise to change the face of computing, snags have already surfaced, raising concerns about the use of LLMs in medicine.

An LLM hallucination resembles the human phenomenon: something is simply made up. Generative AI hallucinations are outputs that are grammatically correct but built on false assumptions and disconnected from reality. Experts have warned that if AI outputs are not carefully checked by a human expert, the consequences could be dire, especially in fields like healthcare.

“Large language models have no idea of the underlying reality that language describes,” Yann LeCun, chief AI scientist at Meta, told IEEE Spectrum. “Those systems generate text that sounds fine, grammatically, semantically, but they don’t really have some sort of objective other than just satisfying statistical consistency with the prompt.”

Tech giants like Microsoft and Google have expressed their own concerns regarding accuracy and bias in generative AI tools. Earlier this spring, Google’s medical LLM Med-PaLM 2 reached 85% accuracy on U.S. Medical Licensing Examination-style questions. However, the company is currently testing the tool only on non-clinical decision tasks such as prior authorization.

Various private and public organizations are stepping up to address ethical and safety issues around the technology. Stanford Medicine and the Stanford Institute for Human-Centered Artificial Intelligence launched an initiative dubbed Responsible AI for Safe and Equitable Health, or RAISE-Health, with the goal of examining safe uses of AI in healthcare.

That cautious tone is echoed by the estimated 60% of Americans who say they would feel uncomfortable if their healthcare provider relied on AI for their medical care. Still, despite hesitation on all sides and this year's cooling venture market, funding for digital health has remained steady.

"Looking back after a few years, we'll find it unfathomable to have lived without AI in healthcare," Sahu said.