
Advancing medicine with AI at the edge

Swarm learning techniques for decentralizing analysis and sharing insights can bring game-changing acceleration to the pace of discovery.

The drug researchers who developed COVID-19 vaccines in record time were given a crucial advantage: Governments, desperate to quell the pandemic, dropped or modified their normal regulatory and data protection requirements.

If only such speed were the norm.

Concern over data sharing has long slowed drug development, with regulators understandably trying to protect patient privacy and ensure data security. The challenge is how to develop medicines and treatments at the speeds seen during the pandemic without compromising privacy or security.

One potential savior: swarm learning. Swarm learning is a technique for using AI at the edge—say, hospitals or even a doctor's office—to decentralize the analysis of data from multiple locations and then share insights through a learning model that satisfies regulatory and privacy requirements. Swarm learning, which is inspired by the swarming phenomena seen in nature, stands to greatly improve how data is collected and analyzed, evolving from centralized and federated learning models, which still depend on a central coordinator, to a decentralized swarm model using trained artificial intelligence.
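The basic round of such a scheme can be sketched in a few lines. In this toy example (all data and model choices are illustrative, not any vendor's implementation), three hospitals each train a linear model on private data that never leaves the site, then merge only the learned parameters with their peers:

```python
import numpy as np

# Toy "swarm learning" round: three hospitals each hold private data
# and train a linear model locally; only the learned weights travel.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])  # the underlying relationship to recover

def local_train(X, y, lr=0.1, steps=200):
    """Plain gradient descent on one site's private data."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Each site's raw data stays at the site.
sites = []
for _ in range(3):
    X = rng.normal(size=(100, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    sites.append((X, y))

local_weights = [local_train(X, y) for X, y in sites]

# Swarm merge step: peers average parameters among themselves,
# with no central server ever holding the raw records.
merged = np.mean(local_weights, axis=0)
print(merged)  # close to [2.0, -1.0]
```

Each site ends up with a model informed by every other site's data, even though no patient record ever crossed an organizational boundary.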

Please read: Swarm learning and the artificially intelligent edge

The current process needs help. As it stands, individual healthcare institutions collect data from research in isolation. Every project is its own discrete source of data, especially when multiple organizations are running the research. With little sharing or coordinating between organizations, the studies and datasets tend to be small and the research often duplicative.

There's a corporate parallel: the so-called silo effect, where teams or even business units fail to benefit from another team's knowledge because it's kept in a silo. Similarly, "my data" ownership attitudes often emerge and result in tedious handling and duplication of large datasets. Moreover, the datasets are rarely standardized because the data is structured for the specific research or research entity.

The downside of the status quo is twofold: It creates regulatory and privacy headaches when different entities want to share their data, and the insights gained are less accurate and less comprehensive because they are not derived from the largest possible base of data.

The key here is that doctors can't share data because of privacy and regulatory concerns. However, they do want to be able to share the insights derived from that data, quickly and effectively.

The beauty of a swarm

Swarm learning makes sharing "insights only" possible, offering agility and elegant simplicity to data mining while respecting medical requirements such as patient confidentiality as well as regulatory and legal privacy concerns. The result is more accurate diagnoses and better treatment decisions, based on larger datasets of healthcare insights, without compromising patient privacy.

This is possible because in a decentralized learning swarm, the original organization—be it a research team, pharmaceutical company, or hospital—retains control of the data; it does not move, nor is it duplicated. This ensures that privacy and sovereignty are maintained.

Technologies at play

Patients and providers are driving the digitization of healthcare, and people are choosing to digitize themselves through a variety of sources. Those sources include smart watches and exercise monitors, bedside medical devices, or any telemetry system that collects information from patients and inserts it into their medical records. All these devices are considered edge devices.

Please read: How the edge is reshaping healthcare

"When we say edge to cloud, we're looking at the full spectrum. The edge is on the patient's wrist; the edge is the technology applied in the hospital. Telehealth care really brings those two worlds together," says Rich Bird, a healthcare and life sciences expert at Hewlett Packard Enterprise. "The improvements we've seen over the last 12 months in telehealth care—not only in people being paid to do it, but also the technology available to provide it—is something that's unparalleled."

The pandemic will leave an important legacy when organizations start to think about inpatient care, clinical settings, and research settings. Documentation management and workflow management tools have become the major applications healthcare providers use to record and track information. Healthcare professionals also add their interpretation, expertise, knowledge, and empathy into the documents, making it possible for insights and lessons learned to be included in the shared data.

Please read: The rapid transformation of healthcare

"I like to think about swarm learning within the context that if I go to the doctor, she has downloaded all of the lessons that the other doctors in the world learned yesterday," says Bird. "So I'm not just talking to the one doctor's experience that I have in front of me. I'm talking to the collective experience that's been generated, collected, and shared via the swarm."

AI is the perfect technology for swarm learning. Properly trained AI and machine learning can find insights in larger datasets that might escape traditional data modeling techniques, which have access to only limited training data. It works well as the basis for precision medicine and is in line with medical requirements such as protecting personally identifiable information and other sensitive data.

"The beauty of AI is that instead of shifting the data around, you shift the algorithm to where the data resides," says Hartmut Schultze, a business architect in the data lifecycle practice at HPE.

That's what makes this approach to machine learning "democratic." One of the fundamental swarm principles is that the system seeks out equal, like-minded partners on the network, protecting the data and providing security at the edge, where the data resides. The machine learning model gains access to the larger collective dataset, is less susceptible to bias, and is more resilient.
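Schultze's point about shifting the algorithm to the data can be illustrated with a minimal sketch. Here the analysis function runs where each site's records live and returns only aggregate statistics; the function and field names are hypothetical, chosen purely for illustration:

```python
# Hypothetical sketch: the analysis (the algorithm) is sent to each
# site; only aggregate "insights" come back, never patient records.

def local_summary(records):
    """Runs where the data lives; returns only aggregate statistics."""
    ages = [r["age"] for r in records]
    positives = sum(1 for r in records if r["positive"])
    return {"n": len(records), "age_sum": sum(ages), "positives": positives}

def merge(summaries):
    """Combines per-site insights into a swarm-wide view."""
    n = sum(s["n"] for s in summaries)
    return {
        "n": n,
        "mean_age": sum(s["age_sum"] for s in summaries) / n,
        "positive_rate": sum(s["positives"] for s in summaries) / n,
    }

# Two sites' private record sets (illustrative data).
site_a = [{"age": 40, "positive": True}, {"age": 60, "positive": False}]
site_b = [{"age": 50, "positive": True}, {"age": 30, "positive": False}]

overall = merge([local_summary(site_a), local_summary(site_b)])
print(overall)  # {'n': 4, 'mean_age': 45.0, 'positive_rate': 0.5}
```

The merged view is derived from all four records, yet neither site ever saw, copied, or transmitted the other's data.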

Becoming an early adopter

Beginning the process of moving to swarm learning involves an organization-wide commitment. The heavy lifting doesn't rest just on the IT department's shoulders. Since the first steps can be done in the cloud, Schultze recommends beginning with a centralized model and then decentralizing it.

"Like any organization, a hospital or research institute is a central organization at its core," he says. "Follow cloud-native structures. Containerization is the key to having the right abstraction that everybody agrees on. For example, every orchestration is standardized on Kubernetes. It doesn't matter if it runs on an edge device or on an IoT device, on a cloud or in a hospital, on a large machine or on a scaled-out machine."
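A minimal sketch of what that container-first approach could look like in practice, assuming Kubernetes as the orchestrator Schultze describes (the image name and node labels here are illustrative assumptions, not a product configuration):

```yaml
# Hypothetical Deployment: the same training container can be scheduled
# on a cloud node today and an edge node later by changing one selector.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: swarm-learning-node
spec:
  replicas: 1
  selector:
    matchLabels:
      app: swarm-learning-node
  template:
    metadata:
      labels:
        app: swarm-learning-node
    spec:
      nodeSelector:
        location: cloud   # flip to "edge" to run beside the data
      containers:
        - name: trainer
          image: example.org/hospital/swarm-trainer:latest  # illustrative image
```

Because the workload is containerized, the abstraction Schultze describes holds: the same artifact runs unchanged whether it lands in a cloud region or on a server down the hall from the data.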

Be prepared to experience many challenges along the way, such as interoperability issues between the major global applications, like Epic, Cerner, and Meditech. The problem is that each stores data in a proprietary format. Says Bird, "When we're designing systems like this for use in swarm learning, we also need to design a set of agreed-upon principles on the formats in which the data is shared."
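Those agreed-upon format principles amount to mapping each vendor's proprietary record shape into one shared schema before any swarm step sees the data. A hypothetical sketch, with vendor names and field names invented for illustration:

```python
# Hypothetical sketch of "agreed-upon principles on the formats":
# each proprietary record shape is mapped into one shared schema.

FIELD_MAPS = {
    "vendor_a": {"pt_dob": "birth_date", "dx": "diagnosis_code"},
    "vendor_b": {"DateOfBirth": "birth_date", "DiagCode": "diagnosis_code"},
}

def to_shared_schema(record, vendor):
    """Renames vendor-specific fields to the agreed common names."""
    mapping = FIELD_MAPS[vendor]
    return {mapping.get(k, k): v for k, v in record.items()}

# The same clinical fact, stored two proprietary ways, converges.
a = to_shared_schema({"pt_dob": "1980-01-01", "dx": "J45"}, "vendor_a")
b = to_shared_schema({"DateOfBirth": "1975-06-30", "DiagCode": "J45"}, "vendor_b")
assert a.keys() == b.keys()  # both records now share one schema
```

In a real deployment this mapping layer is where a standard such as FHIR would do the heavy lifting, rather than a hand-maintained dictionary.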

Please read: EHRs prove invaluable in Israel vaccine campaign

The next challenge is curation that yields good data. How much preprocessing must be done? And can it be done locally, producing only quality data for analysis that is then shared with the swarm?
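A local curation step of that kind might look like the following sketch, where rows failing a quality check are dropped and direct identifiers are stripped at the source; the column names and quality rule are illustrative assumptions:

```python
# Hypothetical local curation step: strip direct identifiers and keep
# only quality rows before anything is prepared for the swarm.

IDENTIFIERS = {"name", "ssn", "address"}

def curate(rows):
    cleaned = []
    for row in rows:
        # Quality gate: skip rows missing the measurement of interest.
        if row.get("hba1c") is None:
            continue
        # Anonymize: drop personally identifiable fields at the source.
        cleaned.append({k: v for k, v in row.items() if k not in IDENTIFIERS})
    return cleaned

raw = [
    {"name": "A. Patient", "ssn": "000-00-0000", "age": 54, "hba1c": 7.1},
    {"name": "B. Patient", "ssn": "000-00-0001", "age": 61, "hba1c": None},
]
print(curate(raw))  # [{'age': 54, 'hba1c': 7.1}]
```

Because this runs where the data is generated, only anonymized, analysis-ready rows ever feed the local model.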

Once the information is prepared for decentralization, there is a harmonized and democratized way of sharing insights from that data, and the benefits of sharing start to become obvious. That, in turn, "motivates the data owner or the data possessor where the data is generated, while incentivizing that one who's creating the model for the algorithm and for the pattern recognition as well," says Schultze. "So everybody contributing is benefiting."

Please read: Cloud to the rescue: EHR as a service drives future of healthcare

Medical and research professionals agree that people want to share if it's possible to do so using anonymized data. But for this to happen, standardization is critical. Once in place, this will lower hurdles, making it easier for everybody to participate.

Fast Healthcare Interoperability Resources (FHIR), a standard published by the Health Level Seven (HL7) standards body, aims to assist in this standardization of healthcare interoperability. One major benefit of global standardization is that it can protect companies by shielding their source code from competitors.

Starting from scratch

How much of this capability is currently in place with today's technologies? "Not much," according to Schultze. "Medical organizations and governments discuss each data-sharing agreement on a case-by-case basis, deciding if and how it's done, and then the paperwork gets drawn up. That takes about three months, and then they exchange data peer to peer. Then they have another consortium and do it all over again."

Not surprisingly, then, there's no clear roadmap. However, almost all of the technology necessary for the back end of a complete swarm learning solution is already available to the potential customer. The technology side of the industry is ready for regulatory bodies and healthcare organizations to make this move. And this is the move that will make research at the speed at which COVID-19 was addressed the norm, rather than the exception.

This article/content was written by the individual writer identified and does not necessarily reflect the view of Hewlett Packard Enterprise Company.