AI in the laboratory: a tool, not a standalone solution
The article discusses practical uses of AI in labs, stressing the importance of treating it as a tool rather than a standalone solution. It explores AI’s role in image analysis, language processing, billing, quality control, training, efficiency analysis, and diagnostics. Collaboration between data scientists and medical professionals is highlighted as crucial for maximizing AI’s potential. The article advocates for a symbiotic human-AI relationship to drive lab businesses forward.
AI is trendy. Whatever you do, if you can’t make AI work for you, you don’t exist. Speculation abounds about how AI will turn this or that field, and the whole world, upside down. Startups with AI in their name are worth an order of magnitude more. Everyone is debating whether AI will make people obsolete and bring about the end of the world.
The theory behind AI, however, was worked out long ago. Neural networks have been around for decades, although computing power and training approaches used to be different. Yes, AI opens up new opportunities. Yes, we can build new products and business processes on top of it. But still, AI is nothing more than a tool. And the most important thing about any tool is knowing how to use it. That skill costs much more than the tool itself. So the opinion that the advent of AI is turning the world on its head is somewhat exaggerated.
While Elon Musk worries about doomsday, we will talk about how to use AI in the lab: what it is worth, what it can do, what specialties it has, and what tasks it can solve.
If we know this, we can use AI not only for marketing purposes (to show that we have it and are worthy of investment for that reason alone) but also for business.
While CAPTCHAs still judge us human by our ability to find traffic lights in pictures, AI overtook us at that job long ago. It has a long history of helping labs find and classify objects in visual images. So far AI has mostly dealt with anatomic pathology, but there are other disciplines it has yet to conquer. More and more tasks are being reframed as image recognition. Experiments are underway to let AI recognize different types of microorganisms, and there are current attempts to use AI to classify blood cells in hematology.
In short, wherever the lab has always relied on technology like the microscope, we can now use AI. But we can also think of unconventional applications for its ability to see details; for example, we could turn that eye on how our own processes work.
One of the most interesting AI tasks in this area could be classifying samples by their pre-analytical quality. We always want to know what is wrong with a sample as early as possible. Oftentimes, AI only needs a picture to notice hemolysis or something else that prevents the sample from being used. Better still, show AI the sample before it even reaches the lab; that way it helps us make fewer mistakes.
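To make the idea concrete, here is a toy sketch of image-based pre-analytical triage. It is not a real diagnostic method: it simply measures how red a serum-sample photo is on average, on the assumption (for illustration only) that strong redness hints at hemolysis. The pixel format and the 0.6 threshold are invented for this example.

```python
# Toy illustration (not a real diagnostic method): flag possible hemolysis
# from a sample photo by measuring how red the pixels are on average.
# The image is a list of (R, G, B) tuples; the 0.6 threshold is an
# arbitrary assumption chosen for this sketch.

def redness_score(pixels):
    """Mean share of the red channel across all pixels (0.0 to 1.0)."""
    total = 0.0
    for r, g, b in pixels:
        channel_sum = r + g + b
        total += r / channel_sum if channel_sum else 0.0
    return total / len(pixels)

def looks_hemolyzed(pixels, threshold=0.6):
    """Return True when the sample photo is suspiciously red."""
    return redness_score(pixels) > threshold

# A pale-yellow (normal serum) patch vs. a deep-red (hemolyzed) patch.
normal = [(200, 180, 120)] * 10
hemolyzed = [(220, 40, 30)] * 10
```

A production system would of course use a trained image classifier rather than a single hand-picked feature, but the workflow is the same: photograph the tube, score it, and reject or flag it before it consumes lab resources.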
For the same purpose, it would be good to feed AI images of our lab from all angles, inside and out, at different times of day. How well is the room laid out? Are there any suboptimal actions? By analyzing this information, AI could help us optimize our work. Perhaps we could even get it to use the pictures to look for a missing sample, or (with all the necessary privacy measures) observe the blood collection process and recommend how to improve phlebotomists’ techniques.

AI is good because it lacks the fundamental human traits of laziness and inattention. We want it to act as our filter so that our imperfections don’t lead to failures.
While everyone has grown used to image recognition, LLMs still leave people ecstatic.
An LLM repeatedly predicts the next word, having learned statistical relationships from text documents during self-supervised and semi-supervised training. These language models seem to us to have intelligence. But in fact, neither Mistral, OpenAI, nor Copilot “understands” what it is saying. They are not in charge of meaning; they only know how many people have said something before them. That’s enough to be a minor miracle, though.
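The "predict the next word from statistics" principle can be shown with something far simpler than a neural network: bigram counts. This sketch just picks the most frequent continuation seen in a tiny corpus; real LLMs learn vastly richer relationships, but the core idea of choosing a likely continuation is the same.

```python
# Minimal sketch of next-word prediction from statistics, using bigram
# counts instead of a neural network. The tiny corpus is invented.
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count which word follows which across a list of sentences."""
    follows = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for current, nxt in zip(words, words[1:]):
            follows[current][nxt] += 1
    return follows

def predict_next(follows, word):
    """Return the most common continuation, or None for an unseen word."""
    counts = follows.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

corpus = [
    "the sample was hemolyzed",
    "the sample was rejected",
    "the sample was hemolyzed",
]
model = train_bigrams(corpus)
```

Asked what follows "was", this model answers "hemolyzed", simply because that continuation appeared most often. It has no idea what hemolysis is; it only knows how many times people said it.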
LLMs make generating relevant information significantly cheaper, and they work with a large number of sources. Everything generative AI can do, we could do before; now we can do it inexpensively. When we used to turn a laboratory report into a detailed interpretation of how the measured parameters affect the human body, we would take the result sheet, surround ourselves with books, and write. But that consumed a lot of a qualified professional’s valuable time. LLMs can write such texts quickly and cheaply. So the most logical place to use them is where we tell the customer how she can use the results and what they indicate.
The main pain point for the lab community is billing. It is very tempting to let AI take on this headache. After all, AI doesn’t have a head, so there is nothing to ache!
We still haven’t solved this problem, though. A lab tries to collect money from many insurance companies; our clients are treated by different doctors; diseases are numerous. We should not be overgenerous and pay for whatever tests people dream up. But we do need to know that the tests a patient needs are covered by the insurance product and will be paid for. Then the laboratory can perform them well and receive funds for its development. The task is hard to optimize, but we are hopeful about AI’s potential: there are already experiments in this area, though they have not yet reached mass application.
Another area AI has not yet tackled, because we have not defined the approaches, is quality control and research quality assurance. Quality improves when we eliminate as many errors as possible. Since AI has classification skills, it could watch the quality of live samples rather than just controls. Checking one hundred percent of tests would make quality management a completely different discipline. That is where we expect a breakthrough. So far we haven’t seen such products on the market, although successful experiments exist; all that is needed to implement them is time and money.
For quality to be high, employees must stay up to date, which means keeping them informed and on the cutting edge. AI is unmatched at gathering information for employees so that they know about the latest developments in modern testing: how and why tests are used, and how staff can perform them better. A laboratory specialist needs a lot of information to do their job well, and AI can deliver it in a timely manner. So far, though, this has not grown into large-scale projects.
With AI’s ability to find inconspicuous connections, we could analyze the lab’s activities, evaluate its performance, and produce uncommon metrics. Modern systems can pick out patterns in data that are not visible to humans; for example, an AI chef combines foods in ways that no human chef would consider and that turn out to be incredibly tasty.
The laboratory floor is not the only place where its future success is forged. Its partners also actively integrate AI solutions into their hardware and software systems. Developers and manufacturers of instruments, calibrators, controls, and consumables are all starting to implement AI. They aim to create tools and processes that will turn the life of the laboratory upside down.
Over the last few years, interest in point-of-care (POC) testing has continued to grow. The emergence of continuous monitoring systems is making the industry’s name, “in vitro,” inaccurate, as we now do a lot of in vivo analysis. AI can give this trend quite a boost. POC systems produce large data sets at high frequency, and you simply can’t handle those without AI. It’s one thing to measure temperature five times a day; it’s another to measure it fifty times a day. Analyzing such arrays without AI is impossible.
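Even the simplest automated screening shows why high-frequency monitoring needs software in the loop. This sketch flags readings that drift far from a trailing moving average; the window size and tolerance are illustrative assumptions, not clinical thresholds.

```python
# Sketch: screen a high-frequency stream of readings for outliers by
# comparing each reading against the trailing mean. Window and tolerance
# are arbitrary illustrative values, not clinical limits.

def flag_outliers(readings, window=5, tolerance=1.0):
    """Return indices of readings deviating from the trailing mean."""
    flagged = []
    for i in range(window, len(readings)):
        baseline = sum(readings[i - window:i]) / window
        if abs(readings[i] - baseline) > tolerance:
            flagged.append(i)
    return flagged

# A day of body-temperature readings: stable at 36.6 C, one spike to 38.5 C.
temps = [36.6] * 10 + [38.5] + [36.6] * 9
```

With five readings a day, a person could eyeball this. With fifty readings a day per analyte per patient, even this trivial check has to run automatically, and real systems will use far more sophisticated models.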
So, although AI is, as we said, “just a tool,” it’s still a marvelous tool. With it at our disposal we can realize solutions that were previously unimaginable.
Genetics is another promising area for AI. Before, we could confidently work only with monogenic factors, where one specific gene manifests a phenotypic trait. Now AI can analyze polygenic dependencies, and many of them turn out to be a piece of cake for it. Genes work in coordination and depend on environmental factors. Analyzing and applying this knowledge for the benefit of humans could be revolutionary. But to do so, we need to start combining data differently and analyzing it comprehensively, in larger volumes than before.
So, we’ve looked at the corners of the lab where we could use AI’s help to play around and clean up the mess. But the main question is not “what” but “how.” What principles should be the basis for cooperation with AI to make it fruitful?
What does it like most? Yum-yum, data! The more AI gets, the better it works; that is a core property of its math. The more numbers we feed it and the more information we train it on, the better its results become.
Labs are data collectors, so everyone wonders: can’t we treat the LIMS as a data lake? Sure you can, but there are nuances. You can’t draw interesting conclusions just by comparing test results. You must learn to enrich clinical results by combining them with data from other systems. To do this, you need data interoperability: taking information from different sources and handling it with a range of approaches. That is a non-trivial task worth thinking about.
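Here is a minimal sketch of what "enriching" results might look like: joining each test result with patient context pulled from a second system, keyed on a shared patient ID. The field names and data are invented for illustration; real interoperability involves standards, mappings, and consent, not a dictionary merge.

```python
# Sketch: enrich lab results with context from another system (e.g. an
# EHR) via a shared patient ID. All field names and values are invented.

results = [
    {"patient_id": "P1", "test": "glucose", "value": 9.1},
    {"patient_id": "P2", "test": "glucose", "value": 5.0},
]
# Context held in a separate system, indexed by the same ID.
demographics = {
    "P1": {"age": 67, "on_insulin": True},
    "P2": {"age": 30, "on_insulin": False},
}

def enrich(results, demographics):
    """Merge demographic context into each result; unknown IDs stay bare."""
    return [
        {**row, **demographics.get(row["patient_id"], {})}
        for row in results
    ]

enriched = enrich(results, demographics)
```

A glucose of 9.1 means something different for a 67-year-old on insulin than for a healthy 30-year-old; only the joined view lets a model, or a human, see that.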
To nurture AI properly, you need a good team of data scientists and medical professionals who understand the nature of your data. It’s a big mistake to treat the medical information we feed to the neural network as mere lines of text. We need to understand better what we give to AI and what we want it to accomplish. It’s a huge challenge. Today, the people who do data analytics struggle to find common ground with the medical people who understand the process, and the two approaches are still seen as alternatives. Either I build a logic-based model, carefully spelling out every node in the graph of where, how, and why information is applied, or I feed the material to the neural network and let it digest. But it is better to combine both. Humans can guide AI in a paternal way while still trusting it to find patterns invisible to our eyes.
The more we can mix data from different sources, the more different specialists we bring together to work on projects, and the more attention we pay to safety aspects (that is not about the robot-apocalypse but rather about the good of the patient), the more effectively we work with AI.
Among other things, the technical challenge of validating the system is non-trivial. AI will make mistakes too, and it is crucial to understand whether its errors stay within acceptable limits. It is still unclear how to do this: AI is not a car whose hood you can simply pop open. Indirect methods of checking are a matter for the near future.
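One such indirect check is black-box validation: run the model on a labeled validation set and test whether its observed error rate stays inside an acceptance limit. This sketch shows the bare mechanics; the 2% limit is an arbitrary assumption, not a regulatory figure, and real validation would also consider sample size and confidence intervals.

```python
# Sketch of black-box validation: compare model predictions against
# ground truth and check the error rate against an acceptance limit.
# The 2% limit is an arbitrary assumption for illustration.

def error_rate(predictions, truth):
    """Fraction of predictions that disagree with the ground truth."""
    errors = sum(p != t for p, t in zip(predictions, truth))
    return errors / len(truth)

def within_acceptance(predictions, truth, limit=0.02):
    """True when the observed error rate does not exceed the limit."""
    return error_rate(predictions, truth) <= limit

# A validation set with 95 normal samples and 5 hemolyzed ones.
truth = ["ok"] * 95 + ["hemolyzed"] * 5
good_model = ["ok"] * 95 + ["hemolyzed"] * 5   # catches every case
weak_model = ["ok"] * 100                      # misses all 5 hemolyzed
```

The point is that we never look inside the model at all; we only judge it by its outputs, the way a proficiency-testing scheme judges a lab.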
The first industrial robots worked according to a strict algorithm: pick up a part here, take it there, turn it three times, and repeat 8,000 times. They had “Stay away, I’ll kill you” written on them.
Now robots have AI that can figure out where to go and how to cooperate with humans. They will not damage a human working with them in the same area.
Trends are the same with AI in the lab. It is not worth hoping that AI will come and solve everything. After all, AI may have the Intelligence component, but social and emotional intelligence is still lacking, and we don’t expect it anytime soon. Since it is not a human, it observes human interests only insofar as the team in which it works teaches it to do so.
One can imagine AI as a strange genius from a TV series, working in a friendly and warm team of other actors: working alone he might make trouble, but together they work for the good of the people.
Or you can compare AI to a child prodigy: without fully understanding how it develops, the people around it raise expectations to the ceiling, only for the child to tire of them. In this article alone, the word AI is repeated 80 times, which is more than enough. We should raise our miracle child carefully, surround it with loving people, and attend to its career guidance so that in the future it unfolds to its full potential.
As we can see, the idea that AI can fully replace humans in the lab is folly. Success will come to those who learn to work hand in hand with AI effectively and, most importantly, to get the IT people and the lab diagnostics specialists working together. If both humans and AI play to their strengths, the result is a complementary and fruitful partnership. It is not a solitary, self-sufficient AI, but our cooperation with it, that will lead the lab business to the next industrial leap.
https://about.vivica.us | info@lifedl.net
© 2024 Life Data Lab, LLC.
Vivica and the Vivica logo are trademarks of Life Data Lab, LLC.
Life Data Lab, LLC is an FDA-registered device manufacturer.
Vivica™ is an FDA-listed, class I laboratory information management system.