Every medical product is born twice: first in design, then in the real world. Digital twins ensure the first birth gets it right.
The article draws on the insights of Ognjen Milicevic, Senior Machine Learning Engineer at HTEC and Teaching Assistant at the School of Medicine, University of Belgrade—an interdisciplinary scientist and engineer whose work bridges medicine, data science, and system modeling. A recognized thinker in the field, Ognjen explores how AI, big data, and digital twin technologies are reshaping the future of personalized medicine.
For decades, medicine has advanced through observation and reaction. We discovered, tested, and refined treatments in the physical world, learning from success and failure alike. But with the explosion of biological data, high-throughput analytics, and virtual modeling, that process is changing. Today, technology allows us to simulate patients and diseases, test hypotheses, and personalize care long before it reaches the clinic. The convergence of genomics, big data, and digital twin technology is reshaping medicine and redefining how we understand life itself.
From genomics to “multi-omics”: Building the foundation of data-driven medicine
The story of personalized medicine begins with genomics, the science that unlocked our biological code. When scientists first sequenced the human genome in the early 2000s, it was hailed as the key to understanding health and disease at their roots. But researchers soon realized that DNA alone doesn’t tell the full story. Our genes are static; they don’t change throughout life. The real complexity lies in how that genetic code is expressed and regulated over time.
That realization gave rise to the era of the “omics.” Scientists began studying RNA (transcriptomics) to understand when and how genes are turned on or off, then moved to proteomics, mapping the proteins those genes produce, and metabolomics, analyzing the chemical reactions that keep cells alive. Each discipline revealed a new layer of biological insight.
Together, these layers form a dynamic map of human biology that captures not only who we are genetically, but how we function in real time. Integrating these vast datasets required a new language: big data analytics.
This multidimensional data has transformed our view of disease from a single cause to a dynamic system. The more data we integrate, the clearer the picture becomes and the closer we move to medicine that’s predictive by design.
The quiet revolution of personalized medicine
Technological breakthroughs in clinical practice haven’t been loud or spectacular. They often happen behind the scenes, in ways most patients don’t notice, but clinicians do. Take radiology: over the past fifteen years, radiomics (the quantitative analysis of medical images) has transformed how clinicians interpret everything from X-rays to tissue slides. This revolution has been unfolding in laboratories and imaging rooms, one algorithm at a time.
One of the most striking examples of this silent progress is non-invasive prenatal testing (NIPT). A decade ago, it sat at the edge of genomics as an ambitious experiment. Today, it’s a standard clinical procedure, able to detect chromosomal anomalies in fetal DNA. For patients, it feels routine. For clinicians, it’s a landmark in the integration of science into daily practice. It replaced the older double and triple screening tests with something far more precise: information drawn from fetal DNA circulating in the mother’s blood.
This slow-burn revolution matters because when science moves too far ahead of clinical reality, it risks becoming suspended innovation, hanging loose, too advanced for doctors to use. The steady embedding of genomics, imaging, and data analytics into everyday medicine shows how transformation really happens: incrementally, responsibly, and hand in hand with clinicians.
Another issue is that whenever we have huge leaps, there’s pushback and distrust from the public. We saw this during the pandemic. The heavy regulatory processes controlled by the Food and Drug Administration (FDA) in the US and the European Medicines Agency (EMA) in the European Union are there for a good reason. Unfortunately, there will always be a patient who won’t make it in time to receive a newly approved therapy. This is the tradeoff we must make.
Addressing the challenges new technologies introduce in MedTech and healthcare
When it comes to introducing new technologies into medicine, regulation is both our safeguard and our greatest challenge. In the US, the FDA is the leading body that governs and approves medical breakthroughs and emerging technologies, while in Europe, the EMA follows a similar path: more cautious and a bit slower, but with stronger built-in safeguards.
I’ve worked more closely with the FDA, particularly in medical devices, and I’ve seen the regulatory burden firsthand. The requirements are not unreasonable. In fact, the agency is remarkably liberal about what you choose to prove to them, but whatever you decide to prove must be transparent, reproducible, and thoroughly documented. That makes perfect sense. Still, it’s a heavy lift. You need to maintain a full history of everything you’ve done—complete provenance from day one—which is no small task when you’re prototyping and moving fast. At times, it feels stifling. It slows you down.
Fortunately, things are changing. We now have automated testing processes that take much of the strain off teams, while keeping humans in the loop to review and discuss results. It makes the system more flexible without compromising safety. I’ve seen this approach implemented at HTEC three times, from inception to completion, and I can honestly say that testing today is in a very good place.
One distinct issue remains, though: privacy. Data is the new gold, and the idea that patients truly own their data doesn’t hold up in practice. Many don’t fully understand what they’re signing or how their information will be used, while some simply don’t care. But for those of us building the technology, privacy must be engineered in from the start. It can slow development, yes, but it’s non-negotiable.
I believe regulators could help by setting clearer rules around data use in AI. Encouragingly, the FDA has recently approved the use of modern technologies like digital twins and personalized medicine in clinical studies. That’s opened up a world of opportunity for faster, more efficient innovation. We can now use algorithms to accelerate trials, reduce costs, and even cut down on animal testing. The benefits are immense, and we’re only just beginning to explore them.
Digital twins: modeling life itself
Digital twins have become one of the most exciting frontiers in medical science. In theory, a digital twin is an algorithm designed to respond just like the system it represents to external stimuli such as medication or environmental changes. In medicine, these models replicate humans or parts of humans, which is anything but simple. We are systems made up of trillions of parameters (lines of code, if you will), and each model must specialize in a particular function, such as an organ, a process, or a disease.
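To make the idea tangible, here is a minimal, hypothetical sketch in Python: a one-compartment pharmacokinetic model acting as a crude “twin” of how a patient’s plasma drug concentration responds to a dose. The function name and every parameter value are illustrative assumptions, not a clinical model; real digital twins couple many such sub-models at far higher fidelity.

```python
import numpy as np

def simulate_drug_response(dose_mg, clearance_l_per_h, volume_l, hours=24.0, dt=0.1):
    """Toy one-compartment pharmacokinetic 'twin': predicts plasma drug
    concentration over time after a single IV bolus dose.
    All parameter values here are illustrative, not clinical."""
    k_elim = clearance_l_per_h / volume_l          # first-order elimination rate (1/h)
    times = np.arange(0.0, hours, dt)
    concentration = (dose_mg / volume_l) * np.exp(-k_elim * times)
    return times, concentration

# "Personalize" the twin with patient-specific parameters (invented here),
# then probe it with an external stimulus: a 500 mg dose.
times, conc = simulate_drug_response(dose_mg=500, clearance_l_per_h=4.0, volume_l=42.0)
print(f"Peak: {conc[0]:.1f} mg/L, after 12 h: {conc[int(12 / 0.1)]:.1f} mg/L")
```

The point is not the specific equation but the behavior: feed the model a stimulus, and it answers the way the patient plausibly would.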
As the saying goes, “All models are wrong, but some are useful.” These models are useful. Today, there are companies dedicated solely to emulating and simulating patients with proprietary algorithms. It’s a major step toward personalized medicine, and it’s not the science of tomorrow; it’s the science we already have.
To see how transformative this can be, consider how most clinical research works. The standard design is the case-control study, which compares patients who have a condition (the cases) with otherwise similar healthy people who do not (the controls). What surprises many is how difficult it is to find enough healthy controls for these studies. They simply aren’t available in the same way that patients are.
In response, scientists often perform meta-analyses, merging dozens of smaller studies on the same topic to strengthen statistical power. One would think that combining ten thousand participants would provide a balanced dataset, but often, all those studies relied on the same limited set of healthy controls. This results in unbalanced studies that weaken rather than strengthen conclusions.
This is where digital twins change everything. By simulating healthy control groups, they fill one of the biggest gaps in medical R&D. They allow researchers to model outcomes, test hypotheses, and validate therapies without relying solely on scarce human participants. That alone is a game-changer, and it makes studies faster, more ethical, and far more scalable.
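Commercial patient simulators are proprietary and far richer than anything that fits here, but a toy sketch with entirely invented numbers shows the statistical idea: when real healthy controls are scarce, a simulated control arm drawn from a reference distribution can stand in for the comparison.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical biomarker measurements from a small real patient cohort (the cases).
patients = rng.normal(loc=6.2, scale=1.1, size=40)

# Simulated "in silico" control arm: values sampled from a published reference
# distribution for the same biomarker (both distributions are invented here).
synthetic_controls = rng.normal(loc=5.0, scale=1.0, size=40)

# Compare the real cases against the simulated control arm.
t_stat, p_value = stats.ttest_ind(patients, synthetic_controls)
print(f"t = {t_stat:.2f}, p = {p_value:.4g}")
```

Real digital-twin control arms are built from full patient-level simulations rather than a single sampled biomarker, but the role they play in the study design is the same.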
Double twins: the patient and the disease
Replicating the diversity of human biology is already a remarkable feat. But disease brings its own variability, which is harder to model, less predictable, and constantly changing. It’s far easier to simulate the normal range of human biology, where data is abundant, than to recreate all the nuances of a disease. That’s the next frontier: building not just a patient’s digital twin, but also a twin of their disease. This pairing makes a double twin, a model that captures both the person and the pathology. We’re not there yet, but we’re getting close.
The next frontier: from digital twins to living models
One of the most promising bridges between the digital and physical worlds is the development of organs-on-chips: tiny lab-grown tissues, known as organoids, integrated with sensor chips. These organoids are grown from real human stem cells and cultivated in controlled environments where sensors can measure every reaction in real time.
This leap built on the discovery that mature cells can be reprogrammed into stem cells, recognized with a Nobel Prize in 2012, which made it possible to grow small, functional organs in vitro. Since then, organoids have helped explore conditions like Down syndrome, by observing how altered environments shape brain structure, and gut health, through miniature intestines exposed to different bacteria, among others.
It still feels like science fiction, yet it’s already here. Science is evolving so fast that practice is still catching up. When it does, digital and biological models will finally work in concert.
Data: The building block of digital life
The power of digital twins lies entirely in the quality of the data that feeds them. Each layer of biological information, from DNA and RNA to proteins and metabolites, contributes to a distinct “signature” of the human condition.
When scientists study the effect of a medication, they first determine the level of information on which it’s likely to act: DNA, RNA, or the proteome. In many cases, the focus is on RNA, which captures changes over time and sits closest to the origin of genetic activity. By comparing RNA expression between patients and healthy controls, researchers can observe which genes are turned up or down in disease.
With more than 20,000 genes producing measurable signals, each condition leaves behind a unique molecular pattern of RNA expression, which we call a disease signature.
The disease signature can then be compared against vast databases of substances and drugs: compounds whose expression patterns run in the opposite direction could cancel out the signature. This makes it possible to identify promising therapies among existing, already-approved drugs whose safety and metabolism are well understood.
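A schematic of this signature-reversal idea, in the spirit of connectivity-map-style drug repurposing, might look like the sketch below. All of the data is synthetic and the drug names are placeholders; real pipelines use curated expression databases and more robust statistics.

```python
import numpy as np

rng = np.random.default_rng(0)
n_genes = 20_000

# Hypothetical expression matrices (samples x genes); in practice these would
# come from RNA sequencing of patients and healthy controls.
patients = rng.lognormal(mean=1.0, sigma=0.5, size=(30, n_genes))
controls = rng.lognormal(mean=1.0, sigma=0.5, size=(30, n_genes))

# Disease signature: per-gene log2 fold change between patients and controls.
disease_signature = np.log2(patients.mean(axis=0) / controls.mean(axis=0))

# Hypothetical drug signatures: the expression shift each compound induces.
drug_signatures = {f"drug_{i}": rng.normal(0.0, 0.2, n_genes) for i in range(100)}

# Rank drugs by how strongly their signature anti-correlates with the disease;
# the most negative correlations are the best candidates to "cancel out" the signature.
scores = {name: np.corrcoef(disease_signature, sig)[0, 1]
          for name, sig in drug_signatures.items()}
top_candidates = sorted(scores, key=scores.get)[:3]
print("Top reversal candidates:", top_candidates)
```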
This is where mathematics meets medicine. The same principles that govern systems engineering now guide the discovery of new therapies, translating complexity into computable models that can predict outcomes long before a single dose is administered. Thus, a digital twin becomes both a mirror and a map, a simulation of life built from the information that defines it.
The role of AI in disease prediction and prevention
Prediction is the most common use case for AI. Part of the reason adoption has been slower in medicine is that prediction is only a small part of the overall diagnostic process. Generally, people still need actual checkups and screenings to understand what’s going on.
However, true AI value comes when predictive insights lead to earlier screenings, smarter resource allocation, and more personalized interventions. AI doesn’t replace diagnostics; it reframes them. It turns medicine into a proactive system that identifies who might need care, when, and why.
Collaboration: the catalyst for breakthroughs
No single field can deliver this future alone. The Oden Institute in Texas offers a striking example: a collaboration between medical researchers from the Center for Computational Oncology and aerospace engineers from the Willcox Research Group used MRI data to build digital twins of brain tumors. The project produced models capable of predicting patient responses to radiation therapy—a breakthrough in treating glioblastoma, one of the most aggressive cancers known.
Such stories show that innovation lives at the intersection of clinicians and data scientists, biology and computation, theory and application.
Looking ahead
To make medicine truly personalized, we must continue bridging disciplines, standardizing data, and embedding ethical frameworks into every algorithm we create.
As we learn to model life itself, the line between research and care begins to blur. Personalized medicine is rapidly merging medical and engineering challenges. As an engineer involved in building these systems, I can say that many breakthroughs are on the way.
At HTEC, we see this every day. Personalized medicine depends on precision engineering. From data infrastructure to digital twin design, we’re helping our partners across the medtech ecosystem make medicine predictive by design, creating more precise treatments tailored to each patient’s unique condition and biology.





