Following a successful inaugural AI-first executive event in London, HTEC hosted an engaging dinner and panel discussion in New Jersey with healthcare leaders from Bayer, Baxter, Amgen, Verizon, NJII Innovation, Think Tank, and other prominent organizations. Bringing together experts and industry leaders, the event explored how AI is driving the next wave of innovation, and complexity, in healthcare.
Moderated by Alfred Olivares, HTEC’s Global Managing Partner, HLS, the panel featured Tim Sears, HTEC’s Chief AI Officer; Robin Goldsmith, Global Lead of Health Innovation & Strategy at Verizon; and Nemanja Kovačev, MD, orthopedic and trauma surgeon and AI expert at HTEC. Here’s what stood out from the conversation.
Overcoming bias in healthcare data
AI is already transforming healthcare, from diagnostic imaging to record-keeping. Yet, in a highly regulated and high-risk industry, unexpected errors can have serious consequences. That’s why addressing challenges like model hallucinations and biases in training data is critical and must be approached systematically.
As Tim observed, “Integrating LLMs in healthcare workflows can be tricky, as these models are usually meant to cover a wide spectrum of use cases. As bias comes from data, companies need to start with a well-balanced dataset, which is easier if they are training the models themselves.”
Since healthcare organizations often fine-tune existing AI models rather than train their own, inheriting biases is almost inevitable. Therefore, it’s essential to critically evaluate AI outputs and ask rigorous questions before integrating these models into real-world workflows.
Human in the loop remains irreplaceable
Building on Tim’s point about maintaining control over AI models, Nemanja insisted on keeping a human in the loop at all times, especially in higher-risk areas such as diagnostics and treatment.
“Healthcare is specific due to inherent risks. We cannot fully rely on the models yet, so we need the human in the loop. AI explainability can help build trust not only among patients and doctors, but also across the broader network of healthcare stakeholders. Understanding how AI reaches its conclusions is key to ensuring transparency, accountability, and confidence in its use.”
As Nemanja observed, the recently introduced EU AI Act is a step toward strengthening explainability by imposing transparency, logging, and documentation requirements, along with a legal right to explanation, for high-risk AI systems. As technology and regulations advance, it’s vital, especially in high-risk fields like diagnostics and treatment, that AI extends rather than replaces human expertise.
A huge bet on AI in healthcare
Globally, healthcare is facing a staff shortage, and this is where Robin sees immense potential for AI:
“With the global shortage of staff, we’re looking for cases where AI will not replace humans, but fill in some gaps, for example, in areas like back office and virtual nursing. Healthcare generates vast amounts of data, and with its capacity to process it quickly and efficiently, AI can support medical record keeping, automate administrative workflows, and assist with coding and billing, to mention just a few use cases.”
Given the sensitive nature of patient data, security concerns come to the forefront. Recent breaches have made secure connectivity crucial, and Robin thinks that solutions like private 5G networks could be a step toward building more secure connected hospitals and integrating AI more safely into healthcare workflows.
Beyond PoC: Making AI work at scale
As the audience observed, one of the biggest challenges in making AI work in healthcare is moving beyond the proof of concept (PoC). In addition, as demand for AI solutions grows, so does the need for data centers, compute resources, and energy, which can raise doubts about the long-term sustainability of AI solutions.
On the other hand, Tim observed that in this climate the cost of inference is likely to keep dropping, and Robin pointed out that upgrading legacy data centers can help relieve some of the pressure. Moreover, healthcare is well positioned for AI, as it generates vast amounts of data, one of today’s most valuable resources.
To close this engaging discussion on a positive note, Alfred stressed the need for industry-wide collaboration to meet the growing demand for resources and sustain the pace of AI-driven innovation:
“We need responsibility not just from individual service providers, but from the industry as a whole. The challenge is finding ways to protect our IP while still collaborating to implement AI more effectively. It’s time to adopt a more collective mindset—one that helps us make smarter use of limited resources.”
We look forward to hosting more events like this—bringing together industry leaders to explore how AI can drive meaningful and ethical impact across sectors. Let’s keep the conversation going—reach out to further explore how we can build effective AI solutions together.