Avoiding common pitfalls in the final stage of AI product lifecycle — Delivering on data and AI promises (part 3)

In our recently published white paper, The human touch in AI: A human-centric approach for better AI and data product development, Sanja Bogdanovic, head of data solutions at HTEC, explored how to tackle common obstacles in data and AI projects, including aligning objectives and addressing technical and domain expertise gaps. This three-part series builds on those insights.   

In Part 2, we focused on the Discovery phase, emphasizing its critical role in setting up a successful data and AI project. We covered the importance of thoroughly understanding the problem space, involving domain experts, and establishing strong data governance early on. By addressing potential pitfalls — such as “data blindness” and misaligned priorities — teams can ensure that their project foundations are solid, ultimately reducing the risks of costly adjustments later in the process. 

In Part 3, Sanja examines the Delivery phase, where careful planning meets execution. Here, she shares strategies to ensure data and AI projects deliver meaningful, actionable results. The delivery phase is where your early decisions and preparations come to life, and oversights from earlier stages start to show. In this article, we’ll discuss the specific challenges that arise during delivery, including maintaining alignment with stakeholders, ensuring data quality, and managing cross-functional collaboration. We’ll also cover practical strategies to keep the project on track, helping teams navigate the complexities of delivery to achieve impactful, reliable results. 

Stage 3: Delivery — the solution is tested by the stakeholders 

In the delivery phase, all the hard work you’ve put into understanding the problem and data starts to materialize. The solution becomes a tangible product ready to be integrated into existing workflows.  

At this stage, key stakeholders will use and evaluate the product. Business leaders should assess whether the AI solution aligns with organizational goals and delivers on its intended outcomes. Technical teams, including data scientists and engineers, will verify that the product operates as expected under real-world conditions. Domain experts should also review the solution for industry-specific relevance, ensuring it complies with necessary regulations and standards. In some cases, end-users will participate in final testing, providing usability and practical application feedback. At the same time, product managers must oversee the review process to ensure the AI product meets predefined success metrics before full deployment. 

Common pitfalls in the delivery phase 

While the delivery phase promises action and results, it’s also where the cracks from earlier stages become clear. In addition to the previous pitfalls, here are some delivery-stage-specific issues that may crop up and cause problems in the final stage of development. 

1. Poor team staffing 

Not understanding the complexities of data-driven solutions and the diverse challenges they present often leads to less obvious but critical missteps in staffing the delivery team. As experts build their delivery team, they often forget to consider the following questions:  

  • Who will ensure the quality of the solution and how?  
  • Who will ensure the quality of data, and how?  
  • Who should be involved in designing the metrics for tracking KPIs?  
  • Who should evaluate the effectiveness of the solution, and how?  

Bringing domain experts into the process too late often results in the discovery of critical issues during delivery. Late-stage issues are often costly and time-consuming to fix, pushing deadlines and budgets off track. For example, in a healthcare AI solution, late involvement of compliance experts might result in a system that fails to meet critical regulatory requirements. Consequently, the system would require significant reworking to avoid legal risks. Similarly, in a retail AI application, not involving marketing specialists early in the delivery phase could result in a solution misaligned with user behavior patterns. This misalignment may reduce the product’s effectiveness or result in delays that increase costs, damage stakeholder trust, and push deadlines well beyond the target. 

2. Misguided data engineering  

In part 2 of the series, we mentioned that the lack of data clarity, standards, and governance often results in discrepancies in the delivery. As delivery progresses, the effects of inadequate data management during earlier phases begin to emerge, particularly in the form of integration challenges. Semantic discrepancies — where similar data points carry different meanings or interpretations across systems — can create confusion and complicate the alignment of datasets. Schema inconsistencies between databases present another hurdle; for example, one system might store a customer’s name in two fields (first name, last name), while another combines them into one. Reconciling these differences can lead to delays. Additionally, data from legacy systems often uses outdated formats, requiring extensive cleaning and conversion before it can be integrated. These issues typically surface during the delivery phase, when data sources are actively combined for the first time, exposing previously hidden inconsistencies in formats, semantics, and schema structures that were overlooked in earlier stages. 
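The schema-inconsistency example above (split name fields in one system, a combined field in another) can be illustrated with a minimal reconciliation sketch. The field names here are hypothetical, and real reconciliation work is far messier (middle names, multi-word surnames, locale rules), but the pattern of normalizing every source onto one canonical schema is the same.

```python
def normalize_customer(record: dict) -> dict:
    """Map customer records from two hypothetical source schemas
    onto one canonical shape with separate name fields."""
    if "full_name" in record:
        # Legacy system stores one combined field; split on the first space.
        # (A real pipeline would need rules for multi-word surnames.)
        first, _, last = record["full_name"].partition(" ")
        return {"first_name": first, "last_name": last}
    # Modern system already stores the name in two fields.
    return {"first_name": record["first_name"], "last_name": record["last_name"]}

# Records from both systems converge on the same canonical schema:
normalize_customer({"full_name": "Ada Lovelace"})
normalize_customer({"first_name": "Ada", "last_name": "Lovelace"})
```

Writing these mappings down explicitly, rather than letting each consumer guess, is what turns “hidden inconsistencies” into a reviewable, testable artifact.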

3. Inappropriate tools  

A poor understanding of data needs during earlier phases often spills into delivery, leading to the selection of tools and metrics that cannot meet the project’s demands. Poor tool and metric selection can also be attributed to a misunderstanding of the core business problem the AI solution is designed to address. For instance, a company might opt for a data visualization tool suitable for basic dashboards but incapable of handling real-time data streaming or generating self-serve insights. As the project evolves and advanced data requirements emerge, the limitations of the selected tool can lead to performance bottlenecks and scalability issues. Similarly, choosing the wrong success metrics or KPIs can derail a project’s delivery phase. By following the wrong metrics, teams are more likely to track progress in ways that fail to capture the true impact or value of the solution. This can sidetrack the project, causing confusion and a potential loss of focus, trust, and interest in the solution.  

4. Leadership gaps  

Choosing the right leadership team is often overlooked in AI solution delivery. Leadership is often chosen on the basis of trust rather than matching the leaders’ experience and skills to the specific demands of the project. Delivering successful data and AI solutions requires leadership deeply familiar with the unique challenges of the landscape. Without this expertise, teams risk missing early warning signs, spending excessive time on unproductive experiments, not pivoting when necessary, and struggling to address stakeholder questions or align with their needs. While business and leadership skills are vital, effective leadership in data and AI demands versatility across multiple domains — technology, business strategy, product design, the data and AI ecosystem, and the ethical considerations surrounding AI solutions. Leaders must navigate the complex intersections of these areas with purpose and clarity, ensuring the project remains on track and aligned with its objectives. 

5. Underestimation of user testing  

Skipping or minimizing end-user testing can result in solutions that are technically sound but impractical or unintuitive for those who need to use them. This can significantly impact adoption and success. 

Solution: Delivering on the promise to solve the stakeholder’s problem 

Aligning strategy with action is critical in the delivery phase — teams need to translate their plans into seamless execution. Proactive planning should begin well before the phase formally starts, with teams identifying potential bottlenecks, testing data pipelines, and finalizing tools and success metrics. Proactive planning also includes scheduling frequent progress reviews to anticipate challenges and ensure resources are allocated efficiently to avoid delays. In addition to proactive planning, here are a few other delivery-phase actions to prioritize:  

Effective collaboration 

Effective collaboration is key during the final phase of development and can be achieved through clear communication across technical teams, domain experts, and stakeholders. Regular stand-up meetings, shared project management tools, and transparent documentation can help ensure everyone stays on the same page. For instance, technical teams can align on data integration updates, while domain experts provide ongoing validation to ensure the solution meets industry standards. Encouraging open feedback during these sessions helps surface issues early and enables quicker resolutions. 

Continuous alignment 

Teams should implement structured checkpoints to evaluate their progress against predefined success metrics, adjusting priorities as needed. For example, mid-phase reviews can assess whether the AI solution is tracking toward business objectives and highlight areas requiring further refinement. Assigning a dedicated project lead to oversee communication and coordination can help align team members and stakeholders with the project’s overarching goals. 

Focusing on proactive planning, effective collaboration, and continuous alignment can help create a solid framework for navigating delivery-phase complexities. It’s also important to address the practical challenges that arise as the solution moves closer to deployment. Key potential pitfalls mentioned earlier — data integration, tool scalability, leadership involvement, and user feedback — play a crucial role in translating strategic planning into a functional and impactful AI solution. 

Data integration  

Often one of the most complex tasks in delivery, data integration must be addressed effectively. Teams should dedicate resources to resolve integration issues promptly, ensuring consistency in semantics, schema compatibility, and data formats. Frequent testing of data pipelines and early identification of potential bottlenecks can help mitigate risks and ensure smooth deployment. 
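One lightweight way to make “frequent testing of data pipelines” concrete is a validation step that runs on every integrated batch and reports schema and format problems before they reach downstream consumers. The column names and rules below are illustrative placeholders, not a real project’s schema.

```python
import datetime

# Hypothetical canonical schema for an integrated customer feed.
EXPECTED_COLUMNS = {"customer_id", "first_name", "last_name", "signup_date"}

def validate_batch(rows: list[dict]) -> list[str]:
    """Return a list of problems found in an integrated data batch."""
    problems = []
    for i, row in enumerate(rows):
        missing = EXPECTED_COLUMNS - row.keys()
        if missing:
            problems.append(f"row {i}: missing columns {sorted(missing)}")
            continue
        # Format check: dates from legacy sources often arrive in
        # non-ISO formats and need conversion before integration.
        try:
            datetime.date.fromisoformat(row["signup_date"])
        except (TypeError, ValueError):
            problems.append(f"row {i}: signup_date not ISO-8601")
    return problems
```

Running a check like this in the pipeline itself (and failing loudly) surfaces the format and schema inconsistencies described earlier during development rather than in front of stakeholders.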

Tool and metric selection   

The tools used during delivery must be capable of handling the real-time demands of AI solutions. Evaluating the scalability and performance of tools at this stage is essential to avoid bottlenecks as the solution begins to operate in real-world conditions. Additionally, the evaluation metrics should be set up specifically for the solution; there is no universal framework to follow. Instead, teams should build different types of tests to help evaluate the durability of the solution. Key considerations for tests include: 

  • Anticipated solution usage: data flow, estimated processing complexity, anticipated fluctuations in data volumes, and user activity.  
  • Solution extendibility: how easy it is to add a new source, apply new processing approaches, and integrate new models.  
  • Alignment with regulations and ethical policies: how the ethical aspects of the solution will be tracked and measured, whether new regulations would deprecate the solution or whether it could be made compliant, how end users will come to trust the solution, and what teams should track to ensure that trust does not degrade.  

Leadership  

Leadership plays a pivotal role in maintaining alignment and facilitating collaboration across teams. Active engagement from leaders ensures clear communication, regular progress reviews, and swift resolution of any challenges that arise. This coordination keeps all stakeholders aligned with the solution’s goals and objectives, reducing friction during delivery. 

User testing and feedback  

User testing and feedback are critical in the delivery phase, playing a pivotal role in validating the solution’s practicality and usability. While end users should be involved early in the development process as key stakeholders, it’s especially important to include structured User Acceptance Testing (UAT) during the delivery phase to ensure the solution aligns with real-world needs. 

UAT typically involves approaches such as beta testing, where the solution is deployed in a controlled environment to gather initial reactions and uncover potential issues before full-scale rollout. Moderated usability testing sessions can provide valuable insights into how users interact with the solution, identifying areas that require improvement. Additionally, feedback can be collected through surveys or questionnaires to understand user satisfaction, usability concerns, and feature requests. 

Feedback gathered during this stage should guide refinements, ensuring a smooth and successful rollout of the AI solution. 

Takeaway: Delivering AI solutions that align with expectations 

In data and AI solution development, technology is only the beginning. Ensuring your solution’s success requires assembling the right team, addressing data integration challenges promptly, using scalable tools, maintaining strong leadership, and incorporating user feedback. By focusing on these elements, teams can bridge the gap between vision and execution, delivering AI solutions that not only meet technical requirements but also align with business objectives and user expectations. 

AI is remarkable, and we’re witnessing the technology’s active evolution — which is cause for excitement. However, at the end of the day, it’s people who build data and AI solutions for people to use.  Consequently, we should drive technology towards solutions, not the other way around. 

To learn more about navigating the challenges of AI projects or to discuss how our team can help, connect with us today.


Contributing author: Sanja Bogdanovic