Avoiding common pitfalls in the second stage of the AI product lifecycle — discovery: the right expertise matters (part 2)

In our recently published white paper, The human touch in AI: A human-centric approach for better AI and data product development, Sanja Bogdanovic, head of data solutions at HTEC, explored how to tackle common obstacles in data and AI projects, from misaligned objectives to gaps in technical and domain expertise. This three-part series builds on those insights. In Part 2, Sanja delves into the discovery phase — often the make-or-break stage for data and AI initiatives — sharing practical strategies and hard-earned lessons to help teams navigate this crucial step.

In Part 1 of this series, we explored the critical role of the first stage of a data and AI project — problem definition — and broke it down into two sub-stages: sparking interest and getting buy-in. In the problem definition stage, our focus is on aligning project objectives, coordinating with key stakeholders, and building a problem-driven approach from the outset. These elements lay the groundwork for a successful project, helping to avoid misalignment, mismatched expectations, and costly course corrections later on.

Now, in Part 2, we move into the discovery phase. This phase often brings hidden complexities to light, demanding expertise in data capabilities, domain-specific insights, and robust data governance. Here, I’ll explore the most common obstacles during discovery and why the right expertise is essential for navigating this phase successfully. 

Stage 2: Data and AI discovery — the foundation for success 

The discovery phase in data and AI projects is where initial excitement meets real planning. Here, ideas turn into actionable blueprints, and critical details come to light. It’s a pivotal stage that determines whether a project will stay on track or slowly unravel. Rushing through this phase or underestimating its complexity is a common pitfall that can set the entire project up for failure. To maximize success, teams need to approach discovery with diligence, clarity, and the right expertise. 

What could possibly go wrong? 

The discovery phase usually begins with stakeholder workshops to distill the problem into building blocks and create a blueprint of the target solution. The worst scenario that can unfold during these workshops is the decision to skip them. Let’s assume you don’t. The second-worst scenario is rushing into building the solution. Instead, invest time in fully understanding all the constraints and challenges of your project and how they can derail your data journey.

Common pitfalls to take into consideration during the discovery phase 

Remember those stakeholder workshops that you didn’t skip? Those are crucial to gaining a thorough understanding of your client’s problem. Without these meetings, initial misunderstandings can lead to poorly aligned objectives, misallocated resources, and a solution that ultimately fails to address the client’s real needs. Here are a few other common pitfalls to avoid during this stage: 

  1. Moving forward without fully understanding the stakeholder’s current capabilities can lead to your assumptions guiding the product’s design and development rather than their real needs. Consequently, you may prioritize initiatives that are unlikely to succeed. 
  1. Failing to involve data experts early can result in “data blindness,” where the solution is built without considering the full potential and limitations of the available data. This is a common and consistent killer of data and AI solutions. 
  1. Involving a domain subject matter expert (SME) too late in the process (or not at all) can leave critical requirements undiscovered. For instance, if you’re developing a healthcare solution without a domain expert, you may overlook essential regulations, leaving the solution non-compliant.
  1. Relying solely on system architects to handle data aspects and ignoring the overall impact of data on the system design will ultimately lead to insufficiencies in the final product. Unfortunately, this is still a common theme, as the software and data engineering worlds persistently clash. When building AI solutions, data engineers and software engineers often approach the process with different priorities and methodologies, leading to potential conflicts: data engineers prioritize the integrity, accessibility, and quality of the data, while software engineers emphasize system performance, functionality, and user experience. Make no mistake, lacking data and AI expertise during the discovery phase will significantly increase the solution’s vulnerability and probability of failure.
  1. Lack of clarity on data quality standards frequently results in late-stage disagreements on what constitutes “good data,” affecting the solution’s architecture and delaying project progress. We recently published an ebook on how to assess your data and AI readiness, which may be of use if you’re on the fence about how to evaluate and structure your data for AI.
  1. Disregarding data governance from the start can cause project-wide inconsistencies, as alignment on data standards across business units is vital for long-term usability and accuracy. Data governance defines the effective management, quality, security, and use of an organization’s data, ensuring alignment with organizational goals, regulatory requirements, and ethical considerations.
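One lightweight way to avoid late-stage disagreements about what constitutes “good data” is to codify the agreed standards as executable checks during discovery. The sketch below is purely illustrative — the field names, thresholds, and rules are assumptions, not something prescribed in this article — but it shows the idea of turning a verbal agreement into something a team can run:

```python
# Illustrative sketch: codifying agreed data quality standards as executable
# checks, so "good data" is defined explicitly before development begins.
# Field names, thresholds, and rules below are hypothetical examples.

def check_quality(records, max_null_rate=0.05,
                  allowed_statuses=frozenset({"active", "inactive"})):
    """Return a list of rule violations for a batch of record dicts."""
    violations = []
    total = len(records)
    if total == 0:
        return ["dataset is empty"]

    # Rule 1: completeness — the 'id' field must rarely be missing.
    null_ids = sum(1 for r in records if r.get("id") is None)
    if null_ids / total > max_null_rate:
        violations.append(
            f"'id' null rate {null_ids / total:.0%} exceeds {max_null_rate:.0%}"
        )

    # Rule 2: validity — 'status' must come from an agreed vocabulary.
    bad = [r for r in records if r.get("status") not in allowed_statuses]
    if bad:
        violations.append(f"{len(bad)} record(s) with unexpected 'status' values")

    return violations

records = [
    {"id": 1, "status": "active"},
    {"id": None, "status": "archived"},
]
print(check_quality(records))
```

In practice, teams often use dedicated tooling for this, but even a simple script like the one above forces stakeholders to agree on concrete thresholds and vocabularies before the architecture is locked in.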

While all of these discovery-stage pitfalls are a major cause for concern, the scary part is that the resulting issues usually don’t surface until later stages. This means your team could end this phase excited about a job well done. There may even be a company-wide announcement about the successful completion of the solution design, and it’ll be a big win. Then the consequences of skipped steps and rushed processes spill over into the final development stage, when they are the hardest and costliest to correct. This can lead to delays, or even abandonment of the project if the later-stage teams are unable to fix the initial setup and data issues.

Solution: Aligning expertise and implementing data standards 

The discovery phase is like laying the foundation for a skyscraper — without precision and the right tools, everything built on top is in danger of collapsing. To ensure success, teams must approach this phase with curiosity, collaboration, structure, and clarity, transforming potential pitfalls into opportunities for innovation. Start by prioritizing stakeholder workshops and engaging in open, in-depth discussions to understand the client’s current capabilities, real needs, and long-term goals. These sessions should be guided by data experts and domain SMEs who can identify potential constraints and ensure the solution aligns with both business objectives and industry-specific requirements. 

Additionally, evaluate your data governance and establish data quality standards early to prevent inconsistencies and misalignments later in the project. Without governance and well-defined standards, data becomes fragmented and unreliable, undermining AI initiatives, analytics, and strategic decision-making. Involving data and software engineers collaboratively from the beginning ensures that data integrity and system performance are considered together, minimizing potential conflicts. Avoid falling into a trap by approaching the solution from a strictly technical angle — diversifying your approach in these early phases is crucial for success. By fostering cross-functional collaboration and emphasizing clarity in scope and requirements, you set a solid foundation for building AI solutions that are robust, compliant, and impactful. 

Who should be left out of this stage?  

This phase is too high-level for engineers, QA specialists, project managers, and delivery leaders; involving them here will only overwhelm them and risk derailing the process. Keep them ready for the next phase, where they’ll shine.

Takeaway: Every project is unique; approach it with diverse teams and curiosity 

The discovery phase is where long-term success is often determined, and each challenge faced here is an opportunity to create a solution that’s both effective and resilient. Teams that invest time in genuinely understanding the stakeholders’ world and environment — from data quality and governance to domain-specific nuances — lay a solid foundation for their projects. Avoid the temptation to reuse the blueprints of AI and data projects you’ve worked on before. Every project is unique, and every data set is different. It’s about walking in your stakeholders’ shoes by truly understanding their problem space and creating a solution that addresses the issue within their system.

Clarity about scope, capabilities, and possibilities, combined with the proper involvement of people and teams, makes discussions flow easily and efficiently, inspires creative thinking, and sparks innovation.

With a strong foundation in discovery, we’re ready to move into the final phase: Delivery. In Part 3 of this series, we’ll explore how early decisions and preparation impact the outcome of the project as it takes shape and share strategies to ensure a smooth delivery that meets stakeholders’ expectations and project goals. 


Contributing author: Sanja Bogdanovic