This is the second part of a blog post series detailing HTEC’s first-hand investigation into the feasibility of delegating the development of full-fledged software solutions to AI tools. To read the introductory post and learn more about the investigation’s scope and goals, click here.
The series spans the experiences of one of HTEC’s most experienced Solution Architects, Zoran Vukoszavlyev, in relying exclusively on AI tools through all stages of a simulated software development project. This blog post focuses on the initial stages of a software development project – requirements and system design.
The process of defining software requirements on a project often relies on vague input. Clients don’t always grasp the full scope of the necessary work – they have a vision for a product or a solution and rely on us as technology experts to define the roadmap that will make that vision come to life.
For the purpose of our investigation, we chose to treat AI tools as a sort of technology partner theoretically capable of handling the entirety of a software development project, starting with software requirements. To better simulate common real-life project circumstances, our starting point was intentionally vague and incomplete.
Defining software requirements
The initial set of requirements provided to the AI tools was minimalistic, stating little more than the domain (healthcare) and the main focus areas. Based on this, we tasked AI with investigating that initial set and identifying the missing pieces, starting with functional requirements. Once AI had generated new, more detailed requirements, we would review whether individual items were correct or out of scope and then iterate until we determined that the functional scope was ready.
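To give a sense of the working format, a hypothetical excerpt of such an iterated functional requirements list is sketched below; the items are illustrative and do not come from the actual project backlog.

```markdown
## Functional requirements – patient management (excerpt)

- FR-01: Staff can register a new patient with demographic and contact data. [accepted]
- FR-02: Staff can search for patients by name, date of birth, or identifier. [accepted]
- FR-03: Patients can view their own appointment history. [marked out of scope in review]
- FR-04: Staff can record and update patient allergies and medications. [accepted after rewording]
```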
For non-functional requirements (NFRs), we took a slightly different approach. We tasked AI with asking questions about NFRs and generating a form with checkboxes and multiple options for each question. The options provided by AI were generally solid, albeit not as comprehensive as would be expected in regular development, even after several iterations. This confronted us with a question we often encounter on real-life projects.
“The question that emerges here is whether it is feasible to be perfect. In HTEC’s Tech Excellence Office, we often talk about the balance between technical excellence and the reality of deadlines. That balance is, essentially, the life of a solution architect. I assumed that AI might bring us to the elusive excellence in a shorter time frame, but it turns out that, even with AI tools, we still need to balance perfection and reality. We can reach a certain point much faster, but at that point, we still need to say, ‘this is good enough to move forward’, because otherwise, we would be iterating infinitely at every step. Excellence doesn’t have to be perfection, but simply a point where something works well enough.” – Zoran Vukoszavlyev, lead researcher on this project.
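Before moving on to architecture, here is an illustrative sketch of the kind of NFR questionnaire described above; the questions, options, and selections are hypothetical rather than the actual output of the session.

```markdown
## Non-functional requirements questionnaire (excerpt)

1. What availability target should the system meet?
   - [ ] 99.0% – acceptable for internal tooling
   - [x] 99.9% – typical for patient-facing services
   - [ ] 99.99% – requires multi-region deployment

2. What is the maximum acceptable response time for common read operations?
   - [ ] < 200 ms
   - [x] < 500 ms
   - [ ] < 1 s

3. Which compliance regimes apply?
   - [x] HIPAA
   - [ ] GDPR
```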
Solution architecture
Once the list of requirements was complete, we exported it to a markdown document and used it as input for the next AI chat session, focused on software architecture. Based on the requirements list, we tasked AI with suggesting five options for the architectural principles the software solution should follow.
The suggested options were generally solid and well-founded. We were surprised that Claude AI went beyond proposing single options and suggested combinations of different architectural approaches for specific cases (e.g., combining microservice architecture with event-driven behavior). In other words, Claude AI wasn't simply repeating something found on the web, but contextualizing the information and proposing the best solution for the scenario it was presented with. In fact, Zoran accepted one such suggestion and decided on a hybrid approach.
System design phase
The initial goal of this stage was to have AI produce an entire design document in alignment with our standard practices. On a regular greenfield project, HTEC creates a system design phase (SDP) document – a detailed account of the requirements, architecture, and a variety of other relevant information that is often over 100 pages long and serves as a roadmap for future development.
We started this process by asking AI to come up with a design document template we would follow. Once we settled on the skeleton of a document, we started filling out chapters one by one.
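For context, the agreed skeleton resembled a fairly conventional SDP outline; the simplified version below is illustrative, and the chapter names are representative rather than the exact ones we settled on.

```markdown
# System Design Phase (SDP) document – template (illustrative)

1. Introduction and goals
2. Functional and non-functional requirements
3. Solution architecture and key decisions
4. System diagrams (context, components, deployment)
5. Data model
6. Security and compliance
7. Performance and scalability
8. Risks and open questions
```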
Diagrams
The starting point of the SDP was the visual representation of the solution. However, creating diagrams with AI proved to be a slippery slope. Initial results were mixed but generally unsatisfactory: the diagrams were difficult to interpret, with arrows often crossing one another without any markings or pointing nowhere, and significant time spent trying to refine the results yielded little improvement.

We tried switching from the default SVG output to PlantUML, a diagrams-as-code tool that generates diagrams from textual input. The thinking was that with PlantUML, we could at least double-check the textual input to see how different system components are connected. However, this choice created further complications because AI started treating PlantUML as the default tool for all diagrams, and while PlantUML is good for UML diagrams, it is not well suited to general-purpose diagrams (e.g., boxes and arrows). After numerous attempts to reset the chat context, as well as experimenting with other diagramming tools, we concluded that the default tools (Markdown for text and SVG for visual representation) work best with Claude AI.
However, even the diagrams created with SVG were at best acceptable for internal review: they were difficult to read and maintain, and not suitable for presenting to the client or including in an actual SDP document. As a compromise, Claude AI was tasked with converting the SVG files to draw.io files, which could then be adjusted manually.
Currently, the most effective solution for diagramming is for AI to generate ASCII art so that we can see the concept and then draw the diagrams manually in draw.io.
From diagrams we moved on to performance and security aspects, which AI handled a lot more competently and efficiently. From this point forward, the SDP document was completed fairly quickly.
Overall, the creation of the requirements list, architectural design, and the SDP document took 18 work hours, including manual adjustments to diagrams. For comparison, this process normally takes between one and two months on a regular project.
Design review
It is standard procedure at HTEC that a newly created SDP document is reviewed and validated by a committee of solution architects. To assess the results of AI in the creation of the SDP document, we put it through the same process.
Within the span of a day, nine solution architects reviewed the document and provided their impressions. For the purpose of the review, Zoran created a brief questionnaire with each question being graded on a scale of 1 to 10. These were the average grades:

The relatively high average numbers suggest that the initial phase of the SDP document creation process can be dramatically accelerated by using AI tools. However, a significant amount of manual adjustment is still needed. In its current form, the AI-generated SDP document would not pass HTEC’s internal review, but the numbers are promising.
Scope adjustments
The initial phases of our investigation raised an important question: In AI-supported development, do we really need an SDP document?
Our experience shows that the process requires a lot of manual adjustment to make the document presentable, yet AI doesn't actually need things to be presentable in order to build context – the presentation is intended for human eyes. Therefore, if the customer expects a design document or needs to approve one, we can create it. If an SDP is not required by the client, however, all AI needs is a requirements list in markdown format.
To further pursue this line of thought, we converted all the decisions into an architectural decision record (ADR) document, complemented by a risk analysis session. The risk analysis involved two methods that are very common in healthcare. The SWIFT (structured what-if technique) method is commonly used at HTEC to good effect. However, AI struggled with SWIFT because it deals with intuitive “what if” scenarios: many of the questions AI generated were illogical, irrelevant, and not risk-related. In general, it seemed to generate questions that addressed the source of the problem but not the problem itself.
Eventually, we restarted the exercise using the FMEA (failure mode and effects analysis) method. FMEA requires a different way of thinking, and AI responded much better to it: focusing on the components of the design and the connections between them, it generated more on-point risks and mitigations.
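To show the shape of that output, a couple of FMEA-style rows are sketched below in markdown; the components, failure modes, ratings (severity, occurrence, detection on a 1–10 scale), and mitigations are hypothetical.

```markdown
| Component    | Failure mode            | Effect                        | Sev | Occ | Det | Mitigation                                |
|--------------|-------------------------|-------------------------------|-----|-----|-----|-------------------------------------------|
| Event bus    | Message loss            | Patient update not propagated | 9   | 3   | 4   | Persistent queue, retries, dead-lettering |
| Auth service | Token validation outage | Clinicians locked out         | 8   | 2   | 3   | Cached validation, health checks, alerts  |
```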
The risk analysis generated new requirements, which in turn generated new ADRs, requiring new iteration cycles. Once this was complete, we reconsidered whether the design document was truly needed and decided to focus on the implementation instead. AI-assisted creation of an SDP document will be revisited at a later point, when AI tools may have evolved enough to require less human intervention.
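Since the ADR document carries much of the weight in this leaner setup, it is worth showing the shape of a single record. The sketch below is a hypothetical markdown entry, using the earlier hybrid architecture decision as an illustrative subject; the identifier and wording are ours, not the actual record.

```markdown
# ADR-007: Adopt microservices with event-driven communication

## Status
Accepted

## Context
The requirements call for independently deployable domain services and
asynchronous propagation of patient-related events between them.

## Decision
Split the backend into domain-aligned services and use an event bus for
cross-service communication; synchronous APIs are reserved for queries.

## Consequences
- Services can scale and evolve independently.
- Operational complexity increases (event schemas, monitoring, retries).
```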
Additional notes and observations
- AI can be a great tool in the ideation phase. It often proposes unconventional and outside-the-box options that can serve as a fresh perspective. This is important because we are all biased by our previous experiences and preferences.
- AI hallucinations are still an issue. It is necessary to have expert human supervision to identify and mitigate hallucinations, illogicalities, and ill-advised decisions.
- AI displays logic gaps in defining requirements. Despite running multiple iterations, AI failed to address multiple crucial requirements for patient management, such as patient reconciliation. At this point, relying solely on AI to define requirements would likely lead to critical lapses and omissions.
- The length of the chat history is a concern. The chat context filled up mid-task on numerous occasions, so we needed a way to start a new chat session without losing the outcomes of the previous one. One practical solution is to regularly export decisions into markdown files and add them to the project context memory (see the sketch after this list).
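A hand-off file of this kind can stay very small; the example below is an illustrative sketch of a "decisions so far" summary that could be attached to the new session's project context (the specific items are hypothetical).

```markdown
# session-handoff.md (illustrative)

## Confirmed decisions
- Hybrid architecture: domain microservices + event-driven communication (see ADR-007)
- Key NFRs: 99.9% availability target, < 500 ms read latency

## Open questions carried over
- Retention policy for audit logs
- Reporting needs for clinical staff
```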
Final takeaways
AI has tremendous potential to both accelerate and improve the initial stages of project setup, including requirements, architecture, and system design. However, the technology still has significant limitations and flaws that require a high degree of human supervision and intervention. While far from perfect, the outcomes of using AI tools in project setup are still promising, suggesting that the results will improve even further as the technology and our utilization practices continue to evolve.
Stay tuned for the next article in the series as we move on to the implementation phase.