AI-assisted business analysis: How LLMs are changing the discovery process
AI-assisted business analysis is rapidly becoming a central component of modern digital transformation projects. Large language models (LLMs) show enormous potential, particularly in the discovery process, i.e., the early phase of requirements gathering, in which needs, goals, and system limitations are clarified. They are capable of understanding and structuring natural language and generating new content. This enables analysts to prepare workshops more efficiently, process stakeholder input more quickly, and formulate requirements more clearly and consistently.
At their core, LLMs are AI models that are trained on huge amounts of text, enabling them to grasp both the context and semantics of complex texts. They can extract unstructured information from emails, meeting notes, or other documents and convert it into a technically meaningful form. This linguistic and structural advantage makes them highly relevant for business analysis, which traditionally depends heavily on the quality of human communication, the consistency of documents, and the completeness of stakeholder statements.
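To make this concrete, the following Python sketch shows what such an extraction step could look like in practice. It is a minimal illustration under stated assumptions: the model name, prompt wording, and JSON schema are invented for this example, and the OpenAI client merely stands in for whichever LLM service a project actually uses.

```python
# Minimal sketch: turning unstructured meeting notes into structured
# requirement candidates. Model name, prompt wording, and JSON schema
# are illustrative assumptions, not taken from the cited studies.
import json
from openai import OpenAI  # any other LLM client could be substituted

client = OpenAI()  # expects OPENAI_API_KEY in the environment

MEETING_NOTES = """
Sales wants to see all open orders per customer on one screen.
Response times above two seconds were called unacceptable by the COO.
Invoices must be archived for ten years (legal requirement).
"""

prompt = (
    "Extract requirement candidates from the meeting notes below. "
    "Return a JSON object with a single key 'requirements' holding an array; "
    "each item must have the keys 'id', 'type' ('functional' or 'non-functional'), "
    "'statement', and 'source_quote'.\n\n"
    f"Meeting notes:\n{MEETING_NOTES}"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; use whatever model your project has access to
    messages=[
        {"role": "system", "content": "You are a requirements analyst assistant."},
        {"role": "user", "content": prompt},
    ],
    response_format={"type": "json_object"},  # request machine-readable output
)

candidates = json.loads(response.choices[0].message.content)
print(json.dumps(candidates, indent=2))
```

In a real project, candidates produced this way would only enter the backlog or requirements tool after an analyst has reviewed them.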
Numerous current research projects already demonstrate how LLMs can derive functional and non-functional requirements (NFRs) from free text and provide them in formalized formats. A prominent example is the ReqBrain project, in which a specialized model generates standard-compliant software requirements from natural language descriptions with a high degree of consistency with human-created specifications (Habib et al., 2025). The automatic derivation of non-functional requirements is also developing rapidly: studies show that LLMs can derive plausible additions on topics such as security, performance, or usability from existing functional requirements (Almonte et al., 2025). For the first time, approaches are emerging that facilitate and standardize the often-neglected definition of NFRs.
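The sketch below gives an impression of how such an NFR derivation step could be prompted. It is not the method of Almonte et al. (2025); the categories, the prompt wording, and the helper function suggest_nfrs are assumptions made purely for illustration.

```python
# Sketch: proposing non-functional requirement candidates (security,
# performance, usability) for a given functional requirement.
# Prompt wording, categories, and model choice are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

NFR_CATEGORIES = ["security", "performance", "usability"]

def suggest_nfrs(functional_requirement: str) -> str:
    """Ask the model for NFR candidates that complement one functional requirement."""
    prompt = (
        "For the functional requirement below, propose one candidate "
        f"non-functional requirement for each of these categories: {', '.join(NFR_CATEGORIES)}. "
        "Phrase each candidate as a verifiable 'shall' statement and mark it clearly "
        "as a draft that still needs stakeholder confirmation.\n\n"
        f"Functional requirement: {functional_requirement}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(suggest_nfrs("The system shall allow customers to reset their password via e-mail."))
```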
Beyond generating requirements, LLMs show great potential in quality assurance and standardization. Requirements in many projects are inconsistent, incomplete, or contradictory—a well-known problem in requirements engineering. Studies such as those by Lubos et al. (2024) show that LLMs can check requirements for clarity, consistency, and completeness and generate suggestions for improvement based on international standards such as ISO 29148. This capability enables quality assurance at a level of depth that was previously only possible through manual, time-consuming review processes.
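A minimal sketch of such an automated quality check is shown below. It does not reproduce the setup of Lubos et al. (2024); the criteria list is an assumed, shortened subset of the qualities named in ISO/IEC/IEEE 29148, and the prompt and function name are illustrative only.

```python
# Sketch: letting an LLM review a single requirement against a shortened,
# assumed subset of ISO/IEC/IEEE 29148 quality criteria. Criteria list,
# prompt wording, and model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

QUALITY_CRITERIA = ["unambiguous", "complete", "verifiable", "consistent", "singular"]

def review_requirement(requirement: str) -> str:
    """Return a model-generated quality assessment with improvement suggestions."""
    prompt = (
        "Review the requirement below against these quality criteria: "
        f"{', '.join(QUALITY_CRITERIA)}. For each criterion, state whether it is met "
        "and, if not, suggest a concrete rewording. Finish with a revised version "
        "of the requirement.\n\n"
        f"Requirement: {requirement}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(review_requirement("The system should be fast and easy to use."))
```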
LLMs can also provide valuable support in the earliest phase of the discovery process, during interviews and workshops. They can generate question lists, structure topic blocks, and assist with documentation. The LLMREI research prototype shows how semi-automated requirements interviews can work: the model asks structured questions, records answers, and derives requirements from them (Sami et al., 2024). While this does not replace human moderation and relationship skills, it can significantly increase the consistency of information gathering, especially in large-scale projects.
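A deliberately simplified sketch of such interview support follows. It is not the LLMREI prototype: here the analyst keeps full control of the conversation, and the model only proposes the next follow-up question from the running transcript; the goal, prompts, and loop structure are assumptions.

```python
# Sketch: LLM-assisted interview support. The analyst conducts the interview;
# the model only proposes the next follow-up question based on the transcript.
# Not the LLMREI prototype; prompts and flow are illustrative assumptions.
from openai import OpenAI

client = OpenAI()
transcript: list[str] = []
GOAL = "Understand how the ordering process should work in the new system."

for _ in range(3):  # keep the sketch short: three follow-up questions
    context = "\n".join(transcript) if transcript else "(no answers yet)"
    prompt = (
        f"Interview goal: {GOAL}\n"
        f"Transcript so far:\n{context}\n\n"
        "Propose exactly one concise follow-up question that helps uncover "
        "requirements, avoids leading the stakeholder, and builds on earlier answers."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model
        messages=[{"role": "user", "content": prompt}],
    )
    question = response.choices[0].message.content.strip()
    print(f"\nSuggested question: {question}")
    answer = input("Stakeholder answer (typed in by the analyst): ")
    transcript.append(f"Q: {question}")
    transcript.append(f"A: {answer}")

print("\nFull transcript:\n" + "\n".join(transcript))
```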
Despite this potential, the limitations must not be overlooked. LLMs generate output not through genuine understanding, but on the basis of statistical probabilities. This can result in incorrect, misleading, or incomplete requirements, so-called hallucinations, which pose significant risks in critical or regulated areas. Missing domain knowledge, limited contextual depth, and the often poor traceability of model decisions raise questions regarding responsibility, compliance, and auditability. Many studies to date have also been conducted in laboratory settings, so validation in real-world project structures is still pending (Arora et al., 2023).
A deliberately controlled, iterative approach is therefore recommended: companies should start with pilot projects with a clearly defined scope in order to evaluate opportunities and risks. Results generated by LLMs should always be reviewed by experienced analysts. The quality of the prompts and the clarity of the context provided are decisive for the quality of the results. In addition, governance mechanisms should be established that define responsibilities and ensure traceability. A continuous feedback and improvement process ensures that the use of LLMs in the project environment remains high-quality and responsible in the long term.
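One way to make such traceability tangible is a simple provenance record for every LLM-generated draft requirement, as sketched below. The field names and statuses are assumptions and would need to be aligned with a project's actual governance rules.

```python
# Sketch of a traceability record for LLM-generated requirements, so that
# every draft keeps its prompt, source material, and human review status.
# Field names and statuses are assumptions; adapt them to your governance rules.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LlmRequirementRecord:
    requirement_id: str
    statement: str
    source_documents: list[str]      # e.g. meeting notes the draft was derived from
    prompt_used: str                 # exact prompt, for auditability
    model_name: str                  # which model produced the draft
    generated_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    reviewed_by: str | None = None   # analyst who approved or rejected the draft
    review_status: str = "draft"     # "draft", "approved", or "rejected"

record = LlmRequirementRecord(
    requirement_id="REQ-042",
    statement="The system shall archive invoices for ten years.",
    source_documents=["2024-05-14 workshop notes"],
    prompt_used="Extract requirement candidates from the meeting notes ...",
    model_name="gpt-4o-mini",  # assumed model
)
record.reviewed_by = "J. Analyst"
record.review_status = "approved"
print(record)
```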
Overall, it is clear that LLMs do not replace business analysis, but rather expand it. They automate routine tasks, increase speed, and promote standardization, while humans continue to be needed wherever contextual understanding, critical thinking, and stakeholder management are required. Given the dynamic development of the models, it is to be expected that AI-assisted business analysis will become established practice in the coming years. Companies that address this topic early and build up the relevant skills will secure not only efficiency gains but also a strategic advantage in an increasingly data-driven competitive environment.
Sources:
Arora, C., Grundy, J., Abdelrazek, M. (2023): Advancing Requirements Engineering through Generative AI: Assessing the Role of LLMs.
Lubos, S., Felfernig, A., Tran, T. N. T. et al. (2024): Leveraging LLMs for the Quality Assurance of Software Requirements.
Habib, M. K., Graziotin, D., Wagner, S. (2025): ReqBrain: Task-Specific Instruction Tuning of LLMs for AI-Assisted Requirements Generation.
Almonte, J. T., Boominathan, S. A., Nascimento, N. (2025): Automated Non-Functional Requirements Generation in Software Engineering with LLMs.
Sami, M. A., Waseem, M., Zhang, Z. et al. (2024): AI based Multiagent Approach for Requirements Elicitation and Analysis.