Enhancing Software Development with Large Language Models: A Case Study of Kolay.ai
Abstract
The integration of large language models (LLMs) into software development has transformed the field by streamlining coding processes, reducing manual workload, and enabling automation of documentation and testing. This paper presents a detailed case study of Kolay.ai, a project built using LLM-based development tools. It demonstrates how LLMs accelerate development cycles by 30–40%, reduce errors by 30%, and improve onboarding efficiency. However, the study also identifies challenges such as hallucinated outputs, context management issues, and integration complexities, which require careful oversight through human-in-the-loop (HITL) workflows. To address these challenges, the project employed a modular development strategy, structured prompt libraries, and continuous monitoring techniques. The findings emphasize that while LLMs offer significant advantages, manual oversight remains essential for ensuring code quality, consistency, and security. This paper proposes practical solutions, including enhanced prompt engineering and memory-augmented LLMs, to optimize future LLM-based workflows. It concludes by highlighting the need for balanced collaboration between human developers and LLMs, paving the way for scalable, efficient, and adaptive software development.
Cite this article as: S. E. Şeker and H. Nizam-Özoğur, "Enhancing software development with large language models: A case study of Kolay.ai," Electrica, 26, 0033, 2026. doi:10.5152/electrica.2026.25033.

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
