AI-aided test-first development
Interest in using AI tools to aid code development is growing across the software industry. While many teams use models like ChatGPT to generate tests for existing implementations, a different route is to embrace a technique called ‘AI-aided test-first development’.
Instead of providing implementation code directly to an external model, which risks exposing sensitive information, you first articulate your tech stack and design patterns in a reusable prompt “fragment.” You then outline the specific feature requirements, including acceptance criteria. With this information, you task ChatGPT with generating an implementation plan tailored to your architectural style and technology stack. Once you have validated the plan, you ask ChatGPT to create tests for the specified acceptance criteria, as sketched below.
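To make the workflow concrete, here is a minimal sketch in Python, assuming the openai client library. The fragment text, feature description, model name, and prompt wording are all illustrative placeholders, not a prescribed format; the point is only that the reusable fragment and the feature requirements travel together in every request, while the implementation code never does.

```python
# Minimal sketch of AI-aided test-first development, assuming the
# openai Python client. All prompt contents below are placeholders.
from openai import OpenAI

client = OpenAI()

# Reusable prompt "fragment": articulates the tech stack and design
# patterns once, so every feature request shares the same context.
STACK_FRAGMENT = """\
Tech stack: Kotlin, Spring Boot 3, JPA/Hibernate, PostgreSQL.
Patterns: hexagonal architecture; controllers delegate to application
services; domain logic lives in the domain layer; tests use JUnit 5.
"""

# Feature-specific requirements, including acceptance criteria.
FEATURE = """\
Feature: customers can cancel an order before it ships.
Acceptance criteria:
1. Cancelling a pending order sets its status to CANCELLED.
2. Cancelling a shipped order is rejected with a domain error.
3. A cancelled order releases its reserved stock.
"""

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use whatever model you have access to
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Step 1: request an implementation plan tailored to the fragment.
plan = ask(
    f"{STACK_FRAGMENT}\n{FEATURE}\n"
    "Propose an implementation plan that fits this stack and these "
    "patterns. Do not write implementation code yet."
)

# Step 2: after a human validates the plan, request tests for the
# acceptance criteria -- tests come first, the implementation follows.
tests = ask(
    f"{STACK_FRAGMENT}\n{FEATURE}\nValidated plan:\n{plan}\n"
    "Write JUnit 5 tests covering each acceptance criterion."
)
print(tests)
```

Note that the human validation step between the two requests is deliberate: the plan is reviewed before any tests are generated, so the model's output is anchored to an architecture the team has approved.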
This approach has proven effective on several fronts. It encourages teams to articulate their architectural style clearly, facilitating alignment and coherence across development efforts. Additionally, it serves as a valuable learning tool for junior developers and newcomers, guiding them in coding features that adhere to established team practices.
This approach has limitations, however. While you don’t directly expose source code to the model, you do provide potentially sensitive information, such as tech stack details and feature descriptions. Teams must therefore exercise caution and consult legal advisors to mitigate intellectual property concerns. This precaution remains important until AI tools designed explicitly for business use are widely available.
By balancing benefits against risks, AI-aided test-first development offers a practical way to enhance productivity while safeguarding sensitive information.