RDEL #84: What influences developers' trust in adopting AI-assisted coding tools?
Quality of output and functional value are important for building trust, but engineers' willingness to adopt also depends heavily on their individual cognitive and behavioral styles.
Welcome back to Research-Driven Engineering Leadership. Each week, we pose an interesting topic in engineering leadership and apply the latest research in the field to drive to an answer.
Generative AI tools promise to boost developer productivity, yet many engineers hesitate to adopt them fully. Some developers trust these tools implicitly, while others remain skeptical about their reliability, usability, or alignment with their work styles. Since trust is a key driver of adoption, this week we ask: what actually influences developers' trust in AI-assisted coding tools?
The context
The explosion of generative AI tools has reshaped software development, offering massive potential for productivity gains. Engineers now have access to AI-powered assistants that can generate code, suggest improvements, and automate tedious tasks. However, for these tools to be valuable, developers must trust them. Without trust, even the most powerful AI remains underutilized. Trust is built through reliable outputs, alignment with engineering workflows, and transparency around risks and limitations.
That said, AI adoption isn’t just about functionality—it’s also about how well the tool supports different working styles. Some engineers enjoy experimenting with AI, while others prefer structured learning before incorporating it into their workflow. To make AI useful for everyone, we need to understand what builds trust across different engineering styles, ensuring AI tools are both powerful and inclusive.
The research
A study surveying 238 developers at GitHub and Microsoft investigated what factors influence developers’ trust in generative AI and how trust correlates with adoption. The researchers used Partial Least Squares-Structural Equation Modeling (PLS-SEM) to evaluate trust and behavioral intention.
Key findings of the study included:
System/Output Quality Matters – The perceived reliability, accuracy, and safety of AI-generated outputs significantly influenced trust. Developers were more likely to trust tools that provided consistent, high-quality responses aligned with secure coding practices.
Functional Value Increases Adoption – Engineers who found AI useful for learning or solving coding challenges reported higher trust levels, reinforcing the idea that practical benefits drive confidence in AI tools.
Goal Maintenance Drives Trust – AI tools that aligned well with an engineer’s immediate task and objectives inspired more trust. If the AI’s recommendations felt contextually relevant and required minimal rework, developers were more likely to integrate it into their workflows.
Cognitive Styles Shape Intentions – Developers who were intrinsically motivated to explore new technologies, had high self-efficacy, or were comfortable with risk were more likely to trust and adopt AI tools. In contrast, task-oriented engineers and those with lower confidence in AI interactions were hesitant to integrate AI-generated code into production environments.
Ease of Use Was Surprisingly Less Impactful – Unlike traditional software tools, where usability is a key factor, the study found that ease of use did not significantly correlate with trust. This suggests that for generative AI, the perceived correctness and value of the output matter more than interface simplicity.
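PLS-SEM is a latent-variable technique well beyond what a few lines can capture, but the overall shape of the analysis — measured factors predicting trust — can be loosely illustrated with a toy model. The sketch below is purely hypothetical: it fabricates Likert-style survey scores and fits an ordinary least-squares stand-in for the structural model, just to show how factor weights on trust might be estimated. None of the numbers come from the study.

```python
# Illustrative sketch only: the actual study used PLS-SEM on real survey
# responses; this toy uses synthetic data and ordinary least squares.
import numpy as np

rng = np.random.default_rng(42)
n = 238  # matches the study's survey size

# Hypothetical 1-5 Likert-style scores for three trust factors
output_quality   = rng.integers(1, 6, n).astype(float)
functional_value = rng.integers(1, 6, n).astype(float)
goal_maintenance = rng.integers(1, 6, n).astype(float)

# Synthetic "trust" outcome driven by the three factors plus noise
# (the 0.5 / 0.3 / 0.2 weights are made up for illustration)
trust = (0.5 * output_quality + 0.3 * functional_value
         + 0.2 * goal_maintenance + rng.normal(0, 0.5, n))

# Fit trust ~ factors with least squares (a crude stand-in for PLS-SEM)
X = np.column_stack([output_quality, functional_value,
                     goal_maintenance, np.ones(n)])
coefs, *_ = np.linalg.lstsq(X, trust, rcond=None)
print({name: round(c, 2) for name, c in
       zip(["quality", "value", "goals", "intercept"], coefs)})
```

Running the sketch recovers weights close to the fabricated 0.5 / 0.3 / 0.2, which is the same logic the study applies at scale: estimating how strongly each factor loads onto trust, and trust onto behavioral intention.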
The application
This study highlights the non-linear adoption patterns for engineers using AI tools. Aligning both trust and behavioral motivations is key to building a healthy culture around AI usage. For engineering leaders seeking to encourage AI adoption in a productive and responsible manner, this research suggests several takeaways:
Ensure AI-generated outputs meet high standards – Invest in tools that prioritize output quality, security, and alignment with best practices to build trust among developers.
Help engineers see AI’s functional value – Provide examples of how generative AI can improve learning, reduce cognitive load, and accelerate development in specific, practical scenarios.
Consider cognitive diversity in AI adoption – Developers have different approaches to learning and risk-taking. Offering structured training, best practices, and transparency about AI’s strengths and weaknesses can make adoption smoother for all engineers.
As AI becomes a larger part of engineering workflows, it is even more important to align AI capabilities with engineers’ goals, use tools that the team trusts, and design for diverse cognitive styles.
—
Happy Research Tuesday!
Lizzie