Contributors increasingly use AI tools to read code, explore codebases, and generate changes. Across many open source projects, this is already reshaping how issues are opened and pull requests are submitted. While AI can help people get started, it also creates new challenges for maintainers.
This talk is grounded in real discussions and concrete examples from the open source community. Maintainers in projects such as GNOME, OCaml, Python, and Django have reported similar patterns: large or unnecessary AI-generated changes, missing design discussion, references to non-existent APIs, and contributions that are technically correct but hard to review and maintain. In many cases, the workload shifts from contributors to already time-constrained maintainers.
The focus of this talk is not whether AI should be allowed or banned. The shared concern emerging from these communities is about responsibility. Problems arise when AI replaces understanding, testing, and human accountability, breaking the social processes that open source relies on.
The talk also looks at how projects are trying to respond. Some are adding documentation, review rules, or disclosure requirements, while others are starting broader discussions around governance, sustainability, and legal risk. These efforts show that the problem goes beyond individual pull requests.
Rather than offering simple answers, this talk shares the questions these communities are actively asking, and aims to help contributors and maintainers think more clearly about the future role of AI in Python open source.