Do Users Write More Insecure Code with AI Assistants?
[Submitted on 7 Nov 2022 (v1), last revised 16 Dec 2022 (this version, v2)]
Abstract: We conduct the first large-scale user study examining how users interact with
an AI code assistant to solve a variety of security-related tasks across
different programming languages. Overall, we find that participants who had
access to an AI assistant based on OpenAI's codex-davinci-002 model wrote
significantly less secure code than those without access. Additionally,
participants with access to an AI assistant were more likely to believe they
wrote secure code than those without access to the AI assistant. Furthermore,
we find that participants who trusted the AI less and engaged more with the
language and format of their prompts (e.g., rephrasing, adjusting temperature)
produced code with fewer security vulnerabilities. Finally, to better
inform the design of future AI-based code assistants, we provide an in-depth
analysis of participants' language and interaction behavior, and we release
our user interface as an instrument for conducting similar studies in the future.
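To make concrete what "adjusting temperature" means in this setting: temperature is a sampling parameter of OpenAI completion models such as codex-davinci-002, where lower values yield more deterministic output and higher values more varied output. The sketch below is illustrative only; `build_completion_request` is a hypothetical helper, and the payload shape follows the legacy OpenAI Completions API (no request is actually sent).

```python
def build_completion_request(prompt: str, temperature: float = 0.7) -> dict:
    """Assemble a completion request payload.

    Lower temperature makes sampling more deterministic; higher
    temperature makes completions more varied.
    """
    if not 0.0 <= temperature <= 2.0:
        raise ValueError("temperature must be between 0.0 and 2.0")
    return {
        "model": "code-davinci-002",  # model named in the study
        "prompt": prompt,
        "temperature": temperature,
        "max_tokens": 256,
    }

# Rephrasing the prompt and lowering the temperature are the kinds of
# adjustments the study associates with fewer vulnerabilities.
cautious = build_completion_request(
    "Write a Python function that encrypts a string with AES-GCM.",
    temperature=0.0,
)
```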
Comments:
18 pages, 16 figures, update adds names of statistical tests and survey questions
Subjects:
Cryptography and Security (cs.CR)
Cite as:
arXiv:2211.03622 [cs.CR]
(or
arXiv:2211.03622v2 [cs.CR] for this version)
https://doi.org/10.48550/arXiv.2211.03622
Submission history
From: Neil Perry
[v1]
Mon, 7 Nov 2022 15:19:20 UTC (3,940 KB)
[v2]
Fri, 16 Dec 2022 19:01:32 UTC (4,624 KB)