
Code-generating tools may cause more security problems than they solve

Researchers from Stanford have found that code-generating AI systems like GitHub Copilot may introduce more security vulnerabilities than previously understood.

The system at the centre of the research comes from OpenAI, the artificial intelligence company co-founded by Elon Musk.

Codex is the engine behind Microsoft’s GitHub Copilot: it translates natural language into code and offers contextually relevant suggestions, with the aim of simplifying and popularising the development process.
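
As an illustration of that workflow (a hypothetical prompt and completion, not actual Copilot output), a developer writes the goal as a comment and the assistant proposes a function body:

    from datetime import datetime

    # Developer's natural-language prompt, written as a comment:
    # "parse a date string like 2022-12-28 into a datetime object"
    def parse_date(date_string: str) -> datetime:
        # The kind of completion an assistant might plausibly suggest
        return datetime.strptime(date_string, "%Y-%m-%d")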

Problems with AI-Assisted Coding

Study co-author Neil Perry states that “code-generating technologies are presently not a substitute for human developers.”

The study asked 47 developers with varying levels of experience to use Codex to solve security-related programming challenges in Python, JavaScript, and C. Compared with a control group, participants who relied on Codex were more likely to produce insecure code.
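
To make the finding concrete, here is a minimal, hypothetical sketch (illustrative only, not drawn from the study’s tasks) of the sort of insecure pattern an assistant can suggest, alongside the safer version a careful reviewer would insist on:

    import sqlite3

    # Insecure pattern an assistant might suggest for "look up a user by name":
    # splicing raw input into the SQL string allows injection, e.g.
    # name = "x' OR '1'='1" would return every row.
    def find_user_unsafe(conn: sqlite3.Connection, name: str):
        return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

    # Safer version: a parameterized query lets the driver handle escaping
    # instead of trusting the raw input.
    def find_user_safe(conn: sqlite3.Connection, name: str):
        return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

The two functions return identical results on benign input, which is exactly why the unsafe version is easy to accept without a second look.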

Perry elaborated: “developers using [coding tools] to complete tasks outside of their own areas of expertise should be concerned, and those using them to speed up tasks that they are already skilled at should carefully double-check the outputs and the context in which they are used in the overall project.”

The use of artificial intelligence in coding tools has been called into question before. Microsoft-owned GitHub has, in fact, already been sued over Copilot’s failure to properly credit the work of other developers; the lawsuit seeks $9 billion in damages for 3.6 million alleged violations of Section 1202 of the Digital Millennium Copyright Act.

Artificial intelligence (AI) driven code-generating tools are best seen today as a helping hand that speeds up programming, not a full replacement for developers; nevertheless, if recent advances are any indication, they may one day supplant conventional coding.