
Programs created by artificial intelligence contain 15 times more vulnerabilities than software developed by humans. The approach is known as vibe coding, and experts believe that at this stage its harm outweighs its benefit. The number of such gaps in the IT systems of Russian companies has grown by almost 30% over the year. Another problem threatening business in Russia is the growing effectiveness of deepfakes, which are also produced with artificial intelligence. Experts estimate the total damage from such attacks at 17 billion rubles.

What is vibe coding and why is it dangerous?

AI-created programs contain 15 times more vulnerabilities than human-written software, especially in input validation and business logic. This was reported by the Center for Monitoring and Countering Cyber Attacks at Informzashchita.

Vibe coding is a development approach in which a programmer describes tasks to a neural network in natural language and the AI translates them into computer code. In practice, large language models (LLMs) are used most often: they speed up the creation of boilerplate code and interfaces but tend to ignore security concerns.
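To make the workflow concrete, here is a minimal sketch of a vibe-coding loop, assuming the OpenAI Python SDK; the model name, prompt, and handling of the result are illustrative assumptions, not details from the article.

```python
# A minimal sketch of vibe coding: a task stated in natural language,
# code returned by an LLM. Model and prompt are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical model choice
    messages=[{
        "role": "user",
        "content": "Write a Flask endpoint that looks up a user by name in SQLite.",
    }],
)

# In practice the generated code is often pasted into the project as-is;
# this unreviewed step is where the vulnerabilities described below enter.
print(response.choices[0].message.content)
```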

Photo: Global Look Press/Oliver Berg

The company's experts note that 90% of such software exhibits the same common vulnerabilities: missing input filtering (76%), incorrect authorization (52%), and secrets left in open repositories (39%). In addition, more than 80% of these solutions have business-logic flaws that can disrupt the application and cause direct financial losses. AI models may reproduce outdated, insecure patterns, or handle basic tasks correctly while failing on exceptions and non-standard processes.
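For illustration only (these are not examples from the report), the sketch below shows what those weaknesses typically look like in Python, next to safer equivalents; the function names and toy schema are assumptions.

```python
import os
import re
import sqlite3

API_KEY = "sk-live-123456"  # a secret committed to the repository (the 39% case)

def get_user_insecure(conn: sqlite3.Connection, name: str):
    # Missing input filtering (76%): user input is concatenated into SQL,
    # so name = "x' OR '1'='1" dumps the whole table.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def get_user_safer(conn: sqlite3.Connection, name: str):
    # Validate the input, then pass it as a bound parameter.
    if not re.fullmatch(r"[A-Za-z0-9_]{1,32}", name):
        raise ValueError("invalid user name")
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

def delete_document(documents: dict, doc_id: str, current_user_id: int) -> None:
    # Incorrect authorization (52%): generated handlers often check that a
    # user is logged in but not that the record belongs to that user.
    doc = documents.get(doc_id)
    if doc is None or doc["owner_id"] != current_user_id:
        raise PermissionError("not the owner")
    del documents[doc_id]

def load_api_key() -> str:
    # Keep secrets out of the repository: read them from the environment.
    return os.environ["SERVICE_API_KEY"]
```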

"Startups and small and medium-sized businesses are the most vulnerable to the threats of vibe coding. The speed at which new solutions are rolled out often outstrips the development of control processes, which creates risks. Small businesses also have less money to allocate to information security, which often leaves the software they use unprotected," explained Anatoly Peskovsky, head of security analysis at IZ:SOC Informzashchita.

He noted that such AI-induced errors and critical vulnerabilities do not affect functionality but often become an entry point for hackers and other attackers targeting a company's infrastructure. The popularity of vibe coding and vibe testing (testing applications with an AI assistant) genuinely worries information security experts, he added.

Photo: IZVESTIA/78 TV channel

The software of 89% of Russian companies contains vulnerabilities whose exploitation can lead, among other things, to unauthorized access to internal networks or sensitive data. In addition, the number of vulnerabilities discovered in domestic applications in the third quarter of 2025 reached 6,000, 27% more than in the same period of 2024, the IT company Spikatel reported.

"At the moment, about 8-10% of the lines of code in Russian systems are generated with AI, and by 2030 this share may grow to 70%. These are solutions where high performance is not required, such as website and mobile application interfaces. Relying on vibe coding once a project reaches a certain size is dangerous, because research shows that almost half of LLMs refer to fictitious libraries: they invent names from familiar patterns rather than real data. Using such code is difficult and risky," said Alexey Kozlov, a leading analyst in Spikatel's information security monitoring department.
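One hedge against such phantom dependencies is to verify that every declared package actually exists before installing it. Below is a minimal sketch that checks a requirements file against PyPI's public JSON endpoint using only the standard library; the parsing is deliberately simplified.

```python
import urllib.error
import urllib.request

def package_exists_on_pypi(name: str) -> bool:
    # https://pypi.org/pypi/<name>/json returns 404 for packages that do not
    # exist, so a hallucinated library fails this check.
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False

def check_requirements(path: str = "requirements.txt") -> None:
    # Simplified parsing: take the name before any version pin and skip
    # comments and blank lines.
    with open(path) as f:
        for line in f:
            name = line.split("==")[0].split(">=")[0].strip()
            if name and not name.startswith("#"):
                if not package_exists_on_pypi(name):
                    print(f"WARNING: {name!r} not found on PyPI -- possibly hallucinated")

if __name__ == "__main__":
    check_requirements()
```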

Photo: IZVESTIA/Pavel Volkov

AI-generated code is a global trend, relevant far beyond Russia. According to data from the last three years, about 50% of code generated by artificial intelligence contains errors and potential vulnerabilities, said Anton Vedernikov, head of product security at Selectel. He is convinced that as vibe coding develops, the number of new applications will multiply, which makes automated checks of vibe coders' output far more important, because manual review will no longer be enough.

Deepfake attacks

The second major threat created with the help of AI is deepfakes. According to Informzashchita, the total damage from such attacks is estimated at 17 billion rubles.

"In the first half of 2025, about 6,400 companies in Russia faced at least one deepfake attack, and the number of incidents involving deepfakes against medium-sized and large organizations is around 20,000," the company's press service said.

Deepfakes have also given rise to a new trend: since the beginning of the year, the share of announcements offering leaked internal company documentation has risen to 40%, the press service of BI.ZONE noted. Previously, leaks of such information were rare: in 2024 their share ranged from 15% to 20%, while user data (full names, email addresses, logins, passwords, phone numbers) remained predominant at 80-85%.

Photo: IZVESTIA/Sergey Lantyukhov

"AI technologies for video, audio, and photo generation are developing rapidly, and today deepfake content is hard to distinguish from genuine footage. It is therefore important to keep raising public awareness of fraud schemes that use modern technologies. A successful attack impersonating a company's chief executive can cost the company millions of rubles and cause significant reputational damage," said Pavel Potekhin, executive director of MTS Link.

It is important to understand that the quality of AI-generated code usually corresponds to that of a developer of average qualification or lower, and it requires careful oversight, said Fedor Lezhnev, director of the Information Technology Department at Alfa Capital Management Company. The risks of vibe coding are reduced by strict compliance with corporate standards: using proven libraries, automatic code analyzers (SAST, DAST), and mandatory code review before release, he noted.
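As one illustration of such automation (an assumption on our part, not a description of Alfa Capital's pipeline), the sketch below runs bandit, an open-source SAST tool for Python, over a source tree; a CI pipeline can use its exit code to block a merge.

```python
import subprocess
import sys

def run_sast(target: str = "src/") -> int:
    # "bandit -r <dir>" recursively scans Python sources for known insecure
    # patterns (string-built SQL, hardcoded passwords, weak crypto, etc.).
    result = subprocess.run(["bandit", "-r", target])
    return result.returncode

if __name__ == "__main__":
    # A non-zero exit fails the CI step, keeping unreviewed AI-generated
    # code out of the main branch.
    sys.exit(run_sast())
```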

Avito believes the problem lies not in the technology itself but in how it is used: any AI tool can create vulnerabilities if it is applied without proper control and understanding. That is why employees who work with sensitive data use only models running inside the company's own perimeter, which protects against code and data leaks, summed up Andrey Usenok, head of information security at the company.

