Russians warned about the risks of uploading data to ChatGPT
So far, cases of data leaks through neural networks remain a matter of debate and have not been definitively confirmed, but the risk is not worth taking. This opinion was expressed by Sergey Kuzmenko, head of the Roskachestvo Digital Expertise Center, on Friday, August 1.
The expert warned that by uploading confidential information to ChatGPT and other neural networks, users create the potential for its misuse in the future. He recalled that technology is constantly evolving, and protections devised yesterday may prove vulnerable tomorrow.
"It is absolutely forbidden to upload data that may compromise you or other people. This category includes passport data, network identifiers, payment card data, medical records, usernames and passwords from services, as well as any other information that allows you to identify a specific person and potentially use them to harm," he said in a conversation with Lenta.Ru .
According to Kuzmenko, the safest way to interact with a neural network is to use anonymized data sets: before uploading anything, delete all information that could point to a specific person. If the information could be used against anyone, it is better to refrain from asking the question and solve the problem on your own, the specialist concluded.
On July 28, Igor Trunov, president of the Russian Bar Association, told the National News Service (NSN) that Russia could recognize the use of artificial intelligence in the preparation of crimes as an aggravating circumstance, while "unfriendly AI" remains in a gray zone.
On the same day, Sam Altman, the head of OpenAI, said that the authorities would be able to access users' correspondence with ChatGPT upon a court request, 360.ru reports.
Earlier, on July 23, it was reported that researchers from the T-Bank AI Research artificial intelligence (AI) laboratory had developed a new way of interpreting and steering language models based on the SAE Match method. The discovery makes it possible to directly address errors and hallucinations in a large language model during text generation.
Language models such as ChatGPT build their responses on a multi-layered architecture in which each layer processes information and "passes" it on to the next. Until recently, researchers could only record which features (or concepts) appear in these layers, without understanding exactly how they evolve from layer to layer.
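To make that layered picture concrete, here is a minimal, hypothetical sketch in plain NumPy. It is not the SAE Match method (whose implementation the report does not describe); the layer count, dimensions, and random weights are illustrative assumptions. It shows only the general idea: each layer transforms a hidden state and passes it on, and the intermediate activations can be recorded per layer, which is the kind of per-layer "feature" trace interpretability researchers inspect.

```python
# Hypothetical illustration of multi-layer processing; not SAE Match.
import numpy as np

rng = np.random.default_rng(0)

NUM_LAYERS = 4   # illustrative depth; real models have dozens of layers
HIDDEN_DIM = 8   # illustrative width; real models use thousands of dimensions

# Random weights stand in for trained parameters.
weights = [rng.standard_normal((HIDDEN_DIM, HIDDEN_DIM)) / np.sqrt(HIDDEN_DIM)
           for _ in range(NUM_LAYERS)]

def forward(x):
    """Pass a hidden state through the layer stack, recording each layer's output."""
    per_layer_features = []
    h = x
    for w in weights:
        h = np.tanh(h @ w)              # each layer processes and "passes on"
        per_layer_features.append(h.copy())
    return h, per_layer_features

x = rng.standard_normal(HIDDEN_DIM)     # stand-in for an embedded input token
out, features = forward(x)
for i, f in enumerate(features):
    print(f"layer {i}: first three recorded feature values {np.round(f[:3], 3)}")
```

In this toy setup the recorded activations are just numbers; the open problem the researchers address is tracing how such features correspond to one another across layers, rather than merely logging them.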
Earlier, on June 12, Ekaterina Orlova, a psychologist and deputy director of the Institute of Clinical Psychology and Social Work at Pirogov University, said that it is dangerous to entrust a chatbot with one's most intimate experiences, fears, and traumatic situations, RT reports.