AI can lead to suffering distancing syndrome, says Kaspersky expert

Artificial Intelligence (AI) can lead to a “suffering distancing syndrome” among cybercriminals, warned Kaspersky expert Vitaly Kamluk, examining the psychological hazards of AI in cybercrime.

Kamluk, Head of Research Center for Asia Pacific, Global Research and Analysis Team (GReAT) at Kaspersky, revealed that cybercriminals are increasingly leveraging AI in their malicious activities. This allows them to shift responsibility onto the technology itself and evade accountability for the consequences of their cyberattacks.

According to him, this will result in “suffering distancing syndrome”.


“Other than technical threat aspects of AI, there is also a potential psychological hazard here. There is a known suffering distancing syndrome among cybercriminals. Physically assaulting someone on the street causes criminals a lot of stress because they often see their victim’s suffering. That doesn’t apply to a virtual thief who is stealing from a victim they will never see. Creating AI that magically brings the money or illegal profit distances the criminals even further, because it’s not even them, but the AI to be blamed,” Kamluk explained.

Another psychological consequence of AI, one that can affect IT security teams, is known as “responsibility delegation”. As cybersecurity processes and tools increasingly rely on automation and neural networks, individuals may feel less responsible when a cyberattack occurs.

This weighs particularly on defenders in corporate environments governed by compliance rules and formal safety responsibilities, where an advanced defence system may end up being unfairly blamed for a security incident.

How to safely embrace the benefits of AI:

1. Accessibility 

Limit access to AI systems, track content history, and verify content generation methods to prevent misuse.

2. Regulations 

The European Union is considering rules to label content created with AI, making it easier for users to spot AI-generated material. This helps identify AI-made images, audio, video, or text. While some may still misuse AI, they'll be in the minority and will face consequences.

3. Education

Promote AI awareness in schools, teaching how it differs from natural intelligence, along with responsible use of technology and the consequences of abusing it.

“Like most technological breakthroughs, AI is a double-edged sword. We can always use it to our advantage as long as we know how to set secure directives for these smart machines,” Kamluk said.

