Artificial intelligence is empowering hackers to automate and tailor digital attacks
In a significant shift in the cybersecurity landscape, government-backed hackers are increasingly using artificial intelligence (AI) to enhance their attacks, according to a report by the security firm CrowdStrike.
The North Korea-linked hacking group "Famous Chollima," also tracked as UNC5267, conducted more than 320 intrusions in a single year, demonstrating the high operational tempo AI enables these actors to maintain. AI has been instrumental in automating reconnaissance, customizing phishing campaigns, and exploiting vulnerabilities, allowing attackers to scale their operations across thousands of targets.
One of AI's key advantages is automated reconnaissance. AI bots scan networks and public sources for sensitive information, credentials, and configuration leaks, helping hackers identify attack entry points far faster than manual efforts. That speed is what sustains a high operational tempo, as Famous Chollima's intrusion count demonstrates.
AI also plays a significant role in phishing and social engineering. It generates hyper-personalized and grammatically flawless phishing emails, even translating and tailoring content for different languages and cultures, increasing success rates. For instance, the group Reconnaissance Spider almost certainly used AI to translate one of its phishing lures into Ukrainian during an attack.
Beyond phishing, AI accelerates vulnerability exploitation. It rapidly assesses which vulnerabilities are worth exploiting and can mutate malware in real time to evade detection by security tools, increasing both the effectiveness and the stealth of attacks.
The use of AI by hackers poses a significant threat to organizations worldwide. As businesses continue to adopt AI tools, the attack surface will expand, and trusted AI tools could become the next insider threat. This trend is already evident in the growing number of attacks that exploit organizations' AI tools as initial access vectors for diverse post-exploitation operations.
The Iran-linked hacking group Charming Kitten likely used AI to generate messages as part of a 2024 phishing campaign against U.S. and European organizations. The incident underscores the growing sophistication of AI-powered cyber attacks and the need for organizations to stay vigilant.
To keep pace with these rapidly executing attacks, defenders are adopting AI-powered tools for real-time detection and response. However, the threat landscape is broadening as democratized AI tools allow less sophisticated hackers to perform tasks that previously required experts. This trend increases the volume and complexity of attacks, making it essential for organizations to stay informed and prepared.
In conclusion, AI acts as a force multiplier, enabling government-backed hackers to conduct faster, larger-scale, and more adaptive cyber attacks than previously possible. As the use of AI by hackers becomes more prevalent and sophisticated, it is crucial for organizations to adapt their defenses accordingly.
- Hackers use artificial intelligence (AI) to automate reconnaissance, identifying attack entry points faster and sustaining a high operational tempo, as seen with the North Korea-linked hacking group "Famous Chollima."
- AI also sharpens phishing and social engineering, generating hyper-personalized, grammatically flawless phishing emails tailored to different languages and cultures, increasing success rates, as in the case of the group Reconnaissance Spider.
- In vulnerability exploitation, AI can rapidly assess which vulnerabilities are worth exploiting and mutate malware in real time to avoid detection, increasing attack stealth and effectiveness.