Large language models (LLMs) powering artificial intelligence (AI) tools today could be exploited to develop self-augmenting malware capable of bypassing YARA rules.
“Generative AI can be used to evade string-based YARA rules by augmenting the source code of small malware variants, effectively lowering detection rates,” Recorded Future said in a new report shared with The Hacker News.
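To illustrate the mechanism described in the quote, the following is a minimal sketch of why string-based detection breaks when source code is rewritten. The indicator strings and sample contents are hypothetical, and the matching logic is a simplified stand-in for a YARA rule's `strings` section, not Recorded Future's actual test setup:

```python
# Hypothetical indicator strings, standing in for a YARA rule's
# "strings" section (not real malware indicators).
SIGNATURE_STRINGS = [b"EvilBeacon/1.0", b"c2_checkin"]

def string_rule_matches(sample: bytes) -> bool:
    # A string-based rule fires when any indicator appears verbatim.
    return any(s in sample for s in SIGNATURE_STRINGS)

# The original variant embeds the indicators literally.
original = b"...EvilBeacon/1.0...c2_checkin..."

# An LLM-augmented variant renames identifiers and alters literals,
# so the same verbatim indicators no longer appear.
rewritten = b"...QuietPing/2.3...beacon_sync..."

print(string_rule_matches(original))   # True
print(string_rule_matches(rewritten))  # False
```

Because the rewritten variant preserves behavior while changing the literal bytes the rule keys on, the verbatim match fails, which is the "lowered detection rate" the report describes.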