Researchers Uncover Vulnerabilities in Open-Source AI and ML Models

A little over three dozen security vulnerabilities have been disclosed in various open-source artificial intelligence (AI) and machine learning (ML) models, some of which could lead to remote code execution and information theft.
The flaws, identified in tools like ChuanhuChatGPT, Lunary, and LocalAI, were reported through Protect AI’s Huntr bug bounty platform.
The most severe of the