nvidia / Aegis-AI-Content-Safety-LlamaGuard-Defensive-1.0
Last scanned: Never scanned
Versions
Commit message | Commit author | Last scanned
---|---|---
Add NeMo Curator section | sarahyurick | Mar 22, 2025
Fix to the prompt template (#6) | cparisien | Oct 19, 2024
Updated information about model training prompt (#5) | cparisien | Oct 21, 2024
Upload folder using huggingface_hub | cparisien | Never scanned
Delete adapter_model.safetensors with huggingface_hub | cparisien | Never scanned
Delete adapter_config.json with huggingface_hub | cparisien | Never scanned
Create ReadMe (#3) | cparisien | Oct 29, 2024
Upload folder using huggingface_hub | cparisien | Nov 3, 2024
Delete adapter_config.json with huggingface_hub | cparisien | Never scanned
Delete adapter_model.safetensors with huggingface_hub | cparisien | Never scanned
Upload folder using huggingface_hub | cparisien | Oct 31, 2024
initial commit | cparisien | Nov 12, 2024
Protect AI's security scanner detects threats in model files
With Protect AI's Guardian, you can scan models for threats before developers download them and apply policies based on your risk tolerance.
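The commit history above was produced with huggingface_hub. As a minimal sketch of the download step that a scan is meant to precede, the model files could be fetched locally like this, assuming the repo id shown in the page title (the `revision` argument is optional and shown here only as a hypothetical pin):

```python
from huggingface_hub import snapshot_download

# Download the model repository listed above to a local cache directory.
# repo_id comes from the page title; revision (e.g. a commit hash) is optional.
local_dir = snapshot_download(
    repo_id="nvidia/Aegis-AI-Content-Safety-LlamaGuard-Defensive-1.0",
)
print(f"Model files downloaded to: {local_dir}")
```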