Nelci / bertimbau_hate_speech
Last scanned: Mar 31, 2025 at 5:23 PM UTC
Files
.gitignore
pytorch_model.bin
training_args.bin
vocab.txt
config.json
model.safetensors
.gitattributes
special_tokens_map.json
tokenizer_config.json
README.md
Protect AI's security scanner detects threats in model files
With Protect AI's Guardian, you can scan models for threats before ML developers download them for use, and apply policies based on your risk tolerance.
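The main risk such scanners look for in repositories like this one is that `pytorch_model.bin` is a pickle-based checkpoint, and unpickling can execute arbitrary code. The sketch below illustrates the general idea of a static pickle scan: it disassembles the stream without executing it and flags imports of dangerous modules. This is a minimal illustration, not Guardian's actual detection logic; the denylist and the limitation to protocol-0 `GLOBAL` opcodes are simplifying assumptions.

```python
import pickle
import pickletools

# Hypothetical denylist for illustration; a real scanner uses far
# richer rules (stack tracking, allowlists, nested archive handling).
SUSPICIOUS_MODULES = {"os", "posix", "nt", "subprocess", "builtins", "sys"}

def scan_pickle_bytes(data: bytes) -> list[str]:
    """Statically list suspicious GLOBAL imports in a pickle stream.

    pickletools.genops disassembles the stream without executing it,
    so a malicious payload never runs during the scan.
    """
    findings = []
    for opcode, arg, _pos in pickletools.genops(data):
        # Protocol 0/1 pickles encode imports as GLOBAL "module name".
        if opcode.name == "GLOBAL" and arg:
            module = arg.split()[0]
            if module in SUSPICIOUS_MODULES:
                findings.append(arg)
    return findings

# A classic malicious pickle that would call os.system("id") if loaded.
malicious = b"cos\nsystem\n(S'id'\ntR."
benign = pickle.dumps({"weights": [0.1, 0.2]}, protocol=0)

print(scan_pickle_bytes(malicious))  # ['os system']
print(scan_pickle_bytes(benign))     # []
```

Note that `model.safetensors` in the listing above avoids this class of attack entirely: the safetensors format stores raw tensor data with no executable deserialization step.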