Nelci / bertimbau_hate_speech
Last scanned: Oct 22, 2024 at 11:14 AM UTC
Versions
| Commit message | Commit author | Last scanned |
|---|---|---|
| End of training | Nelci | Oct 22, 2024 |
| Training in progress, epoch 1 | Nelci | Oct 22, 2024 |
| update model card README.md | Nelci | Oct 22, 2024 |
| Adding `safetensors` variant of this model (#1) | Nelci | Nov 12, 2024 |
| Training in progress, epoch 1 | Nelci | Oct 22, 2024 |
| Training in progress, epoch 2 | Nelci | Oct 22, 2024 |
| Training in progress, epoch 1 | Nelci | Oct 22, 2024 |
Protect AI's security scanner detects threats in model files
With Protect AI's Guardian, you can scan models for threats before ML developers download them for use, and apply policies based on your risk tolerance.
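The main threat such scanners look for in model files is unsafe deserialization: pickle-based checkpoints execute code at load time, whereas the `safetensors` variant listed above stores only raw tensors and metadata. A minimal, hypothetical sketch of the pickle mechanism (this is a toy payload, not an actual model file):

```python
import pickle

# Hypothetical demonstration of why pickle-based model files are risky:
# pickle invokes an object's __reduce__ during deserialization, so a
# crafted file can run arbitrary code the moment it is loaded.
class Payload:
    def __reduce__(self):
        # pickle will call eval("40 + 2") on load; a real attack would
        # invoke os.system or similar instead of harmless arithmetic.
        return (eval, ("40 + 2",))

blob = pickle.dumps(Payload())
result = pickle.loads(blob)  # code executes here, before any model API is used
print(result)  # → 42
```

Because `safetensors` files contain no executable objects, loading them cannot trigger this class of attack, which is why scanners flag pickle-format weights but pass safetensors variants.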