Davlan / afro-xlmr-small
Last scanned: Never scanned
Versions
| Commit message | Commit author | Last scanned |
|---|---|---|
| initial commit | Davlan | Never scanned |
| Upload config.json | Davlan | Nov 9, 2024 |
| Upload pytorch_model.bin with git-lfs | Davlan | Never scanned |
| Upload sentencepiece.bpe.model with git-lfs | Davlan | Never scanned |
| Upload special_tokens_map.json | Davlan | Never scanned |
| Upload tokenizer.json | Davlan | Never scanned |
| Upload tokenizer_config.json | Davlan | Never scanned |
| Upload training_args.bin with git-lfs | Davlan | Never scanned |
| Update README.md | Davlan | Never scanned |
| Update README.md | Davlan | Never scanned |
| Update README.md | Davlan | Never scanned |
| Update README.md | Davlan | Never scanned |
| Adding `safetensors` variant of this model (#1) | Davlan | Nov 9, 2024 |
| Update README.md | Davlan | Never scanned |
| Update README.md | Davlan | Never scanned |
| Update README.md | Davlan | Never scanned |
| Update README.md | Davlan | Oct 23, 2024 |
Protect AI's security scanner detects threats in model files
With Protect AI's Guardian, you can scan models for threats before ML developers download them for use, and apply policies based on your risk tolerance.
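As an illustration of the kind of static check such a scanner performs, the sketch below walks the pickle opcodes inside a `pytorch_model.bin` checkpoint (the zip archive that `torch.save` produces) and flags imports outside a small allowlist, since pickle `GLOBAL`/`STACK_GLOBAL` opcodes are how a malicious file smuggles in arbitrary code. This is a minimal, assumed example in Python, not Guardian's actual implementation; the `SAFE_PREFIXES` allowlist and the `scan_checkpoint` helper are hypothetical names chosen for the sketch.

```python
"""Toy model-file scan: flag unexpected imports in a pickle-based checkpoint."""
import pickletools
import sys
import zipfile

# Assumed allowlist: modules a benign PyTorch checkpoint normally references.
SAFE_PREFIXES = ("torch", "collections", "numpy")


def suspicious_imports(pickle_bytes: bytes) -> list[str]:
    """Return "module name" pairs referenced by GLOBAL/STACK_GLOBAL opcodes."""
    findings = []
    strings = []  # string arguments seen so far (simplified stack tracking)
    for opcode, arg, _pos in pickletools.genops(pickle_bytes):
        if opcode.name == "GLOBAL":
            # pickletools reports the argument as "module name" in one string.
            module = str(arg).split(" ")[0]
            if not module.startswith(SAFE_PREFIXES):
                findings.append(str(arg))
        elif opcode.name == "STACK_GLOBAL" and len(strings) >= 2:
            # Module and attribute name were pushed as the two previous strings.
            module, name = strings[-2], strings[-1]
            if not module.startswith(SAFE_PREFIXES):
                findings.append(f"{module} {name}")
        if isinstance(arg, str):
            strings.append(arg)
    return findings


def scan_checkpoint(path: str) -> list[str]:
    """Scan every pickle entry inside a torch.save zip archive."""
    findings = []
    with zipfile.ZipFile(path) as zf:
        for name in zf.namelist():
            if name.endswith(".pkl"):
                findings.extend(suspicious_imports(zf.read(name)))
    return findings


if __name__ == "__main__":
    # Usage: python scan_sketch.py pytorch_model.bin
    hits = scan_checkpoint(sys.argv[1])
    print("Suspicious imports:", hits if hits else "none found")
```

A production scanner also covers plain (non-zip) pickles, `safetensors`, and other formats; this sketch only handles the zip layout that `torch.save` writes.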
Found a new threat?
Earn bounties and get recognition for your discoveries by submitting them through Huntr.