Davlan / afro-xlmr-small

Last scanned: Never scanned


Versions

| Version | Commit message                          | Commit author | Last scanned  |
| ------- | --------------------------------------- | ------------- | ------------- |
|         | initial commit                          | Davlan        | Never scanned |
|         | Upload pytorch_model.bin with git-lfs   | Davlan        | Never scanned |
|         | Upload sentencepiece.bpe.model with git-lfs | Davlan    | Never scanned |
|         | Upload special_tokens_map.json          | Davlan        | Never scanned |
|         | Upload tokenizer.json                   | Davlan        | Never scanned |
|         | Upload tokenizer_config.json            | Davlan        | Never scanned |
|         | Upload training_args.bin with git-lfs   | Davlan        | Never scanned |
|         | Update README.md                        | Davlan        | Never scanned |
|         | Update README.md                        | Davlan        | Never scanned |
|         | Update README.md                        | Davlan        | Never scanned |
|         | Update README.md                        | Davlan        | Never scanned |
|         | Update README.md                        | Davlan        | Never scanned |
|         | Update README.md                        | Davlan        | Never scanned |
|         | Update README.md                        | Davlan        | Never scanned |
Protect AI's security scanner detects threats in model files. With Protect AI's Guardian, you can scan models for threats before ML developers download them, and apply policies based on your risk tolerance.