polyglots / SinLlama_v01

Last scanned: Aug 21, 2025 at 11:02 PM UTC

Unsafe

Versions

| Version | Commit message | Commit author | Last scanned |
| --- | --- | --- | --- |
|  | Update README.md | AravindaHWK | Never scanned |
|  | Rename SinLlama.png to asserts/SinLlama.png | AravindaHWK | Never scanned |
|  | Upload SinLlama.png | AravindaHWK | Never scanned |
|  | Update README.md | AravindaHWK | Never scanned |
|  | Added some of the metadata | NisansaDdS | Never scanned |
|  | Updated the citation to match what is given by Google scholar | NisansaDdS | Never scanned |
|  | Fixed Rendering | NisansaDdS | Never scanned |
|  | Upload trainer_state.json with huggingface_hub | AravindaHWK | Never scanned |
|  | Upload rng_state_1.pth with huggingface_hub | AravindaHWK | Never scanned |
|  | Upload rng_state_2.pth with huggingface_hub | AravindaHWK | Never scanned |
|  | Upload adapter_config.json with huggingface_hub | AravindaHWK | Never scanned |
|  | Upload tokenizer_config.json with huggingface_hub | AravindaHWK | Never scanned |
|  | Upload scheduler.pt with huggingface_hub | AravindaHWK | Never scanned |
|  | Upload optimizer.pt with huggingface_hub | AravindaHWK | Never scanned |
|  | Upload special_tokens_map.json with huggingface_hub | AravindaHWK | Never scanned |
|  | Upload rng_state_3.pth with huggingface_hub | AravindaHWK | Never scanned |
|  | Upload adapter_model.safetensors with huggingface_hub | AravindaHWK | Never scanned |
|  | Upload rng_state_0.pth with huggingface_hub | AravindaHWK | Never scanned |
|  | Upload training_args.bin with huggingface_hub | AravindaHWK | Never scanned |
|  | Upload tokenizer.json with huggingface_hub | AravindaHWK | Never scanned |