sugilee / DeepSeek-R1-Distill-Llama8B-MentalHealth2
Last scanned: Never scanned
Versions
| Commit message | Commit author | Last scanned |
|---|---|---|
| Trained with Unsloth | sugilee | Never scanned |
| Upload tokenizer | sugilee | Never scanned |
| Upload model trained with Unsloth | sugilee | Never scanned |
| Upload model trained with Unsloth | sugilee | Never scanned |
| Trained with Unsloth | sugilee | Never scanned |
| Upload tokenizer | sugilee | Never scanned |
| Upload model trained with Unsloth | sugilee | Never scanned |
| Upload model trained with Unsloth | sugilee | Never scanned |
| Upload tokenizer | sugilee | Apr 12, 2025 |
| Upload model trained with Unsloth | sugilee | Apr 12, 2025 |
| Upload model trained with Unsloth | sugilee | Apr 12, 2025 |
| Upload README.md with huggingface_hub | sugilee | Apr 11, 2025 |
| initial commit | sugilee | Apr 8, 2025 |
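The commit messages indicate the model and tokenizer were fine-tuned and exported with Unsloth, then uploaded in Hugging Face format. A minimal loading sketch, assuming the repository exposes standard Transformers-compatible model and tokenizer files; the prompt and generation settings below are illustrative only, not part of the repository:

```python
# Sketch only: assumes the repo contains standard Transformers model/tokenizer
# files, as the "Upload model/tokenizer" commit messages suggest.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "sugilee/DeepSeek-R1-Distill-Llama8B-MentalHealth2"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

# Illustrative prompt; the repo name suggests a mental-health fine-tune.
prompt = "I have been feeling anxious before exams. What can I do?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```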
Protect AI's security scanner detects threats in model files
With Protect AI's Guardian, you can scan models for threats before ML developers download them for use, and apply policies based on your risk tolerance.
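On the consumer side, a complementary precaution is to fetch only weight formats that do not rely on Python pickling. This is a sketch under the assumption that the repository ships `*.safetensors` weights (its file list is not shown here); the `allow_patterns` filter is an illustrative choice and not part of Guardian:

```python
# Hedged sketch: download only safetensors weights plus text/JSON config files,
# skipping pickle-based checkpoints that malicious payloads typically target.
from huggingface_hub import snapshot_download
from transformers import AutoModelForCausalLM

local_dir = snapshot_download(
    repo_id="sugilee/DeepSeek-R1-Distill-Llama8B-MentalHealth2",
    allow_patterns=["*.safetensors", "*.json", "*.txt", "*.model"],
)

# use_safetensors=True makes loading fail outright instead of silently
# falling back to a pickle-based (.bin) checkpoint.
model = AutoModelForCausalLM.from_pretrained(local_dir, use_safetensors=True)
```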