Locutusque / TinyMistral-248M
Last scanned: Nov 12, 2024 at 7:00 PM UTC
Versions
Commit message | Commit author | Last scanned
---|---|---
initial commit | Locutusque | Nov 12, 2024
Update README.md | Locutusque | Oct 19, 2024
Adding Evaluation Results (#2) | Locutusque | Nov 12, 2024
Update README.md | Locutusque | Nov 12, 2024
Final epoch | Locutusque | Nov 12, 2024
Update README.md | Locutusque | Nov 5, 2024
Update README.md | Locutusque | Nov 12, 2024
Update README.md | Locutusque | Nov 12, 2024
Update README.md | Locutusque | Nov 12, 2024
Update config.json | Locutusque | Nov 12, 2024
Upload pytorch_model.bin | Locutusque | Nov 12, 2024
Adding `safetensors` variant of this model (#9) | Locutusque | Oct 21, 2024
Adding Evaluation Results | leaderboard-pr-bot | Oct 23, 2024
Adding Evaluation Results | leaderboard-pr-bot | Nov 12, 2024
Adding `safetensors` variant of this model | SFconvertbot | Nov 12, 2024
Protect AI's security scanner detects threats in model files
With Protect AI's Guardian, you can scan models for threats before ML developers download them and apply policies based on your risk tolerance.
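As an illustration of reducing load-time risk, a downstream user can prefer the `safetensors` weights added in the commits above over the pickle-based `pytorch_model.bin`. The sketch below is illustrative only and does not use Guardian's API; it assumes the `transformers` library and the public `Locutusque/TinyMistral-248M` repository.

```python
# Illustrative sketch only -- not Guardian's API.
# Assumes the `transformers` library is installed and the public
# Locutusque/TinyMistral-248M repository is reachable.
from transformers import AutoModelForCausalLM, AutoTokenizer

# Prefer the safetensors weights: safetensors files contain only tensors,
# so loading them cannot execute arbitrary code the way pickle files can.
model = AutoModelForCausalLM.from_pretrained(
    "Locutusque/TinyMistral-248M",
    use_safetensors=True,  # error out rather than fall back to the .bin pickle
)
tokenizer = AutoTokenizer.from_pretrained("Locutusque/TinyMistral-248M")
```

Setting `use_safetensors=True` makes the load fail loudly if only pickle-based weights are available, rather than silently deserializing them.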