jameslahm / yolov10n
Last scanned: Oct 25, 2024 at 6:35 AM UTC
Versions
Commit message | Commit author | Last scanned
---|---|---
initial commit | jameslahm | Nov 12, 2024
Push model using huggingface_hub. | jameslahm | Nov 12, 2024
Update README.md | jameslahm | Never scanned
Update README.md | jameslahm | Nov 12, 2024
Update config.json | jameslahm | Never scanned
Update config.json | jameslahm | Never scanned
Push model using huggingface_hub. | jameslahm | Oct 21, 2024
Update README.md | jameslahm | Never scanned
Add inference section (#1) | jameslahm | Oct 25, 2024
Update README.md | jameslahm | Oct 17, 2024
Update README.md | jameslahm | Oct 18, 2024
Update README | nielsr | Nov 12, 2024
Remove supervision | nielsr | Nov 12, 2024
Address comments | nielsr | Nov 12, 2024
Add inference section | nielsr | Nov 12, 2024
Protect AI's security scanner detects threats in model files
With Protect AI's Guardian, you can scan models for threats before ML developers download them, and apply policies based on your risk tolerance.
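To illustrate the kind of threat such scanners look for: pickle-based model files can execute arbitrary code at load time via certain opcodes. The sketch below is not Guardian's actual implementation, just a minimal example of a static pickle check using Python's standard-library `pickletools`; the `SUSPICIOUS_OPS` set and `flag_suspicious_opcodes` helper are hypothetical names chosen for this illustration.

```python
import os
import pickle
import pickletools

# Opcodes that can import or invoke callables when the pickle is loaded,
# and so can be abused for arbitrary code execution (a common scanner heuristic).
SUSPICIOUS_OPS = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ"}

def flag_suspicious_opcodes(payload: bytes) -> set:
    """Return the potentially dangerous opcodes found in a pickle stream."""
    found = set()
    for opcode, arg, pos in pickletools.genops(payload):
        if opcode.name in SUSPICIOUS_OPS:
            found.add(opcode.name)
    return found

# A pickle of plain data raises no flags...
benign = pickle.dumps({"weights": [0.1, 0.2]})
print(flag_suspicious_opcodes(benign))

# ...while pickling a reference to a global callable does.
malicious = pickle.dumps(os.getcwd)
print(flag_suspicious_opcodes(malicious))
```

A real scanner inspects far more than opcodes (e.g., which modules and attributes a `GLOBAL` resolves to), but this is why formats like safetensors, which carry only raw tensor data, are preferred for model distribution.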
Found a new threat?
Earn bounties and get recognition for your discoveries by submitting them through Huntr.