and-effect / musterdatenkatalog_clf
Last scanned: Never Scanned
Versions
| Version | Commit message | Commit author | Date | Last scanned |
|---|---|---|---|---|
| | add github Repo | Rahka | Oct 25, 2024 | |
| | fix taxonomy not found error | Rahka | Oct 22, 2024 | |
| | add demo | Rahkakavee Baskaran | Oct 22, 2024 | |
| | delete typing packages and update readme | Rahkakavee Baskaran | Nov 11, 2024 | |
| | Update README.md | friederikebauer | Nov 11, 2024 | |
| | upload retrained model | Rahka | Nov 7, 2024 | |
| | add example to widget | Rahka | Oct 29, 2024 | |
| | Update README.md | Rahka | Oct 29, 2024 | |
| | Update README.md | Rahka | Oct 29, 2024 | |
| | upload read me with all training params (manual) | Rahka | Oct 29, 2024 | |
| | add results | Rahka | Oct 22, 2024 | |
| | add results | Rahka | Oct 29, 2024 | |
| | test README | Rahka | Oct 29, 2024 | |
| | test README | Rahka | Oct 29, 2024 | |
| | test README | Rahka | Oct 29, 2024 | |
| | test README | Rahka | Oct 29, 2024 | |
| | README update with new jinja template | Rahka | Oct 29, 2024 | |
| | upload model | Rahka | Nov 12, 2024 | |
| | add correct model | Rahkakavee Baskaran | Oct 30, 2024 | |
Protect AI's security scanner detects threats in model files
With Protect AI's Guardian, you can scan models for threats before ML developers download them for use and apply policies based on your risk tolerance.
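As a rough illustration of what "check before you load" can look like in practice, here is a minimal sketch that downloads a pinned revision of this repository with `huggingface_hub` and flags pickle-based files for inspection before anything is loaded. Only the repo id comes from this page; the helper name `download_and_flag`, the suffix list, and the choice of `huggingface_hub` are assumptions for illustration and are not Guardian's actual interface.

```python
# Minimal sketch of a pre-load check, NOT Guardian's API.
# Assumes the model is hosted on the Hugging Face Hub under the repo id shown above.
from pathlib import Path

from huggingface_hub import snapshot_download

REPO_ID = "and-effect/musterdatenkatalog_clf"

# Pickle-based formats can execute arbitrary code when deserialized,
# so they are the ones most worth scanning before use.
SUSPECT_SUFFIXES = {".pkl", ".pickle", ".bin", ".pt", ".joblib"}


def download_and_flag(repo_id: str, revision: str = "main") -> list[Path]:
    """Download one revision of the repo and return files that should be scanned."""
    local_dir = Path(snapshot_download(repo_id=repo_id, revision=revision))
    return [p for p in local_dir.rglob("*") if p.suffix in SUSPECT_SUFFIXES]


if __name__ == "__main__":
    for path in download_and_flag(REPO_ID):
        print(f"needs scanning before use: {path}")
```

Pinning `revision` to a specific commit from the versions table above (rather than `"main"`) keeps the downloaded artifact tied to exactly one scanned state of the repository.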