and-effect / musterdatenkatalog_clf
Last scanned: Nov 7, 2024 at 6:30 PM UTC
Versions
Commit message | Commit author | Last scanned
---|---|---
add github Repo | Rahka | Oct 25, 2024
fix taxonomy not found error | Rahka | Oct 22, 2024
add demo | Rahkakavee Baskaran | Oct 22, 2024
delete typing packages and update readme | Rahkakavee Baskaran | Nov 11, 2024
add pipeline wrapper | Rahkakavee Baskaran | Never scanned
Update README.md | friederikebauer | Nov 11, 2024
Update README.md | friederikebauer | Never scanned
Update README.md | friederikebauer | Never scanned
add links | Rahka | Never scanned
change recall to f1 score | Rahka | Never scanned
upload retrained model | Rahka | Nov 7, 2024
Delete vocab.txt | Rahka | Never scanned
Delete tokenizer_config.json | Rahka | Never scanned
Delete tokenizer.json | Rahka | Never scanned
Delete special_tokens_map.json | Rahka | Never scanned
Delete sentence_bert_config.json | Rahka | Never scanned
Delete pytorch_model.bin | Rahka | Never scanned
Delete modules.json | Rahka | Never scanned
Delete config_sentence_transformers.json | Rahka | Never scanned
Delete config.json | Rahka | Never scanned
Protect AI's security scanner detects threats in model files. With Protect AI's Guardian, you can scan models for threats before ML developers download them, and apply policies based on your risk tolerance.