and-effect / musterdatenkatalog_clf
Files
config_sentence_transformers.json
tokenizer.json
pipeline.py
README.md
tokenizer_config.json
config.json
taxonomy.json
pytorch_model.bin