dortucx / unkindtokenizer

Last scanned: Nov 12, 2024 at 7:00 PM UTC

Safe

Versions

| Version | Commit message | Commit author | Last scanned |
|---------|----------------|---------------|--------------|
| | initial commit | dortucx | Never scanned |
| | Rename tokenization_yi.py to this_tokenizer_is_evil_actually.py | dortucx | Never scanned |
| | Update tokenizer_config.json | dortucx | Never scanned |
| | Update tokenizer_config.json | dortucx | Never scanned |
| | Update this_tokenizer_is_evil_actually.py | dortucx | Never scanned |
| | Update this_tokenizer_is_evil_actually.py | dortucx | Never scanned |
| | Update this_tokenizer_is_evil_actually.py | dortucx | Never scanned |
| | Update this_tokenizer_is_evil_actually.py | dortucx | Never scanned |
| | Update this_tokenizer_is_evil_actually.py | dortucx | Never scanned |
| | Create config.json | dortucx | Never scanned |
| | Update config.json | dortucx | Never scanned |
| | Update config.json | dortucx | Never scanned |
| | Update config.json | dortucx | Never scanned |
| | Update config.json | dortucx | Never scanned |
| | Update config.json | dortucx | Never scanned |
| | Update config.json | dortucx | Never scanned |
| | Update config.json | dortucx | Never scanned |
| | Rename tokenizer.model to pytorch_model.bin | dortucx | Never scanned |
| | Rename pytorch_model.bin to tokenizer.model | dortucx | Never scanned |
| | Upload test.bmp | dortucx | Never scanned |
Protect AI's security scanner detects threats in model files
With Protect AI's Guardian you can scan models for threats before ML developers download them, and apply policies based on your risk tolerance.
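Guardian's actual detection rules are proprietary, but the general idea behind scanning serialized model files can be sketched in a few lines. Pickle-based formats (such as `pytorch_model.bin`) can embed arbitrary code execution, so a scanner can statically walk the pickle opcode stream, without ever loading it, and flag opcodes like `GLOBAL`/`STACK_GLOBAL` and `REDUCE` that import and call Python objects. This is a minimal, hypothetical sketch of that approach (similar in spirit to open-source tools like picklescan), not Protect AI's implementation:

```python
import pickle
import pickletools

# Hypothetical toy payload: a class whose __reduce__ smuggles an os.system
# call into the pickle stream. We only serialize it here; it is never loaded.
class Evil:
    def __reduce__(self):
        import os
        return (os.system, ("echo pwned",))

payload = pickle.dumps(Evil())

# Opcodes that let a pickle import and invoke arbitrary Python callables.
UNSAFE_OPCODES = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ"}

def scan_pickle(data: bytes):
    """Statically walk the opcode stream; the payload is never executed."""
    findings = []
    for opcode, arg, pos in pickletools.genops(data):
        if opcode.name in UNSAFE_OPCODES:
            findings.append((opcode.name, arg, pos))
    return findings

findings = scan_pickle(payload)
for name, arg, pos in findings:
    print(f"suspicious opcode {name} at byte {pos}: {arg!r}")
```

Because the scan never calls `pickle.loads`, the malicious `__reduce__` hook is never triggered; the verdict comes purely from inspecting the byte stream, which is what makes pre-download scanning safe.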
Found a new threat?
Earn bounties and get recognition for your discoveries by submitting them through Huntr.