google-bert / bert-large-cased-whole-word-masking

Last scanned: Never scanned

Status: Unscanned

Versions

| Commit message | Commit author | Last scanned |
|---|---|---|
| initial commit | system | Never scanned |
| Update pytorch_model.bin | system | Never scanned |
| Update config.json | system | Never scanned |
| Update tf_model.h5 | system | Never scanned |
| Update modelcard.json | system | Never scanned |
| Update config.json | system | Never scanned |
| Update tokenizer.json | system | Never scanned |
| Copy model card from bert-large-uncased-whole-word-masking | lysandre | Never scanned |
| Update dimensions | lysandre | Never scanned |
| Updates the tokenizer configuration file (#2) | lysandre | Never scanned |
Protect AI's security scanner detects threats in model files
With Protect AI's Guardian you can scan models for threats before ML developers download them for use, and apply policies based on your risk tolerance.
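As a rough illustration of what scanning a model file for threats can involve, the sketch below inspects the pickle opcodes inside a PyTorch checkpoint such as pytorch_model.bin. Everything here is an assumption for illustration: the local file path, the opcode list, and the overall approach are hypothetical and are not Guardian's API or detection logic.

```python
# Hypothetical illustration only: not Guardian's API or detection logic.
import io
import pickletools
import zipfile

# Pickle opcodes that can import modules and call Python objects during loading.
CODE_EXECUTING_OPCODES = {"GLOBAL", "STACK_GLOBAL", "INST", "OBJ", "NEWOBJ", "NEWOBJ_EX", "REDUCE"}


def list_code_executing_opcodes(path: str) -> list[str]:
    """Return the names of code-executing pickle opcodes found in a checkpoint file."""
    with open(path, "rb") as f:
        data = f.read()
    # Modern PyTorch checkpoints are zip archives that wrap a data.pkl pickle.
    if zipfile.is_zipfile(io.BytesIO(data)):
        with zipfile.ZipFile(io.BytesIO(data)) as zf:
            pickles = [name for name in zf.namelist() if name.endswith("data.pkl")]
            if pickles:
                data = zf.read(pickles[0])
    return [op.name for op, _arg, _pos in pickletools.genops(data) if op.name in CODE_EXECUTING_OPCODES]


if __name__ == "__main__":
    # "pytorch_model.bin" is assumed to be a locally downloaded checkpoint.
    hits = list_code_executing_opcodes("pytorch_model.bin")
    print("code-executing opcodes found:", sorted(set(hits)))
```

Note that benign checkpoints also contain GLOBAL and REDUCE opcodes (they rebuild torch tensor and storage classes), so a production scanner inspects which modules and callables those opcodes reference, rather than merely counting them.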
Found a new threat?
Earn bounties and get recognition for your discoveries by submitting them through Huntr.