nielsr / layoutlmv3-finetuned-funsd
Versions
Commit message | Commit author | Last scanned
---|---|---
initial commit | nielsr | Nov 12, 2024
Training in progress, step 500 | nielsr | Nov 12, 2024
Training in progress, step 1000 | nielsr | Nov 12, 2024
Model save | nielsr | Never scanned
update model card README.md | nielsr | Nov 3, 2024
End of training | nielsr | Nov 9, 2024
update model card README.md | nielsr | Never scanned
Update README.md | nielsr | Oct 21, 2024
Upload preprocessor_config.json | nielsr | Nov 3, 2024
Librarian Bot: Add base_model information to model (#3) | nielsr | Oct 28, 2024
Adding `safetensors` variant of this model (#4) | nielsr | Oct 14, 2024
Adding `safetensors` variant of this model | SFconvertbot | Oct 24, 2024
Librarian Bot: Add base_model information to model | librarian-bot | Nov 12, 2024