openai-community / roberta-base-openai-detector
Last scanned: Mar 21, 2025 at 10:27 PM UTC
Versions
Commit message | Commit author | Last scanned
---|---|---
Adds the tokenizer configuration file (#17) | lysandre | Mar 21, 2025
Link the "Release Strategies and the Social Impacts of Language Models" paper | julien-c | Nov 6, 2024
Adding `safetensors` variant of this model (#11) | sgugger | Nov 12, 2024
Put labels so widget shows which scores correspond with each label (#4) | osanseviero | Oct 26, 2024
Add model card (#2) | sgugger | Nov 12, 2024
allow flax | patrickvonplaten | Nov 12, 2024
For clarity, delete deprecated modelcard.json | julien-c | Oct 30, 2024
Update tokenizer.json | system | Nov 12, 2024
Update modelcard.json | system | Nov 12, 2024
Update config.json | system | Nov 12, 2024
initial commit | system | Nov 12, 2024
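Two of the commits above affect how the model is loaded: #4 pinned the classification labels so scores map to named classes in the widget, and #11 added a `safetensors` weight variant. Below is a minimal sketch of loading the detector with the Hugging Face `transformers` pipeline; it assumes `transformers` is installed, and the label names and printed output are illustrative assumptions based on commit #4, not guaranteed values.

```python
# Minimal sketch: load the detector as a text-classification pipeline.
# `use_safetensors=True` opts into the safetensors weights added in commit #11.
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",
    model_kwargs={"use_safetensors": True},
)

# Commit #4 pinned the labels, so each score comes back with a named class.
print(detector("GPT-2 wrote this paragraph, or maybe it didn't."))
# e.g. [{'label': 'Real', 'score': 0.97}]  -- output shown here is illustrative
```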
Protect AI's security scanner detects threats in model files. With Protect AI's Guardian, you can scan models for threats before ML developers download them, and apply policies based on your risk tolerance.