arminmehrabian / nasa-impact-bert-e-base-mlm-finetuned
Last scanned: Never Scanned
Versions
| Commit message | Commit author | Last scanned |
|---|---|---|
| initial commit | arminmehrabian | Nov 12, 2024 |
| Adding model and tokenizer saved on 0.0.1 | Armin Mehrabian | Never scanned |
| Update README.md | arminmehrabian | Never scanned |
| Trained with abstracts. | Armin Mehrabian | Never scanned |
| Create README.md | arminmehrabian | Never scanned |
| Labels bug fixed and model trained on abstracts. | Armin Mehrabian | Never scanned |
| Update README.md | arminmehrabian | Never scanned |
| Update README.md | arminmehrabian | Never scanned |
| Update README.md | arminmehrabian | Never scanned |
| Bug that would train only on abstracts fixed; now trained on abstracts and titles. The training dataset was updated to v2. | Armin Mehrabian | Never scanned |
| testing a new model. | Armin Mehrabian | Never scanned |
| Readme updated with the new mapping. | Armin Mehrabian | Never scanned |
| config.json file updated. | Armin Mehrabian | Nov 10, 2024 |
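The commit history above indicates that the repository contains a saved model and tokenizer for a masked-language-modeling fine-tune. As a minimal sketch, assuming the repository is published on the Hugging Face Hub under the id shown in the page title and is loadable with the standard transformers auto classes, it could be loaded and queried like this (the example sentence is illustrative only):

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline

# Assumed Hub repository id, taken from the page title above.
repo_id = "arminmehrabian/nasa-impact-bert-e-base-mlm-finetuned"

# Load the tokenizer and the masked-language-model weights saved in the repo.
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForMaskedLM.from_pretrained(repo_id)

# Run a fill-mask query on an example sentence (hypothetical usage).
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
for prediction in fill_mask(f"Satellite observations of sea surface {tokenizer.mask_token}."):
    print(prediction["token_str"], prediction["score"])
```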