llava-hf / llava-1.5-7b-hf
Last scanned: Jan 27, 2025 at 11:22 PM UTC
Versions
Commit message | Commit author | Last scanned
---|---|---
Update pipeline example | RaushanTurganbay | Never scanned
Update README.md | nielsr | Never scanned
Typo (#30) | nielsr | Never scanned
update pipeline tag | RaushanTurganbay | Never scanned
Add chat template examples | RaushanTurganbay | Never scanned
Create chat_template.json | RaushanTurganbay | Never scanned
Update README.md | nielsr | Never scanned
Update README.md | ybelkada | Never scanned
Update config.json | ybelkada | Never scanned
Update README.md | liuhaotian | Never scanned
Update tokenizer_config.json | ybelkada | Never scanned
Update README.md | ybelkada | Never scanned
Create README.md | ybelkada | Never scanned
Update config.json | ybelkada | Never scanned
Update config.json | ybelkada | Never scanned
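
The commit history above mentions a pipeline example and chat templates for this checkpoint. As a rough illustration only (not copied from the model card), a minimal sketch of querying llava-hf/llava-1.5-7b-hf through the transformers image-to-text pipeline might look like the following; the image URL and prompt are placeholders, and a transformers version with LLaVA support plus Pillow is assumed.

```python
# Minimal sketch, assuming transformers with LLaVA support and Pillow are installed.
# The image URL and question below are illustrative placeholders.
import requests
from PIL import Image
from transformers import pipeline

model_id = "llava-hf/llava-1.5-7b-hf"
pipe = pipeline("image-to-text", model=model_id)

url = "https://example.com/sample.jpg"  # placeholder image
image = Image.open(requests.get(url, stream=True).raw)

# LLaVA-1.5 uses a USER/ASSISTANT prompt format with an <image> token.
prompt = "USER: <image>\nWhat is shown in this image? ASSISTANT:"
outputs = pipe(image, prompt=prompt, generate_kwargs={"max_new_tokens": 100})
print(outputs[0]["generated_text"])
```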
Protect AI's security scanner detects threats in model files. With Protect AI's Guardian you can scan models for threats before ML developers download them, and apply policies based on your risk tolerance.