meta-llama / Meta-Llama-3-8B-Instruct
Last scanned: Nov 12, 2024 at 7:00 PM UTC
Versions
Commit message | Commit author | Last scanned
---|---|---
Remove inference parameters from README.md (#229) | vontimitta, Wauplin | Sep 4, 2025
Expose metadata link to next version of the model (#182) | vontimitta, davanstrien | Mar 22, 2025
Update README.md (#75) | osanseviero | Nov 5, 2024
Update README.md (#118) | osanseviero | Never scanned
Change license from other to llama3 (#92) | osanseviero | Never scanned
Update config.json (#105) | ArthurZ | Never scanned
Update tokenizer_config.json (#60) | ArthurZ | Never scanned
Update generation_config.json (#62) | ArthurZ | Nov 12, 2024
Update examples and widget inference parameters (#53) | pcuenq | Never scanned
Update post-processor to add bos (#42) | pcuenq | Never scanned
Fix typo in pipeline device argument (#48) | pcuenq | Never scanned
Update generation_config.json (#4) | pcuenq | Never scanned
Fix chat template to add generation prompt only if the option is selected (#9) | philschmid | Never scanned
Example for AutoModelForCausalLM (#11) | philschmid | Never scanned
Update README.md | philschmid | Never scanned
Update README.md | philschmid | Nov 12, 2024
Update README.md | ArthurZ | Never scanned
Upload original checkpoint (#1) | pcuenq | Never scanned
Upload folder using huggingface_hub | pcuenq | Nov 12, 2024
Duplicate from hsramall/hsramall-70b-placeholder | osanseviero | Never scanned
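The table above mirrors the repository's commit history on the Hugging Face Hub. As a minimal sketch (assuming the huggingface_hub client is installed and, because this repo is gated, that you are logged in with a token whose access terms have been accepted; field names follow recent huggingface_hub versions), the same history can be listed programmatically:

```python
# Sketch: list the commit history of the repo shown in the table above.
from huggingface_hub import HfApi

api = HfApi()
commits = api.list_repo_commits("meta-llama/Meta-Llama-3-8B-Instruct")

for c in commits:
    # Each entry carries the commit id, title, authors, and creation date.
    print(f"{c.commit_id[:7]}  {c.created_at:%b %d, %Y}  {c.title}  ({', '.join(c.authors)})")
```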
Protect AI's security scanner detects threats in model files
With Protect AI's Guardian, you can scan models for threats before ML developers download them for use and apply policies based on your risk tolerance.
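The workflow described here is "scan before download and use." As an illustrative sketch only (the `guardian_scan` helper below is hypothetical and does not represent a documented Protect AI API; `snapshot_download` is the standard huggingface_hub download call), a policy gate around model retrieval might look like this:

```python
# Sketch of a "scan before use" gate. The scan step is a placeholder:
# `guardian_scan` is a hypothetical hook standing in for whatever scanner
# your organization integrates; it is NOT a documented Protect AI API.
from huggingface_hub import snapshot_download


def guardian_scan(model_dir: str) -> bool:
    """Hypothetical scan hook: return True only if no threats are found."""
    raise NotImplementedError("Integrate your scanner of choice here.")


def fetch_if_clean(repo_id: str) -> str:
    # Download the model files to the local cache first...
    model_dir = snapshot_download(repo_id)
    # ...then refuse to hand the path to downstream code unless the scan passes.
    if not guardian_scan(model_dir):
        raise RuntimeError(f"Scan flagged {repo_id}; blocking per policy.")
    return model_dir


# Example (gated repo, so a logged-in token with accepted terms is assumed):
# path = fetch_if_clean("meta-llama/Meta-Llama-3-8B-Instruct")
```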
Found a new threat?
Earn bounties and get recognition for your discoveries by submitting them through Huntr.