| Commit | Author | Date |
|---|---|---|
| Expose metadata link to next version of the model (#182) | vontimitta, davanstrien | Mar 22, 2025 |
| Update README.md (#75) | osanseviero | Nov 5, 2024 |
| Update README.md (#118) | osanseviero | |
| Change license from other to llama3 (#92) | osanseviero | |
| Update config.json (#105) | ArthurZ | |
| Update tokenizer_config.json (#60) | ArthurZ | |
| Update generation_config.json (#62) | ArthurZ | Nov 12, 2024 |
| Update examples and widget inference parameters (#53) | pcuenq | |
| Update post-processor to add bos (#42) | pcuenq | |
| Fix typo in pipeline device argument (#48) | pcuenq | |
| Update generation_config.json (#4) | pcuenq | |
| Fix chat template to add generation prompt only if the option is selected (#9) | philschmid | |
| Example for AutoModelForCausalLM (#11) | philschmid | |
| Update README.md | philschmid | |
| Update README.md | philschmid | Nov 12, 2024 |
| Update README.md | ArthurZ | |
| Upload original checkpoint (#1) | pcuenq | |
| Upload folder using huggingface_hub | pcuenq | Nov 12, 2024 |
| Duplicate from hsramall/hsramall-70b-placeholder | osanseviero | |