| Commit | Contributor(s) | Date |
| --- | --- | --- |
| Remove inference parameters from README.md (#229) | vontimitta, Wauplin | Sep 4, 2025 |
| Expose metadata link to next version of the model (#182) | vontimitta, davanstrien | |
| Update README.md (#75) | osanseviero | |
| Update README.md (#118) | osanseviero | |
| Change license from other to llama3 (#92) | osanseviero | |
| Update config.json (#105) | ArthurZ | |
| Update tokenizer_config.json (#60) | ArthurZ | |
| Update generation_config.json (#62) | ArthurZ | |
| Update examples and widget inference parameters (#53) | pcuenq | |
| Update post-processor to add bos (#42) | pcuenq | |
| Fix typo in pipeline device argument (#48) | pcuenq | |
| Update generation_config.json (#4) | pcuenq | |
| Fix chat template to add generation prompt only if the option is selected (#9) | philschmid | |
| Example for AutoModelForCausalLM (#11) | philschmid | |
| Update README.md | philschmid | |
| Update README.md | philschmid | |
| Update README.md | ArthurZ | |
| Upload original checkpoint (#1) | pcuenq | |
| Upload folder using huggingface_hub | pcuenq | |
| Duplicate from hsramall/hsramall-70b-placeholder | osanseviero | |
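
Several of these commits shape how the model is loaded and prompted: #11 added an `AutoModelForCausalLM` usage example, #9 made the chat template append the generation prompt only when explicitly requested, and #42 updated the tokenizer's post-processor to prepend BOS. A minimal sketch of how those pieces fit together, assuming the `transformers` library; the repo id below is an assumption, since the actual id is not stated in this log:

```python
# Sketch tying together the loading example (#11), the
# add_generation_prompt behavior (#9), and the BOS post-processor (#42).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-70B-Instruct"  # assumed repo id, not from this log

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# After #42, plain encoding prepends the BOS token automatically.
assert tokenizer("Hello").input_ids[0] == tokenizer.bos_token_id

messages = [{"role": "user", "content": "Who are you?"}]
# add_generation_prompt=True appends the assistant header so the model
# begins a reply; after #9 this happens only when the option is set.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

The `generation_config.json` updates (#4, #62) set the default decoding parameters that `generate` falls back on when none are passed at call time.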