| Commit | Contributors | Date |
| --- | --- | --- |
| Remove inference parameters from README.md (#73) | vontimitta, Wauplin | Sep 4, 2025 |
| Update to latest version of the Llama 3.3 70B Instruct model (#71) | vontimitta, reach-vb | |
| Expose metadata link to next version of the model (#67) | vontimitta | |
| add base_model metadata (#63) | Hamid-Nazeri | |
| Update README.md (#54) | osanseviero | |
| Change license from other to llama3 (#47) | osanseviero | |
| Update config.json (#49) | ArthurZ | |
| Update tokenizer_config.json | ArthurZ | |
| Update README.md (#31) | ArthurZ | |
| Update generation_config.json (#30) | ArthurZ | |
| Update post-processor to add bos (#26) | pcuenq | |
| Add ignore_merges (#21) | pcuenq | |
| Chat template ignores add_generation_prompt (#5) | pcuenq | |
| Update generation_config.json (#2) | pcuenq | |
| Fix chat template to add generation prompt only if the option is selected (#6) | philschmid | |
| Widget inference params + examples (#15) | pcuenq | |
| Update README.md | philschmid | |
| Update README.md | pcuenq | |
| Update README.md | philschmid | |
| Update README.md | philschmid |