| Commit | Author | Date |
| --- | --- | --- |
| Remove inference parameters from README.md (#73) | vontimitta, Wauplin | Sep 4, 2025 |
| Update to latest version of the Llama 3.3 70B Instruct model (#71) | vontimitta, reach-vb | Jun 3, 2025 |
| Expose metadata link to next version of the model (#67) | vontimitta | |
| add base_model metadata (#63) | Hamid-Nazeri | |
| Update README.md (#54) | osanseviero | |
| Change license from other to llama3 (#47) | osanseviero | |
| Update config.json (#49) | ArthurZ | |
| Update tokenizer_config.json | ArthurZ | |
| Update README.md (#31) | ArthurZ | |
| Update generation_config.json (#30) | ArthurZ | |
| Update post-processor to add bos (#26) | pcuenq | |
| Add ignore_merges (#21) | pcuenq | |
| Chat template ignores add_generation_prompt (#5) | pcuenq | |
| Update generation_config.json (#2) | pcuenq | |
| Fix chat template to add generation prompt only if the option is selected (#6) | philschmid | |
| Widget inference params + examples (#15) | pcuenq | |
| Update README.md | philschmid | |
| Update README.md | pcuenq | |
| Update README.md | philschmid | |
| Update README.md | philschmid | |