genai-archive / stable-diffusion-v1-5-inpainting
Last scanned: Oct 29, 2024 at 4:43 AM UTC
Versions
Commit message | Commit author | Last scanned
---|---|---
initial commit | bean980310 | Nov 12, 2024
Create README.md | bean980310 | Never scanned
Upload stable-diffusion-v1-5-inpainting/config.json with huggingface_hub | bean980310 | Never scanned
Upload config.json with huggingface_hub | bean980310 | Never scanned
Delete stable-diffusion-v1-5-inpainting | bean980310 | Never scanned
Upload model_index.json with huggingface_hub | bean980310 | Never scanned
Upload feature_extractor/preprocessor_config.json with huggingface_hub | bean980310 | Nov 12, 2024
Upload safety_checker/config.json with huggingface_hub | bean980310 | Oct 29, 2024
Upload safety_checker/model.fp16.safetensors with huggingface_hub | bean980310 | Never scanned
Upload safety_checker/pytorch_model.fp16.bin with huggingface_hub | bean980310 | Never scanned
Upload safety_checker/model.safetensors with huggingface_hub | bean980310 | Nov 10, 2024
Upload safety_checker/pytorch_model.bin with huggingface_hub | bean980310 | Oct 29, 2024
Upload scheduler/scheduler_config.json with huggingface_hub | bean980310 | Oct 29, 2024
Upload text_encoder/config.json with huggingface_hub | bean980310 | Nov 12, 2024
Upload text_encoder/model.fp16.safetensors with huggingface_hub | bean980310 | Nov 4, 2024
Upload text_encoder/pytorch_model.fp16.bin with huggingface_hub | bean980310 | Nov 12, 2024
Upload text_encoder/model.safetensors with huggingface_hub | bean980310 | Nov 10, 2024
Upload text_encoder/pytorch_model.bin with huggingface_hub | bean980310 | Never scanned
Upload tokenizer/merges.txt with huggingface_hub | bean980310 | Nov 12, 2024
Upload tokenizer/special_tokens_map.json with huggingface_hub | bean980310 | Oct 29, 2024
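The commit messages above follow the pattern produced by the `huggingface_hub` library's file-upload API. A minimal sketch of how such per-file uploads could be made is below; the repo id and local directory are hypothetical assumptions, and the `HfApi().upload_file` call (a real `huggingface_hub` method) requires authentication to actually run.

```python
def upload_component_files(api, local_dir, rel_paths):
    """Upload each file, preserving its repo-relative path (e.g.
    'safety_checker/config.json'); returns the list of uploaded paths."""
    uploaded = []
    for rel_path in rel_paths:
        # Each call produces one commit titled "Upload <path> with huggingface_hub".
        api.upload_file(
            path_or_fileobj=f"{local_dir}/{rel_path}",
            path_in_repo=rel_path,
            repo_id="bean980310/stable-diffusion-v1-5-inpainting",  # assumed repo id
        )
        uploaded.append(rel_path)
    return uploaded

if __name__ == "__main__":
    # Requires `pip install huggingface_hub` and a logged-in token.
    from huggingface_hub import HfApi

    upload_component_files(
        HfApi(),
        "./stable-diffusion-v1-5-inpainting",  # assumed local checkout
        ["model_index.json", "safety_checker/config.json"],
    )
```

Passing the api object in makes the function easy to exercise with a stub, while the real `HfApi` is only constructed when the script is run directly.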
Protect AI's security scanner detects threats in model files
With Protect AI's Guardian, you can scan models for threats before ML developers download them for use, and apply policies based on your risk tolerance.