genai-archive / stable-diffusion-v1-5-inpainting
Last scanned: Oct 29, 2024 at 3:58 AM UTC
Versions
Commit message | Commit author | Last scanned
---|---|---
initial commit | bean980310 | Nov 12, 2024
Upload feature_extractor/preprocessor_config.json with huggingface_hub | bean980310 | Nov 12, 2024
Upload safety_checker/config.json with huggingface_hub | bean980310 | Oct 29, 2024
Upload safety_checker/model.safetensors with huggingface_hub | bean980310 | Nov 10, 2024
Upload safety_checker/pytorch_model.bin with huggingface_hub | bean980310 | Oct 29, 2024
Upload scheduler/scheduler_config.json with huggingface_hub | bean980310 | Oct 29, 2024
Upload text_encoder/config.json with huggingface_hub | bean980310 | Nov 12, 2024
Upload text_encoder/model.fp16.safetensors with huggingface_hub | bean980310 | Nov 4, 2024
Upload text_encoder/pytorch_model.fp16.bin with huggingface_hub | bean980310 | Nov 12, 2024
Upload text_encoder/model.safetensors with huggingface_hub | bean980310 | Nov 10, 2024
Upload tokenizer/merges.txt with huggingface_hub | bean980310 | Nov 12, 2024
Upload tokenizer/special_tokens_map.json with huggingface_hub | bean980310 | Oct 29, 2024
Upload tokenizer/vocab.json with huggingface_hub | bean980310 | Nov 6, 2024
Upload unet/diffusion_pytorch_model.fp16.bin with huggingface_hub | bean980310 | Oct 29, 2024
Upload vae/config.json with huggingface_hub | bean980310 | Nov 7, 2024
Upload vae/diffusion_pytorch_model.fp16.safetensors with huggingface_hub | bean980310 | Nov 7, 2024
Upload vae/diffusion_pytorch_model.fp16.bin with huggingface_hub | bean980310 | Nov 12, 2024
Upload vae/diffusion_pytorch_model.safetensors with huggingface_hub | bean980310 | Nov 7, 2024
Upload vae/diffusion_pytorch_model.bin with huggingface_hub | bean980310 | Oct 29, 2024
Upload sd-v1-5-inpainting.ckpt with huggingface_hub | bean980310 | Oct 29, 2024
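The commit messages above follow huggingface_hub's default pattern for per-file uploads ("Upload <path> with huggingface_hub"). The sketch below shows how files like these could be pushed to the repo; the local file paths are assumptions for illustration, and pushing requires write access to the repo plus a valid access token.

```python
from huggingface_hub import HfApi

REPO_ID = "genai-archive/stable-diffusion-v1-5-inpainting"

api = HfApi()  # picks up the token from HF_TOKEN or the local credential cache

# Hypothetical subset of files; in practice each file in the table would be pushed this way.
files = [
    "vae/config.json",
    "unet/diffusion_pytorch_model.fp16.bin",
    "sd-v1-5-inpainting.ckpt",
]

for path in files:
    # With commit_message omitted, huggingface_hub uses its default
    # "Upload {path_in_repo} with huggingface_hub", which matches the history above.
    api.upload_file(
        path_or_fileobj=path,   # local file to upload
        path_in_repo=path,      # destination path inside the repo
        repo_id=REPO_ID,
        repo_type="model",
    )
```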
Protect AI's security scanner detects threats in model files
With Protect AI's Guardian, you can scan models for threats before ML developers download them for use, and apply policies based on your risk tolerance.