BRIA 2.3 ControlNet Generative Fill Fast

Trained exclusively on the largest multi-source, commercial-grade licensed dataset, BRIA 2.3 Generative Fill guarantees best quality while being safe for commercial use. The model provides full legal liability coverage for copyright and privacy infringement and harmful content mitigation, as our dataset does not contain copyrighted materials such as fictional characters, logos or trademarks, public figures, harmful content, or privacy-infringing content.

BRIA 2.3 Generative Fill is a model designed to fill masked regions in images based on user-provided textual prompts. The model can be applied in different scenarios, including object removal, replacement, addition, and modification within an image.

This model works with all types of masks, but is highly optimized for blob-shaped masks that occupy more than 15% of the image area.
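To check this guideline programmatically, you can measure the fraction of the image a mask covers before running the model; the helper below is an illustrative sketch, not part of the BRIA API.

import numpy as np
from PIL import Image

def mask_area_fraction(mask: Image.Image) -> float:
    """Return the fraction of pixels that are white (the region to fill)."""
    arr = np.array(mask.convert("L"), dtype=np.float32) / 255.0
    return float((arr > 0.5).mean())

# Warn when the mask covers less than the recommended ~15% of the image:
# if mask_area_fraction(mask_image) < 0.15:
#     print("mask may be too small for best results")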

Get Access

BRIA 2.3 ControlNet Generative Fill requires access to the BRIA 2.3 foundation model. For more information, click here.
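Since the checkpoint is gated, authenticate with a Hugging Face token that has been granted access before loading the weights. A minimal sketch using the standard huggingface_hub login (the token value is a placeholder):

from huggingface_hub import login

login(token="hf_...")  # or set the HF_TOKEN environment variable instead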

For more information, please visit our website.

Join our Discord community for more information, tutorials, tools, and to connect with other users!

CLICK HERE FOR A DEMO

What's New

BRIA 2.3 ControlNet Generative Fill can be applied on top of BRIA 2.3 Text-to-Image, which enables the use of Fast-LoRA. This results in an extremely fast generative fill model, requiring only 1.6 seconds on an A10 GPU.
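To verify the latency on your own hardware, you can time the pipeline call. The helper below is an illustrative sketch: `pipe` and its keyword arguments refer to the pipeline built in the How To Use section, and you should run one warm-up call first so compilation and allocation overhead is excluded.

import time
import torch

def timed_generation(pipe, **pipe_kwargs):
    """Time a single pipeline call; illustrative helper, not part of the BRIA API."""
    torch.cuda.synchronize()  # make sure pending GPU work doesn't skew the timing
    start = time.perf_counter()
    out = pipe(**pipe_kwargs).images[0]
    torch.cuda.synchronize()
    print(f"generation took {time.perf_counter() - start:.2f}s")
    return out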

Model Description

  • Developed by: BRIA AI
  • Model type: Latent diffusion image-to-image model
  • License: bria-2.3 inpainting Licensing terms & conditions.
  • Purchase is required to license and access the model.
  • Model Description: BRIA 2.3 Generative Fill was trained exclusively on a professional-grade, licensed dataset. It is designed for commercial use and includes full legal liability coverage.
  • Resources for more information: BRIA AI

How To Use

Tested with:

diffusers==0.27.2
transformers==4.47.1
torch==2.3.0 (on CUDA 12.1)
peft==0.14.0
huggingface_hub==0.25.2
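
A minimal setup sketch: install the pinned versions above with pip, then fetch the repo's custom pipeline files that the script below imports locally (the filenames pipeline_controlnet_sd_xl.py and controlnet.py are assumed from the imports):

# pip install diffusers==0.27.2 transformers==4.47.1 torch==2.3.0 peft==0.14.0 huggingface_hub==0.25.2
from huggingface_hub import hf_hub_download

for fname in ("pipeline_controlnet_sd_xl.py", "controlnet.py"):  # assumed filenames
    hf_hub_download(
        repo_id="briaai/BRIA-2.3-ControlNet-Generative-Fill",
        filename=fname,
        local_dir=".",  # save next to this script so the imports below resolve
    )
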
from diffusers import (
    AutoencoderKL,
    LCMScheduler,
)
from pipeline_controlnet_sd_xl import StableDiffusionXLControlNetPipeline
from controlnet import ControlNetModel
import torch
import numpy as np
from PIL import Image
import requests
from io import BytesIO
from torchvision import transforms


def resize_image_to_retain_ratio(image):
    """Resize so the output has ~1024*1024 pixels, keeps the aspect ratio,
    and both sides are multiples of 8 (as required by the VAE)."""
    pixel_number = 1024 * 1024
    granularity_val = 8
    ratio = image.size[0] / image.size[1]
    width = int((pixel_number * ratio) ** 0.5)
    width = width - (width % granularity_val)
    height = int(pixel_number / width)
    height = height - (height % granularity_val)
    return image.resize((width, height))


def download_image(url):
    response = requests.get(url)
    return Image.open(BytesIO(response.content)).convert("RGB")


def get_masked_image(image, image_mask, width, height):
    # The fill region is white (1) in the mask.
    image_mask = image_mask.resize((width, height))
    image_mask_pil = image_mask
    image = np.array(image.convert("RGB")).astype(np.float32) / 255.0
    image_mask = np.array(image_mask_pil.convert("L")).astype(np.float32) / 255.0
    assert image.shape[:2] == image_mask.shape[:2], "image and image_mask must have the same size"
    masked_image_to_present = image.copy()
    masked_image_to_present[image_mask > 0.5] = (0.5, 0.5, 0.5)  # grey out masked pixels for visualization
    image[image_mask > 0.5] = 0.5  # grey out masked pixels in the conditioning image
    image = Image.fromarray((image * 255.0).astype(np.uint8))
    masked_image_to_present = Image.fromarray((masked_image_to_present * 255.0).astype(np.uint8))
    return image, image_mask_pil, masked_image_to_present


image_transforms = transforms.Compose(
    [
        transforms.ToTensor(),
    ]
)

default_negative_prompt = "blurry"

img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"

init_image = download_image(img_url)
init_image = resize_image_to_retain_ratio(init_image)
width, height = init_image.size

mask_image = download_image(mask_url).convert("L").resize(init_image.size)

# Load the ControlNet, the fp16-safe SDXL VAE, and the BRIA 2.3 base pipeline
controlnet = ControlNetModel.from_pretrained("briaai/BRIA-2.3-ControlNet-Generative-Fill", torch_dtype=torch.float16)
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "briaai/BRIA-2.3", controlnet=controlnet, torch_dtype=torch.float16, vae=vae
)

# Swap in the LCM scheduler and fuse the Fast-LoRA weights for few-step inference
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("briaai/BRIA-2.3-FAST-LORA")
pipe.fuse_lora()
pipe = pipe.to(device="cuda")

# pipe.enable_xformers_memory_efficient_attention()

generator = torch.Generator(device="cuda").manual_seed(123456)

vae = pipe.vae


# Build the control input: VAE latents of the grey-masked image + the downsampled mask
masked_image, image_mask, masked_image_to_present = get_masked_image(init_image, mask_image, width, height)

masked_image_tensor = image_transforms(masked_image)
masked_image_tensor = (masked_image_tensor - 0.5) / 0.5  # normalize to [-1, 1]
masked_image_tensor = masked_image_tensor.unsqueeze(0).to(device="cuda")

control_latents = vae.encode(
    masked_image_tensor[:, :3, :, :].to(vae.dtype)
).latent_dist.sample()
control_latents = control_latents * vae.config.scaling_factor

# Binarize the mask and resize it to the latent resolution
image_mask = np.array(image_mask)
mask_tensor = torch.tensor(image_mask, dtype=torch.float32)[None, ...]
mask_tensor = (mask_tensor > 128.0).float()  # 1 inside the fill region, 0 elsewhere
mask_tensor = mask_tensor.to(device="cuda")
mask_resized = torch.nn.functional.interpolate(
    mask_tensor[None, ...],
    size=(control_latents.shape[2], control_latents.shape[3]),
    mode="nearest",
)

# Concatenate latents and mask along the channel dimension to form the control image
masked_image = torch.cat([control_latents, mask_resized], dim=1)

prompt = ""

gen_img = pipe(negative_prompt=default_negative_prompt, prompt=prompt, 
            controlnet_conditioning_scale=1.0, 
            num_inference_steps=12, 
            height=height, width=width, 
            image = masked_image, # control image
            init_image = init_image,     
            mask_image = mask_tensor,
            guidance_scale = 1.2,
            generator=generator).images[0]
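
The pipeline regenerates the full canvas, so if you want the unmasked region to stay pixel-identical to the input, you can composite the output back onto the original. A minimal post-processing sketch using the variables from the example above (this blending step is an assumption, not part of the BRIA pipeline):

# Keep original pixels outside the mask; take generated pixels inside it.
result = Image.composite(gen_img, init_image, mask_image)
result.save("generative_fill_result.png")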