ControlNet Tool
The demo exposes these controls:
- Model: the foundation model that ControlNet will guide.
- Prompt: describe what you want to generate; ControlNet will use your image as guidance.
- Negative prompt: elements to avoid in the generated image.
- Preprocessor (Canny): detects edges in your input image to guide the generation (see the sketch after this list).
- Conditioning scale: controls how much the conditioning image influences the final result.
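As a concrete illustration of the Canny preprocessing step, here is a minimal sketch using OpenCV. The file names and the threshold values (100, 200) are placeholder assumptions, not settings taken from the demo.

```python
import cv2
import numpy as np
from PIL import Image

# Load the input photo and convert to grayscale for edge detection
# (the path is a placeholder for your own image).
image = cv2.imread("input.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Canny edge detection; the low/high thresholds are typical
# starting points and should be tuned per image.
edges = cv2.Canny(gray, 100, 200)

# ControlNet expects a 3-channel conditioning image, so stack the
# single edge channel three times before saving.
control_map = np.stack([edges] * 3, axis=-1)
Image.fromarray(control_map).save("control_map.png")
```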
Demo layout: a placeholder for the generated image (demo visualization only), Input Image and Control Map previews, and Example Results with two samples: Canny Edge (prompt: "cyberpunk city street...") and OpenPose (prompt: "dancer in motion...").
What is ControlNet?
ControlNet is a neural network structure designed to control diffusion models like Stable Diffusion by adding extra conditions. It allows you to influence the image generation process using input types such as edge maps, pose skeletons, depth maps, segmentation masks, and more.
How ControlNet Works
ControlNet works by attaching a trainable copy of the Stable Diffusion U-Net encoder blocks to the frozen base model, linked through zero-initialized convolution layers so that training starts from a no-op. When you provide a conditioning image (like a Canny edge map or pose skeleton), this trainable branch injects the extra spatial information alongside your text prompt to guide the generation process.
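Here is a minimal sketch of that flow using Hugging Face's diffusers library. The checkpoint IDs (lllyasviel/sd-controlnet-canny and runwayml/stable-diffusion-v1-5) are commonly used public releases chosen for illustration; substitute your own.

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Load a ControlNet trained on Canny edges and attach it to a
# frozen Stable Diffusion base model.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# The conditioning image produced by the preprocessor (an edge map).
control_map = load_image("control_map.png")

# controlnet_conditioning_scale is the conditioning scale control:
# 0.0 ignores the control map, 1.0 follows it closely.
result = pipe(
    prompt="cyberpunk city street at night, neon signs",
    negative_prompt="blurry, low quality",
    image=control_map,
    controlnet_conditioning_scale=0.8,
).images[0]
result.save("output.png")
```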
Popular ControlNet Types
- Canny Edge: Uses edge detection to maintain composition
- OpenPose: Controls human pose and body position
- Depth Map: Maintains spatial relationships and 3D structure
- Line Art: Uses line drawings to guide generation
- Segmentation: Controls object placement with colored masks
- Normal Map: Controls 3D surface orientation
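In practice, switching between these types mostly means loading a different ControlNet checkpoint while keeping the same base model. The hub IDs below are the widely used lllyasviel releases for Stable Diffusion 1.5; treat them as assumptions to verify, since checkpoints must match your base model's version.

```python
from diffusers import ControlNetModel

# Commonly used SD 1.5 checkpoints per control type (assumed hub IDs).
CONTROLNET_CHECKPOINTS = {
    "canny": "lllyasviel/sd-controlnet-canny",
    "openpose": "lllyasviel/sd-controlnet-openpose",
    "depth": "lllyasviel/sd-controlnet-depth",
    "lineart": "lllyasviel/control_v11p_sd15_lineart",
    "segmentation": "lllyasviel/sd-controlnet-seg",
    "normal": "lllyasviel/sd-controlnet-normal",
}

def load_controlnet(control_type: str) -> ControlNetModel:
    # Look up and load the ControlNet matching a control type name.
    return ControlNetModel.from_pretrained(CONTROLNET_CHECKPOINTS[control_type])
```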
Common Applications
- Maintaining consistent character poses across multiple images (see the sketch after this list)
- Creating architectural visualizations with precise geometry
- Converting sketches into detailed artwork
- Generating images that match specific layouts
- Style transfer while preserving structure
- Photo editing with controlled outcomes
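To make the first application concrete, here is a hedged sketch: extract one pose skeleton with the OpenposeDetector from the controlnet_aux package, then reuse it across several prompts so every generated character shares the same pose. The reference image path and prompts are placeholders.

```python
import torch
from controlnet_aux import OpenposeDetector
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Extract a pose skeleton once from a reference photo.
openpose = OpenposeDetector.from_pretrained("lllyasviel/Annotators")
pose_map = openpose(load_image("reference_pose.jpg"))

# Build a pipeline with an OpenPose ControlNet (same hub IDs as above).
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Reuse the same pose map for each prompt so the pose stays fixed.
prompts = ["a medieval knight", "an astronaut", "a chef in a kitchen"]
for i, prompt in enumerate(prompts):
    image = pipe(prompt=prompt, image=pose_map).images[0]
    image.save(f"character_{i}.png")
```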