v1/jobs
POST /v1/jobs
Create a diffusion job.
Request
- application/json
Body
required
- Size validation for diffusion stages:
  - For 512 engines: 262,144 ≤ height * width ≤ 1,048,576, Maximum 1024
  - For 768 engines: 589,824 ≤ height * width ≤ 1,048,576, Maximum 1024
  - For SDXL v1.0: 262,144 ≤ height * width ≤ 2,073,600, Maximum 1536
- Size validation for the upscaler stage: 262,144 ≤ hr_resize_x * hr_resize_y ≤ 8,294,400
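Assembled from the fields documented below, a minimal text-to-image request body might look like the following sketch. Note the key names `requestId`, `type`, `seed`, `count`, `width`, `height`, `steps`, and `cfgScale` are assumptions inferred from the field descriptions, not confirmed by this page; `stages`, `inputInitialize`, `diffusion`, `prompts`, and `negativePrompts` come from the schema itself.

```python
import json

# Sketch of a minimal create-job body. Several key names are assumptions
# (see lead-in); only the overall stage structure follows this page.
payload = {
    "requestId": "unique-id-123",  # assumed name; ensures idempotence
    "stages": [
        {
            "type": "INPUT_INITIALIZE",  # stage type, per the enum below
            "inputInitialize": {"seed": 0, "count": 1},  # 0 = random seed
        },
        {
            "type": "DIFFUSION",
            "diffusion": {
                "width": 512,
                "height": 512,
                "steps": 25,       # 1-60 per the schema
                "cfgScale": 7,     # <= 30, default 7
                "prompts": [{"text": "A lighthouse on a cliff", "weight": 0.5}],
                "negativePrompts": [{"text": "blurry", "weight": 1.0}],
            },
        },
    ],
}

# 512-engine validation from above: 262,144 <= height * width <= 1,048,576
d = payload["stages"][1]["diffusion"]
assert 262_144 <= d["height"] * d["width"] <= 1_048_576
print(len(payload["stages"]))  # 2
```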
Ensures request idempotence; should be unique.
stages object[] required
The stages to be executed.
Possible values: [DEFAULT, INPUT_INITIALIZE, DIFFUSION, IMAGE_TO_UPSCALER, IMAGE_TO_ADETAILER, IMAGE_TO_INPAINT, IMAGE_TO_ANIMATE_DIFF]
Default value: DEFAULT
The stage type.
inputInitialize object
Possible values: <= 4294967295. Default value: 0.
Random noise seed (omit this option or use 0 for a random seed).
Image used to initialize the diffusion process, in lieu of random noise.
Possible values: >= 1 and <= 4. Default value: 1.
Number of images to generate.
diffusion object
Possible values: >= 512 and <= 1536. Default value: 512.
Height of the image in pixels. Must be in increments of 64 and pass the engine-specific size validation listed above.
Possible values: >= 512 and <= 1536. Default value: 512.
Width of the image in pixels. Must be in increments of 64 and pass the engine-specific size validation listed above.
prompts object[] required
Possible values: <= 150
An array of text prompts to use for generation. A prompt with the text "A lighthouse on a cliff" and a weight of 0.5 is represented as an object carrying that text and weight.
negativePrompts object[] required
Possible values: <= 150
An array of negative text prompts to use for generation, represented the same way.
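The inline representation example was lost in extraction; it would plausibly look like this, assuming `text`/`weight` object keys as the description implies:

```json
{
  "prompts": [
    { "text": "A lighthouse on a cliff", "weight": 0.5 }
  ]
}
```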
The model to use for the diffusion. How to get the model id
The VAE to use for the diffusion. Support list
Which sampler to use for the diffusion process. If this value is omitted, an appropriate sampler is selected automatically. Support list
Possible values: >= 1 and <= 60. Default value: 0.
Number of diffusion steps to run.
Possible values: <= 30. Default value: 7.
How strictly the diffusion process adheres to the prompt text (higher values keep your image closer to your prompt).
controlnet object
args object[]
The model to use for the controlnet preprocessor. Support list
The model to use for the controlnet. Support list
Possible values: [DEFAULT, JUST_RESIZE, CROP_AND_RESIZE, RESIZE_AND_FILL]
Default value: DEFAULT
Possible values: [DEFAULT, BALANCED, MY_PROMPT_IS_MORE_IMPORTANT, CONTROLNET_IS_MORE_IMPORTANT]
Default value: DEFAULT
lora object
items object[]
The model to use for the diffusion. How to get the model id
LoRA block weight, in the form <weight>:lbw=<layer weights>, for example: "1:lbw=1,1,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0"
animateDiff object
args object[]
embedding object
items object[]
The model to use for the negative prompt embedding. How to get the model id
layerDiffusion object
imageToUpscaler object
The model to use for the upscaling. Support list
Possible values: >= 128 and <= 5120.
hr_scale or hr_resize_x must be specified; if hr_scale is specified, hr_resize_x is ignored. Width of the upscaled image in pixels. Must be in increments of 64 and satisfy 262,144 ≤ hr_resize_x * hr_resize_y ≤ 8,294,400.
Possible values: >= 128 and <= 5120.
hr_scale or hr_resize_y must be specified; if hr_scale is specified, hr_resize_y is ignored. Height of the upscaled image in pixels. Must be in increments of 64 and satisfy 262,144 ≤ hr_resize_x * hr_resize_y ≤ 8,294,400.
The scale to use for the upscaling.
Possible values: <= 60
Number of diffusion steps to run.
Possible values: <= 1
denoising_strength
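The upscaler size rules above can be checked client-side before submitting a job; a small sketch of our own (not part of the API) under the stated constraints:

```python
def valid_upscale_size(hr_resize_x: int, hr_resize_y: int) -> bool:
    """Check the hr_resize rules stated above: each dimension a multiple
    of 64 within 128..5120, and pixel count within 262,144..8,294,400."""
    for v in (hr_resize_x, hr_resize_y):
        if v % 64 != 0 or not (128 <= v <= 5120):
            return False
    return 262_144 <= hr_resize_x * hr_resize_y <= 8_294_400

print(valid_upscale_size(1024, 1024))  # True
print(valid_upscale_size(1000, 1024))  # False: 1000 is not a multiple of 64
```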
diffusion object
If the job already includes a diffusion stage, this diffusion object is ignored; otherwise it must be specified.
Possible values: >= 512 and <= 1536. Default value: 512.
Height of the image in pixels. Must be in increments of 64 and pass the engine-specific size validation listed above.
Possible values: >= 512 and <= 1536. Default value: 512.
Width of the image in pixels. Must be in increments of 64 and pass the engine-specific size validation listed above.
prompts object[] required
Possible values: <= 150
An array of text prompts to use for generation. A prompt with the text "A lighthouse on a cliff" and a weight of 0.5 is represented as an object carrying that text and weight.
negativePrompts object[] required
Possible values: <= 150
An array of negative text prompts to use for generation, represented the same way.
The model to use for the diffusion. How to get the model id
The VAE to use for the diffusion. Support list
Which sampler to use for the diffusion process. If this value is omitted, an appropriate sampler is selected automatically. Support list
Possible values: >= 1 and <= 60. Default value: 0.
Number of diffusion steps to run.
Possible values: <= 30. Default value: 7.
How strictly the diffusion process adheres to the prompt text (higher values keep your image closer to your prompt).
controlnet object
args object[]
The model to use for the controlnet preprocessor. Support list
The model to use for the controlnet. Support list
Possible values: [DEFAULT, JUST_RESIZE, CROP_AND_RESIZE, RESIZE_AND_FILL]
Default value: DEFAULT
Possible values: [DEFAULT, BALANCED, MY_PROMPT_IS_MORE_IMPORTANT, CONTROLNET_IS_MORE_IMPORTANT]
Default value: DEFAULT
lora object
items object[]
The model to use for the diffusion. How to get the model id
LoRA block weight, in the form <weight>:lbw=<layer weights>, for example: "1:lbw=1,1,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0"
animateDiff object
args object[]
embedding object
items object[]
The model to use for the negative prompt embedding. How to get the model id
layerDiffusion object
imageToAdetailer object
args object[]
The model to use for the adetailer. Support list
adPrompt object[]
adNegativePrompt object[]
Default value: 4
Default value: None
Default value: 0.4
Default value: true
Default value: 32
Default value: false
Default value: 512
Default value: 512
Default value: false
Default value: 20
Default value: false
Default value: 7
lora object
items object[]
The model to use for the diffusion. How to get the model id
LoRA block weight, in the form <weight>:lbw=<layer weights>, for example: "1:lbw=1,1,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0"
diffusion object
Possible values: >= 512 and <= 1536. Default value: 512.
Height of the image in pixels. Must be in increments of 64 and pass the engine-specific size validation listed above.
Possible values: >= 512 and <= 1536. Default value: 512.
Width of the image in pixels. Must be in increments of 64 and pass the engine-specific size validation listed above.
prompts object[] required
Possible values: <= 150
An array of text prompts to use for generation. A prompt with the text "A lighthouse on a cliff" and a weight of 0.5 is represented as an object carrying that text and weight.
negativePrompts object[] required
Possible values: <= 150
An array of negative text prompts to use for generation, represented the same way.
The model to use for the diffusion. How to get the model id
The VAE to use for the diffusion. Support list
Which sampler to use for the diffusion process. If this value is omitted, an appropriate sampler is selected automatically. Support list
Possible values: >= 1 and <= 60. Default value: 0.
Number of diffusion steps to run.
Possible values: <= 30. Default value: 7.
How strictly the diffusion process adheres to the prompt text (higher values keep your image closer to your prompt).
controlnet object
args object[]
The model to use for the controlnet preprocessor. Support list
The model to use for the controlnet. Support list
Possible values: [DEFAULT, JUST_RESIZE, CROP_AND_RESIZE, RESIZE_AND_FILL]
Default value: DEFAULT
Possible values: [DEFAULT, BALANCED, MY_PROMPT_IS_MORE_IMPORTANT, CONTROLNET_IS_MORE_IMPORTANT]
Default value: DEFAULT
lora object
items object[]
The model to use for the diffusion. How to get the model id
LoRA block weight, in the form <weight>:lbw=<layer weights>, for example: "1:lbw=1,1,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0"
animateDiff object
args object[]
embedding object
items object[]
The model to use for the negative prompt embedding. How to get the model id
layerDiffusion object
imageToInpaint object
Possible values: [DEFAULT, JUST_RESIZE, CROP_AND_RESIZE, RESIZE_AND_FILL, JUST_RESIZE_LATENT_UPSCALE]
Default value: DEFAULT
Possible values: [DEFAULT, FILL, ORIGINAL, LATENT_NOISE, LATENT_NOTHING]
Default value: DEFAULT
diffusion object
Possible values: >= 512 and <= 1536. Default value: 512.
Height of the image in pixels. Must be in increments of 64 and pass the engine-specific size validation listed above.
Possible values: >= 512 and <= 1536. Default value: 512.
Width of the image in pixels. Must be in increments of 64 and pass the engine-specific size validation listed above.
prompts object[] required
Possible values: <= 150
An array of text prompts to use for generation. A prompt with the text "A lighthouse on a cliff" and a weight of 0.5 is represented as an object carrying that text and weight.
negativePrompts object[] required
Possible values: <= 150
An array of negative text prompts to use for generation, represented the same way.
The model to use for the diffusion. How to get the model id
The VAE to use for the diffusion. Support list
Which sampler to use for the diffusion process. If this value is omitted, an appropriate sampler is selected automatically. Support list
Possible values: >= 1 and <= 60. Default value: 0.
Number of diffusion steps to run.
Possible values: <= 30. Default value: 7.
How strictly the diffusion process adheres to the prompt text (higher values keep your image closer to your prompt).
controlnet object
args object[]
The model to use for the controlnet preprocessor. Support list
The model to use for the controlnet. Support list
Possible values: [DEFAULT, JUST_RESIZE, CROP_AND_RESIZE, RESIZE_AND_FILL]
Default value: DEFAULT
Possible values: [DEFAULT, BALANCED, MY_PROMPT_IS_MORE_IMPORTANT, CONTROLNET_IS_MORE_IMPORTANT]
Default value: DEFAULT
lora object
items object[]
The model to use for the diffusion. How to get the model id
LoRA block weight, in the form <weight>:lbw=<layer weights>, for example: "1:lbw=1,1,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0"
animateDiff object
args object[]
embedding object
items object[]
The model to use for the negative prompt embedding. How to get the model id
layerDiffusion object
Responses
- 200
- 400
- default
OK
- application/json
- Schema
- Example (from schema)
Schema
job object
Job id.
Possible values: [DEFAULT, CREATED, PENDING, RUNNING, CANCELED, SUCCESS, FAILED, WAITING]
Default value: DEFAULT
Job status.
waitingInfo object
Waiting info; returned when status is WAITING.
failedInfo object
Failed info; returned when status is FAILED.
Possible values: [DEFAULT, TRANSITORY_ERROR, FATAL_ERROR]
Default value: DEFAULT
runningInfo object
Running info; returned when status is RUNNING.
processingImages object[]
resourceImage object
meta object
image object
workflowFinishItem object
Possible values: [DEFAULT, INIT, RUNNING, SUCCESS, FAILED]
Default value: DEFAULT
nodes object
property name* object
Possible values: [DEFAULT, INIT, RUNNING, SUCCESS, FAILED]
Default value: DEFAULT
outputUi object
images object[]
finishedNodes object
successInfo object
Success info; returned when status is SUCCESS.
images object[]
Final output images.
meta object
image object
videos object[]
Final output videos.
meta object
image object
workflowFinishItem object
Possible values: [DEFAULT, INIT, RUNNING, SUCCESS, FAILED]
Default value: DEFAULT
nodes object
property name* object
Possible values: [DEFAULT, INIT, RUNNING, SUCCESS, FAILED]
Default value: DEFAULT
outputUi object
images object[]
finishedNodes object
{
"job": {
"id": "string",
"status": "DEFAULT",
"credits": 0,
"waitingInfo": {
"queueRank": "string",
"queueLen": "string"
},
"failedInfo": {
"reason": "string",
"code": "DEFAULT"
},
"runningInfo": {
"processingImages": [
{
"resourceImage": {
"id": "string",
"url": "string",
"expiredIn": "string",
"meta": {
"image": {
"format": "string",
"width": 0,
"height": 0
}
}
},
"progress": 0
}
],
"workflowFinishItem": {
"status": "DEFAULT",
"progress": 0,
"step": 0,
"id": "string",
"ctime": "string",
"mtime": "string",
"nodes": {},
"finishedNodes": {}
}
},
"successInfo": {
"images": [
{
"id": "string",
"url": "string",
"expiredIn": "string",
"meta": {
"image": {
"format": "string",
"width": 0,
"height": 0
}
}
}
],
"videos": [
{
"id": "string",
"url": "string",
"expiredIn": "string",
"meta": {
"image": {
"format": "string",
"width": 0,
"height": 0
}
}
}
],
"workflowFinishItem": {
"status": "DEFAULT",
"progress": 0,
"step": 0,
"id": "string",
"ctime": "string",
"mtime": "string",
"nodes": {},
"finishedNodes": {}
}
}
}
}
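A client typically dispatches on `status` and reads the matching info object shown in the example above; a sketch of that handling (a helper of our own, not part of the API):

```python
# Client-side handling of the job payload's status-specific info objects.
# Terminal states per the status enum: SUCCESS, FAILED, CANCELED.
TERMINAL = {"SUCCESS", "FAILED", "CANCELED"}

def summarize(job: dict) -> str:
    """Return a one-line summary of a job payload's current state."""
    status = job.get("status", "DEFAULT")
    if status == "WAITING":
        w = job.get("waitingInfo", {})
        return f"queued at rank {w.get('queueRank')} of {w.get('queueLen')}"
    if status == "FAILED":
        f = job.get("failedInfo", {})
        return f"failed ({f.get('code')}): {f.get('reason')}"
    if status == "SUCCESS":
        imgs = job.get("successInfo", {}).get("images", [])
        return f"done, {len(imgs)} image(s)"
    return status

print(summarize({"status": "SUCCESS", "successInfo": {"images": [{}]}}))
# done, 1 image(s)
```

A polling loop would call this until `status` lands in `TERMINAL`.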
Bad Request: a general error for invalid parameters. More specific errors:
- invalid_samples: Sample count may only be greater than 1
- invalid_height_or_width: Height and width must be specified in increments of 64
- application/json
- Schema
Schema
- any
Default
- application/json
- Schema
- Example (from schema)
Schema
- Array [
- ]
details object[]
{
"code": 0,
"message": "string",
"details": [
{
"@type": "string"
}
]
}