
v1/jobs/credits

POST /v1/jobs/credits

Check the credit cost of a job before submitting it.

Request

Body

required
    stages object[] required

    The stages to be executed.

  • Array [
  • type string required

    Possible values: [DEFAULT, INPUT_INITIALIZE, DIFFUSION, IMAGE_TO_UPSCALER, IMAGE_TO_ADETAILER, IMAGE_TO_INPAINT, IMAGE_TO_ANIMATE_DIFF]

    Default value: DEFAULT

    The stage type.

    inputInitialize object
    seed int64

    Possible values: <= 4294967295

    Default value: 0

    Random noise seed (omit this option or use 0 for a random seed).

    imageResourceId string

    Image used to initialize the diffusion process, in lieu of random noise.

    count int32

    Possible values: >= 1 and <= 4

    Default value: 1

    Number of images to generate
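
    A minimal INPUT_INITIALIZE stage using these fields might look like the sketch below (illustrative values; this assumes each stage object pairs its type with the matching sub-object, as the schema suggests):

        {
          "type": "INPUT_INITIALIZE",
          "inputInitialize": { "seed": 0, "count": 1 }
        }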

    diffusion object
    width int64 required

    Possible values: >= 512 and <= 1536

    Default value: 512

    Width of the image in pixels. Must be in increments of 64 and pass the following validation (for example, 512 * 512 = 262,144 meets the lower bound for 512 engines):

    • For 512 engines: 262,144 ≤ height * width ≤ 1,048,576, maximum dimension 1024
    • For 768 engines: 589,824 ≤ height * width ≤ 1,048,576, maximum dimension 1024
    • For SDXL v1.0: 262,144 ≤ height * width ≤ 2,073,600, maximum dimension 1536
    height int64 required

    Possible values: >= 512 and <= 1536

    Default value: 512

    Height of the image in pixels. Must be in increments of 64 and pass the following validation:

    • For 512 engines: 262,144 ≤ height * width ≤ 1,048,576, maximum dimension 1024
    • For 768 engines: 589,824 ≤ height * width ≤ 1,048,576, maximum dimension 1024
    • For SDXL v1.0: 262,144 ≤ height * width ≤ 2,073,600, maximum dimension 1536
    prompts object[] required

    Possible values: <= 150

    An array of text prompts to use for generation. For example, a prompt with the text "A lighthouse on a cliff" and a weight of 0.5 would be represented as {"text": "A lighthouse on a cliff", "weight": 0.5}.

  • Array [
  • text string
    weight float
  • ]
  • negativePrompts object[] required

    Possible values: <= 150

    An array of negative text prompts to use for generation. For example, a negative prompt with the text "A lighthouse on a cliff" and a weight of 0.5 would be represented as {"text": "A lighthouse on a cliff", "weight": 0.5}.

  • Array [
  • text string
    weight float
  • ]
  • sdModel string

    The model to use for diffusion (see: How to get the model id).

    sdVae string

    The VAE to use for diffusion (see: Support list).

    sampler string

    Which sampler to use for the diffusion process. If this value is omitted, an appropriate sampler is selected automatically (see: Support list).

    steps int32

    Possible values: >= 1 and <= 60

    Default value: 0

    Number of diffusion steps to run.

    cfgScale float

    Possible values: <= 30

    Default value: 7

    How strictly the diffusion process adheres to the prompt text (higher values keep your image closer to your prompt)

    clipSkip int32
    denoisingStrength float
    etaNoiseSeedDelta int32
    controlnet object
    args object[]
  • Array [
  • inputImageResourceId string
    maskResourceId string
    preprocessor string required

    The model to use for the controlnet preprocessor (see: Support list).

    model string required

    The model to use for the controlnet (see: Support list).

    weight float
    resizeMode string

    Possible values: [DEFAULT, JUST_RESIZE, CROP_AND_RESIZE, RESIZE_AND_FILL]

    Default value: DEFAULT

    guidance float
    guidanceStart float
    guidanceEnd float
    controlMode string

    Possible values: [DEFAULT, BALANCED, MY_PROMPT_IS_MORE_IMPORTANT, CONTROLNET_IS_MORE_IMPORTANT]

    Default value: DEFAULT

    pixelPerfect boolean
    preprocessorParams object
  • ]
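
    For reference, a single controlnet args entry might look like the following sketch (resource and model identifiers are placeholders, not real IDs):

        {
          "inputImageResourceId": "<image-resource-id>",
          "preprocessor": "<preprocessor-name>",
          "model": "<controlnet-model-id>",
          "weight": 1.0,
          "resizeMode": "CROP_AND_RESIZE",
          "controlMode": "BALANCED",
          "pixelPerfect": true
        }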
  • lora object
    items object[]
  • Array [
  • loraModel string

    The LoRA model to use for diffusion (see: How to get the model id).

    weight float
    blockWeight string

    LoRA block weight, in the form <weight>:lbw=<layer weights>; for example: "1:lbw=1,1,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0"

    loraAccessKey string
  • ]
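
    A lora items entry combining these fields might be sketched as follows (the model ID is a placeholder; the blockWeight value reuses the example above):

        {
          "loraModel": "<lora-model-id>",
          "weight": 0.8,
          "blockWeight": "1:lbw=1,1,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0"
        }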
  • animateDiff object
    args object[]
  • Array [
  • videoLength int32
    fps int32
  • ]
  • embedding object
    items object[]
  • Array [
  • model string

    The embedding model to use for negative prompts (see: How to get the model id).

    weight float
    embeddingAccessKey string
  • ]
  • v1Clip boolean
    modelAccessKey string
    scheduleName string
    enableElla boolean

    TODO: refactor into stages. The underlying layer does not support this yet, so the optimization currently has no effect; for now it is only supported during inference.

    enablePix2pix boolean
    useHunyuanDit boolean (deprecated)
    layerDiffusion object
    enable boolean
    weight float
    clipEncoderName string
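
    Putting the fields above together, a DIFFUSION stage might look like this sketch (the model ID is a placeholder; 512 * 768 = 393,216 satisfies the size validation):

        {
          "type": "DIFFUSION",
          "diffusion": {
            "width": 512,
            "height": 768,
            "prompts": [{ "text": "A lighthouse on a cliff", "weight": 0.5 }],
            "negativePrompts": [{ "text": "blurry", "weight": 1.0 }],
            "sdModel": "<model-id>",
            "steps": 25,
            "cfgScale": 7
          }
        }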
    imageToUpscaler object
    hrUpscaler string required

    The model to use for upscaling (see: Support list).

    hrResizeX int64

    Possible values: >= 128 and <= 5120

    Either hrScale or hrResizeX must be specified; if hrScale is specified, hrResizeX is ignored. Width of the upscaled image in pixels. Must be in increments of 64 and pass the following validation:

    • 262,144 ≤ hr_resize_x * hr_resize_y ≤ 8,294,400
    hrResizeY int64

    Possible values: >= 128 and <= 5120

    Either hrScale or hrResizeY must be specified; if hrScale is specified, hrResizeY is ignored. Height of the upscaled image in pixels. Must be in increments of 64 and pass the following validation:

    • 262,144 ≤ hr_resize_x * hr_resize_y ≤ 8,294,400
    hrScale double

    The scale factor to use for upscaling. The resulting dimensions must pass the following validation:

    • 262,144 ≤ hr_resize_x * hr_resize_y ≤ 8,294,400
    hrSecondPassSteps int32 required

    Possible values: <= 60

    Number of diffusion steps to run for the second (upscale) pass.

    denoisingStrength float required

    Possible values: <= 1

    Denoising strength for the upscale pass.

    diffusion object

    If the job already includes a DIFFUSION stage, this object is ignored; otherwise it must be specified.

    width int64 required

    Possible values: >= 512 and <= 1536

    Default value: 512

    Width of the image in pixels. Must be in increments of 64 and pass the following validation:

    • For 512 engines: 262,144 ≤ height * width ≤ 1,048,576, maximum dimension 1024
    • For 768 engines: 589,824 ≤ height * width ≤ 1,048,576, maximum dimension 1024
    • For SDXL v1.0: 262,144 ≤ height * width ≤ 2,073,600, maximum dimension 1536
    height int64 required

    Possible values: >= 512 and <= 1536

    Default value: 512

    Height of the image in pixels. Must be in increments of 64 and pass the following validation:

    • For 512 engines: 262,144 ≤ height * width ≤ 1,048,576, maximum dimension 1024
    • For 768 engines: 589,824 ≤ height * width ≤ 1,048,576, maximum dimension 1024
    • For SDXL v1.0: 262,144 ≤ height * width ≤ 2,073,600, maximum dimension 1536
    prompts object[] required

    Possible values: <= 150

    An array of text prompts to use for generation. For example, a prompt with the text "A lighthouse on a cliff" and a weight of 0.5 would be represented as {"text": "A lighthouse on a cliff", "weight": 0.5}.

  • Array [
  • text string
    weight float
  • ]
  • negativePrompts object[] required

    Possible values: <= 150

    An array of negative text prompts to use for generation. For example, a negative prompt with the text "A lighthouse on a cliff" and a weight of 0.5 would be represented as {"text": "A lighthouse on a cliff", "weight": 0.5}.

  • Array [
  • text string
    weight float
  • ]
  • sdModel string

    The model to use for diffusion (see: How to get the model id).

    sdVae string

    The VAE to use for diffusion (see: Support list).

    sampler string

    Which sampler to use for the diffusion process. If this value is omitted, an appropriate sampler is selected automatically (see: Support list).

    steps int32

    Possible values: >= 1 and <= 60

    Default value: 0

    Number of diffusion steps to run.

    cfgScale float

    Possible values: <= 30

    Default value: 7

    How strictly the diffusion process adheres to the prompt text (higher values keep your image closer to your prompt)

    clipSkip int32
    denoisingStrength float
    etaNoiseSeedDelta int32
    controlnet object
    args object[]
  • Array [
  • inputImageResourceId string
    maskResourceId string
    preprocessor string required

    The model to use for the controlnet preprocessor (see: Support list).

    model string required

    The model to use for the controlnet (see: Support list).

    weight float
    resizeMode string

    Possible values: [DEFAULT, JUST_RESIZE, CROP_AND_RESIZE, RESIZE_AND_FILL]

    Default value: DEFAULT

    guidance float
    guidanceStart float
    guidanceEnd float
    controlMode string

    Possible values: [DEFAULT, BALANCED, MY_PROMPT_IS_MORE_IMPORTANT, CONTROLNET_IS_MORE_IMPORTANT]

    Default value: DEFAULT

    pixelPerfect boolean
    preprocessorParams object
  • ]
  • lora object
    items object[]
  • Array [
  • loraModel string

    The LoRA model to use for diffusion (see: How to get the model id).

    weight float
    blockWeight string

    LoRA block weight, in the form <weight>:lbw=<layer weights>; for example: "1:lbw=1,1,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0"

    loraAccessKey string
  • ]
  • animateDiff object
    args object[]
  • Array [
  • videoLength int32
    fps int32
  • ]
  • embedding object
    items object[]
  • Array [
  • model string

    The embedding model to use for negative prompts (see: How to get the model id).

    weight float
    embeddingAccessKey string
  • ]
  • v1Clip boolean
    modelAccessKey string
    scheduleName string
    enableElla boolean

    TODO: refactor into stages. The underlying layer does not support this yet, so the optimization currently has no effect; for now it is only supported during inference.

    enablePix2pix boolean
    useHunyuanDit boolean (deprecated)
    layerDiffusion object
    enable boolean
    weight float
    clipEncoderName string
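
    An IMAGE_TO_UPSCALER stage might be sketched as follows (the upscaler name is a placeholder; the nested diffusion object is omitted on the assumption that the job already contains a DIFFUSION stage, per the note above):

        {
          "type": "IMAGE_TO_UPSCALER",
          "imageToUpscaler": {
            "hrUpscaler": "<upscaler-name>",
            "hrScale": 2,
            "hrSecondPassSteps": 10,
            "denoisingStrength": 0.5
          }
        }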
    imageToAdetailer object
    args object[]
  • Array [
  • adModel string

    The model to use for the ADetailer (see: Support list).

    adPrompt object[]
  • Array [
  • text string
    weight float
  • ]
  • adNegativePrompt object[]
  • Array [
  • text string
    weight float
  • ]
  • adConfidence float
    adDilateErode int32

    Default value: 4

    adMaskMergeInvert string

    Default value: None

    adDenoisingStrength float

    Default value: 0.4

    adInpaintOnlyMasked boolean

    Default value: true

    adInpaintOnlyMaskedPadding float

    Default value: 32

    adUseInpaintWidthHeight boolean

    Default value: false

    adInpaintWidth int32

    Default value: 512

    adInpaintHeight int32

    Default value: 512

    adUseSteps boolean

    Default value: false

    adSteps int32

    Default value: 20

    adUseCfgScale boolean

    Default value: false

    adCfgScale float

    Default value: 7

    lora object
    items object[]
  • Array [
  • loraModel string

    The LoRA model to use for diffusion (see: How to get the model id).

    weight float
    blockWeight string

    LoRA block weight, in the form <weight>:lbw=<layer weights>; for example: "1:lbw=1,1,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0"

    loraAccessKey string
  • ]
  • adUseCheckpoint boolean
    adCheckpoint string
    adUseSampler boolean
    adUseNoiseMultiplier boolean
    adNoiseMultiplier float
    adUseClipSkip boolean
    adClipSkip int32
    adSampler string
  • ]
  • diffusion object
    width int64 required

    Possible values: >= 512 and <= 1536

    Default value: 512

    Width of the image in pixels. Must be in increments of 64 and pass the following validation:

    • For 512 engines: 262,144 ≤ height * width ≤ 1,048,576, maximum dimension 1024
    • For 768 engines: 589,824 ≤ height * width ≤ 1,048,576, maximum dimension 1024
    • For SDXL v1.0: 262,144 ≤ height * width ≤ 2,073,600, maximum dimension 1536
    height int64 required

    Possible values: >= 512 and <= 1536

    Default value: 512

    Height of the image in pixels. Must be in increments of 64 and pass the following validation:

    • For 512 engines: 262,144 ≤ height * width ≤ 1,048,576, maximum dimension 1024
    • For 768 engines: 589,824 ≤ height * width ≤ 1,048,576, maximum dimension 1024
    • For SDXL v1.0: 262,144 ≤ height * width ≤ 2,073,600, maximum dimension 1536
    prompts object[] required

    Possible values: <= 150

    An array of text prompts to use for generation. For example, a prompt with the text "A lighthouse on a cliff" and a weight of 0.5 would be represented as {"text": "A lighthouse on a cliff", "weight": 0.5}.

  • Array [
  • text string
    weight float
  • ]
  • negativePrompts object[] required

    Possible values: <= 150

    An array of negative text prompts to use for generation. For example, a negative prompt with the text "A lighthouse on a cliff" and a weight of 0.5 would be represented as {"text": "A lighthouse on a cliff", "weight": 0.5}.

  • Array [
  • text string
    weight float
  • ]
  • sdModel string

    The model to use for diffusion (see: How to get the model id).

    sdVae string

    The VAE to use for diffusion (see: Support list).

    sampler string

    Which sampler to use for the diffusion process. If this value is omitted, an appropriate sampler is selected automatically (see: Support list).

    steps int32

    Possible values: >= 1 and <= 60

    Default value: 0

    Number of diffusion steps to run.

    cfgScale float

    Possible values: <= 30

    Default value: 7

    How strictly the diffusion process adheres to the prompt text (higher values keep your image closer to your prompt)

    clipSkip int32
    denoisingStrength float
    etaNoiseSeedDelta int32
    controlnet object
    args object[]
  • Array [
  • inputImageResourceId string
    maskResourceId string
    preprocessor string required

    The model to use for the controlnet preprocessor (see: Support list).

    model string required

    The model to use for the controlnet (see: Support list).

    weight float
    resizeMode string

    Possible values: [DEFAULT, JUST_RESIZE, CROP_AND_RESIZE, RESIZE_AND_FILL]

    Default value: DEFAULT

    guidance float
    guidanceStart float
    guidanceEnd float
    controlMode string

    Possible values: [DEFAULT, BALANCED, MY_PROMPT_IS_MORE_IMPORTANT, CONTROLNET_IS_MORE_IMPORTANT]

    Default value: DEFAULT

    pixelPerfect boolean
    preprocessorParams object
  • ]
  • lora object
    items object[]
  • Array [
  • loraModel string

    The LoRA model to use for diffusion (see: How to get the model id).

    weight float
    blockWeight string

    LoRA block weight, in the form <weight>:lbw=<layer weights>; for example: "1:lbw=1,1,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0"

    loraAccessKey string
  • ]
  • animateDiff object
    args object[]
  • Array [
  • videoLength int32
    fps int32
  • ]
  • embedding object
    items object[]
  • Array [
  • model string

    The embedding model to use for negative prompts (see: How to get the model id).

    weight float
    embeddingAccessKey string
  • ]
  • v1Clip boolean
    modelAccessKey string
    scheduleName string
    enableElla boolean

    TODO: refactor into stages. The underlying layer does not support this yet, so the optimization currently has no effect; for now it is only supported during inference.

    enablePix2pix boolean
    useHunyuanDit boolean (deprecated)
    layerDiffusion object
    enable boolean
    weight float
    clipEncoderName string
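
    An IMAGE_TO_ADETAILER stage with a single args entry might be sketched as follows (the ADetailer model name is a placeholder):

        {
          "type": "IMAGE_TO_ADETAILER",
          "imageToAdetailer": {
            "args": [
              {
                "adModel": "<adetailer-model>",
                "adPrompt": [{ "text": "detailed face", "weight": 1.0 }],
                "adDenoisingStrength": 0.4
              }
            ]
          }
        }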
    imageToInpaint object
    resizeMode string (example: JUST_RESIZE)

    Possible values: [DEFAULT, JUST_RESIZE, CROP_AND_RESIZE, RESIZE_AND_FILL, JUST_RESIZE_LATENT_UPSCALE]

    Default value: DEFAULT

    maskImageResourceId string
    maskBlur float
    inpaintingFill string (example: ORIGINAL)

    Possible values: [DEFAULT, FILL, ORIGINAL, LATENT_NOISE, LATENT_NOTHING]

    Default value: DEFAULT

    inpaintFullRes boolean (example: true)
    inpaintFullResPadding int64
    inpaintMaskInvert int64
    diffusion object
    width int64 required

    Possible values: >= 512 and <= 1536

    Default value: 512

    Width of the image in pixels. Must be in increments of 64 and pass the following validation:

    • For 512 engines: 262,144 ≤ height * width ≤ 1,048,576, maximum dimension 1024
    • For 768 engines: 589,824 ≤ height * width ≤ 1,048,576, maximum dimension 1024
    • For SDXL v1.0: 262,144 ≤ height * width ≤ 2,073,600, maximum dimension 1536
    height int64 required

    Possible values: >= 512 and <= 1536

    Default value: 512

    Height of the image in pixels. Must be in increments of 64 and pass the following validation:

    • For 512 engines: 262,144 ≤ height * width ≤ 1,048,576, maximum dimension 1024
    • For 768 engines: 589,824 ≤ height * width ≤ 1,048,576, maximum dimension 1024
    • For SDXL v1.0: 262,144 ≤ height * width ≤ 2,073,600, maximum dimension 1536
    prompts object[] required

    Possible values: <= 150

    An array of text prompts to use for generation. For example, a prompt with the text "A lighthouse on a cliff" and a weight of 0.5 would be represented as {"text": "A lighthouse on a cliff", "weight": 0.5}.

  • Array [
  • text string
    weight float
  • ]
  • negativePrompts object[] required

    Possible values: <= 150

    An array of negative text prompts to use for generation. For example, a negative prompt with the text "A lighthouse on a cliff" and a weight of 0.5 would be represented as {"text": "A lighthouse on a cliff", "weight": 0.5}.

  • Array [
  • text string
    weight float
  • ]
  • sdModel string

    The model to use for diffusion (see: How to get the model id).

    sdVae string

    The VAE to use for diffusion (see: Support list).

    sampler string

    Which sampler to use for the diffusion process. If this value is omitted, an appropriate sampler is selected automatically (see: Support list).

    steps int32

    Possible values: >= 1 and <= 60

    Default value: 0

    Number of diffusion steps to run.

    cfgScale float

    Possible values: <= 30

    Default value: 7

    How strictly the diffusion process adheres to the prompt text (higher values keep your image closer to your prompt)

    clipSkip int32
    denoisingStrength float
    etaNoiseSeedDelta int32
    controlnet object
    args object[]
  • Array [
  • inputImageResourceId string
    maskResourceId string
    preprocessor string required

    The model to use for the controlnet preprocessor (see: Support list).

    model string required

    The model to use for the controlnet (see: Support list).

    weight float
    resizeMode string

    Possible values: [DEFAULT, JUST_RESIZE, CROP_AND_RESIZE, RESIZE_AND_FILL]

    Default value: DEFAULT

    guidance float
    guidanceStart float
    guidanceEnd float
    controlMode string

    Possible values: [DEFAULT, BALANCED, MY_PROMPT_IS_MORE_IMPORTANT, CONTROLNET_IS_MORE_IMPORTANT]

    Default value: DEFAULT

    pixelPerfect boolean
    preprocessorParams object
  • ]
  • lora object
    items object[]
  • Array [
  • loraModel string

    The LoRA model to use for diffusion (see: How to get the model id).

    weight float
    blockWeight string

    LoRA block weight, in the form <weight>:lbw=<layer weights>; for example: "1:lbw=1,1,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0"

    loraAccessKey string
  • ]
  • animateDiff object
    args object[]
  • Array [
  • videoLength int32
    fps int32
  • ]
  • embedding object
    items object[]
  • Array [
  • model string

    The embedding model to use for negative prompts (see: How to get the model id).

    weight float
    embeddingAccessKey string
  • ]
  • v1Clip boolean
    modelAccessKey string
    scheduleName string
    enableElla boolean

    TODO: refactor into stages. The underlying layer does not support this yet, so the optimization currently has no effect; for now it is only supported during inference.

    enablePix2pix boolean
    useHunyuanDit boolean (deprecated)
    layerDiffusion object
    enable boolean
    weight float
    clipEncoderName string
  • ]
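
Assembling the pieces, a complete request body for this endpoint might look like the sketch below (model and resource IDs are placeholders). This body describes the job whose credit cost you want to check; additional stages such as IMAGE_TO_UPSCALER or IMAGE_TO_INPAINT would be appended to the same stages array:

    {
      "stages": [
        {
          "type": "INPUT_INITIALIZE",
          "inputInitialize": { "seed": 0, "count": 1 }
        },
        {
          "type": "DIFFUSION",
          "diffusion": {
            "width": 512,
            "height": 768,
            "prompts": [{ "text": "A lighthouse on a cliff", "weight": 0.5 }],
            "negativePrompts": [{ "text": "blurry", "weight": 1.0 }],
            "sdModel": "<model-id>",
            "steps": 25,
            "cfgScale": 7
          }
        }
      ]
    }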

Responses

OK

Schema
    credits double
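
Per the schema above, a successful response carries a single credits field. An illustrative (not authoritative) example:

    { "credits": 0.8 }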