AIone API (English)

    Gemini Image Generation#

    Overview#

    AIone supports Gemini image generation models, accessible through the standard OpenAI-compatible /v1/chat/completions endpoint.
    These models are internally routed through the Gemini native image generation pipeline, with responses converted back to the OpenAI-compatible format. Your request format stays the same, but you can use additional image-specific parameters.

    Available Image Models#

    The following Gemini image models are currently available:
    gemini-3-pro-image-preview
    gemini-3-pro-image-preview-2k
    gemini-3-pro-image-preview-4k
    gemini-2.5-flash-image
    gemini-3.1-flash-image-preview
    If you are unsure about a model name, query GET https://api.nexara.net/v1/models or check the Portal model list page.

    Minimal Example#
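    A minimal request needs only a model, a prompt, and a token budget. A sketch of such a request body (the prompt text is illustrative):

```json
{
  "model": "gemini-3-pro-image-preview",
  "messages": [
    {"role": "user", "content": "Draw a cat sitting by a window"}
  ],
  "max_tokens": 4096
}
```

    POST this to /v1/chat/completions with your usual Authorization header; no image-specific parameters are required.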

    Notes:
    aspect_ratio and image_size are optional; images are still generated when they are omitted
    The response uses the standard OpenAI chat.completion structure

    Example with Aspect Ratio and Size Parameters#
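    A sketch of the same request with both optional parameters set. The values "16:9" and "2K" are illustrative placeholders; check the Portal model list for the values each model accepts:

```json
{
  "model": "gemini-3-pro-image-preview",
  "messages": [
    {"role": "user", "content": "Draw a cat sitting by a window"}
  ],
  "aspect_ratio": "16:9",
  "image_size": "2K",
  "max_tokens": 4096
}
```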

    Supported Request Parameters#

    General Parameters#

    model: Image model ID
    messages: Conversation message array, following the standard OpenAI Chat Completions format
    max_tokens / max_completion_tokens: Mapped to Gemini's native maxOutputTokens
    temperature
    top_p
    top_k

    Image-Specific Parameters#

    aspect_ratio: Mapped to Gemini's native imageConfig.aspectRatio
    image_size: Mapped to Gemini's native imageConfig.imageSize
    Both parameters are optional; images are still generated when they are omitted.
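    Conceptually, the gateway lifts these two top-level fields into the Gemini-native imageConfig object. A sketch of the translated fragment, based only on the mappings listed above (the surrounding native payload is an internal detail and is not shown):

```json
{
  "imageConfig": {
    "aspectRatio": "16:9",
    "imageSize": "2K"
  }
}
```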

    Reference Image Input#

    To include a reference image, use the OpenAI-style multimodal messages.content format:
    {
      "model": "gemini-3-pro-image-preview",
      "messages": [
        {
          "role": "user",
          "content": [
            {"type": "text", "text": "Using this image as a composition reference, draw a cat sitting by a window"},
            {
              "type": "image_url",
              "image_url": {
                "url": "data:image/png;base64,iVBORw0KGgoAAA..."
              }
            }
          ]
        }
      ],
      "max_tokens": 4096
    }
    Notes:
    The gateway currently supports only the data: URI format for image_url.url on the Gemini native pipeline
    If you pass a public image URL directly, the gateway will not convert it to a Gemini native image input; encode the image as a data: URI first
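    Because only data: URIs are accepted, a local image must be base64-encoded before it goes into the request. A sketch in Python; the helper names to_data_uri and build_reference_request are ours, not part of the API:

```python
import base64


def to_data_uri(path: str, mime: str = "image/png") -> str:
    """Read an image file and return it as a data: URI suitable for image_url.url."""
    with open(path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    return f"data:{mime};base64,{encoded}"


def build_reference_request(prompt: str, image_path: str) -> dict:
    """Assemble an OpenAI-style multimodal request body with one reference image."""
    return {
        "model": "gemini-3-pro-image-preview",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {
                        "type": "image_url",
                        "image_url": {"url": to_data_uri(image_path)},
                    },
                ],
            }
        ],
        "max_tokens": 4096,
    }
```

    The resulting dict can be serialized with json.dumps and sent as the POST body to /v1/chat/completions.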

    Response Format#

    Image model responses may return choices[0].message.content as an array rather than a plain string:
    {
      "id": "chatcmpl-xxx",
      "object": "chat.completion",
      "model": "gemini-3-pro-image-preview",
      "choices": [
        {
          "index": 0,
          "message": {
            "role": "assistant",
            "content": [
              {"type": "text", "text": "Here is the generated image:"},
              {
                "type": "image_url",
                "image_url": {
                  "url": "data:image/png;base64,AAAA..."
                }
              }
            ]
          },
          "finish_reason": "stop"
        }
      ],
      "usage": {
        "prompt_tokens": 123,
        "completion_tokens": 456,
        "total_tokens": 579
      }
    }
    This means:
    Some responses include both descriptive text and an image
    Image content is returned via image_url.url, typically in data:image/...;base64,... format
    The usage field is still returned, allowing you to cross-reference consumption in both the response and the console
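    Since content may be either a plain string or an array of parts, client code should handle both shapes. A sketch in Python; extract_parts is our helper name, not part of the API:

```python
import base64


def extract_parts(response: dict):
    """Split a chat.completion response into (text, list of decoded image bytes).

    Handles both forms of choices[0].message.content: a plain string,
    or an array of {"type": "text" | "image_url", ...} parts.
    """
    content = response["choices"][0]["message"]["content"]
    if isinstance(content, str):
        return content, []
    texts, images = [], []
    for part in content:
        if part.get("type") == "text":
            texts.append(part["text"])
        elif part.get("type") == "image_url":
            url = part["image_url"]["url"]
            # Image data arrives as data:image/...;base64,<payload>
            if url.startswith("data:"):
                payload = url.split(",", 1)[1]
                images.append(base64.b64decode(payload))
    return " ".join(texts), images
```

    The decoded bytes can then be written straight to a .png file.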

    Limitations & Notes#

    1. Only /v1/chat/completions is supported: Gemini image models do not support /v1/messages
    2. Streaming is not supported: stream must be false or omitted
    3. Response content may be an array: Do not assume choices[0].message.content is always a plain text string
    4. Model permissions: Ensure your API Key is authorized to access the image model
    5. Model variants: If you specifically want the 2K or 4K variant, use gemini-3-pro-image-preview-2k or gemini-3-pro-image-preview-4k directly
    Modified at 2026-04-04 16:04:15