
How to Use Nano Banana for AI Image Generation via API

Andrew Adams


10 min read

Nano Banana is one of the most capable AI image generation models available today, built on Google DeepMind's Gemini architecture. Whether you need product photos, marketing assets, or creative visuals, calling Nano Banana through an API lets you automate image generation at scale. Wireflow makes this even simpler by letting you chain Nano Banana with other AI models in visual workflows, all accessible through a single REST API.

This guide walks through every step: getting your API key, writing your first request, handling responses, and scaling up to production-level image pipelines.

What Is Nano Banana and Why Use It via API

Nano Banana is a family of AI image generation models from Google DeepMind. The lineup includes three tiers, each built on a different Gemini foundation model:

  • Nano Banana (based on Gemini 2.5 Flash): the most affordable option at roughly $0.04 per image
  • Nano Banana 2 (based on Gemini 3.1 Flash): mid-tier with faster generation, under 10 seconds per image
  • Nano Banana Pro (based on Gemini 3 Pro): the flagship model with best-in-class text rendering, up to 4K resolution, and advanced scene composition

For a deeper look at the Nano Banana 2 model and how it fits into visual workflows, check out the Nano Banana 2 model page.

Using Nano Banana through an API rather than a web interface gives you several practical advantages:

  1. Batch processing: generate hundreds of images from a spreadsheet of prompts
  2. Pipeline integration: feed generated images directly into editing, upscaling, or video models
  3. Consistent output: lock down parameters like resolution, aspect ratio, and style across all requests
  4. Cost control: track usage programmatically and set spending limits

API workflow diagram showing text prompt flowing into Nano Banana model

Step 1: Get Your API Key

You have two main paths to access Nano Banana via API. The first is through Google AI Studio, which provides 50 free requests per day across all resolutions. Sign in with a Google account, navigate to the API keys section, and generate a key. This is the quickest way to start experimenting with the AI image generation capabilities.

The second path is through a workflow platform that wraps Nano Banana (and dozens of other models) behind a unified API. This approach is better if you plan to chain multiple models together or need webhook-based triggers. API keys start with sk- and are generated from the dashboard under Settings > API Keys. Each key is shown only once, so store it securely.

For teams already running automated SEO and content pipelines, an API-first approach to image generation slots naturally into existing workflows without manual steps.

Step 2: Make Your First API Request

Here is a minimal curl example that sends a text prompt to Nano Banana Pro through the Wireflow API and starts execution:

curl -X POST https://www.wireflow.ai/api/v1/workflows/YOUR_WORKFLOW_ID/execute \
  -H "Authorization: Bearer sk-your-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "nodes": [
      {
        "id": "input-1",
        "type": "basedNode",
        "data": {
          "nodeType": "input:text",
          "category": "input",
          "params": {
            "prompt": "A golden retriever sitting in a sunlit meadow, photorealistic, 4K"
          }
        }
      },
      {
        "id": "gen-1",
        "type": "basedNode",
        "data": {
          "nodeType": "generate:nano_banana_pro",
          "category": "generate",
          "params": {
            "resolution": "2K",
            "aspect_ratio": "16:9"
          }
        }
      }
    ],
    "edges": [
      {
        "source": "input-1",
        "target": "gen-1",
        "sourceHandle": "out-prompt",
        "targetHandle": "in-prompt"
      }
    ]
  }'

This returns a JSON response containing an executionId. The execution is asynchronous, so you need to poll for the result. The batch image generation endpoint follows the same pattern if you need to queue multiple prompts at once.
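If you prefer Python over curl, the same request can be sketched with the standard library. The endpoint, node types, and executionId field mirror the curl example above; swap in your own workflow ID and API key.

```python
import json
import urllib.request

API_KEY = "sk-your-api-key"  # replace with your real key
URL = "https://www.wireflow.ai/api/v1/workflows/YOUR_WORKFLOW_ID/execute"

def build_payload(prompt: str, resolution: str = "2K",
                  aspect_ratio: str = "16:9") -> dict:
    """Recreate the body from the curl example for an arbitrary prompt."""
    return {
        "nodes": [
            {"id": "input-1", "type": "basedNode",
             "data": {"nodeType": "input:text", "category": "input",
                      "params": {"prompt": prompt}}},
            {"id": "gen-1", "type": "basedNode",
             "data": {"nodeType": "generate:nano_banana_pro",
                      "category": "generate",
                      "params": {"resolution": resolution,
                                 "aspect_ratio": aspect_ratio}}},
        ],
        "edges": [
            {"source": "input-1", "target": "gen-1",
             "sourceHandle": "out-prompt", "targetHandle": "in-prompt"},
        ],
    }

def submit(prompt: str) -> str:
    """POST the workflow and return the executionId from the JSON response."""
    req = urllib.request.Request(
        URL,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Authorization": f"Bearer {API_KEY}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["executionId"]
```

Wrapping the payload in a helper like build_payload makes it trivial to loop over a list of prompts for batch jobs.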

Step 3: Poll for Results

After submitting your request, poll the execution status endpoint until it returns COMPLETED:

curl https://www.wireflow.ai/api/v1/workflows/executions/YOUR_EXECUTION_ID/poll \
  -H "Authorization: Bearer sk-your-api-key"

The response moves through these states: RUNNING while the job is in progress, then either COMPLETED (with node outputs, including your image URL) or FAILED (with an error message). Use exponential backoff when polling: start at 1 second, multiply the interval by 1.5 on each attempt, and cap it at 10 seconds between requests. This keeps your client responsive without hammering the API. You can read more about execution states and response shapes in the API executions documentation.

A typical successful response includes the generated image URL in the node outputs, along with timing data and credit usage. Every response also includes an X-Request-Id header, which is useful for debugging with support.
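The polling loop with the backoff schedule described above can be sketched in Python. The endpoint path and status values follow the curl example; a production client would also inspect the error message on FAILED.

```python
import json
import time
import urllib.request

API_KEY = "sk-your-api-key"  # replace with your real key
BASE = "https://www.wireflow.ai/api/v1/workflows/executions"

def backoff_delay(attempt: int) -> float:
    """Delay before poll number `attempt` (0-based): 1s, x1.5 each try, capped at 10s."""
    return min(1.5 ** attempt, 10.0)

def poll_execution(execution_id: str, timeout: float = 120.0) -> dict:
    """Poll until the execution reaches COMPLETED or FAILED, or the timeout expires."""
    deadline = time.monotonic() + timeout
    attempt = 0
    while time.monotonic() < deadline:
        req = urllib.request.Request(
            f"{BASE}/{execution_id}/poll",
            headers={"Authorization": f"Bearer {API_KEY}"},
        )
        with urllib.request.urlopen(req) as resp:
            result = json.load(resp)
        if result.get("status") in ("COMPLETED", "FAILED"):
            return result
        time.sleep(backoff_delay(attempt))
        attempt += 1
    raise TimeoutError(f"execution {execution_id} did not finish in {timeout}s")
```

With this schedule the client polls roughly a dozen times in the first minute, then settles into a steady 10-second interval.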

Polling flow showing request lifecycle

Step 4: Write Effective Prompts for Nano Banana

The quality of your generated images depends heavily on prompt structure. Nano Banana Pro excels at following detailed instructions, so specificity pays off. Here are practical tips for building AI content generation prompts:

  • Be specific about composition: instead of "a city," write "an aerial view of a dense urban skyline with glass towers reflecting a pink sunset"
  • Include technical parameters: mention "photorealistic," "8K quality," or "cinematic lighting" to guide the model's output style
  • Leverage text rendering: Nano Banana Pro is one of the few models that renders legible text in images, so you can include text overlay instructions directly in your prompt
  • Use negative phrasing carefully: while some models support negative prompts, Nano Banana responds better to positive descriptions of what you want

For product photography, pair your prompt with specific camera angles, background descriptions, and lighting conditions. The model handles multi-subject scenes well, maintaining consistency across up to 5 subjects in a single image. This makes it particularly useful for e-commerce image workflows.

Step 5: Handle Errors and Rate Limits

Every API has constraints, and handling them gracefully keeps your pipeline running. The API returns standard HTTP status codes. Here are the ones you will encounter most often when working with AI pipeline automation:

  • 200/201 (success): process the response normally
  • 402 (insufficient credits): check your balance; the response body includes requiredCredits and availableCredits
  • 429 (rate limited): wait for the duration given in the Retry-After header before retrying
  • 500 (server error): retry once after a short delay; report persistent failures using the X-Request-Id

Rate limits vary by plan. Free accounts get 10 requests per minute and 50 daily executions. Pro accounts get 60 requests per minute and 1,000 daily executions. All plans share a 10 executions-per-minute cap to prevent automation overload. For full details, see the rate limits documentation.

To prevent duplicate charges from retried requests, send an Idempotency-Key header with each execution call. Identical keys within 24 hours replay the original response without running the workflow again.
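A minimal retry wrapper tying these rules together might look like the sketch below: it honors Retry-After on 429, retries once on 500, treats 402 as fatal, and reuses one Idempotency-Key across all retries of the same logical request. The helper name and defaults are illustrative, not part of the API.

```python
import json
import time
import urllib.error
import urllib.request
import uuid

API_KEY = "sk-your-api-key"  # replace with your real key
API_URL = "https://www.wireflow.ai/api/v1/workflows/YOUR_WORKFLOW_ID/execute"

def execute_once(payload: dict, idempotency_key: str, max_attempts: int = 3) -> dict:
    """POST the workflow, retrying on 429 (honoring Retry-After) and once on 500."""
    body = json.dumps(payload).encode()
    for attempt in range(max_attempts):
        req = urllib.request.Request(
            API_URL,
            data=body,
            headers={
                "Authorization": f"Bearer {API_KEY}",
                "Content-Type": "application/json",
                "Idempotency-Key": idempotency_key,  # dedupes retries for 24h
            },
        )
        try:
            with urllib.request.urlopen(req) as resp:
                return json.load(resp)
        except urllib.error.HTTPError as err:
            if err.code == 429:
                # Wait the server-specified duration; fall back to 5s if absent.
                time.sleep(int(err.headers.get("Retry-After", "5")))
            elif err.code == 500 and attempt == 0:
                time.sleep(2)  # one short-delay retry for transient server errors
            else:
                raise  # 402 and other client errors are not retryable
    raise RuntimeError("gave up after repeated rate limiting")

# Generate one key per logical request and reuse it across retries.
key = str(uuid.uuid4())
```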

Step 6: Scale to Production Pipelines

Once your basic integration works, consider these patterns for scaling up. You can create a saved workflow in the dashboard with pre-configured nodes, then trigger it via API or webhook without sending the full node definition each time.

For event-driven architectures, use webhook triggers instead of API key authentication:

curl -X POST https://www.wireflow.ai/api/v1/workflow/YOUR_WEBHOOK_ID/trigger \
  -H "Content-Type: application/json" \
  -d '{"prompt": "A minimalist product shot of a white sneaker on marble"}'

Webhook triggers return a 202 Accepted with an executionId and require no API key, making them safe to call from frontend forms, Zapier automations, or CI pipelines. You can learn more about this pattern in the webhooks guide.

For high-volume batch jobs, consider chaining Nano Banana with other models. For example, you could generate base images with Nano Banana 2 (lower cost), then selectively upscale the best results with a dedicated image upscaler. The API supports multi-node workflows where each step runs in sequence, passing outputs automatically between models.
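A two-step pipeline of that shape could be expressed as a workflow payload like the one below. The node IDs and edge format follow the earlier example, but the upscaler node type ("generate:image_upscaler") and the out-image/in-image handle names are assumptions; check the model catalog for the exact identifiers.

```python
# Hypothetical pipeline: generate with Nano Banana 2, then upscale the result.
workflow = {
    "nodes": [
        {"id": "input-1", "type": "basedNode",
         "data": {"nodeType": "input:text", "category": "input",
                  "params": {"prompt": "A white sneaker on marble"}}},
        {"id": "gen-1", "type": "basedNode",
         "data": {"nodeType": "generate:nano_banana_2", "category": "generate",
                  "params": {"resolution": "1K", "aspect_ratio": "1:1"}}},
        {"id": "upscale-1", "type": "basedNode",
         "data": {"nodeType": "generate:image_upscaler",  # assumed node type
                  "category": "generate",
                  "params": {"scale": 2}}},
    ],
    "edges": [
        # Prompt feeds the generator; the generated image feeds the upscaler.
        {"source": "input-1", "target": "gen-1",
         "sourceHandle": "out-prompt", "targetHandle": "in-prompt"},
        {"source": "gen-1", "target": "upscale-1",
         "sourceHandle": "out-image", "targetHandle": "in-image"},
    ],
}
```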

Production pipeline architecture

Pricing and Model Selection

Choosing the right Nano Banana tier depends on your volume and quality requirements. Here is a comparison to help you decide which model fits your AI workflow:

  • Nano Banana: ~$0.04/image, up to 1K resolution, fast generation; best for prototyping and thumbnails
  • Nano Banana 2: ~$0.07/image, up to 1K resolution, under 10 seconds per image; best for mid-volume production
  • Nano Banana Pro: ~$0.13/image, up to 4K resolution, moderate speed; best for hero images, text overlays, and premium assets

Google AI Studio offers 50 free requests per day on all tiers, which works well for testing. For production workloads, running Nano Banana through a unified orchestration API simplifies billing and lets you switch models without changing your integration code.
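For a quick back-of-the-envelope budget, the per-image prices above translate directly into batch costs. The figures below are the approximate base prices from the table and ignore 4K surcharges and free-tier quota.

```python
# Approximate per-image base prices from the table above (USD, subject to change).
PRICE = {
    "nano_banana": 0.04,
    "nano_banana_2": 0.07,
    "nano_banana_pro": 0.13,
}

def estimate_cost(model: str, n_images: int) -> float:
    """Rough spend for a batch at base resolution."""
    return round(PRICE[model] * n_images, 2)

print(estimate_cost("nano_banana_2", 500))  # 500 mid-tier images: 35.0
```

At these rates, generating drafts on Nano Banana 2 and promoting only the best 10% to Pro costs roughly half of running everything on Pro.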

Try it yourself: build this workflow in Wireflow; the nodes come pre-configured with the exact Nano Banana Pro setup discussed above. For the full API reference, visit the Wireflow API docs.

Frequently Asked Questions

What resolution does Nano Banana Pro support?

Nano Banana Pro generates images at up to 4K resolution. You can specify 1K, 2K, or 4K in your API request parameters. Higher resolutions cost more per image ($0.13 for 1K/2K, $0.24 for 4K).

Is there a free tier for Nano Banana API access?

Yes. Google AI Studio provides 50 free requests per day across all Nano Banana models, including Pro at 4K resolution. This gives you roughly 1,500 free generations per month with just a Google account.

Can Nano Banana render text inside images?

Nano Banana Pro has best-in-class text rendering among current AI image models. It can produce legible text in multiple languages, including long passages. The original Nano Banana and Nano Banana 2 have limited text rendering capability.

How do I handle API timeouts?

Use the asynchronous execution pattern: POST to start the job, then poll the status endpoint. If polling returns no result after 60 seconds, the execution may have failed silently. Check the full execution details endpoint for error information, and retry once with an Idempotency-Key header.

What is the difference between Nano Banana 2 and Nano Banana Pro?

Nano Banana 2 is faster and cheaper, running on Gemini 3.1 Flash. It handles standard image generation well but lacks the advanced text rendering, 4K output, and multi-subject consistency of Nano Banana Pro, which runs on Gemini 3 Pro.

Can I chain Nano Banana with other AI models?

Yes. Using a workflow-based API, you can connect Nano Banana to editing models (like Flux 2 Edit), video generation models (like Kling 2.5), or utility nodes for prompt manipulation. Each node's output feeds into the next automatically.

What happens if I exceed my rate limit?

The API returns a 429 status code with a Retry-After header indicating how many seconds to wait. Your code should pause and retry after that duration. Repeated violations do not result in account suspension, but sustained over-limit traffic may be throttled further.

Does Nano Banana support image-to-image generation?

Yes. Both Nano Banana Pro and Nano Banana 2 accept an optional input image alongside a text prompt. This enables style transfer, image editing, and reference-based generation. Pass the image URL in the image1 input field of the generation node.