Flux 2 by Black Forest Labs is one of the most capable text-to-image models available through hosted APIs today. Whether you are building a product that needs on-demand image generation or prototyping a creative pipeline, calling Flux 2 from your own code is straightforward once you know the right endpoints and parameters. Wireflow makes it even simpler by letting you chain Flux 2 with other AI models in a visual workflow, but understanding the raw API calls gives you full control over prompts, sizes, and post-processing.
This guide walks through four ways to call Flux 2 programmatically: the official Black Forest Labs API, fal.ai, Together AI, and a no-code alternative. Each section includes ready-to-paste code for both cURL and Python.
Prerequisites
Before running any of the examples below, make sure you have the following ready:
- An API key from at least one provider (BFL, fal.ai, or Together AI). Sign up on their respective dashboards to get one.
- Python 3.8+ installed if you plan to use the Python examples. The `requests` library is not part of the standard library, so install it with `pip install requests` if needed.
- cURL available in your terminal. It comes pre-installed on macOS and most Linux distributions.

Option 1: Black Forest Labs Official API
The official BFL API gives you direct access to Flux 2 Pro without a middleman. The endpoint is https://api.bfl.ai/v1/flux-2-pro and authentication uses an x-key header.
cURL
```bash
curl -X POST https://api.bfl.ai/v1/flux-2-pro \
  -H "Content-Type: application/json" \
  -H "x-key: $BFL_API_KEY" \
  -d '{
    "prompt": "A futuristic city at sunset with glass towers",
    "width": 1024,
    "height": 768,
    "safety_tolerance": 2
  }'
```
The response returns a task ID. Poll https://api.bfl.ai/v1/get_result?id=TASK_ID until the status is Ready, then grab the image URL from the result field. This async pattern keeps request timeouts short even for complex image generation tasks.
Python
```python
import os
import time

import requests

BFL_API_KEY = os.environ["BFL_API_KEY"]
headers = {"Content-Type": "application/json", "x-key": BFL_API_KEY}

# Submit the generation request
response = requests.post(
    "https://api.bfl.ai/v1/flux-2-pro",
    headers=headers,
    json={
        "prompt": "A futuristic city at sunset with glass towers",
        "width": 1024,
        "height": 768,
    },
)
task_id = response.json()["id"]

# Poll for the result
while True:
    result = requests.get(
        f"https://api.bfl.ai/v1/get_result?id={task_id}",
        headers=headers,
    ).json()
    if result["status"] == "Ready":
        print(result["result"]["sample"])
        break
    time.sleep(1)
```
The BFL API charges per image and does not require a subscription. It supports Flux 2 Pro, Flux 2 Max, and several variant models. You can also pass seed for reproducible outputs, which is useful when building automated pipelines.
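For reproducible outputs, the seed goes into the same request body shown above. A minimal sketch of the payload, assuming the `seed` field accepts any integer (check the BFL API reference for exact constraints):

```python
# Same request body as the example above, plus a fixed seed.
# Reusing the same seed with identical parameters regenerates the same image.
payload = {
    "prompt": "A futuristic city at sunset with glass towers",
    "width": 1024,
    "height": 768,
    "seed": 42,  # any integer; omit it for a random result on each call
}
# Pass this dict as the json= argument to requests.post, as in the example above.
```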

Option 2: fal.ai
fal.ai wraps Flux 2 in a synchronous endpoint so you get the image back in a single request without polling. The trade-off is slightly higher latency per call, but the simpler integration often makes it worth it.
cURL
```bash
curl -X POST https://fal.run/fal-ai/flux-2-pro \
  -H "Authorization: Key $FAL_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "A futuristic city at sunset with glass towers",
    "image_size": "landscape_16_9"
  }'
```
Python
```python
import os

import requests

response = requests.post(
    "https://fal.run/fal-ai/flux-2-pro",
    headers={
        "Authorization": f"Key {os.environ['FAL_API_KEY']}",
        "Content-Type": "application/json",
    },
    json={
        "prompt": "A futuristic city at sunset with glass towers",
        "image_size": "landscape_16_9",
    },
)
data = response.json()
print(data["images"][0]["url"])
```
fal.ai also provides a dedicated Python SDK (pip install fal-client) that adds queue-based submission and webhook callbacks for production use. The SDK is a good fit when you need to chain multiple models in sequence, since it handles retries and backpressure automatically.
Option 3: Together AI
Together AI offers Flux 2 through an OpenAI-compatible images endpoint, which means existing OpenAI client code works with minimal changes. This is convenient if your application already uses the OpenAI SDK.
cURL
```bash
curl -X POST https://api.together.xyz/v1/images/generations \
  -H "Authorization: Bearer $TOGETHER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "black-forest-labs/FLUX.2-pro",
    "prompt": "A futuristic city at sunset with glass towers",
    "width": 1024,
    "height": 768,
    "steps": 28,
    "n": 1,
    "response_format": "url"
  }'
```
Python
```python
import os

from together import Together

client = Together(api_key=os.environ["TOGETHER_API_KEY"])
response = client.images.generate(
    model="black-forest-labs/FLUX.2-pro",
    prompt="A futuristic city at sunset with glass towers",
    width=1024,
    height=768,
    steps=28,
    n=1,
)
print(response.data[0].url)
```
Install the SDK with pip install together. The OpenAI-compatible interface means you can switch between Flux 2 and other image models by changing a single model string, which is useful for A/B testing different generators in production.
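A common way to make that single-string switch explicit is a small variant map. In this sketch, the Flux 2 Pro ID matches the example above, but the other entry is an illustrative placeholder; check Together AI's current model list for the real IDs:

```python
# Map short variant names to full model IDs.
# "pro" matches the example above; "flex" is a hypothetical placeholder ID.
FLUX_VARIANTS = {
    "pro": "black-forest-labs/FLUX.2-pro",
    "flex": "black-forest-labs/FLUX.2-flex",
}

def pick_model(variant):
    """Resolve a short variant name to a full model ID, defaulting to pro."""
    return FLUX_VARIANTS.get(variant, FLUX_VARIANTS["pro"])
```

Routing all generation calls through one resolver like this makes A/B tests a configuration change rather than a code change.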

Comparing the Three Providers
| Feature | BFL (Official) | fal.ai | Together AI |
|---|---|---|---|
| Auth method | `x-key` header | `Key` bearer | Bearer token |
| Response style | Async (poll) | Synchronous | Synchronous |
| Python SDK | No official SDK | `fal-client` | `together` |
| OpenAI compatible | No | No | Yes |
| Image size control | Width/height pixels | Named presets | Width/height pixels |
| Pricing model | Per-image | Per-image | Per-image |
| Flux 2 variants | Pro, Max, Flash | Pro, Flash | Pro, Flex, Max |
Choose BFL when you want the latest model versions first. Choose fal.ai for the simplest integration with a single synchronous call. Choose Together AI when your codebase already uses OpenAI-compatible endpoints.
Tips for Production Use
Building a reliable Flux 2 integration takes more than copy-pasting the examples above. Here are practical tips:
- **Set timeouts generously.** Image generation can take 5 to 30 seconds depending on model variant and load. Set your HTTP client timeout to at least 60 seconds for synchronous endpoints.
- **Cache by prompt hash.** If users submit the same prompt repeatedly, hash the prompt + parameters and serve the cached image. This saves cost and improves response time for batch generation workflows.
- **Use webhooks for async flows.** Both fal.ai and BFL support webhook callbacks. Instead of polling, provide a callback URL and let the provider push the result to your server when it is ready.
- **Store images in your own bucket.** Provider-hosted URLs expire. Download the generated image and upload it to your own S3 or R2 bucket immediately after generation. This prevents broken images in your production pipeline.
- **Handle rate limits gracefully.** Implement exponential backoff with jitter. Most providers return `429` when you exceed your plan limits.
- **Pin the model version.** Some providers update default model versions without notice. Specify the exact model ID (e.g., `black-forest-labs/FLUX.2-pro`) rather than relying on aliases to keep outputs consistent across automated content generation.
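The caching and backoff tips above can be sketched with two small helpers. This is a minimal illustration, not a full cache or retry implementation:

```python
import hashlib
import json
import random

def cache_key(prompt, params):
    """Deterministic cache key from the prompt plus generation parameters.

    sort_keys makes the key stable regardless of dict insertion order.
    """
    blob = json.dumps({"prompt": prompt, **params}, sort_keys=True)
    return hashlib.sha256(blob.encode("utf-8")).hexdigest()

def backoff_delay(attempt, base=1.0, cap=30.0):
    """Exponential backoff with full jitter for retrying 429 responses.

    Returns a random delay in [0, min(cap, base * 2**attempt)] seconds.
    """
    return random.uniform(0, min(cap, base * 2 ** attempt))
```

In practice you would check `cache_key(...)` against your store before calling any provider, and sleep for `backoff_delay(attempt)` after each `429` before retrying.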

No-Code Alternative: Visual Workflow Builder
If you prefer not to write API integration code, you can call Flux 2 through a visual interface instead. Drag a text input node, connect it to a Flux 2 Pro node, and click run. The platform handles authentication, retries, and image hosting without requiring any code.
Try it yourself: build this workflow in Wireflow; the nodes are pre-configured with the exact setup discussed above.
Frequently Asked Questions
What is Flux 2 and how does it differ from Flux 1?
Flux 2 is the second-generation text-to-image model from Black Forest Labs. It improves on Flux 1 with better prompt adherence, higher detail in complex scenes, and more consistent text rendering. The API interface remains similar, so migrating existing Flux 1 integrations requires only changing the model ID and endpoint.
Is Flux 2 free to use via API?
No provider offers unlimited free access. BFL, fal.ai, and Together AI each have pay-per-image pricing. Some offer free trial credits for new accounts. Check each provider's pricing page for current rates per image generation.
Which Flux 2 variant should I use?
Flux 2 Pro is the best general-purpose option. Flux 2 Max produces higher-detail outputs at higher cost. Flux 2 Flash is faster and cheaper, suited for previews or draft work in rapid prototyping.
Can I generate multiple images in a single API call?
Together AI supports the n parameter to generate multiple images per request. BFL and fal.ai require separate requests for each image, though you can run them concurrently to build batch pipelines.
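For providers without an `n` parameter, concurrent single-image requests achieve the same effect. A sketch using a thread pool; `generate_one` here is a stand-in stub so the example runs without network access, and in real code it would be one of the single-image calls shown earlier:

```python
from concurrent.futures import ThreadPoolExecutor

def generate_one(prompt):
    """Stand-in for a single-image provider call (BFL or fal.ai, see above).

    Returns a fake URL so this sketch runs without an API key.
    """
    return f"https://example.com/{abs(hash(prompt)) % 10000}.png"

def generate_batch(prompt, n):
    """Fire n independent single-image requests concurrently."""
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(generate_one, [prompt] * n))

urls = generate_batch("A futuristic city at sunset", 4)
```

Keep `max_workers` below your provider's concurrency limit to avoid tripping rate limits.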
How do I handle NSFW filtering?
BFL uses a safety_tolerance parameter (0 to 6). fal.ai and Together AI have their own content filtering settings. Review each provider's documentation for your use case and ensure your application complies with their acceptable use policies.
What image sizes does Flux 2 support?
BFL and Together AI accept arbitrary width and height values in multiples of 64. fal.ai uses named presets like landscape_16_9, square, and portrait_16_9. Maximum resolution varies by provider and plan tier.
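For the pixel-based providers, it is worth snapping user-supplied dimensions to a valid multiple of 64 before sending the request. A sketch with illustrative bounds; the 256/2048 clamp is an assumption, so check your provider's documented limits:

```python
def snap_to_64(value, lo=256, hi=2048):
    """Round a requested dimension to the nearest multiple of 64, clamped.

    The lo/hi bounds are illustrative defaults, not documented provider limits.
    """
    return max(lo, min(hi, round(value / 64) * 64))
```

Applying `snap_to_64` to both width and height guarantees the request never fails on an off-grid size like 1000x750.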
Can I use Flux 2 for image-to-image generation?
Yes. Both BFL and fal.ai support an optional image input alongside the text prompt. Pass an image field with a base64-encoded or URL-referenced source image to guide the generation. This is useful for style transfer and visual editing workflows.
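Building the request body for a base64-encoded source image can be sketched as follows. The `image` field name follows the description above, but the exact field name and accepted formats may differ per provider, so verify against their documentation:

```python
import base64

def image_to_image_payload(prompt, image_bytes):
    """Build a request body with a base64-encoded source image.

    The "image" field name is taken from the description above; confirm it
    against the provider's image-to-image endpoint documentation.
    """
    return {
        "prompt": prompt,
        "image": base64.b64encode(image_bytes).decode("ascii"),
    }

# Fake bytes stand in for a real PNG read with open(path, "rb").read().
payload = image_to_image_payload("same city, watercolor style", b"\x89PNG fake bytes")
```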
How do I get the best results from Flux 2 prompts?
Be specific and descriptive. Include details about composition, lighting, style, and subject. Avoid vague terms like "beautiful" or "amazing." Structure your prompt with the main subject first, followed by scene details and style descriptors. Test with the no-code workflow builder to iterate quickly before coding.
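The subject-first structure described above can be enforced with a tiny helper, so every generated prompt in a pipeline follows the same shape. A sketch; the field names are just one reasonable decomposition:

```python
def build_prompt(subject, scene="", style=""):
    """Compose a prompt: main subject first, then scene details, then style."""
    parts = [subject] + [p for p in (scene, style) if p]
    return ", ".join(parts)

prompt = build_prompt(
    "a glass tower skyline",
    scene="golden-hour light, low fog",
    style="cinematic wide shot, 35mm",
)
```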
Conclusion
Calling Flux 2 from code is a matter of picking the right provider for your needs and following their authentication pattern. The BFL official API gives you the freshest models, fal.ai keeps integration simple with synchronous responses, and Together AI fits into existing OpenAI-compatible toolchains. Wireflow adds a visual layer on top, letting you build multi-step image generation pipelines without writing boilerplate API code.



