# Generate Videos

## Veo 3.1
Veo 3.1 is Google's state-of-the-art video generation model, available on FlyMyAI for fast inference.
### Example: Generate a Video with Veo 3.1

```python
from flymyai import client, FlyMyAIPredictException

apikey = "fly-***"
model = "flymyai/veo31-fast-generate"

payload = {
    "prompt": "A cinematic shot of a spaceship flying through a nebula, dramatic lighting",
}

fma_client = client(apikey=apikey)
try:
    response = fma_client.predict(
        model=model,
        payload=payload,
    )
    print(f"Output: {response.output_data}")
except FlyMyAIPredictException as e:
    print(e)
    raise
```
:::caution
Video generation models can take significantly longer than image models. For long-running operations, consider using `predict_async_task` with polling instead of the synchronous `predict`.
:::
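The polling loop suggested above can be sketched generically. This is a minimal sketch of the pattern only: `poll_until_done`, the `get_status` callable, and the `state` field are hypothetical stand-ins, since the exact return shape of `predict_async_task` is not shown here.

```python
import time


def poll_until_done(get_status, interval_s=5.0, timeout_s=600.0):
    """Call get_status() repeatedly until it reports a terminal state.

    get_status: a zero-argument callable returning a dict with a "state" key
    (a hypothetical shape; adapt it to whatever the async task actually returns).
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = get_status()
        if status["state"] in ("completed", "failed"):
            return status
        time.sleep(interval_s)
    raise TimeoutError("video generation did not finish in time")


# Simulated task that completes on the third poll.
states = iter(["pending", "running", "completed"])
result = poll_until_done(lambda: {"state": next(states)}, interval_s=0.0)
print(result["state"])  # -> completed
```

In real use, `get_status` would wrap a status check on the task returned by `predict_async_task`, and `interval_s` should be a few seconds to avoid hammering the API.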
## wan2-img-to-video-lora

`wan2-img-to-video-lora` transforms static images into dynamic videos with optional LoRA support.

Try wan2-img-to-video-lora on FlyMy.AI

### Example: Convert Image to Video
```python
from flymyai import client, FlyMyAIPredictException
import pathlib

apikey = "fly-***"
model = "flymyai/wan2-img-to-video-lora"

payload = {
    "prompt": "Morphing into plushtoy",
    "negative_prompt": "bright colors, overexposed, static, blurry details, low quality",
    "width": 1280,
    "height": 720,
    "num_frames": 81,
    "num_inference_steps": 30,
    "guidance_scale": 5.0,
    "input_image": pathlib.Path("/path/to/your/file.jpg"),
    "lora_url": None,  # set to a LoRA URL to apply a custom style
    "fps": 16,
    "acceleration_factor": 2.5,
}

fma_client = client(apikey=apikey)
try:
    response = fma_client.predict(
        model=model,
        payload=payload,
    )
    print(f"Output: {response.output_data['output']}")
except FlyMyAIPredictException as e:
    print(e)
    raise
```
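The `num_frames` and `fps` values in the payload jointly determine the clip's duration. A quick sanity check with the values used above (plain arithmetic, no SDK assumptions):

```python
num_frames = 81
fps = 16

# duration in seconds = frame count / frames per second
duration_s = num_frames / fps
print(f"{duration_s:.2f} s")  # 81 frames at 16 fps -> 5.06 s
```

Adjust `num_frames` (or `fps`) accordingly if you need a longer or shorter clip.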