Using LLMs Guide 🤖

Overview

This guide will help you get started with using Large Language Models (LLMs) through our platform. We'll cover the basics of interacting with LLMs and how to get the best results.

Available Models

  • GPT-4 - Advanced language model
  • GPT-3.5 - Fast and efficient language model
  • Claude - Anthropic's language model

Basic Usage

Here's a simple example of how to use an LLM:

from flymyai import client

# Initialize the client
fma_client = client(apikey="your-api-key")

# Set the model
model = "flymyai/gpt-4"

# Prepare the input data
# Prepare the input data
payload = {
    "prompt": "Write a short story about a robot learning to paint",
    "max_tokens": 500,
    "temperature": 0.7
}

# Make the prediction
response = fma_client.predict(
    model=model,
    payload=payload
)

# Get the generated text
generated_text = response.output_data["text"]

Parameters

  • prompt: The input text to process
  • max_tokens: Maximum length of the generated text
  • temperature: Controls randomness (0.0 to 1.0)
  • top_p: Controls diversity via nucleus sampling
  • frequency_penalty: Reduces repetition
  • presence_penalty: Encourages new topics
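For instance, a payload combining several of these parameters might look like the sketch below. The values are illustrative, not recommendations, and which parameters a given model accepts may vary:

```python
# Illustrative payload using the parameters listed above.
payload = {
    "prompt": "Write a short story about a robot learning to paint",
    "max_tokens": 500,          # cap on generated length
    "temperature": 0.7,         # lower = more deterministic output
    "top_p": 0.9,               # nucleus sampling cutoff
    "frequency_penalty": 0.5,   # discourage repeated tokens
    "presence_penalty": 0.3,    # nudge the model toward new topics
}

# Sanity-check the sampling settings before sending the request
assert 0.0 <= payload["temperature"] <= 1.0
assert 0.0 < payload["top_p"] <= 1.0
```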

Best Practices

  1. Write clear and specific prompts
  2. Use appropriate temperature settings
  3. Set reasonable token limits
  4. Handle responses appropriately
  5. Implement error handling
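The last two practices can be sketched as a small retry wrapper. The wrapper below is generic: `predict_fn` stands in for any callable that raises on transient failures (e.g. a lambda wrapping `fma_client.predict`), and the broad `Exception` catch is a placeholder you should narrow to the client's actual error classes:

```python
import time

def predict_with_retry(predict_fn, payload, max_attempts=3, backoff=1.0):
    """Call predict_fn(payload), retrying on failure with exponential backoff.

    predict_fn: any callable that raises on transient errors.
    backoff: initial sleep in seconds, doubled after each failed attempt.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return predict_fn(payload)
        except Exception:
            if attempt == max_attempts:
                raise  # out of attempts: surface the error to the caller
            time.sleep(backoff * 2 ** (attempt - 1))

# Demonstration with a stand-in function that fails once, then succeeds
calls = {"n": 0}

def flaky_predict(payload):
    calls["n"] += 1
    if calls["n"] < 2:
        raise RuntimeError("transient error")
    return {"text": "ok"}

result = predict_with_retry(flaky_predict, {"prompt": "hi"}, backoff=0)
```

In production, prefer catching only the specific exceptions your client raises for retryable conditions (timeouts, rate limits) so that genuine bugs still fail fast.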