Language Model Settings

AI agents need powerful language models to understand and respond to requests. But different tasks require different levels of capability - a simple chatbot might only need basic comprehension, while a complex analysis tool needs advanced reasoning. That's where language model settings come in - they let you choose and configure the AI engine that powers your agent, ensuring you get the right balance of performance and efficiency for your specific needs.

What are Language Model Settings?

Language model settings control which AI model powers your agent and how it works, directly affecting the quality, speed, and cost of your agent's responses.

Every agent needs a language model to work.

How Language Model Settings Work

When you run an agent, your request is sent to the model you've selected and processed with your configured settings (output length and creativity level) to generate the response.
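
To make that flow concrete, here is a minimal, hypothetical sketch in Python. MindPal exposes these settings in its UI, so the names below are illustrative assumptions rather than MindPal's actual configuration format: an agent run simply pairs the user's request with a chosen model and its generation settings.

```python
# Hypothetical illustration only -- not MindPal's real configuration format.
agent_settings = {
    "model": "gpt-5-mini",       # which AI model powers the agent
    "max_output_tokens": 2048,   # cap on response length, or "auto"
    "temperature": 0.3,          # creativity level, or "auto"
}

def describe_run(user_request: str, settings: dict) -> str:
    """Shows the flow only: one request + one set of settings -> one model call."""
    return (
        f"Sending request ({len(user_request)} characters) to {settings['model']} "
        f"with max_output_tokens={settings['max_output_tokens']} "
        f"and temperature={settings['temperature']}"
    )

print(describe_run("Summarize this meeting transcript.", agent_settings))
```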

Configuring Language Model Settings

1. Choose a Model

Available Models

MindPal gives you access to the leading state-of-the-art AI models. Here's the current model lineup:

| Provider | Top Models | Best For |
| --- | --- | --- |
| OpenAI | GPT-5, GPT-5 Mini, o3, o3 Mini, o4 Mini | General tasks, following instructions |
| Anthropic | Claude Opus 4.5, Claude 4.5 Sonnet, Claude 4.5 Haiku | Writing, coding, analysis |
| Google | Gemini 3.0 Pro, Gemini 2.5 Pro, Gemini 2.5 Flash | Large contexts, visual tasks |
| DeepSeek | DeepSeek V3, DeepSeek R1 | Coding, cost-effective reasoning |
| Perplexity | Sonar, Sonar Pro, Sonar Deep Research | Web search, research |
| XAI | Grok 2, Grok 4 Fast | General tasks |
| Groq | LLaMa 3.3 70b, Kimi K2 | Fast, cost-effective |
💡 If you don't pick a model, we'll use a default model (currently GPT-4o Mini). See AI Credits for credit costs per model.

How to Choose a Model

Consider these rules of thumb when selecting a model:

  1. Check Model Capabilities

    Make sure the selected model's capabilities match the requirements of the agent's job. Look for these key features in the model tooltip (a rough way to sanity-check the context window against your typical inputs is sketched right after this list):

    | Parameter | Description |
    | --- | --- |
    | Context window | How much information the model can process at once, including your input and its memory of the conversation |
    | Maximum output length | The longest response the model can generate in a single turn |
    | Image processing ability | Whether the model can understand and analyze images you provide |
    | Tool usage support | Whether the model can use external tools provided in the "Tools" settings of your agent |
  2. Avoid Overkill

    For simple tasks, cost-effective models are often sufficient:

    • Gemini 2.5 Flash (1 credit) - Excellent for most tasks
    • DeepSeek V3/R1 (0.5 credits) - Great for coding and reasoning
    • GPT-5 Mini (3 credits) - Balanced quality and cost

    Reserve premium models (Claude Opus 4.5, o3) for complex reasoning tasks.

  3. Consider Model Strengths

    If multiple models meet your requirements, consider their unique strengths:

    | Model Family | Key Strengths |
    | --- | --- |
    | OpenAI (GPT-5, o3) | Excellent at following instructions • Consistent output quality • Strong at structured data tasks |
    | Anthropic (Claude) | Superior coding abilities • Nuanced writing and analysis • Great at technical documentation |
    | Google (Gemini) | Massive context windows (1M+ tokens) • Strong reasoning capabilities • Powerful visual understanding |
    | DeepSeek | Extremely cost-effective (0.5 credits) • Excellent for coding tasks • Strong reasoning |
    | Perplexity (Sonar) | Built-in web search • Real-time information • Research-focused |
    | Groq (LLaMa, Kimi) | Very fast responses • Cost-effective • Good for general tasks |
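
To put the context-window numbers from the model tooltip into perspective, here is a rough Python sketch (an illustration only, not a MindPal feature). It uses the same "1,000 tokens is about 750 words" rule of thumb described under Set Maximum Output Length below; real tokenizers vary, so treat the result as an estimate.

```python
# Rough context-window check (illustrative only; actual counts depend on the tokenizer).
def estimated_tokens(text: str) -> int:
    # ~1,000 tokens per 750 words, i.e. roughly 1 token per 0.75 words
    words = len(text.split())
    return round(words / 0.75)

def fits_in_context(input_text: str, context_window: int, reserved_for_output: int = 2_000) -> bool:
    """True if the input plus a reserved budget for the response fits the model's window."""
    return estimated_tokens(input_text) + reserved_for_output <= context_window

document = "word " * 100_000                 # a ~100,000-word input
print(estimated_tokens(document))            # ~133,333 tokens
print(fits_in_context(document, 128_000))    # False -> choose a larger-context model
print(fits_in_context(document, 1_000_000))  # True  -> e.g. a 1M+ token context window
```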

2. Set Maximum Output Length

Control how long your agent's responses can be by setting a maximum number of output tokens.

For your reference, 1,000 tokens is about 750 words.
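
If you think in words rather than tokens, a quick conversion is to divide your target word count by 0.75. The snippet below is illustrative Python, not a MindPal feature:

```python
# Rule-of-thumb conversion: 1,000 tokens is about 750 words.
def max_tokens_for(word_budget: int) -> int:
    return round(word_budget / 0.75)

print(max_tokens_for(750))    # 1000 tokens
print(max_tokens_for(1500))   # 2000 tokens -> a sensible cap for ~1,500-word responses
```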

If you don't set a specific value and use "Auto", the model will adjust the length based on what you're asking it to do.

3. Control Creativity Level

Adjust how creative your agent's responses are by setting the temperature.

The higher the temperature, the more varied and creative the model's responses will be. Good for brainstorming and creative work.

The lower the temperature, the more consistent and predictable the model's responses will be. Good for fact-based tasks.

Pick "Auto" and the model will adjust creativity based on your task.
