
5-Minute Quickstart

Get your first AI agent running in less than 5 minutes. No complex setup, no configuration files, just Python.

Installation

pip install agenticraft

That's it. No additional dependencies to manually install.
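
To confirm the install worked, try importing the package from the command line:

python -c "import agenticraft; print('AgentiCraft is installed')"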

Your First Agent

Step 1: Set Your API Key

export OPENAI_API_KEY="your-key-here"

Or create a .env file:

OPENAI_API_KEY=your-key-here
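
If you use a .env file and AgentiCraft does not pick it up automatically, you can load it yourself. The sketch below assumes the third-party python-dotenv package; skip it if the key is already exported in your shell.

# Requires python-dotenv: pip install python-dotenv
from dotenv import load_dotenv

load_dotenv()  # loads OPENAI_API_KEY from .env into the process environment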

Step 2: Create Your Agent

Create a file called hello_agent.py:

from agenticraft import Agent

# Create a simple agent
agent = Agent(
    name="Assistant",
    instructions="You are a helpful AI assistant."
)

# Run the agent
response = agent.run("Tell me a fun fact about Python")
print(response.content)

Step 3: Run It

python hello_agent.py

Congratulations! 🎉 You've just created your first AI agent.

Adding Capabilities with Handlers

Let's make your agent more capable by adding handler functions:

import asyncio

from agenticraft import WorkflowAgent

# Define handler functions for capabilities
def calculate_handler(agent, step, context):
    """Handler for mathematical calculations."""
    expression = context.get("expression", "")
    try:
        result = eval(expression, {"__builtins__": {}}, {})
        context["result"] = result
        return f"Calculated: {expression} = {result}"
    except Exception as e:
        return f"Calculation error: {e}"

def get_time_handler(agent, step, context):
    """Handler to get current time."""
    from datetime import datetime
    current_time = datetime.now().strftime("%I:%M %p")
    context["time"] = current_time
    return f"Current time: {current_time}"

# Create a workflow agent with handlers
agent = WorkflowAgent(
    name="SmartAssistant",
    instructions="You are a helpful assistant that can calculate and tell time."
)

# Register handlers
agent.register_handler("calculate", calculate_handler)
agent.register_handler("get_time", get_time_handler)

# Create and run a workflow
workflow = agent.create_workflow("assist")
workflow.add_step(name="calc", handler="calculate")
workflow.add_step(name="time", handler="get_time")

# Execute with context (execute_workflow is async, so drive it with asyncio)
context = {"expression": "15 * 0.847"}
result = asyncio.run(agent.execute_workflow(workflow, context=context))
print(result)

Understanding Agent Reasoning

One of AgentiCraft's core features is transparent reasoning:

response = agent.run("Help me plan a birthday party for 20 people")

# See what the agent is thinking
print("=== Agent's Reasoning ===")
print(response.reasoning)

print("\n=== Final Response ===")
print(response.content)

Creating a Simple Workflow

Chain multiple agents together:

import asyncio

from agenticraft import Agent, Workflow, Step

# Create specialized agents
researcher = Agent(
    name="Researcher",
    instructions="You research topics thoroughly and provide detailed information."
)

writer = Agent(
    name="Writer", 
    instructions="You write engaging content based on research."
)

# Create a workflow
workflow = Workflow(name="content_creation")

# Add steps - no complex graphs needed!
workflow.add_steps([
    Step("research", agent=researcher, inputs=["topic"]),
    Step("write", agent=writer, depends_on=["research"])
])

# Run the workflow (run is async, so drive it with asyncio)
result = asyncio.run(workflow.run(topic="The future of AI agents"))
print(result["write"])

Memory for Conversational Agents

Make your agents remember context:

from agenticraft import Agent, ConversationMemory

agent = Agent(
    name="ChatBot",
    instructions="You are a friendly conversational AI.",
    memory=[ConversationMemory(max_turns=10)]
)

# First interaction
response1 = agent.run("My name is Alice")
print(response1.content)

# The agent remembers!
response2 = agent.run("What's my name?")
print(response2.content)  # Will correctly recall "Alice"

Using Different LLM Providers

AgentiCraft supports multiple providers:

# OpenAI (default)
agent = Agent(name="GPT4", model="gpt-4")

# Anthropic Claude
agent = Agent(name="Claude", model="claude-3-opus", api_key="anthropic-key")

# Google Gemini
agent = Agent(name="Gemini", model="gemini-pro", api_key="google-key")

# Local Ollama
agent = Agent(name="Local", model="ollama/llama2", base_url="http://localhost:11434")

Next Steps

You've learned the basics! From here you can learn more about the framework, see more complete examples, or read up on making your agents production ready.

Quick Tips

Environment Variables

Create a .env file in your project root to manage API keys:

OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...

Async Support

All agent operations support async/await:

response = await agent.arun("Your prompt")
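
Since await only works inside a coroutine, a complete script looks something like this minimal sketch, reusing the arun method shown above:

import asyncio

from agenticraft import Agent

async def main():
    agent = Agent(
        name="Assistant",
        instructions="You are a helpful AI assistant."
    )
    # arun is the async counterpart of run
    response = await agent.arun("Tell me a fun fact about Python")
    print(response.content)

asyncio.run(main())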

Error Handling

AgentiCraft provides clear error messages:

# Assumption: AgentError is exposed at the package top level
from agenticraft import AgentError

try:
    response = agent.run("Do something")
except AgentError as e:
    print(f"Agent error: {e}")

Ready for more? Check out our comprehensive examples or dive into the API reference.