CLI Overview

PlanAI provides a powerful command-line interface for monitoring workflows, optimizing prompts, and managing your AI automation tasks.

The CLI is automatically available after installing PlanAI:

pip install planai

Verify installation:

planai --help

The following global options apply to all commands:

planai [global-options] <command> [command-options]

Configure the LLM provider for commands that use AI:

# Specify provider and model
planai --llm-provider openai --llm-model gpt-4 <command>
# Use a different model for reasoning tasks
planai --llm-provider openai --llm-model gpt-4o-mini --llm-reason-model gpt-4 <command>
# Use local Ollama models
planai --llm-provider ollama --llm-model llama2 <command>

Set defaults using environment variables:

export PLANAI_LLM_PROVIDER=openai
export PLANAI_LLM_MODEL=gpt-4
export OPENAI_API_KEY=your-api-key
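The same variables can be read from your own scripts. A minimal sketch using only the standard library — the fallback values here are illustrative, not PlanAI's built-in defaults:

```python
import os

# Resolve LLM settings from the environment, falling back to
# explicit defaults when the variables are unset.
provider = os.environ.get("PLANAI_LLM_PROVIDER", "openai")
model = os.environ.get("PLANAI_LLM_MODEL", "gpt-4")

print(f"Using {provider} / {model}")
```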

Examine the PlanAI cache:

# Inspect the cached tasks
planai cache ./cache
# Filter the cache by output task type
planai cache --output-task-filter PageResult ./cache

Options:

  • --clear: Clear the cache
  • --output-task-filter: Show only entries whose output task matches the given type

Automatically optimize prompts using AI and production data:

planai --llm-provider openai --llm-model gpt-4o-mini --llm-reason-model gpt-4 \
optimize-prompt \
--python-file app.py \
--class-name MyLLMWorker \
--search-path . \
--debug-log debug/MyLLMWorker.json \
--goal-prompt "Improve accuracy while reducing token usage"

Required arguments:

  • --python-file: Python file containing the LLMTaskWorker
  • --class-name: Name of the LLMTaskWorker class to optimize
  • --search-path: Python path for imports
  • --debug-log: Debug log file with production data
  • --goal-prompt: Optimization goal description

Optional arguments:

  • --num-iterations: Number of optimization iterations (default: 3)
  • --output-dir: Directory for optimized prompts (default: current directory)
  • --max-samples: Maximum debug samples to use (default: all)

See the Prompt Optimization guide for detailed usage.

Display PlanAI version information:

planai version

Output:

PlanAI version 0.6

Get help for any command:

# General help
planai --help
# Command-specific help
planai optimize-prompt --help
planai cache --help

During development, use the terminal dashboard, which is enabled by default, to track execution:

# Run your workflow and watch the terminal output
python my_workflow.py

Alternatively, you can pass run_dashboard=True to the Graph run or prepare method. By default, this creates a web-based dashboard on port 5000.
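The option above can be sketched as follows. Only run_dashboard=True comes from the documentation; the graph name and the rest of the setup are placeholders, and the exact Graph constructor and run arguments may differ in your PlanAI version:

```python
from planai import Graph

# Illustrative sketch: enable the web dashboard for a graph run.
# "Example" and the omitted worker wiring are placeholders.
graph = Graph(name="Example")
# ... add and connect your workers here ...

# Serves the web-based dashboard (port 5000 by default) while the graph runs.
graph.run(run_dashboard=True)
```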

  1. Enable debug mode in your LLMTaskWorker:

     class MyWorker(LLMTaskWorker):
         debug_mode = True  # Generates debug logs

  2. Run your workflow to collect data

  3. Optimize the prompt:

planai --llm-provider openai --llm-model gpt-4o-mini \
optimize-prompt \
--python-file my_worker.py \
--class-name MyWorker \
--debug-log debug/MyWorker.json \
--goal-prompt "Improve response quality"