# CLI Overview
PlanAI provides a powerful command-line interface for monitoring workflows, optimizing prompts, and managing your AI automation tasks.
## Installation
The CLI is automatically available after installing PlanAI:
```bash
pip install planai
```

Verify the installation:
```bash
planai --help
```

## Global Options
These options are available for all commands:
```bash
planai [global-options] <command> [command-options]
```
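The global-options-before-subcommand ordering shown above is the standard pattern for CLIs built on Python's `argparse`. As an illustration only (this is not PlanAI's actual implementation), a minimal sketch of that structure might look like:

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    # Global options attach to the top-level parser, so they must appear
    # before the subcommand, mirroring `planai [global-options] <command>`.
    parser = argparse.ArgumentParser(prog="planai")
    parser.add_argument("--llm-provider")
    parser.add_argument("--llm-model")
    parser.add_argument("--llm-reason-model")

    subparsers = parser.add_subparsers(dest="command")

    # Each subcommand gets its own options, e.g. `planai cache ./cache`.
    cache = subparsers.add_parser("cache")
    cache.add_argument("path")
    cache.add_argument("--clear", action="store_true")
    cache.add_argument("--output-task-filter")

    subparsers.add_parser("version")
    return parser


args = build_parser().parse_args(
    ["--llm-provider", "openai", "--llm-model", "gpt-4", "cache", "./cache"]
)
print(args.command, args.llm_provider, args.path)  # cache openai ./cache
```

Because the global flags belong to the top-level parser, writing them after the subcommand would be rejected, which matches the syntax documented above.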
### LLM Configuration

Configure the LLM provider for commands that use AI:
```bash
# Specify provider and model
planai --llm-provider openai --llm-model gpt-4 <command>

# Use a different model for reasoning tasks
planai --llm-provider openai --llm-model gpt-4 --llm-reason-model gpt-4 <command>

# Use local Ollama models
planai --llm-provider ollama --llm-model llama2 <command>
```
### Environment Variables

Set defaults using environment variables:
```bash
export PLANAI_LLM_PROVIDER=openai
export PLANAI_LLM_MODEL=gpt-4
export OPENAI_API_KEY=your-api-key
```
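Environment variables act as defaults, so an explicit command-line flag should take precedence over them. A minimal sketch of that resolution order (illustrative only, not PlanAI's code):

```python
import os


def resolve_option(cli_value, env_var, default=None):
    """CLI flag wins; otherwise fall back to the environment, then a default."""
    if cli_value is not None:
        return cli_value
    return os.environ.get(env_var, default)


# Simulate `export PLANAI_LLM_PROVIDER=openai` with no --llm-provider flag.
os.environ["PLANAI_LLM_PROVIDER"] = "openai"
provider = resolve_option(None, "PLANAI_LLM_PROVIDER")

# An explicit flag value overrides both the environment and the default.
model = resolve_option("llama2", "PLANAI_LLM_MODEL", default="gpt-4")
print(provider, model)  # openai llama2
```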
## Available Commands

### cache

Examine the PlanAI cache:
```bash
# Check out the cached tasks
planai cache ./cache

# Filter the cache based on the output task
planai cache --output-task-filter PageResult ./cache
```

Options:
- `--clear`: Clear the cache
- `--output-task-filter`: Filter the output based on the corresponding output task
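Conceptually, `--output-task-filter` keeps only cache entries whose output task matches the given name. The toy sketch below illustrates that filtering over a hypothetical list of entries; the real cache format is internal to PlanAI and may look quite different:

```python
def filter_by_output_task(entries, task_name=None):
    """Keep entries whose 'output_task' matches task_name; no filter keeps all."""
    if task_name is None:
        return list(entries)
    return [e for e in entries if e.get("output_task") == task_name]


# Hypothetical cache entries, for illustration only.
entries = [
    {"output_task": "PageResult", "input": "https://example.com"},
    {"output_task": "Summary", "input": "notes.txt"},
]
matched = filter_by_output_task(entries, "PageResult")
print(len(matched))  # 1
```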
### optimize-prompt
Automatically optimize prompts using AI and production data:
```bash
planai --llm-provider openai --llm-model gpt-4o-mini --llm-reason-model gpt-4 \
  optimize-prompt \
  --python-file app.py \
  --class-name MyLLMWorker \
  --search-path . \
  --debug-log debug/MyLLMWorker.json \
  --goal-prompt "Improve accuracy while reducing token usage"
```

Required arguments:
- `--python-file`: Python file containing the LLMTaskWorker
- `--class-name`: Name of the LLMTaskWorker class to optimize
- `--search-path`: Python path for imports
- `--debug-log`: Debug log file with production data
- `--goal-prompt`: Optimization goal description
Optional arguments:
- `--num-iterations`: Number of optimization iterations (default: 3)
- `--output-dir`: Directory for optimized prompts (default: current directory)
- `--max-samples`: Maximum number of debug samples to use (default: all)
See the Prompt Optimization guide for detailed usage.
### version
Display PlanAI version information:
```bash
planai version
```

Output:
```
PlanAI version 0.6.1
```

Get help for any command:
```bash
# General help
planai --help

# Command-specific help
planai optimize-prompt --help
planai cache --help
```
## Common Workflows

### Development Workflow
During development, use the terminal dashboard, which is enabled by default, to track execution:
```bash
# Run your workflow and watch the terminal output
python my_workflow.py
```

Alternatively, you can pass `run_dashboard=True` to the Graph `run` or `prepare` method.
By default, this creates a web-based dashboard on port 5000.
### Prompt Optimization Workflow
1. Enable debug mode in your LLMTaskWorker:
   ```python
   class MyWorker(LLMTaskWorker):
       debug_mode = True  # Generates debug logs
   ```
2. Run your workflow to collect data.

3. Optimize the prompt:
   ```bash
   planai --llm-provider openai --llm-model gpt-4o-mini \
     optimize-prompt \
     --python-file my_worker.py \
     --class-name MyWorker \
     --debug-log debug/MyWorker.json \
     --goal-prompt "Improve response quality"
   ```
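Before optimizing, it can help to sanity-check how many debug samples a log provides, since `--max-samples` caps how many are used. The sketch below assumes a hypothetical one-JSON-record-per-line log format; PlanAI's actual debug-log schema is not documented here and may differ:

```python
import json


def load_samples(text, max_samples=None):
    """Parse one JSON record per line and optionally cap the sample count,
    analogous to the --max-samples option (log format is an assumption)."""
    records = [json.loads(line) for line in text.splitlines() if line.strip()]
    return records[:max_samples] if max_samples is not None else records


# Stand-in for the contents of a debug log file (structure hypothetical).
log_text = "\n".join(
    json.dumps({"prompt": f"p{i}", "response": f"r{i}"}) for i in range(5)
)
samples = load_samples(log_text, max_samples=3)
print(len(samples))  # 3
```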
## Next Steps

- Learn about Prompt Optimization in detail
- Explore Monitoring capabilities
- See Examples using the CLI