Prompts¶
Prompts are the most important component for influencing the performance of an LLM. PlanAI provides a flexible prompting system that helps maintain consistency while allowing customization.
Default Template¶
By default, PlanAI will automatically format the input task into a prompt using this template:
```text
Here is your input data:
{task}
Here are your instructions:
{instructions}
```
The `{task}` placeholder is automatically filled with the formatted input task data, and `{instructions}` is filled with the prompt you provide to the `LLMTaskWorker`.
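To make this concrete, here is a minimal sketch in plain Python of how the default template comes together. This is an illustration only, not PlanAI's actual serializer; the JSON rendering of the task data and the `task_data` fields are assumptions:

```python
import json

# The default template as documented above.
DEFAULT_TEMPLATE = (
    "Here is your input data:\n"
    "{task}\n"
    "Here are your instructions:\n"
    "{instructions}"
)

# A hypothetical input task, shown here as plain JSON data.
task_data = {
    "content": "PlanAI builds graph-based LLM workflows.",
    "content_type": "text",
}

# Fill both placeholders the way PlanAI does conceptually:
# the serialized task goes into {task}, your prompt into {instructions}.
prompt = DEFAULT_TEMPLATE.format(
    task=json.dumps(task_data, indent=2),
    instructions="Summarize the input in one sentence.",
)
print(prompt)
```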
Customizing Prompts¶
There are several ways to customize how prompts are generated:
Basic Prompt Customization

Provide your instructions when creating an `LLMTaskWorker`:

```python
worker = LLMTaskWorker(
    llm=llm,
    prompt="Analyze this text and extract key topics",
    output_types=[TopicsTask],
)
```
Dynamic Prompt Generation

Override `format_prompt()` to generate prompts based on the input task:

```python
class CustomWorker(LLMTaskWorker):
    def format_prompt(self, task: InputTask) -> str:
        return f"Analyze this {task.content_type} and provide a summary"
```
The `format_prompt()` method can also be used to fill template parameters in your prompt:

```python
class StatementAnalyzer(LLMTaskWorker):
    prompt = "Think about whether {statement} is a good idea"

    def format_prompt(self, task: Task) -> str:
        statement = task.find_input(StatementTask)
        return self.prompt.format(statement=statement.statement)
```
Input Pre-processing

Override `pre_process()` to modify how the input task is presented:

```python
def pre_process(self, task: InputTask) -> Optional[Task]:
    # Remove sensitive fields or reformat data
    return ProcessedTask(content=task.filtered_content)
```
The `pre_process()` method serves two important purposes:

- Data transformation: Convert or filter input data before it reaches the LLM
- Input format control: Determine how input data is presented to the LLM
You can use `PydanticDictWrapper` to inject a different Pydantic object:

```python
from planai.utils import PydanticDictWrapper

def pre_process(self, task: Task) -> Optional[Task]:
    custom_data = {
        "filtered": task.content,
        "metadata": task.get_metadata(),
    }
    return PydanticDictWrapper(data=custom_data)
```
If `pre_process()` returns `None`, PlanAI will not provide any input data in the default template. In this case, your class should provide all necessary context through `format_prompt()`:

```python
class FullyCustomPrompt(LLMTaskWorker):
    def pre_process(self, task: Task) -> Optional[Task]:
        # Signal that we'll handle all input formatting
        return None

    def format_prompt(self, task: Task) -> str:
        # Provide complete prompt with all necessary context
        return f"""
System: {task.system_context}
Input: {task.content}
Question: {task.question}
"""
```
XML Serialization

For tasks containing complex text data (like markdown or text with newlines), you can enable XML serialization of the input data:
```python
worker = LLMTaskWorker(
    llm=llm,
    prompt="Analyze this markdown document",
    use_xml=True,
    output_types=[AnalysisTask],
)
```
This will format the input task as XML instead of JSON, which can be easier for the LLM to process when dealing with text that contains newlines or special characters.
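To see why this helps, compare how a multi-line markdown snippet survives the two serializations. This is a standalone illustration using only the standard library; PlanAI's actual XML layout may differ:

```python
import json
from xml.sax.saxutils import escape

markdown = "# Title\n\nA *markdown* body\nspanning several lines."

# JSON must escape newlines, so the LLM sees literal "\n" sequences.
as_json = json.dumps({"content": markdown})

# XML keeps the newlines verbatim inside the element body.
as_xml = f"<content>{escape(markdown)}</content>"

print(as_json)
print(as_xml)
```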
System Prompts¶
You can also customize the system prompt that sets the context for the LLM:
```python
worker = LLMTaskWorker(
    llm=llm,
    prompt="Analyze the text",
    system_prompt="You are an expert analyst specialized in text classification",
    output_types=[AnalysisTask],
)
```
Best Practices¶
- Be specific and clear in your instructions
- Include examples if the task is complex
- Consider using pre-processing to simplify complex input data
- Test different system prompts to find what works best
- Use `format_prompt()` for dynamic instructions based on input