Instruction Generators
- class afterimage.SimpleInstructionGeneratorCallback(api_key: str | SmartKeyPool, prompt: str | None = None, model_name: str | None = None, model_provider_name: Literal['gemini', 'openai', 'deepseek', 'local', 'openrouter'] = 'gemini', n_instructions: int = 3, safety_settings: dict | None = None, monitor: GenerationMonitor | None = None, llm_create_extras: dict[str, Any] | None = None)[source]
Bases: LLMBackedInstructionGeneratorCallback

Generates instructions from the correspondent prompt only (no document context).
Omits a provider attribute, so SamplingStrategy does not treat this callback as document-backed.
- Parameters:
api_key – API key for the generative AI service.
prompt – System instruction for instruction generation. If None, uses the default.
model_name – Model name to use.
model_provider_name – Model provider name to use.
n_instructions – Number of instructions to generate in each round.
safety_settings – Safety settings for the model (mainly for Gemini).
monitor – Optional GenerationMonitor.
- async agenerate(original_prompt)[source]
Async variant of generate().
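As a rough, self-contained sketch of the contract this callback fulfills (the class, the fake model call, and all names below are illustrative stand-ins, not the real afterimage implementation):

```python
import asyncio


def fake_llm(system_prompt: str, user_prompt: str, n: int) -> list[str]:
    # Stand-in for the real model call; the actual callback delegates to
    # the configured provider (Gemini, OpenAI, DeepSeek, ...).
    return [f"Instruction {i + 1} for: {user_prompt}" for i in range(n)]


class SimpleGeneratorSketch:
    """Prompt-only generation: no documents are sampled, so there is no
    provider attribute for a sampling strategy to detect."""

    def __init__(self, prompt: str = "Generate diverse user instructions.",
                 n_instructions: int = 3):
        self.prompt = prompt
        self.n_instructions = n_instructions

    def generate(self, original_prompt: str) -> list[str]:
        return fake_llm(self.prompt, original_prompt, self.n_instructions)

    async def agenerate(self, original_prompt: str) -> list[str]:
        # Async variant of generate(); here it simply wraps the sync path.
        return self.generate(original_prompt)
```

The real callback returns a richer result object and calls an actual LLM; this sketch only mirrors the generate/agenerate shape.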
- class afterimage.ContextualInstructionGeneratorCallback(api_key: str | SmartKeyPool, documents: list[str] | DocumentProvider, prompt: str | None = None, model_name: str | None = None, model_provider_name: Literal['gemini', 'openai', 'deepseek', 'local', 'openrouter'] = 'gemini', num_random_contexts: int = 1, n_instructions: int = 3, separator_text: str = '\n--------------------------------------------------------------------------------\n\n', safety_settings: dict | None = None, monitor: GenerationMonitor | None = None, llm_create_extras: dict[str, Any] | None = None)[source]
Bases: LLMBackedInstructionGeneratorCallback

Generates instructions based on randomly sampled contexts.
- Parameters:
api_key – API key for the generative AI service.
documents – Either a list of texts or a DocumentProvider instance providing context to ground the instructions. For each round of generation num_random_contexts documents are sampled from this collection.
prompt – Prompt that guides the instruction generation. If None, uses the default instruction generation prompt.
model_name – Model name to use.
model_provider_name – Model provider name to use.
num_random_contexts – Number of contexts to sample for each round of generation.
n_instructions – Number of instructions to generate in each round of generation.
separator_text – Separator text for merging contexts if more than one context is sampled.
safety_settings – Safety settings for the model. Mainly intended for Gemini models. Deprecated and may be removed in the future.
monitor – GenerationMonitor instance to use for tracking. If None, Conversation and/or structured generators will set their own monitor.
- class afterimage.PersonaInstructionGeneratorCallback(api_key: str | SmartKeyPool, documents: list[str] | DocumentProvider, prompt: str | None = None, model_name: str | None = None, model_provider_name: Literal['gemini', 'openai', 'deepseek', 'local', 'openrouter'] = 'gemini', num_random_contexts: int = 1, n_instructions: int = 3, separator_text: str = '\n--------------------------------------------------------------------------------\n\n', safety_settings: dict | None = None, monitor: GenerationMonitor | None = None, llm_create_extras: dict[str, Any] | None = None)[source]
Bases: ContextualInstructionGeneratorCallback

Generates instructions based on randomly sampled contexts and personas.
- It works much like ContextualInstructionGeneratorCallback, but also samples a persona from the sampled documents.
This usually results in more diverse yet still contextually relevant instructions.
- Parameters:
api_key – API key for the generative AI service.
documents – Either a list of texts or a DocumentProvider instance providing context to ground the instructions. For each round of generation num_random_contexts documents are sampled from this collection.
prompt – Prompt that guides the instruction generation. If None, uses the default instruction generation prompt.
model_name – Model name to use.
model_provider_name – Model provider name to use.
num_random_contexts – Number of contexts to sample for each round of generation.
n_instructions – Number of instructions to generate in each round of generation.
separator_text – Separator text for merging contexts if more than one context is sampled.
safety_settings – Safety settings for the model. Mainly intended for Gemini models. Deprecated and may be removed in the future.
monitor – GenerationMonitor instance to use for tracking. If None, Conversation and/or structured generators will set their own monitor.
- async agenerate(original_prompt)[source]
Generates instructions based on the provided prompt, sampled context and persona asynchronously.
- generate(original_prompt)[source]
Generates instructions based on the provided prompt, sampled context and persona.
- Parameters:
original_prompt (str) – The prompt guiding instruction generation.
- Returns:
The instructions generated along with the context and persona used.
- Return type:
GeneratedInstructions
- class afterimage.ToolCallingInstructionGeneratorCallback(api_key: str | SmartKeyPool, tools: List[dict | Type[BaseModel]], documents: list[str] | DocumentProvider, prompt: str | None = None, model_name: str | None = None, model_provider_name: Literal['gemini', 'openai', 'deepseek', 'local', 'openrouter'] = 'gemini', num_random_contexts: int = 1, n_instructions: int = 3, num_tools_to_sample: int = 2, separator_text: str = '\n--------------------------------------------------------------------------------\n\n', safety_settings: dict | None = None, monitor: GenerationMonitor | None = None, llm_create_extras: dict[str, Any] | None = None)[source]
Bases: PersonaInstructionGeneratorCallback

Generates instructions that specifically require calling the provided tools, optionally using personas.
- Parameters:
api_key – API key for the generative AI service.
tools – List of tools to use. Each item of this list should be an OpenAI-style tool description as a dictionary or a Pydantic model.
documents – Either a list of texts or a DocumentProvider instance providing context to ground the instructions. For each round of generation num_random_contexts documents are sampled from this collection.
prompt – Prompt that guides the instruction generation. If None, uses the default instruction generation prompt.
model_name – Model name to use.
model_provider_name – Model provider name to use.
num_random_contexts – Number of contexts to sample for each round of generation.
n_instructions – Number of instructions to generate in each round of generation.
num_tools_to_sample – Number of tools to sample as the targets for each round of generation.
separator_text – Separator text for merging contexts if more than one context is sampled.
safety_settings – Safety settings for the model. Mainly intended for Gemini models. Deprecated and may be removed in the future.
monitor – GenerationMonitor instance to use for tracking. If None, Conversation and/or structured generators will set their own monitor.
- async acreate_correspondent_prompt(respondent_prompt: str) str[source]
Create a correspondent prompt based on the respondent prompt asynchronously.
This method can be overridden by subclasses to customize correspondent prompt creation. By default, returns None, which means the conversation generator should handle it.
- Parameters:
respondent_prompt – The prompt for the respondent (assistant).
- Returns:
The correspondent prompt, or None if the generator should handle it.
- Return type:
str | None
- async agenerate(original_prompt)[source]
Generates instructions that require tool calls asynchronously.
- create_correspondent_prompt(_respondent_prompt: str) str[source]
Create a correspondent prompt based on the respondent prompt.
This method can be overridden by subclasses to customize correspondent prompt creation. By default, returns None, which means the conversation generator should handle it.
- Parameters:
respondent_prompt – The prompt for the respondent (assistant).
- Returns:
The correspondent prompt, or None if the generator should handle it.
- Return type:
str | None