motleycrew.agents.llama_index

Modules

llama_index

llama_index_react

class motleycrew.agents.llama_index.LlamaIndexMotleyAgent(prompt_prefix: str | None = None, description: str | None = None, name: str | None = None, agent_factory: MotleyAgentFactory[AgentRunner] | None = None, tools: Sequence[MotleySupportedTool] | None = None, force_output_handler: bool = False, verbose: bool = False)

Bases: MotleyAgentParent

MotleyCrew wrapper for LlamaIndex agents.

__init__(prompt_prefix: str | None = None, description: str | None = None, name: str | None = None, agent_factory: MotleyAgentFactory[AgentRunner] | None = None, tools: Sequence[MotleySupportedTool] | None = None, force_output_handler: bool = False, verbose: bool = False)
Parameters:
  • prompt_prefix – Prefix to the agent’s prompt. Can be used for providing additional context, such as the agent’s role or backstory.

  • description

    Description of the agent.

    Unlike the prompt prefix, it is not included in the prompt. The description is only used for describing the agent’s purpose when giving it as a tool to other agents.

  • name

    Name of the agent. The name is used for identifying the agent when it is given as a tool to other agents, as well as for logging purposes.

    It is not included in the agent’s prompt.

  • agent_factory

    Factory function to create the agent. The factory function should accept a dictionary of tools and return an AgentRunner instance (see the sketch after this parameter list).

    See motleycrew.common.types.MotleyAgentFactory for more details.

    Alternatively, you can use the from_agent() method to wrap an existing AgentRunner.

  • tools – Tools to add to the agent.

  • force_output_handler – Whether to force the agent to return through an output handler. If True, at least one tool must have return_direct set to True.

  • verbose – Whether to log verbose output.
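
A minimal construction sketch using an agent factory. The to_llama_index_tool() conversion, the contents of the tools dictionary, and the model name are assumptions based on common MotleyCrew usage, not guaranteed by this page:

    from llama_index.core.agent import AgentRunner, ReActAgent
    from llama_index.llms.openai import OpenAI

    from motleycrew.agents.llama_index import LlamaIndexMotleyAgent


    def agent_factory(tools: dict) -> AgentRunner:
        # The factory receives the agent's tools as a dictionary and must return
        # an AgentRunner. Converting each value via to_llama_index_tool() is an
        # assumption; check motleycrew.tools for the exact conversion helper.
        llama_index_tools = [tool.to_llama_index_tool() for tool in tools.values()]
        return ReActAgent.from_tools(
            tools=llama_index_tools,
            llm=OpenAI(model="gpt-4o"),  # placeholder model name
            verbose=True,
        )


    agent = LlamaIndexMotleyAgent(
        prompt_prefix="You are a helpful research assistant.",
        description="Answers research questions",
        name="researcher",
        agent_factory=agent_factory,
        verbose=True,
    )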

materialize()

Materialize the agent by creating the agent instance using the agent factory. This method should be called before invoking the agent for the first time.

invoke(input: dict, config: RunnableConfig | None = None, **kwargs: Any) → Any

Transform a single input into an output. Override to implement.

Parameters:
  • input – The input to the Runnable.

  • config – A config to use when invoking the Runnable. The config supports standard keys like ‘tags’, ‘metadata’ for tracing purposes, ‘max_concurrency’ for controlling how much work to do in parallel, and other keys. Please refer to the RunnableConfig for more details.

Returns:

The output of the Runnable.
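
A usage sketch, assuming the input dictionary carries a "prompt" key (the key name follows other MotleyCrew examples and is not stated on this page):

    # Create the underlying AgentRunner before the first call, as described above.
    agent.materialize()

    # Optional RunnableConfig keys such as "tags" can be passed for tracing.
    result = agent.invoke(
        {"prompt": "Summarize the pros and cons of ReAct-style agents."},
        config={"tags": ["llama-index-demo"]},
    )
    print(result)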

async ainvoke(input: dict, config: RunnableConfig | None = None, **kwargs: Any) → Any

Default implementation of ainvoke; it calls invoke from a thread.

The default implementation allows usage of async code even if the Runnable did not implement a native async version of invoke.

Subclasses should override this method if they can run asynchronously.
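
An async sketch reusing the agent from the previous example; because the default ainvoke runs invoke in a worker thread, it can be awaited even though the wrapped agent is synchronous:

    import asyncio


    async def main() -> None:
        result = await agent.ainvoke({"prompt": "What is ReAct prompting?"})
        print(result)


    asyncio.run(main())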

static from_agent(agent: AgentRunner, description: str | None = None, prompt_prefix: str | None = None, tools: Sequence[MotleySupportedTool] | None = None, verbose: bool = False) → LlamaIndexMotleyAgent

Create a LlamaIndexMotleyAgent from a llama_index.core.agent.AgentRunner instance.

Using this method, you can wrap an existing AgentRunner without providing a factory function.

Parameters:
  • agent – AgentRunner instance to wrap.

  • prompt_prefix – Prefix to the agent’s prompt. Can be used for providing additional context, such as the agent’s role or backstory.

  • description

    Description of the agent.

    Unlike the prompt prefix, it is not included in the prompt. The description is only used for describing the agent’s purpose when giving it as a tool to other agents.

  • tools – Tools to add to the agent.

  • verbose – Whether to log verbose output.
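
A sketch of wrapping an existing AgentRunner without a factory function (the inner agent, its empty tool list, and the model name are placeholders):

    from llama_index.core.agent import ReActAgent
    from llama_index.llms.openai import OpenAI

    from motleycrew.agents.llama_index import LlamaIndexMotleyAgent

    # Build a plain LlamaIndex agent first.
    inner_agent = ReActAgent.from_tools(tools=[], llm=OpenAI(model="gpt-4o"), verbose=True)

    # Wrap it so it can be used wherever MotleyCrew expects an agent.
    agent = LlamaIndexMotleyAgent.from_agent(
        agent=inner_agent,
        description="Answers general questions",
        verbose=True,
    )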

class motleycrew.agents.llama_index.ReActLlamaIndexMotleyAgent(prompt_prefix: str | None = None, description: str | None = None, name: str | None = None, tools: Sequence[MotleySupportedTool] | None = None, force_output_handler: bool = False, llm: LLM | None = None, verbose: bool = False, max_iterations: int = 10)

Bases: LlamaIndexMotleyAgent

Wrapper for the LlamaIndex implementation of the ReAct agent.

__init__(prompt_prefix: str | None = None, description: str | None = None, name: str | None = None, tools: Sequence[MotleySupportedTool] | None = None, force_output_handler: bool = False, llm: LLM | None = None, verbose: bool = False, max_iterations: int = 10)
Parameters:
  • prompt_prefix – Prefix to the agent’s prompt. Can be used for providing additional context, such as the agent’s role or backstory.

  • description

    Description of the agent.

    Unlike the prompt prefix, it is not included in the prompt. The description is only used for describing the agent’s purpose when giving it as a tool to other agents.

  • name

    Name of the agent. The name is used for identifying the agent when it is given as a tool to other agents, as well as for logging purposes.

    It is not included in the agent’s prompt.

  • tools – Tools to add to the agent.

  • force_output_handler – Whether to force the agent to return through an output handler. If True, at least one tool must have return_direct set to True.

  • llm – LLM instance to use.

  • verbose – Whether to log verbose output.

  • max_iterations – Maximum number of iterations for the agent. Passed on to the max_iterations parameter of the ReActAgent.
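
A construction sketch for the ReAct wrapper. Passing a LangChain tool directly and using a "prompt" input key follow common MotleyCrew examples and are assumptions here; the model name is a placeholder:

    from langchain_community.tools import DuckDuckGoSearchRun
    from llama_index.llms.openai import OpenAI

    from motleycrew.agents.llama_index import ReActLlamaIndexMotleyAgent

    agent = ReActLlamaIndexMotleyAgent(
        prompt_prefix="You are a meticulous fact checker.",
        description="Verifies claims using web search",
        name="fact_checker",
        tools=[DuckDuckGoSearchRun()],
        llm=OpenAI(model="gpt-4o"),
        max_iterations=15,
        verbose=True,
    )

    result = agent.invoke({"prompt": "When was the first LlamaIndex release published?"})
    print(result)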