These tools are a bit more complex than the standard LangChain tools. Let’s take a look:

Custom code

This tool lets you execute some custom Python code. For security reasons, only a subset of built-in Python modules is included. For example, you could write a tool that makes the LLM tell a joke about the input:

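Here's a minimal sketch of what such a tool could look like. It assumes the sandboxed langchain.llms module listed below, plus an OpenAI API key available to the execution environment:

# The tool injects the user's text as a local variable called `input`.
from langchain.llms import OpenAI

llm = OpenAI(temperature=0.9)

# The tool requires us to set a local variable called `result` (see below).
result = llm(f"Tell me a joke about {input}")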

Your Python function will have access to a local variable called input. For the tool to work, it MUST set a local variable called result.

The execution environment has access to most of the standard Python modules, including the following:

json.loads
json.dumps

langchain.agents
langchain.llms
langchain.text_splitter
langchain.document_loaders
langchain
langchain.prompts.few_shot
langchain.prompts.prompt

aghq.parse_json - a helper function that attempts to parse a JSON string, automatically fixing any formatting errors
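For example, a custom code tool could use it to clean up JSON produced by the LLM before working with it. This is just a sketch, and it assumes aghq is available in the sandbox without an explicit import:

# `input` might hold JSON the LLM generated, possibly with small
# formatting mistakes that json.loads would reject outright.
data = aghq.parse_json(input)

# Hand back a clean, valid JSON string.
result = json.dumps(data)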

LLM-API Tool

This tool interacts with external APIs, but unlike the Webhooks tool above, which needs to be carefully configured for every API call, this one’s a utility player. Just give it the URL to your API documentation, and the LLM will (should? might? usually does) figure out how to interact with that API.

The tool params require only one attribute: documentation_urls, which is, unsurprisingly, a list of URLs where the API docs live. Remember, the LLM needs to ingest these, so it's better if the docs are plain text, and not too long (you may want to create a GitHub gist with them). Here's a quick example of a gist with a portion of the AGHQ API documented.

So, your tool params look like this:

{
	"documentation_urls": [
		"<https://example.com/the-api-docs-url>",
		"<https://example.com/the-api-docs-url-2>"
	]
}

Alternatively, you can pass the URL of the API’s OpenAPI spec, like this:

{
	"open_api_url": "<https://app.agent-hq.io/api/v1/openapi.yaml>"
}

You can (and probably should) also pass the API’s endpoint URL in the params:

{
	"endpoint_url": "<https://app.agent-hq/api/v1>"
}

This ensures that the tool will ONLY make requests to that endpoint (otherwise you risk the LLM hallucinating some other endpoint and sending requests there instead).
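Putting it all together, a sensible configuration might combine both, something like this (the URLs here are just placeholders):

{
	"documentation_urls": [
		"https://example.com/the-api-docs-url"
	],
	"endpoint_url": "https://example.com/api/v1"
}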