
Welcome to Banks

Banks is the linguist professor who will help you generate meaningful LLM prompts using a template language that makes sense.

Prompts are instrumental to the success of any LLM application, and Banks focuses on specific areas of their lifecycle:

  • 📙 Templating: Banks provides tools and functions to build prompt text and chat messages from generic blueprints.
  • 🎟 Versioning and metadata: Banks supports attaching metadata to prompts to ease their management, and versioning is a first-class citizen.
  • 🗄 Management: Banks provides ways to store prompts on disk along with their metadata.

Banks is fundamentally Jinja2 with additional functionality specifically designed to work with Large Language Model prompts. Like other template languages, Banks takes as input a generic piece of text called a template and gives you back its rendered version, where the generic bits are replaced by actual data provided by the user, in a form that's suitable for sending to an LLM, such as plain text or chat messages.
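Since Banks builds on Jinja2, the template-to-rendered-prompt flow can be sketched with plain Jinja2 (a minimal illustration of the rendering idea, not Banks' own API; the template string and variables are invented for the example):

```python
from jinja2 import Environment

# A template with generic bits ({{ ... }}) to be filled in at render time
env = Environment()
template = env.from_string("Write a {{ tone }} blog post about {{ topic }}.")

# Rendering replaces the placeholders with actual data
prompt = template.render(tone="friendly", topic="space exploration")
print(prompt)  # Write a friendly blog post about space exploration.
```

The rendered string is then ready to be sent to an LLM as-is.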

Features

Banks currently supports all the features from Jinja2 along with some additions specifically designed to help developers with LLM prompts:

  • Filters: useful to manipulate the prompt text during template rendering.
  • Extensions: useful to support custom functions (e.g. text generation via LiteLLM).
  • Macros: useful to implement complex logic in the template itself instead of Python code.
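To give a feel for what a filter does at the Jinja2 level, here is a sketch with a custom filter (the `shout` filter is invented for this example and is not one of Banks' built-in filters):

```python
from jinja2 import Environment

env = Environment()
# Register a custom filter that manipulates the prompt text during rendering
env.filters["shout"] = lambda s: s.upper() + "!"

template = env.from_string("{{ instruction | shout }} Keep the answer short.")
print(template.render(instruction="answer in json"))
# ANSWER IN JSON! Keep the answer short.
```

Banks' own filters plug into template rendering in the same way.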


Installation

Install the latest version of Banks using pip:

pip install banks

Some features require additional dependencies that must be installed manually:

  • pip install simplemma is required by the lemmatize filter


Security

Banks uses Jinja2 to render prompt templates through a sandboxed environment to help reduce server-side template injection (SSTI) risk.

However, do not pass untrusted user input as template text. User-controlled templates are still unsafe, and the sandbox is not a guaranteed security boundary. The following pattern is unsafe because it allows users to control the template itself:

# UNSAFE: never pass user-controlled strings as the template
from banks import Prompt

user_input = request.json["template"]  # e.g. an incoming web request
p = Prompt(user_input)
result = p.text()

If your application lets users influence the content of prompts, use template variables instead:

# SAFE: user input goes into the rendering context, not the template
from banks import Prompt

p = Prompt("Write a blog post about {{ topic }}.")
result = p.text({"topic": user_input})

License

banks is distributed under the terms of the MIT license.