Chains

ZBrain Flow provides the Chain component for combining multiple components into a single, coherent application. Different types of chains allow for different levels of complexity.

LLMChain

The LLMChain is a straightforward chain that adds functionality around language models. An LLMChain consists of a PromptTemplate, a memory store, and a language model (either an LLM or a chat model).

Parameters:

  • LLM: Language Model to use.

  • Memory: Default memory store.

  • Prompt: Prompt template to use in the chain. Prompt variables can be created with any chosen name inside curly brackets, e.g. {variable_name}.

  • output_key: Specifies which key in the LLM output dictionary should be returned as the final output; defaults to text. Note that the LLMChain returns both the input and the output key values.
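
For illustration, here is a minimal sketch of an equivalent chain using LangChain's Python API, which ZBrain Flow's chain components mirror; the model, prompt, and input values are assumptions chosen for this example, and an OPENAI_API_KEY is assumed to be set.

```python
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# {product} is a prompt variable created inside curly brackets
prompt = PromptTemplate(
    input_variables=["product"],
    template="Suggest a good name for a company that makes {product}.",
)

chain = LLMChain(llm=OpenAI(temperature=0.7), prompt=prompt, output_key="text")

result = chain({"product": "eco-friendly water bottles"})
# The result dictionary holds both the input and the output key values
print(result["text"])
```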

CombineDocsChain

The CombineDocsChain is the core chain for working with documents. These chains include strategies for aggregating loaded documents to perform tasks such as document summarization, responding to questions based on documents, extracting information from documents, and more.

Parameters:

  • LLM: Language Model to use.

  • chain_type: Each chain type applies a different combination strategy.

    • stuff: The most straightforward type; it inserts all of the documents into a single prompt and passes that prompt to an LLM. This chain is ideal for scenarios where the documents are small and only a few are provided as input.

    • map_reduce: The map-reduce documents chain operates in two steps. First, it applies an LLM chain to each document individually (the "Map" step), and the output for each document is treated as a new document. Next, all these new documents are passed to a separate combine-documents chain to produce a single output (the "Reduce" step). The chain may also compress or condense the mapped documents so they fit into the combine-documents chain, which might itself pass them to an LLM; if needed, this compression step can be executed recursively.

    • map_rerank: The map re-rank documents chain runs an initial prompt on each document that not only tries to complete the task but also gives a score for how certain it is of its answer. The highest-scoring response is returned.

    • refine: The refine documents chain constructs a response by looping over the input documents and iteratively updating its answer. For each document, it passes all non-document inputs, the current document, and the latest intermediate answer to an LLM chain to obtain a new answer.
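
As a rough illustration of chain_type, the sketch below assumes LangChain's load_qa_chain helper, which builds a combine-documents chain of the chosen type; the documents and question are invented for the example.

```python
from langchain.llms import OpenAI
from langchain.docstore.document import Document
from langchain.chains.question_answering import load_qa_chain

docs = [
    Document(page_content="ZBrain Flow provides a Chain component."),
    Document(page_content="Chains combine prompts, models, and memory."),
]

# chain_type may be "stuff", "map_reduce", "map_rerank", or "refine"
chain = load_qa_chain(OpenAI(temperature=0), chain_type="stuff")
print(chain.run(input_documents=docs, question="What does the Chain component provide?"))
```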

ConversationalChain

The ConversationalChain is a simple chain designed for interactive conversations with a language model, making it well suited for chatbots and virtual assistants. It facilitates dynamic conversations, Q&A, and intricate dialogues.

Parameters:

  • LLM: Language Model to use in the chain.

  • Memory: Default memory store.

  • input_key: Used to specify the key under which the user input will be stored in the conversation memory. It allows you to provide the user's input to the chain for processing and generating a response.

  • output_key: Used to specify the key under which the generated response will be stored in the conversation memory. It allows you to retrieve the response using the specified key.
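
A minimal sketch follows, assuming this component corresponds to LangChain's ConversationChain class; the input_key and output_key values shown are LangChain's defaults.

```python
from langchain.llms import OpenAI
from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationChain

conversation = ConversationChain(
    llm=OpenAI(temperature=0),
    memory=ConversationBufferMemory(),
    input_key="input",      # key under which user input is stored in memory
    output_key="response",  # key under which the generated reply is stored
)

print(conversation.predict(input="Hi, my name is Sam."))
print(conversation.predict(input="What is my name?"))  # memory recalls "Sam"
```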

ConversationalRetrievalChain

This chain first combines the chat history (either explicitly passed in or retrieved from the provided memory) and the question into a standalone question, then looks up relevant documents from the retriever, and finally passes those documents and the question to a question-answering chain to return a response.

Parameters:

  • LLM: Language Model to use in the chain.

  • Memory: Default memory store.

  • Retriever: The retriever is used to fetch relevant documents. (under development)

  • chain_type: Each chain type applies a different combination strategy. The available types (stuff, map_reduce, map_rerank, and refine) behave as described under CombineDocsChain above.
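
Here is a minimal sketch, assuming LangChain's ConversationalRetrievalChain with a FAISS vector store as the retriever; the embedding model and documents are illustrative, and the faiss-cpu package is assumed to be installed.

```python
from langchain.llms import OpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationalRetrievalChain

# Build a tiny vector store to act as the retriever
vectorstore = FAISS.from_texts(
    ["ZBrain Flow chains combine prompts, models, and memory."],
    OpenAIEmbeddings(),
)

# Chat history is kept in memory under the "chat_history" key
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

chain = ConversationalRetrievalChain.from_llm(
    llm=OpenAI(temperature=0),
    retriever=vectorstore.as_retriever(),
    memory=memory,
    chain_type="stuff",
)

print(chain({"question": "What do chains combine?"})["answer"])
```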

LLMCheckerChain

One of the main issues with using LLMs is that they can sometimes hallucinate and make false claims. A remarkably effective way to mitigate this is to have the LLM itself scrutinize and validate its own responses, which is what the LLMCheckerChain is for.

Parameters:

  • LLM: Language Model to use.
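
A minimal sketch, assuming LangChain's LLMCheckerChain; the question is illustrative.

```python
from langchain.llms import OpenAI
from langchain.chains import LLMCheckerChain

# The chain drafts an answer, lists its underlying assumptions,
# checks them, and then revises the answer accordingly.
checker = LLMCheckerChain.from_llm(OpenAI(temperature=0.7))
print(checker.run("What type of mammal lays the biggest eggs?"))
```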

LLMMathChain

The LLMMathChain integrates a language model (LLM) with a math calculation component, allowing users to input intricate mathematical problems and receive corresponding solutions or responses.

Parameters:

  • LLM: Language Model to use.

  • LLMChain: LLM Chain to use.

  • Memory: Default memory store.

  • input_key: Specifies the key under which the input to the mathematical calculation is provided, letting you pass in the specific values or variables you want to use; defaults to question.

  • output_key: Specifies the key under which the result of the mathematical calculation is stored, letting you retrieve the result using that key; defaults to answer.
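
A minimal sketch, assuming LangChain's LLMMathChain; recent versions of the library also require the numexpr package for evaluating the generated expression.

```python
from langchain.llms import OpenAI
from langchain.chains import LLMMathChain

math_chain = LLMMathChain.from_llm(OpenAI(temperature=0))

# input_key defaults to "question", output_key to "answer"
result = math_chain({"question": "What is 13 raised to the 0.3432 power?"})
print(result["answer"])
```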

SimpleSequentialChain

Sequential chains allow you to connect multiple chains and compose them into pipelines that carry out a specific scenario. In a SimpleSequentialChain, each step has a single input/output, and the output of one step is the input to the next.

Parameters:

  • Chains: Multiple Chains to use.

  • Memory: Default memory store.

  • input_key: Specifies the key under which the initial input is passed to the first chain; defaults to input.

  • output_key: Specifies the key under which the final chain's output is returned; defaults to output.
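
A minimal sketch, assuming LangChain's SimpleSequentialChain; the two prompts are invented for the example.

```python
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain, SimpleSequentialChain

llm = OpenAI(temperature=0.7)

name_chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate.from_template(
        "Suggest one company name for a business that makes {product}."
    ),
)
slogan_chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate.from_template("Write a slogan for the company {company}."),
)

# The single output of name_chain becomes the single input of slogan_chain
pipeline = SimpleSequentialChain(chains=[name_chain, slogan_chain])
print(pipeline.run("eco-friendly water bottles"))
```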

SequentialChain

The SequentialChain is a more general form of sequential chain, allowing for multiple inputs and outputs.

Parameters:

  • Chains: Multiple Chains to use.

  • Memory: Default memory store.

  • output_variables: This parameter is used to specify which responses in the chain should be returned as the final output. To show a specific chain's response, append '_print' to the output variable of that chain.
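
A minimal sketch, assuming LangChain's SequentialChain; the variable names (genre, title, synopsis, review) are invented for the example.

```python
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain, SequentialChain

llm = OpenAI(temperature=0.7)

synopsis_chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate.from_template(
        "Write a synopsis for a {genre} play titled {title}."
    ),
    output_key="synopsis",
)
review_chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate.from_template("Write a review of this synopsis:\n{synopsis}"),
    output_key="review",
)

# Two inputs flow in; both intermediate and final outputs are returned
pipeline = SequentialChain(
    chains=[synopsis_chain, review_chain],
    input_variables=["genre", "title"],
    output_variables=["synopsis", "review"],
)
print(pipeline({"genre": "comedy", "title": "Tragedy at Sunset on the Beach"}))
```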

SeriesCharacterChain

SeriesCharacterChain is a chain that can be used to have a conversation with a character from a series.

Parameters:

  • Character: The character that you want to have a conversation with.

  • Series: The name of the series that the character is from.
