LLM

Large Language Models (LLMs) are a core component of ZBrain Flow, which provides a standard interface for interacting with different LLMs. They are used widely throughout ZBrain Flow, especially in chains and agents, to generate text from a given prompt (or input), as illustrated in the sketch below.
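
The following is a minimal, hypothetical sketch of that "prompt in, text out" interface, assuming the LangChain-style classes (OpenAI, PromptTemplate, LLMChain) that the components below wrap; the prompt text and model settings are placeholders, and an OPENAI_API_KEY environment variable is assumed.

```python
# Hypothetical sketch of the standard LLM interface, assuming LangChain-style classes.
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

llm = OpenAI(temperature=0.7)  # any supported LLM exposes the same call interface

# Called directly: text completion from a plain prompt.
print(llm("Write one sentence welcoming a new user."))

# Used inside a chain: the prompt template fills in variables before the LLM call.
prompt = PromptTemplate(
    input_variables=["topic"],
    template="Explain {topic} to a beginner in one sentence.",
)
chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run(topic="large language models"))
```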

OpenAI

Wrapper around OpenAI's large language models. The parameters below configure the component, as shown in the sketch after the list.

  • max_tokens: The maximum number of tokens to generate in the completion; defaults to 256 and is limited by the model's maximal context size.

  • model_kwargs: Holds any additional model parameters that are valid for the API call but not explicitly specified above.

  • model_name: Defines the OpenAI model to be used.

  • temperature: Tunes the degree of randomness in text generation; the default value is 0.7, and the value must always be non-negative.
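
As a hedged illustration only, the sketch below shows how these parameters might be set if the component wraps LangChain's OpenAI class (an assumption); the model name and the presence_penalty value in model_kwargs are placeholders.

```python
# Hypothetical configuration sketch, assuming LangChain's OpenAI wrapper.
from langchain.llms import OpenAI

llm = OpenAI(
    model_name="text-davinci-003",           # model_name: which OpenAI model to call (placeholder)
    temperature=0.7,                         # temperature: default randomness setting
    max_tokens=256,                          # max_tokens: completion length cap (default 256)
    model_kwargs={"presence_penalty": 0.5},  # extra API parameters not listed above
)

print(llm("Draft a short product announcement."))
```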

ChatOpenAI

Wrapper around OpenAI's chat large language models, used for tasks such as chatbots, Generative Question-Answering (GQA), and summarization. The parameters below configure the component, as shown in the sketch after the list.

  • max_tokens: The maximum number of tokens to generate in the completion; defaults to 256 and is limited by the model's maximal context size.

  • model_kwargs: Holds any additional model parameters that are valid for the API call but not explicitly specified above.

  • model_name: Defines the OpenAI model to be used.

  • temperature: Adjusts the level of randomness in text generation; the default value is 0.7, and the value must always be non-negative.
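
As with the previous component, the following is a hedged sketch assuming the component maps onto LangChain's ChatOpenAI class; the model name, the top_p value in model_kwargs, and the message contents are placeholders.

```python
# Hypothetical configuration sketch, assuming LangChain's ChatOpenAI wrapper.
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage, SystemMessage

chat = ChatOpenAI(
    model_name="gpt-3.5-turbo",   # model_name: which chat model to call (placeholder)
    temperature=0.7,              # temperature: default randomness setting
    max_tokens=256,               # max_tokens: completion length cap (default 256)
    model_kwargs={"top_p": 0.9},  # extra API parameters not listed above
)

messages = [
    SystemMessage(content="You answer questions concisely."),
    HumanMessage(content="Summarize what an agent does in one sentence."),
]
print(chat(messages).content)
```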
