General LLM Node

Updated on May 10, 2024

The General LLM node allows GPT- or LLM-enabled projects to run a single prompt, or a chain of prompts, within a skill flow to automate tasks with a large language model. The node can also run an “agent” capable of more complex tasks, such as gaining insight into information across a company.

Let’s take a look at the fields and features available with the node.

Single Prompt #

  1. Choose the LLM base model you would like to use for prompting.
  2. Choose the prompt you would like to use. We cover how to manage prompts later in this document.
  3. Map reduce splits the input, runs the prompt over each piece, and combines the results. It is useful for multiple documents and long prompts that approach or exceed the token limit of the selected LLM model.
  4. Insert previously held dialogue into the prompt. The conversation that took place before entering the node is passed along when the prompt executes, enabling continuous conversation, including answers to follow-up questions.
  5. Check this box if you would like the generated answer to be displayed in the chat.
  6. Save the generated response in a variable to be used in another node.
  7. Leave an internal note.
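The steps above can be sketched in code. This is a minimal illustration only, not Alli's actual implementation: `call_llm`, `run_single_prompt`, `map_reduce`, and the variable names are all hypothetical stand-ins.

```python
# Sketch of the single-prompt flow: pick a model, format a prompt, optionally
# prepend conversation history, then display the answer and/or save it in a
# variable. `call_llm` is a hypothetical stand-in for the provider API call.

def call_llm(model: str, prompt: str) -> str:
    # Placeholder: a real implementation would call the LLM provider here.
    return f"[{model}] answer to: {prompt[:40]}"

def run_single_prompt(model, prompt_template, user_input,
                      history=None, display=False, variables=None):
    prompt = prompt_template.format(input=user_input)
    if history:
        # Step 4: insert prior dialogue so follow-up questions keep context.
        prompt = "\n".join(history) + "\n" + prompt
    answer = call_llm(model, prompt)
    if display:
        # Step 5: show the generated answer in the chat.
        print(answer)
    if variables is not None:
        # Step 6: save the response for use in another node.
        variables["llm_response"] = answer
    return answer

def map_reduce(model, chunks, map_prompt, reduce_prompt):
    # Step 3: run the prompt over each chunk, then combine the partial
    # results, keeping each call under the model's token limit.
    partials = [call_llm(model, map_prompt.format(input=c)) for c in chunks]
    return call_llm(model, reduce_prompt.format(input="\n".join(partials)))

skill_vars = {}
run_single_prompt("gpt-3.5-turbo", "Answer: {input}",
                  "What is our refund policy?",
                  history=["User: Hi", "Bot: Hello!"],
                  variables=skill_vars)
```

After the call, `skill_vars["llm_response"]` holds the generated answer, which a later node in the skill flow could read.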

Agent #

  1. Choose the agent you would like to use. Currently, alli_agent is the only option.
  2. View a description of what the agent does. alli_agent supports document title searches, document summarization, small talk, and generative answers.
  3. Check this box if you would like the generated answer to be displayed in the chat.
  4. Save the generated response in a variable to be used in another node.
  5. Leave an internal note.
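Conceptually, the capabilities listed above amount to routing each request to one of several tools. The sketch below illustrates that idea only; the tool functions and the keyword-based router are hypothetical simplifications, and a real agent would let the LLM decide which tool to use.

```python
# Rough sketch of an agent routing a request to one of the capabilities
# described for alli_agent. All names and the routing rules are illustrative.

def title_search(q: str) -> str:
    return f"documents matching '{q}'"

def summarize(q: str) -> str:
    return f"summary of '{q}'"

def small_talk(q: str) -> str:
    return "Happy to chat!"

def generate_answer(q: str) -> str:
    return f"generated answer for '{q}'"

def agent_sketch(request: str) -> str:
    text = request.lower()
    if "find" in text or "search" in text:
        return title_search(request)      # document title search
    if "summar" in text:
        return summarize(request)         # document summarization
    if text in ("hi", "hello", "how are you?"):
        return small_talk(request)        # small talk
    return generate_answer(request)       # generative answer (default)
```

For example, `agent_sketch("search for onboarding docs")` would route to the title-search tool, while an unmatched question falls through to the generative answer.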

LLM Models #

These three models can be used with the General LLM Node out of the box. The node is not limited to them; however, we have found that they provide the best responses.

Text-Davinci-003 – well-suited for generating text on a wide range of topics and writing styles. It possesses extensive knowledge and supports interactive responses. It can be used in the following scenarios:

  • When answering questions that require detailed information or specialized knowledge
  • When engaging in conversations with users to explore information or generate documents

GPT-3.5-Turbo – characterized by its fast response times and performance optimization. With enhanced accuracy, it enables high-quality text generation. It is suitable for the following scenarios:

  • When real-time interaction or response is required
  • When more efficient resource usage is desired

GPT-4 – with its larger model size, improved responsiveness, and advancements in deep learning, is expected to excel in the following scenarios (although it has not been officially released yet):

  • When complex text generation or advanced semantic understanding is required
  • When leveraging the latest natural language processing capabilities is desired
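The guidance above boils down to matching the need to a model. A simple lookup helper, purely illustrative (the mapping is a simplification and the identifiers just mirror the model names in this document):

```python
# Illustrative helper for picking a model per the guidance above.
# The mapping and the fallback choice are assumptions, not product behavior.

MODEL_GUIDE = {
    "detailed or specialized answers": "text-davinci-003",
    "interactive document exploration": "text-davinci-003",
    "real-time interaction": "gpt-3.5-turbo",
    "efficient resource usage": "gpt-3.5-turbo",
    "complex generation or advanced understanding": "gpt-4",
}

def pick_model(need: str) -> str:
    # Fall back to the fast, efficient option when the need is not listed.
    return MODEL_GUIDE.get(need, "gpt-3.5-turbo")
```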

Prompt Management #

To perform tasks with an LLM and the General LLM Node, you must add and manage prompts so the LLM knows what it is trying to accomplish. To manage prompts, please see our prompt management guide here.
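A prompt typically combines fixed instructions with variables filled in at run time. The template below is a hypothetical example for illustration only; the field names and wording are not taken from the product.

```python
# Hypothetical prompt template: fixed instructions plus run-time variables.
PROMPT_TEMPLATE = (
    "You are a helpful assistant for {company}.\n"
    "Answer the question using only the context below.\n\n"
    "Context:\n{context}\n\n"
    "Question: {question}\n"
    "Answer:"
)

prompt = PROMPT_TEMPLATE.format(
    company="Acme Corp",
    context="Refunds are accepted within 30 days of purchase.",
    question="What is the refund window?",
)
```

Keeping the instructions fixed and substituting only the variables makes a prompt reusable across conversations and skill flows.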