The Generative Answer Node allows users of GPT- or LLM-enabled projects to query internal documents, Q&A’s, and/or external resources to generate high-quality, natural-language, easy-to-read responses from their data. Whether the use case is client-facing or internal, the Generative Answer Node allows questions to be answered quickly and accurately in a conversational manner.

Let’s take a look at the fields and features available with this node.
User input #

- If needed, prompt the user with a question or text
- Repeat the text every time the node is used
- Choose the LLM base model you would like to use for generating answers
- Choose the prompt you would like to use to generate answers. We will cover how to manage prompts later in this document.
- Choose where you would like the answer to be searched from: the list of Q&A’s within the project, any uploaded documents, or a web (external) search.
- If Q&A’s will be used for searching, choose whether to include or exclude certain hashtags. This is a great way to narrow down the scope of the search if needed. Leave blank to search the entire Q&A list.
- You can scope the search to a folder in the knowledge base. A maximum of 1,000 documents can be selected.
- If documents will be used for searching, choose whether to include or exclude certain hashtags. This is a great way to narrow down the scope of the search if needed. Leave blank to search the entire knowledge base.
- Save the response in a variable if it needs to be used in another node.
- After the answer is generated, either have the flow proceed to the next node, or allow the user to ask another question
- Add a branch option for when an answer cannot be found in the internal knowledge base, but a response is still generated from general knowledge
- Leave an internal note
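To make the hashtag scoping rules above concrete, here is a minimal sketch of how include/exclude filtering over a Q&A list might behave. The function and field names (`filter_by_hashtags`, `"hashtags"`) are illustrative assumptions, not the product's actual API:

```python
def filter_by_hashtags(items, include=None, exclude=None):
    """Return items whose hashtags match the include/exclude rules.

    - include: keep only items carrying at least one of these hashtags
    - exclude: drop items carrying any of these hashtags
    - both empty: no filtering, the entire list is searched
    """
    include = set(include or [])
    exclude = set(exclude or [])
    result = []
    for item in items:
        tags = set(item.get("hashtags", []))
        if include and not (tags & include):
            continue  # item lacks every required hashtag
        if tags & exclude:
            continue  # item carries an excluded hashtag
        result.append(item)
    return result

# Hypothetical Q&A entries for illustration only
qas = [
    {"question": "How do I reset my password?", "hashtags": ["#account"]},
    {"question": "What is the refund policy?", "hashtags": ["#billing"]},
    {"question": "Where are the release notes?", "hashtags": []},
]

billing_only = filter_by_hashtags(qas, include=["#billing"])
no_account = filter_by_hashtags(qas, exclude=["#account"])
```

Leaving both lists blank returns every entry, which matches the "leave blank to search the entire Q&A list" behavior described above.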
Variable #

- Choose the variable that will supply the user input
- Choose the LLM base model you would like to use for generating answers
- Choose the prompt you would like to use to generate answers. We will cover how to manage prompts later in this document.
- Choose where you would like the answer to be searched from: the list of Q&A’s within the project, any uploaded documents, or a web (external) search.
- If Q&A’s will be used for searching, choose whether to include or exclude certain hashtags. This is a great way to narrow down the scope of the search if needed. Leave blank to search the entire Q&A list.
- If documents will be used for searching, choose whether to include or exclude certain hashtags. This is a great way to narrow down the scope of the search if needed. Leave blank to search the entire knowledge base.
- Save the response in a variable if it needs to be used in another node.
- After the answer is generated, either have the flow proceed to the next node, or allow the user to ask another question
- Leave an internal note
When the branching option is enabled, a new branch appears on the node.
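The routing behind that branch can be sketched as follows. This is an illustrative model only, with a hypothetical `route_answer` helper and made-up branch names; the product handles this internally:

```python
def route_answer(found_in_knowledge_base: bool) -> str:
    """Pick the branch the flow follows after an answer is generated.

    If the answer came from the internal knowledge base, the flow proceeds
    down the default path; if the LLM answered from general knowledge
    instead, the flow takes the dedicated branch described above.
    """
    if found_in_knowledge_base:
        return "default"            # proceed to the next node as usual
    return "general_knowledge"      # branch for answers outside the KB

# The answer was found among the project's Q&A's or documents
branch_a = route_answer(found_in_knowledge_base=True)

# No match in the knowledge base; the model answered from general knowledge
branch_b = route_answer(found_in_knowledge_base=False)
```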

LLM Models #
These are the models currently offered out of the box for the Generative Answer Node. The node is not limited to these three models; however, we have found that they provide the best responses.
Text-Davinci-003 – well-suited for generating text on a wide range of topics and writing styles. It possesses extensive knowledge and supports interactive responses. It can be used in the following scenarios:
- When answering questions that require detailed information or specialized knowledge
- When engaging in conversations with users to explore information or generate documents
GPT-3.5-Turbo – characterized by its fast response times and performance optimization. With enhanced accuracy, it enables high-quality text generation. It is suitable for the following scenarios:
- When real-time interaction or response is required
- When more efficient resource usage is desired
GPT-4 – with its larger model size, improved responsiveness, and advancements in deep learning, is expected to excel in the following scenarios (although it has not been officially released yet):
- When complex text generation or advanced semantic understanding is required
- When leveraging the latest natural language processing capabilities is desired
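For readers calling these models directly, note that the three models above map onto two different OpenAI API endpoints: text-davinci-003 is a completions model, while GPT-3.5-Turbo and GPT-4 are chat models served from the chat completions endpoint. A minimal dispatch sketch (no network calls; the `endpoint_for` helper is our own, not part of any SDK):

```python
# Chat-style models use the /v1/chat/completions endpoint;
# legacy completion models like text-davinci-003 use /v1/completions.
CHAT_MODELS = {"gpt-3.5-turbo", "gpt-4"}

def endpoint_for(model: str) -> str:
    """Return the OpenAI REST endpoint path appropriate for the model."""
    if model.lower() in CHAT_MODELS:
        return "/v1/chat/completions"
    return "/v1/completions"

chat_path = endpoint_for("GPT-4")               # chat completions
legacy_path = endpoint_for("text-davinci-003")  # legacy completions
```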
Prompt Management #
A default prompt has already been created and can be used with the Generative Answer Node as-is. If you would like to make changes to the prompt, however, you will need to add and manage prompts so that the LLM knows what it is trying to accomplish. Please see our user guide for Prompt Management here.
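As a rough illustration of what such a prompt might contain, here is a hedged sketch of a generative-answer template. The wording and the `{context}`/`{question}` placeholder names are assumptions for illustration; the product's default prompt may look quite different:

```python
# Illustrative template only -- not the product's actual default prompt.
PROMPT_TEMPLATE = (
    "You are a helpful assistant. Answer the user's question using only "
    "the context below. If the answer is not in the context, say so.\n\n"
    "Context:\n{context}\n\n"
    "Question: {question}\n"
    "Answer:"
)

# Fill the placeholders with retrieved knowledge and the user's query
prompt = PROMPT_TEMPLATE.format(
    context="Store hours: 9 AM to 6 PM, Monday through Friday.",
    question="When does the store open?",
)
```

Editing a managed prompt like this is how you steer the tone, scope, and fallback behavior of the generated answers.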