LLM Statistics (LLM APPS)
Statistics for LLM usage and LLM apps are now available. Through Dashboard > Statistics icon, you can check the number of user requests and LLM calls by usage, including per app. Let's take a closer look.
1. You can set the statistical period you want to check.
2. You can download the overall statistics for each metric/chart as an Excel file. You can download up to a 6-month range at a time.
3. Number of user requests: Refers to the number of times users requested the execution of LLM-related functions in the project, plus the number of inputs entered within the app (button clicks, message entries, and other actions that proceed to the next step).
3-1. Total: Refers to the sum of all request histories from answer-type apps, conversation-type apps, API, and knowledge base.
3-2. Answer-type apps: Refers to the number of times the app was executed via the 'Generate' button (or an equivalent API call).
- Even if the LLM is called multiple times within a single request, such as when document upload input is used, each press of the 'Generate' button is counted as one request.
3-3. Conversation-type apps: Refers to the number of times the user entered input (message entry, button click, input form submission, etc.) or made an equivalent API call.
3-4. Answer Generation API: Refers to the number of times answer generation was requested through the API (see the request sketch after this list).
3-5. Knowledge Base: Refers to the number of times the ‘Generate’ button was clicked in the knowledge base screen within the dashboard.
4. Top 20 by number of user requests: The 20 apps with the most user requests are ranked and displayed. Only actual app runs are included; tests run with the 'Preview' button are excluded and aggregated under item 5, 'Other preview counts'.
5. Other preview counts: Represents the sum of tests run with the 'Preview' button (preview count) plus requests from other sources (others). These counts are not broken down by app.
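For reference, below is a minimal sketch of what a single Answer Generation API request (counted as one user request above) might look like from a client. The endpoint URL, auth header, and payload fields are illustrative placeholders, not the documented Alli API; consult the official API reference for the actual contract.

```python
# Hypothetical client for the Answer Generation API. The endpoint path,
# auth header, and payload fields are placeholders for illustration only.
import requests

API_KEY = "YOUR_PROJECT_API_KEY"                          # placeholder credential
ENDPOINT = "https://api.example.com/v1/generate-answer"   # placeholder URL

def request_answer(question: str) -> str:
    """Send one answer-generation request. Each call counts as a single
    user request, even if the backend runs the LLM several times."""
    response = requests.post(
        ENDPOINT,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"question": question},
        timeout=30,
    )
    response.raise_for_status()
    return response.json().get("answer", "")

print(request_answer("What is our refund policy?"))
```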
*View Answerbot / LLM Statistics: For projects that used the existing Alli Answer Bot, clicking this button switches the screen so you can also check the Answer Bot statistics.
1. LLM Credits: Represents the number of credits deducted (consumed) within the project.
1-2. Total: Refers to the combined credit consumption of all LLM execution histories and model usage histories from answer-type apps, conversation-type apps, API, and knowledge base.
1-3. Answer-type apps: Refers to the credits consumed due to answer generation or action execution.
1-4. Conversation-type apps: Refers to the credits consumed due to answer generation or action execution through user input.
1-5. Answer Generation API: Refers to the credits consumed when generating answers through the API.
1-6. Knowledge Base: Refers to the credits consumed when performing Q&A generation and summary tasks in the knowledge base screen within the dashboard.
The credit calculation method varies depending on the model and the number of tokens, so please refer to the pricing page below (an illustrative calculation follows this list).
https://www.allganize.ai/en/pricing
2. Top 20 by credit: The 20 apps with the highest credit consumption are ranked and displayed. Only credits consumed in actually published apps are included; tests run with the 'Preview' button are excluded and, as with the other metrics, aggregated under the other preview counts.
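To illustrate the shape of the calculation only, the sketch below uses made-up per-1,000-token rates. Actual rates depend on the model and are listed on the pricing page above.

```python
# Illustrative credit calculation. The per-1,000-token rates below are
# invented for this example; real rates vary by model (see the pricing page).
HYPOTHETICAL_RATES = {
    # model name: (credits per 1K input tokens, credits per 1K output tokens)
    "model-a": (0.5, 1.5),
    "model-b": (1.0, 3.0),
}

def credits_used(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the credits one LLM call would consume under the assumed rates."""
    in_rate, out_rate = HYPOTHETICAL_RATES[model]
    return (input_tokens / 1000) * in_rate + (output_tokens / 1000) * out_rate

# A call with 2,000 input and 500 output tokens on "model-a":
# (2.0 * 0.5) + (0.5 * 1.5) = 1.75 credits
print(credits_used("model-a", 2000, 500))  # 1.75
```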
*1. Number of LLM calls: Represents the actual number of times the LLM was called in the project. Test calls made when installing apps from the app market page are not included. (A sketch contrasting this metric with user requests follows this list.)
1-2. Total: Refers to the sum of all LLM execution histories from answer-type apps, conversation-type apps, API, and knowledge base.
1-3. Answer-type apps: Refers to the number of LLM executions performed while generating an answer or executing an action.
1-4. Conversation-type apps: Refers to the number of LLM executions performed while generating an answer or executing an action in response to user input.
1-5. Answer Generation API: Refers to the number of times LLM was executed when generating answers through the API.
1-6. Knowledge Base: Refers to the number of times LLM was executed when performing Q&A generation and summary tasks in the knowledge base screen within the dashboard.
2. Top 20 by number of user requests: The 20 apps with the most user requests are ranked and displayed. Only actual runs are included; tests run with the 'Preview' button are excluded.
3. Other preview counts: Represents the sum of the number of times tested with the ‘Preview’ button.
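To make the difference between the two metrics concrete, here is a small sketch of how the counters diverge; the scenario and numbers are illustrative, not taken from the product.

```python
# One user request (a single press of the 'Generate' button) can trigger
# several LLM executions, so 'user requests' and 'LLM calls' grow at
# different rates. The scenario below is illustrative.
from dataclasses import dataclass

@dataclass
class UsageCounters:
    user_requests: int = 0
    llm_calls: int = 0

    def record_request(self, llm_calls_in_request: int) -> None:
        """Count one user request plus the LLM calls it triggered."""
        self.user_requests += 1
        self.llm_calls += llm_calls_in_request

counters = UsageCounters()
counters.record_request(llm_calls_in_request=1)  # plain question: one LLM call
counters.record_request(llm_calls_in_request=4)  # document upload: e.g. 3 chunk passes + 1 answer
print(counters)  # UsageCounters(user_requests=2, llm_calls=5)
```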
LLM Statistics (LLM APP+ANSWERBOT)
Statistics for LLM usage and LLM apps are now available. Through Dashboard > Statistics icon, you can check the number of user requests and LLM calls by usage, including per app. Let's take a closer look.
1. You can set the statistical period you want to check.
2. You can download the overall statistics for each metric/chart as an Excel file. Downloads are limited to a 6-month range at a time (see the range-check sketch after this list).
3. Number of user requests: Refers to the number of times users requested the execution of LLM-related functions in the project, plus the number of inputs entered within the app (button clicks, message entries, and other actions that proceed to the next step).
3-1. Total: Refers to the sum of all request histories from answer-type apps, conversation-type apps, API, and knowledge base.
3-2. Answer-type apps: Refers to the number of times the app was executed via the 'Generate' button (or an equivalent API call).
- Even if the LLM is called multiple times within a single request, such as when document upload input is used, each press of the 'Generate' button is counted as one request.
3-3. Conversation-type apps: Refers to the number of times the user entered input (message entry, button click, input form submission, etc.) or made an equivalent API call.
3-4. Skills: Refers to the number of times user actions were executed in scenarios within skills.
3-5. Answer Generation API: Refers to the number of times answer generation was requested through the API.
3-6. Knowledge Base: Refers to the number of times the ‘Generate’ button was clicked in the knowledge base screen within the dashboard.
4. Top 20 by number of user requests: The 20 apps with the most user requests are ranked and displayed. Only actual app runs are included; tests run with the 'Preview' button are excluded and aggregated under item 5, 'Other preview counts'.
5. Other preview counts: Represents the sum of tests run with the 'Preview' button (preview count) plus requests from other sources (others). These counts are not broken down by app.
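If you script these Excel downloads, you may want to verify that a requested window respects the 6-month cap before splitting it into chunks. The helper below is an illustrative sketch, not part of any documented Alli SDK.

```python
# Pre-check for automated statistics exports: downloads are capped at a
# 6-month range at a time, so longer periods must be split into chunks.
# This helper is an illustrative sketch, not a documented Alli SDK function.
from datetime import date

def months_between(start: date, end: date) -> int:
    """Whole calendar months from start to end, ignoring the day of month."""
    return (end.year - start.year) * 12 + (end.month - start.month)

def within_export_limit(start: date, end: date) -> bool:
    """True if [start, end] fits inside the 6-month download cap."""
    return start <= end and months_between(start, end) <= 6

print(within_export_limit(date(2024, 1, 1), date(2024, 6, 30)))   # True
print(within_export_limit(date(2024, 1, 1), date(2024, 12, 31)))  # False
```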
*View Answerbot / LLM Statistics: For projects that used the existing Alli Answer Bot, clicking this button switches the screen so you can also check the Answer Bot statistics.
1. LLM Credits: Represents the number of credits deducted (consumed) within the project.
1-2. Total: Refers to the combined credit consumption of all LLM execution histories and model usage histories from answer-type apps, conversation-type apps, API, and knowledge base.
1-3. Answer-type apps: Refers to the credits consumed due to answer generation or action execution.
1-4. Conversation-type apps: Refers to the credits consumed due to answer generation or action execution through user input.
1-5. Skills: Refers to the credits consumed due to answer generation or action execution through scenarios within skills.
1-6. Answer Generation API: Refers to the credits consumed when generating answers through the API.
1-7. Knowledge Base: Refers to the credits consumed when performing Q&A generation and summary tasks in the knowledge base screen within the dashboard.
The credit calculation method varies depending on the model and the number of tokens, so please refer to the pricing page below.
https://www.allganize.ai/en/pricing
*1. Number of LLM calls: Represents the actual number of times the LLM was called in the project. Test calls made when installing apps from the app market page are not included.
1-2. Total: Refers to the sum of all LLM execution histories from answer-type apps, conversation-type apps, API, and knowledge base.
1-3. Answer-type apps: Refers to the number of LLM executions performed while generating an answer or executing an action.
1-4. Conversation-type apps: Refers to the number of LLM executions performed while generating an answer or executing an action in response to user input.
1-5. Skills: Refers to the number of times LLM was executed in scenarios within skills.
1-6. Answer Generation API: Refers to the number of times LLM was executed when generating answers through the API.
1-7. Knowledge Base: Refers to the number of times LLM was executed when performing Q&A generation and summary tasks in the knowledge base screen within the dashboard.
2. Top 20 by number of user requests: The 20 apps with the most user requests are ranked and displayed. Only actual runs are included; tests run with the 'Preview' button are excluded.
3. Other preview counts: Represents the sum of the number of times tested with the ‘Preview’ button.
Customer Count Statistics

1. In the customer count tab, you can check the monthly and daily active user counts.
2. You can also view the monthly and daily average active user counts.
Monthly active user counts are collected starting at 00:30 (UTC+0) on the 1st of every month, and daily active user counts are collected starting at 00:30 (UTC+0) every day. This process takes several hours.
3. You can view the top 20 apps by the number of active users. Only unique user counts, with duplicates removed, are calculated (see the dedup sketch after this list).
1) Answer-type apps: The number of users who clicked the 'Generate' button.
2) Conversation-type apps: The number of users who submitted a message or clicked a button.
3) Answer Generation API: The number of users who requested answer generation via the API.
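Here is a sketch of the unique-user counting described in item 3: duplicates are removed per day and per app, so a user who runs the same app many times in one day still counts once. The event records below are illustrative.

```python
# Deduplicated daily active users: a set per (day, app) keeps each user once,
# no matter how many requests they made. The events below are illustrative.
from collections import defaultdict

events = [
    {"date": "2024-05-01", "app": "contract-review", "user_id": "u1"},
    {"date": "2024-05-01", "app": "contract-review", "user_id": "u1"},  # repeat visit, deduped
    {"date": "2024-05-01", "app": "contract-review", "user_id": "u2"},
    {"date": "2024-05-02", "app": "contract-review", "user_id": "u1"},
]

daily_users: dict[tuple[str, str], set[str]] = defaultdict(set)
for e in events:
    daily_users[(e["date"], e["app"])].add(e["user_id"])

for (day, app), users in sorted(daily_users.items()):
    print(day, app, "active users:", len(users))
# 2024-05-01 contract-review active users: 2
# 2024-05-02 contract-review active users: 1
```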