Feature/Add bullmq redis for message queue processing (#3568)

* add bullmq redis for message queue processing

* Update pnpm-lock.yaml

* update queue manager

* remove singleton patterns, add redis to cache pool

* add bull board ui

* update rate limit handler

* update redis configuration

* Merge add rate limit redis prefix

* update rate limit queue events

* update preview loader to queue

* refractor namings to constants

* update env variable for queue

* update worker shutdown gracefully
This commit is contained in:
Henry Heng 2025-01-23 14:08:02 +00:00 committed by GitHub
parent 14adb936f2
commit a2a475ba7a
GPG Key ID: B5690EEEBB952194 (no known key found for this signature in database)
59 changed files with 38958 additions and 36985 deletions


@ -120,46 +120,45 @@ Flowise has 3 different modules in a single mono repository.
Flowise supports different environment variables to configure your instance. You can specify the following variables in the `.env` file inside the `packages/server` folder. Read [more](https://docs.flowiseai.com/environment-variables)
| Variable | Description | Type | Default |
| ---------------------------- | ----------------------------------------------------------------------------------------------- | ------------------------------------------------ | ----------------------------------- |
| PORT | The HTTP port Flowise runs on | Number | 3000 |
| CORS_ORIGINS | The allowed origins for all cross-origin HTTP calls | String | |
| IFRAME_ORIGINS | The allowed origins for iframe src embedding | String | |
| FLOWISE_USERNAME | Username to login | String | |
| FLOWISE_PASSWORD | Password to login | String | |
| FLOWISE_FILE_SIZE_LIMIT | Upload File Size Limit | String | 50mb |
| DISABLE_CHATFLOW_REUSE | Forces the creation of a new ChatFlow for each call instead of reusing existing ones from cache | Boolean | |
| DEBUG | Print logs from components | Boolean | |
| LOG_PATH | Location where log files are stored | String | `your-path/Flowise/logs` |
| LOG_LEVEL | Different levels of logs | Enum String: `error`, `info`, `verbose`, `debug` | `info` |
| LOG_JSON_SPACES | Spaces to beautify JSON logs | Number | 2 |
| APIKEY_STORAGE_TYPE | Whether to store API keys in a JSON file or the database | Enum String: `json`, `db` | `json` |
| APIKEY_PATH | Location where api keys are saved when `APIKEY_STORAGE_TYPE` is `json` | String | `your-path/Flowise/packages/server` |
| TOOL_FUNCTION_BUILTIN_DEP | NodeJS built-in modules to be used for Tool Function | String | |
| TOOL_FUNCTION_EXTERNAL_DEP | External modules to be used for Tool Function | String | |
| DATABASE_TYPE | Type of database to store the flowise data | Enum String: `sqlite`, `mysql`, `postgres` | `sqlite` |
| DATABASE_PATH | Location where database is saved (When DATABASE_TYPE is sqlite) | String | `your-home-dir/.flowise` |
| DATABASE_HOST | Host URL or IP address (When DATABASE_TYPE is not sqlite) | String | |
| DATABASE_PORT | Database port (When DATABASE_TYPE is not sqlite) | String | |
| DATABASE_USER | Database username (When DATABASE_TYPE is not sqlite) | String | |
| DATABASE_PASSWORD | Database password (When DATABASE_TYPE is not sqlite) | String | |
| DATABASE_NAME | Database name (When DATABASE_TYPE is not sqlite) | String | |
| DATABASE_SSL_KEY_BASE64 | Database SSL client cert in base64 (takes priority over DATABASE_SSL) | Boolean | false |
| DATABASE_SSL | Database connection over SSL (When DATABASE_TYPE is postgres) | Boolean | false |
| SECRETKEY_PATH | Location where encryption key (used to encrypt/decrypt credentials) is saved | String | `your-path/Flowise/packages/server` |
| FLOWISE_SECRETKEY_OVERWRITE | Encryption key to be used instead of the key stored in SECRETKEY_PATH | String | |
| DISABLE_FLOWISE_TELEMETRY | Turn off telemetry | Boolean | |
| MODEL_LIST_CONFIG_JSON | File path to load list of models from your local config file | String | `/your_model_list_config_file_path` |
| STORAGE_TYPE | Type of storage for uploaded files. Default is `local` | Enum String: `s3`, `local` | `local` |
| BLOB_STORAGE_PATH | Local folder path where uploaded files are stored when `STORAGE_TYPE` is `local` | String | `your-home-dir/.flowise/storage` |
| S3_STORAGE_BUCKET_NAME | Bucket name to hold the uploaded files when `STORAGE_TYPE` is `s3` | String | |
| S3_STORAGE_ACCESS_KEY_ID | AWS Access Key | String | |
| S3_STORAGE_SECRET_ACCESS_KEY | AWS Secret Key | String | |
| S3_STORAGE_REGION | Region for S3 bucket | String | |
| S3_ENDPOINT_URL | Custom Endpoint for S3 | String | |
| S3_FORCE_PATH_STYLE | Set this to true to force the request to use path-style addressing | Boolean | false |
| SHOW_COMMUNITY_NODES | Show nodes created by community | Boolean | |
| DISABLED_NODES | Hide nodes from UI (comma separated list of node names) | String | |
| Variable | Description | Type | Default |
| ---------------------------- | -------------------------------------------------------------------------------- | ------------------------------------------------ | ----------------------------------- |
| PORT | The HTTP port Flowise runs on | Number | 3000 |
| CORS_ORIGINS | The allowed origins for all cross-origin HTTP calls | String | |
| IFRAME_ORIGINS | The allowed origins for iframe src embedding | String | |
| FLOWISE_USERNAME | Username to login | String | |
| FLOWISE_PASSWORD | Password to login | String | |
| FLOWISE_FILE_SIZE_LIMIT | Upload File Size Limit | String | 50mb |
| DEBUG | Print logs from components | Boolean | |
| LOG_PATH | Location where log files are stored | String | `your-path/Flowise/logs` |
| LOG_LEVEL | Different levels of logs | Enum String: `error`, `info`, `verbose`, `debug` | `info` |
| LOG_JSON_SPACES | Spaces to beautify JSON logs | Number | 2 |
| APIKEY_STORAGE_TYPE | Whether to store API keys in a JSON file or the database | Enum String: `json`, `db` | `json` |
| APIKEY_PATH | Location where api keys are saved when `APIKEY_STORAGE_TYPE` is `json` | String | `your-path/Flowise/packages/server` |
| TOOL_FUNCTION_BUILTIN_DEP | NodeJS built-in modules to be used for Tool Function | String | |
| TOOL_FUNCTION_EXTERNAL_DEP | External modules to be used for Tool Function | String | |
| DATABASE_TYPE | Type of database to store the flowise data | Enum String: `sqlite`, `mysql`, `postgres` | `sqlite` |
| DATABASE_PATH | Location where database is saved (When DATABASE_TYPE is sqlite) | String | `your-home-dir/.flowise` |
| DATABASE_HOST | Host URL or IP address (When DATABASE_TYPE is not sqlite) | String | |
| DATABASE_PORT | Database port (When DATABASE_TYPE is not sqlite) | String | |
| DATABASE_USER | Database username (When DATABASE_TYPE is not sqlite) | String | |
| DATABASE_PASSWORD | Database password (When DATABASE_TYPE is not sqlite) | String | |
| DATABASE_NAME | Database name (When DATABASE_TYPE is not sqlite) | String | |
| DATABASE_SSL_KEY_BASE64 | Database SSL client cert in base64 (takes priority over DATABASE_SSL) | Boolean | false |
| DATABASE_SSL | Database connection over SSL (When DATABASE_TYPE is postgres) | Boolean | false |
| SECRETKEY_PATH | Location where encryption key (used to encrypt/decrypt credentials) is saved | String | `your-path/Flowise/packages/server` |
| FLOWISE_SECRETKEY_OVERWRITE | Encryption key to be used instead of the key stored in SECRETKEY_PATH | String | |
| DISABLE_FLOWISE_TELEMETRY | Turn off telemetry | Boolean | |
| MODEL_LIST_CONFIG_JSON | File path to load list of models from your local config file | String | `/your_model_list_config_file_path` |
| STORAGE_TYPE | Type of storage for uploaded files. Default is `local` | Enum String: `s3`, `local` | `local` |
| BLOB_STORAGE_PATH | Local folder path where uploaded files are stored when `STORAGE_TYPE` is `local` | String | `your-home-dir/.flowise/storage` |
| S3_STORAGE_BUCKET_NAME | Bucket name to hold the uploaded files when `STORAGE_TYPE` is `s3` | String | |
| S3_STORAGE_ACCESS_KEY_ID | AWS Access Key | String | |
| S3_STORAGE_SECRET_ACCESS_KEY | AWS Secret Key | String | |
| S3_STORAGE_REGION | Region for S3 bucket | String | |
| S3_ENDPOINT_URL | Custom Endpoint for S3 | String | |
| S3_FORCE_PATH_STYLE | Set this to true to force the request to use path-style addressing | Boolean | false |
| SHOW_COMMUNITY_NODES | Show nodes created by community | Boolean | |
| DISABLED_NODES | Hide nodes from UI (comma separated list of node names) | String | |
You can also specify the env variables when using `npx`. For example:


@ -32,8 +32,6 @@ BLOB_STORAGE_PATH=/root/.flowise/storage
# FLOWISE_SECRETKEY_OVERWRITE=myencryptionkey
# FLOWISE_FILE_SIZE_LIMIT=50mb
# DISABLE_CHATFLOW_REUSE=true
# DEBUG=true
# LOG_LEVEL=info (error | warn | info | verbose | debug)
# TOOL_FUNCTION_BUILTIN_DEP=crypto,fs
@ -79,4 +77,21 @@ BLOB_STORAGE_PATH=/root/.flowise/storage
# see https://www.npmjs.com/package/global-agent for more details
# GLOBAL_AGENT_HTTP_PROXY=CorporateHttpProxyUrl
# GLOBAL_AGENT_HTTPS_PROXY=CorporateHttpsProxyUrl
# GLOBAL_AGENT_NO_PROXY=ExceptionHostsToBypassProxyIfNeeded
######################
# QUEUE CONFIGURATION
######################
# MODE=queue #(queue | main)
# QUEUE_NAME=flowise-queue
# QUEUE_REDIS_EVENT_STREAM_MAX_LEN=100000
# WORKER_CONCURRENCY=100000
# REDIS_URL=
# REDIS_HOST=localhost
# REDIS_PORT=6379
# REDIS_USERNAME=
# REDIS_PASSWORD=
# REDIS_TLS=
# REDIS_CERT=
# REDIS_KEY=
# REDIS_CA=
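A server or worker process might assemble these variables into a Redis connection config along these lines (a sketch under assumptions: `buildRedisConnection` is a hypothetical helper, not Flowise's actual code; it only illustrates `REDIS_URL` taking precedence over the individual host/port fields):

```typescript
// Sketch: derive a Redis connection config from the QUEUE CONFIGURATION
// variables above. The helper name and shape are illustrative only.
type RedisConnection =
    | { url: string }
    | { host: string; port: number; username?: string; password?: string; tls?: object }

function buildRedisConnection(env: Record<string, string | undefined>): RedisConnection {
    // REDIS_URL, when set, wins over the individual host/port fields
    if (env.REDIS_URL) return { url: env.REDIS_URL }
    return {
        host: env.REDIS_HOST ?? 'localhost',
        port: env.REDIS_PORT ? parseInt(env.REDIS_PORT, 10) : 6379,
        username: env.REDIS_USERNAME || undefined,
        password: env.REDIS_PASSWORD || undefined,
        // REDIS_TLS enables TLS; in a real setup REDIS_CERT / REDIS_KEY / REDIS_CA
        // would be loaded into the tls object as well
        ...(env.REDIS_TLS === 'true' ? { tls: {} } : {})
    }
}
```

The same config object can then be handed to both the queue producer and the workers, so every process talks to the same Redis instance.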


@ -34,6 +34,19 @@ services:
- GLOBAL_AGENT_HTTPS_PROXY=${GLOBAL_AGENT_HTTPS_PROXY}
- GLOBAL_AGENT_NO_PROXY=${GLOBAL_AGENT_NO_PROXY}
- DISABLED_NODES=${DISABLED_NODES}
- MODE=${MODE}
- WORKER_CONCURRENCY=${WORKER_CONCURRENCY}
- QUEUE_NAME=${QUEUE_NAME}
- QUEUE_REDIS_EVENT_STREAM_MAX_LEN=${QUEUE_REDIS_EVENT_STREAM_MAX_LEN}
- REDIS_URL=${REDIS_URL}
- REDIS_HOST=${REDIS_HOST}
- REDIS_PORT=${REDIS_PORT}
- REDIS_PASSWORD=${REDIS_PASSWORD}
- REDIS_USERNAME=${REDIS_USERNAME}
- REDIS_TLS=${REDIS_TLS}
- REDIS_CERT=${REDIS_CERT}
- REDIS_KEY=${REDIS_KEY}
- REDIS_CA=${REDIS_CA}
ports:
- '${PORT}:${PORT}'
volumes:

docker/worker/README.md

@ -0,0 +1,24 @@
# Flowise Worker
When operating in queue mode, Flowise can be scaled horizontally by adding worker instances to handle increased workloads, or scaled down by removing workers when demand decreases.
Here's an overview of the process:
1. The primary Flowise instance sends an execution ID to a message broker (Redis), which maintains a queue of pending executions.
2. The next available worker in the pool retrieves the message from Redis and starts executing the actual job.
3. Once the execution is completed, the worker alerts the main instance that the execution is finished.
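The three steps above can be sketched with a minimal in-memory stand-in for the Redis-backed queue (illustrative only; Flowise actually uses BullMQ, and these class and function names are hypothetical):

```typescript
// Minimal queue sketch: the main instance enqueues execution IDs,
// a worker drains them and reports completion back.
type OnFinished = (executionId: string, result: string) => void

class InMemoryQueue {
    private jobs: string[] = []
    add(executionId: string) {
        this.jobs.push(executionId) // step 1: main instance enqueues an execution ID
    }
    take(): string | undefined {
        return this.jobs.shift() // step 2: worker retrieves the next pending execution
    }
}

function runWorker(queue: InMemoryQueue, execute: (id: string) => string, onFinished: OnFinished) {
    let id: string | undefined
    while ((id = queue.take()) !== undefined) {
        const result = execute(id) // step 2: worker executes the actual job
        onFinished(id, result) // step 3: worker alerts the main instance
    }
}
```

In the real setup, `add` corresponds to a BullMQ `Queue.add` against Redis, and the completion signal travels back over Redis queue events rather than a direct callback.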
# How to use
## Setting up Main Server:
1. Follow the [setup guide](https://github.com/FlowiseAI/Flowise/blob/main/docker/README.md)
2. In the `.env.example` file, set up all the necessary env variables under `QUEUE CONFIGURATION`
## Setting up Worker:
1. Copy the same `.env` file used to set up the main server, and change `PORT` to another available port number, e.g. 5566
2. `docker compose up -d`
3. Open [http://localhost:5566](http://localhost:5566)
4. You can bring the worker container down with `docker compose stop`


@ -0,0 +1,54 @@
version: '3.1'
services:
flowise:
image: flowiseai/flowise
restart: always
environment:
- PORT=${PORT}
- CORS_ORIGINS=${CORS_ORIGINS}
- IFRAME_ORIGINS=${IFRAME_ORIGINS}
- FLOWISE_USERNAME=${FLOWISE_USERNAME}
- FLOWISE_PASSWORD=${FLOWISE_PASSWORD}
- FLOWISE_FILE_SIZE_LIMIT=${FLOWISE_FILE_SIZE_LIMIT}
- DEBUG=${DEBUG}
- DATABASE_PATH=${DATABASE_PATH}
- DATABASE_TYPE=${DATABASE_TYPE}
- DATABASE_PORT=${DATABASE_PORT}
- DATABASE_HOST=${DATABASE_HOST}
- DATABASE_NAME=${DATABASE_NAME}
- DATABASE_USER=${DATABASE_USER}
- DATABASE_PASSWORD=${DATABASE_PASSWORD}
- DATABASE_SSL=${DATABASE_SSL}
- DATABASE_SSL_KEY_BASE64=${DATABASE_SSL_KEY_BASE64}
- APIKEY_STORAGE_TYPE=${APIKEY_STORAGE_TYPE}
- APIKEY_PATH=${APIKEY_PATH}
- SECRETKEY_PATH=${SECRETKEY_PATH}
- FLOWISE_SECRETKEY_OVERWRITE=${FLOWISE_SECRETKEY_OVERWRITE}
- LOG_LEVEL=${LOG_LEVEL}
- LOG_PATH=${LOG_PATH}
- BLOB_STORAGE_PATH=${BLOB_STORAGE_PATH}
- DISABLE_FLOWISE_TELEMETRY=${DISABLE_FLOWISE_TELEMETRY}
- MODEL_LIST_CONFIG_JSON=${MODEL_LIST_CONFIG_JSON}
- GLOBAL_AGENT_HTTP_PROXY=${GLOBAL_AGENT_HTTP_PROXY}
- GLOBAL_AGENT_HTTPS_PROXY=${GLOBAL_AGENT_HTTPS_PROXY}
- GLOBAL_AGENT_NO_PROXY=${GLOBAL_AGENT_NO_PROXY}
- DISABLED_NODES=${DISABLED_NODES}
- MODE=${MODE}
- WORKER_CONCURRENCY=${WORKER_CONCURRENCY}
- QUEUE_NAME=${QUEUE_NAME}
- QUEUE_REDIS_EVENT_STREAM_MAX_LEN=${QUEUE_REDIS_EVENT_STREAM_MAX_LEN}
- REDIS_URL=${REDIS_URL}
- REDIS_HOST=${REDIS_HOST}
- REDIS_PORT=${REDIS_PORT}
- REDIS_PASSWORD=${REDIS_PASSWORD}
- REDIS_USERNAME=${REDIS_USERNAME}
- REDIS_TLS=${REDIS_TLS}
- REDIS_CERT=${REDIS_CERT}
- REDIS_KEY=${REDIS_KEY}
- REDIS_CA=${REDIS_CA}
ports:
- '${PORT}:${PORT}'
volumes:
- ~/.flowise:/root/.flowise
entrypoint: /bin/sh -c "sleep 3; flowise worker"


@ -118,41 +118,40 @@ Flowise has 3 different modules in a single mono repository.
Flowise supports different environment variables to configure your instance. You can specify the following variables in the `.env` file inside the `packages/server` folder. Read [more](https://docs.flowiseai.com/environment-variables)
| Variable | Description | Type | Default |
| --- | --- | --- | --- |
| PORT | The HTTP port Flowise runs on | Number | 3000 |
| FLOWISE_USERNAME | Username to login | String | |
| FLOWISE_PASSWORD | Password to login | String | |
| FLOWISE_FILE_SIZE_LIMIT | Upload file size limit | String | 50mb |
| DISABLE_CHATFLOW_REUSE | Forces the creation of a new ChatFlow for each call instead of reusing existing ones from cache | Boolean | |
| DEBUG | Print logs from components | Boolean | |
| LOG_PATH | Location where log files are stored | String | `your-path/Flowise/logs` |
| LOG_LEVEL | Different levels of logs | Enum String: `error`, `info`, `verbose`, `debug` | `info` |
| APIKEY_STORAGE_TYPE | Storage type for API keys | Enum String: `json`, `db` | `json` |
| APIKEY_PATH | Location where API keys are saved when `APIKEY_STORAGE_TYPE` is `json` | String | `your-path/Flowise/packages/server` |
| TOOL_FUNCTION_BUILTIN_DEP | NodeJS built-in modules to be used for Tool Function | String | |
| TOOL_FUNCTION_EXTERNAL_DEP | External modules to be used for Tool Function | String | |
| DATABASE_TYPE | Type of database to store the flowise data | Enum String: `sqlite`, `mysql`, `postgres` | `sqlite` |
| DATABASE_PATH | Location where database is saved (when DATABASE_TYPE is sqlite) | String | `your-home-dir/.flowise` |
| DATABASE_HOST | Host URL or IP address (when DATABASE_TYPE is not sqlite) | String | |
| DATABASE_PORT | Database port (when DATABASE_TYPE is not sqlite) | String | |
| DATABASE_USERNAME | Database username (when DATABASE_TYPE is not sqlite) | String | |
| DATABASE_PASSWORD | Database password (when DATABASE_TYPE is not sqlite) | String | |
| DATABASE_NAME | Database name (when DATABASE_TYPE is not sqlite) | String | |
| SECRETKEY_PATH | Location where the encryption key (used to encrypt/decrypt credentials) is saved | String | `your-path/Flowise/packages/server` |
| FLOWISE_SECRETKEY_OVERWRITE | Encryption key to be used instead of the key stored in SECRETKEY_PATH | String | |
| DISABLE_FLOWISE_TELEMETRY | Turn off telemetry | String | |
| MODEL_LIST_CONFIG_JSON | Location to load the list of models from | String | `/your_model_list_config_file_path` |
| STORAGE_TYPE | Type of storage for uploaded files | Enum String: `local`, `s3` | `local` |
| BLOB_STORAGE_PATH | Local folder path where uploaded files are stored when `STORAGE_TYPE` is `local` | String | `your-home-dir/.flowise/storage` |
| S3_STORAGE_BUCKET_NAME | S3 bucket name to hold the uploaded files when `STORAGE_TYPE` is `s3` | String | |
| S3_STORAGE_ACCESS_KEY_ID | AWS Access Key | String | |
| S3_STORAGE_SECRET_ACCESS_KEY | AWS Secret Key | String | |
| S3_STORAGE_REGION | Region for the S3 bucket | String | |
| S3_ENDPOINT_URL | S3 endpoint URL | String | |
| S3_FORCE_PATH_STYLE | Set this to true to force the request to use path-style addressing | Boolean | false |
| SHOW_COMMUNITY_NODES | Show nodes created by community | Boolean | |
| DISABLED_NODES | Hide nodes from the UI (comma-separated list of node names) | String | |
| Variable | Description | Type | Default |
| --- | --- | --- | --- |
| PORT | The HTTP port Flowise runs on | Number | 3000 |
| FLOWISE_USERNAME | Username to login | String | |
| FLOWISE_PASSWORD | Password to login | String | |
| FLOWISE_FILE_SIZE_LIMIT | Upload file size limit | String | 50mb |
| DEBUG | Print logs from components | Boolean | |
| LOG_PATH | Location where log files are stored | String | `your-path/Flowise/logs` |
| LOG_LEVEL | Different levels of logs | Enum String: `error`, `info`, `verbose`, `debug` | `info` |
| APIKEY_STORAGE_TYPE | Storage type for API keys | Enum String: `json`, `db` | `json` |
| APIKEY_PATH | Location where API keys are saved when `APIKEY_STORAGE_TYPE` is `json` | String | `your-path/Flowise/packages/server` |
| TOOL_FUNCTION_BUILTIN_DEP | NodeJS built-in modules to be used for Tool Function | String | |
| TOOL_FUNCTION_EXTERNAL_DEP | External modules to be used for Tool Function | String | |
| DATABASE_TYPE | Type of database to store the flowise data | Enum String: `sqlite`, `mysql`, `postgres` | `sqlite` |
| DATABASE_PATH | Location where database is saved (when DATABASE_TYPE is sqlite) | String | `your-home-dir/.flowise` |
| DATABASE_HOST | Host URL or IP address (when DATABASE_TYPE is not sqlite) | String | |
| DATABASE_PORT | Database port (when DATABASE_TYPE is not sqlite) | String | |
| DATABASE_USERNAME | Database username (when DATABASE_TYPE is not sqlite) | String | |
| DATABASE_PASSWORD | Database password (when DATABASE_TYPE is not sqlite) | String | |
| DATABASE_NAME | Database name (when DATABASE_TYPE is not sqlite) | String | |
| SECRETKEY_PATH | Location where the encryption key (used to encrypt/decrypt credentials) is saved | String | `your-path/Flowise/packages/server` |
| FLOWISE_SECRETKEY_OVERWRITE | Encryption key to be used instead of the key stored in SECRETKEY_PATH | String | |
| DISABLE_FLOWISE_TELEMETRY | Turn off telemetry | String | |
| MODEL_LIST_CONFIG_JSON | Location to load the list of models from | String | `/your_model_list_config_file_path` |
| STORAGE_TYPE | Type of storage for uploaded files | Enum String: `local`, `s3` | `local` |
| BLOB_STORAGE_PATH | Local folder path where uploaded files are stored when `STORAGE_TYPE` is `local` | String | `your-home-dir/.flowise/storage` |
| S3_STORAGE_BUCKET_NAME | S3 bucket name to hold the uploaded files when `STORAGE_TYPE` is `s3` | String | |
| S3_STORAGE_ACCESS_KEY_ID | AWS Access Key | String | |
| S3_STORAGE_SECRET_ACCESS_KEY | AWS Secret Key | String | |
| S3_STORAGE_REGION | Region for the S3 bucket | String | |
| S3_ENDPOINT_URL | S3 endpoint URL | String | |
| S3_FORCE_PATH_STYLE | Set this to true to force the request to use path-style addressing | Boolean | false |
| SHOW_COMMUNITY_NODES | Show nodes created by community | Boolean | |
| DISABLED_NODES | Hide nodes from the UI (comma-separated list of node names) | String | |
You can also specify environment variables when using `npx`. For example:


@ -17,6 +17,9 @@
"start": "run-script-os",
"start:windows": "cd packages/server/bin && run start",
"start:default": "cd packages/server/bin && ./run start",
"start-worker": "run-script-os",
"start-worker:windows": "cd packages/server/bin && run worker",
"start-worker:default": "cd packages/server/bin && ./run worker",
"clean": "pnpm --filter \"./packages/**\" clean",
"nuke": "pnpm --filter \"./packages/**\" nuke && rimraf node_modules .turbo",
"format": "prettier --write \"**/*.{ts,tsx,md}\"",


@ -21,7 +21,7 @@ import {
} from '../../../src/Interface'
import { AgentExecutor } from '../../../src/agents'
import { addImagesToMessages, llmSupportsVision } from '../../../src/multiModalUtils'
import { checkInputs, Moderation } from '../../moderation/Moderation'
import { checkInputs, Moderation, streamResponse } from '../../moderation/Moderation'
import { formatResponse } from '../../outputparsers/OutputParserHelpers'
const DEFAULT_PREFIX = `Assistant is a large language model trained by OpenAI.
@ -124,10 +124,9 @@ class ConversationalAgent_Agents implements INode {
input = await checkInputs(moderations, input)
} catch (e) {
await new Promise((resolve) => setTimeout(resolve, 500))
// if (options.shouldStreamResponse) {
// streamResponse(options.sseStreamer, options.chatId, e.message)
// }
//streamResponse(options.socketIO && options.socketIOClientId, e.message, options.socketIO, options.socketIOClientId)
if (options.shouldStreamResponse) {
streamResponse(sseStreamer, chatId, e.message)
}
return formatResponse(e.message)
}
}
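The error path above (stream the moderation error when streaming is on, then return it as the formatted response) can be sketched in isolation. This is a hedged stand-in, not Flowise's actual code: `ErrorStreamer` and `handleModerationFailure` are hypothetical names mirroring `sseStreamer` / `streamResponse` / `formatResponse`:

```typescript
// Sketch of the moderation failure path: optionally stream the error text
// to the client, then return it as the chain's formatted response.
interface ErrorStreamer {
    streamErrorEvent(chatId: string, message: string): void
}

function handleModerationFailure(
    error: Error,
    shouldStreamResponse: boolean,
    streamer: ErrorStreamer | undefined,
    chatId: string
): { text: string } {
    if (shouldStreamResponse && streamer) {
        // mirrors streamResponse(sseStreamer, chatId, e.message)
        streamer.streamErrorEvent(chatId, error.message)
    }
    // mirrors formatResponse(e.message): the error becomes the response body
    return { text: error.message }
}
```

The point of the change in this diff is exactly this shape: the old socket.io-based streaming call is replaced by the SSE streamer, guarded by `options.shouldStreamResponse`.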


@ -27,17 +27,17 @@ class InMemoryCache implements INode {
}
async init(nodeData: INodeData, _: string, options: ICommonObject): Promise<any> {
const memoryMap = options.cachePool.getLLMCache(options.chatflowid) ?? new Map()
const memoryMap = (await options.cachePool.getLLMCache(options.chatflowid)) ?? new Map()
const inMemCache = new InMemoryCacheExtended(memoryMap)
inMemCache.lookup = async (prompt: string, llmKey: string): Promise<any | null> => {
const memory = options.cachePool.getLLMCache(options.chatflowid) ?? inMemCache.cache
const memory = (await options.cachePool.getLLMCache(options.chatflowid)) ?? inMemCache.cache
return Promise.resolve(memory.get(getCacheKey(prompt, llmKey)) ?? null)
}
inMemCache.update = async (prompt: string, llmKey: string, value: any): Promise<void> => {
inMemCache.cache.set(getCacheKey(prompt, llmKey), value)
options.cachePool.addLLMCache(options.chatflowid, inMemCache.cache)
await options.cachePool.addLLMCache(options.chatflowid, inMemCache.cache)
}
return inMemCache
}


@ -43,11 +43,11 @@ class InMemoryEmbeddingCache implements INode {
async init(nodeData: INodeData, _: string, options: ICommonObject): Promise<any> {
const namespace = nodeData.inputs?.namespace as string
const underlyingEmbeddings = nodeData.inputs?.embeddings as Embeddings
const memoryMap = options.cachePool.getEmbeddingCache(options.chatflowid) ?? {}
const memoryMap = (await options.cachePool.getEmbeddingCache(options.chatflowid)) ?? {}
const inMemCache = new InMemoryEmbeddingCacheExtended(memoryMap)
inMemCache.mget = async (keys: string[]) => {
const memory = options.cachePool.getEmbeddingCache(options.chatflowid) ?? inMemCache.store
const memory = (await options.cachePool.getEmbeddingCache(options.chatflowid)) ?? inMemCache.store
return keys.map((key) => memory[key])
}
@ -55,14 +55,14 @@ class InMemoryEmbeddingCache implements INode {
for (const [key, value] of keyValuePairs) {
inMemCache.store[key] = value
}
options.cachePool.addEmbeddingCache(options.chatflowid, inMemCache.store)
await options.cachePool.addEmbeddingCache(options.chatflowid, inMemCache.store)
}
inMemCache.mdelete = async (keys: string[]): Promise<void> => {
for (const key of keys) {
delete inMemCache.store[key]
}
options.cachePool.addEmbeddingCache(options.chatflowid, inMemCache.store)
await options.cachePool.addEmbeddingCache(options.chatflowid, inMemCache.store)
}
return CacheBackedEmbeddings.fromBytesStore(underlyingEmbeddings, inMemCache, {


@ -1,47 +1,10 @@
import { Redis, RedisOptions } from 'ioredis'
import { isEqual } from 'lodash'
import { Redis } from 'ioredis'
import hash from 'object-hash'
import { RedisCache as LangchainRedisCache } from '@langchain/community/caches/ioredis'
import { StoredGeneration, mapStoredMessageToChatMessage } from '@langchain/core/messages'
import { Generation, ChatGeneration } from '@langchain/core/outputs'
import { getBaseClasses, getCredentialData, getCredentialParam, ICommonObject, INode, INodeData, INodeParams } from '../../../src'
let redisClientSingleton: Redis
let redisClientOption: RedisOptions
let redisClientUrl: string
const getRedisClientbyOption = (option: RedisOptions) => {
if (!redisClientSingleton) {
// if client doesn't exists
redisClientSingleton = new Redis(option)
redisClientOption = option
return redisClientSingleton
} else if (redisClientSingleton && !isEqual(option, redisClientOption)) {
// if client exists but option changed
redisClientSingleton.quit()
redisClientSingleton = new Redis(option)
redisClientOption = option
return redisClientSingleton
}
return redisClientSingleton
}
const getRedisClientbyUrl = (url: string) => {
if (!redisClientSingleton) {
// if client doesn't exists
redisClientSingleton = new Redis(url)
redisClientUrl = url
return redisClientSingleton
} else if (redisClientSingleton && url !== redisClientUrl) {
// if client exists but option changed
redisClientSingleton.quit()
redisClientSingleton = new Redis(url)
redisClientUrl = url
return redisClientSingleton
}
return redisClientSingleton
}
class RedisCache implements INode {
label: string
name: string
@ -85,33 +48,19 @@ class RedisCache implements INode {
async init(nodeData: INodeData, _: string, options: ICommonObject): Promise<any> {
const ttl = nodeData.inputs?.ttl as string
const credentialData = await getCredentialData(nodeData.credential ?? '', options)
const redisUrl = getCredentialParam('redisUrl', credentialData, nodeData)
let client: Redis
if (!redisUrl || redisUrl === '') {
const username = getCredentialParam('redisCacheUser', credentialData, nodeData)
const password = getCredentialParam('redisCachePwd', credentialData, nodeData)
const portStr = getCredentialParam('redisCachePort', credentialData, nodeData)
const host = getCredentialParam('redisCacheHost', credentialData, nodeData)
const sslEnabled = getCredentialParam('redisCacheSslEnabled', credentialData, nodeData)
const tlsOptions = sslEnabled === true ? { tls: { rejectUnauthorized: false } } : {}
client = getRedisClientbyOption({
port: portStr ? parseInt(portStr) : 6379,
host,
username,
password,
...tlsOptions
})
} else {
client = getRedisClientbyUrl(redisUrl)
}
let client = await getRedisClient(nodeData, options)
const redisClient = new LangchainRedisCache(client)
redisClient.lookup = async (prompt: string, llmKey: string) => {
try {
const pingResp = await client.ping()
if (pingResp !== 'PONG') {
client = await getRedisClient(nodeData, options)
}
} catch (error) {
client = await getRedisClient(nodeData, options)
}
let idx = 0
let key = getCacheKey(prompt, llmKey, String(idx))
let value = await client.get(key)
@ -125,10 +74,21 @@ class RedisCache implements INode {
value = await client.get(key)
}
client.quit()
return generations.length > 0 ? generations : null
}
redisClient.update = async (prompt: string, llmKey: string, value: Generation[]) => {
try {
const pingResp = await client.ping()
if (pingResp !== 'PONG') {
client = await getRedisClient(nodeData, options)
}
} catch (error) {
client = await getRedisClient(nodeData, options)
}
for (let i = 0; i < value.length; i += 1) {
const key = getCacheKey(prompt, llmKey, String(i))
if (ttl) {
@ -137,12 +97,43 @@ class RedisCache implements INode {
await client.set(key, JSON.stringify(serializeGeneration(value[i])))
}
}
client.quit()
}
client.quit()
return redisClient
}
}
const getRedisClient = async (nodeData: INodeData, options: ICommonObject) => {
let client: Redis
const credentialData = await getCredentialData(nodeData.credential ?? '', options)
const redisUrl = getCredentialParam('redisUrl', credentialData, nodeData)
if (!redisUrl || redisUrl === '') {
const username = getCredentialParam('redisCacheUser', credentialData, nodeData)
const password = getCredentialParam('redisCachePwd', credentialData, nodeData)
const portStr = getCredentialParam('redisCachePort', credentialData, nodeData)
const host = getCredentialParam('redisCacheHost', credentialData, nodeData)
const sslEnabled = getCredentialParam('redisCacheSslEnabled', credentialData, nodeData)
const tlsOptions = sslEnabled === true ? { tls: { rejectUnauthorized: false } } : {}
client = new Redis({
port: portStr ? parseInt(portStr) : 6379,
host,
username,
password,
...tlsOptions
})
} else {
client = new Redis(redisUrl)
}
return client
}
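The `lookup` and `update` overrides above repeat the same ping-and-reconnect guard. That pattern could be factored into a small helper; this is a sketch with a hypothetical name (`ensureAlive`), written against a minimal client interface so it is testable without a live Redis:

```typescript
// Sketch: return the existing client if it answers PING with 'PONG',
// otherwise build a fresh one. Mirrors the guard inside lookup/update.
interface PingableClient {
    ping(): Promise<string>
}

async function ensureAlive<T extends PingableClient>(client: T, reconnect: () => Promise<T>): Promise<T> {
    try {
        if ((await client.ping()) === 'PONG') return client
    } catch {
        // fall through: the connection is unusable, rebuild it
    }
    return reconnect()
}
```

With such a helper, each override would reduce to `client = await ensureAlive(client, () => getRedisClient(nodeData, options))`.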
const getCacheKey = (...strings: string[]): string => hash(strings.join('_'))
const deserializeStoredGeneration = (storedGeneration: StoredGeneration) => {
if (storedGeneration.message !== undefined) {


@ -1,45 +1,11 @@
import { Redis, RedisOptions } from 'ioredis'
import { isEqual } from 'lodash'
import { Redis } from 'ioredis'
import { RedisByteStore } from '@langchain/community/storage/ioredis'
import { Embeddings } from '@langchain/core/embeddings'
import { CacheBackedEmbeddings } from 'langchain/embeddings/cache_backed'
import { Embeddings, EmbeddingsInterface } from '@langchain/core/embeddings'
import { CacheBackedEmbeddingsFields } from 'langchain/embeddings/cache_backed'
import { getBaseClasses, getCredentialData, getCredentialParam, ICommonObject, INode, INodeData, INodeParams } from '../../../src'
let redisClientSingleton: Redis
let redisClientOption: RedisOptions
let redisClientUrl: string
const getRedisClientbyOption = (option: RedisOptions) => {
if (!redisClientSingleton) {
// if client doesn't exists
redisClientSingleton = new Redis(option)
redisClientOption = option
return redisClientSingleton
} else if (redisClientSingleton && !isEqual(option, redisClientOption)) {
// if client exists but option changed
redisClientSingleton.quit()
redisClientSingleton = new Redis(option)
redisClientOption = option
return redisClientSingleton
}
return redisClientSingleton
}
const getRedisClientbyUrl = (url: string) => {
if (!redisClientSingleton) {
// if client doesn't exists
redisClientSingleton = new Redis(url)
redisClientUrl = url
return redisClientSingleton
} else if (redisClientSingleton && url !== redisClientUrl) {
// if client exists but option changed
redisClientSingleton.quit()
redisClientSingleton = new Redis(url)
redisClientUrl = url
return redisClientSingleton
}
return redisClientSingleton
}
import { BaseStore } from '@langchain/core/stores'
import { insecureHash } from '@langchain/core/utils/hash'
import { Document } from '@langchain/core/documents'
class RedisEmbeddingsCache implements INode {
label: string
@ -112,7 +78,7 @@ class RedisEmbeddingsCache implements INode {
const tlsOptions = sslEnabled === true ? { tls: { rejectUnauthorized: false } } : {}
client = getRedisClientbyOption({
client = new Redis({
port: portStr ? parseInt(portStr) : 6379,
host,
username,
@ -120,7 +86,7 @@ class RedisEmbeddingsCache implements INode {
...tlsOptions
})
} else {
client = getRedisClientbyUrl(redisUrl)
client = new Redis(redisUrl)
}
ttl ??= '3600'
@ -130,10 +96,143 @@ class RedisEmbeddingsCache implements INode {
ttl: ttlNumber
})
return CacheBackedEmbeddings.fromBytesStore(underlyingEmbeddings, redisStore, {
namespace: namespace
const store = CacheBackedEmbeddings.fromBytesStore(underlyingEmbeddings, redisStore, {
namespace: namespace,
redisClient: client
})
return store
}
}
class CacheBackedEmbeddings extends Embeddings {
protected underlyingEmbeddings: EmbeddingsInterface
protected documentEmbeddingStore: BaseStore<string, number[]>
protected redisClient?: Redis
constructor(fields: CacheBackedEmbeddingsFields & { redisClient?: Redis }) {
super(fields)
this.underlyingEmbeddings = fields.underlyingEmbeddings
this.documentEmbeddingStore = fields.documentEmbeddingStore
this.redisClient = fields.redisClient
}
async embedQuery(document: string): Promise<number[]> {
const res = this.underlyingEmbeddings.embedQuery(document)
this.redisClient?.quit()
return res
}
async embedDocuments(documents: string[]): Promise<number[][]> {
const vectors = await this.documentEmbeddingStore.mget(documents)
const missingIndicies = []
const missingDocuments = []
for (let i = 0; i < vectors.length; i += 1) {
if (vectors[i] === undefined) {
missingIndicies.push(i)
missingDocuments.push(documents[i])
}
}
if (missingDocuments.length) {
const missingVectors = await this.underlyingEmbeddings.embedDocuments(missingDocuments)
const keyValuePairs: [string, number[]][] = missingDocuments.map((document, i) => [document, missingVectors[i]])
await this.documentEmbeddingStore.mset(keyValuePairs)
for (let i = 0; i < missingIndicies.length; i += 1) {
vectors[missingIndicies[i]] = missingVectors[i]
}
}
this.redisClient?.quit()
return vectors as number[][]
}
static fromBytesStore(
underlyingEmbeddings: EmbeddingsInterface,
documentEmbeddingStore: BaseStore<string, Uint8Array>,
options?: {
namespace?: string
redisClient?: Redis
}
) {
const encoder = new TextEncoder()
const decoder = new TextDecoder()
const encoderBackedStore = new EncoderBackedStore<string, number[], Uint8Array>({
store: documentEmbeddingStore,
keyEncoder: (key) => (options?.namespace ?? '') + insecureHash(key),
valueSerializer: (value) => encoder.encode(JSON.stringify(value)),
valueDeserializer: (serializedValue) => JSON.parse(decoder.decode(serializedValue))
})
return new this({
underlyingEmbeddings,
documentEmbeddingStore: encoderBackedStore,
redisClient: options?.redisClient
})
}
}
class EncoderBackedStore<K, V, SerializedType = any> extends BaseStore<K, V> {
lc_namespace = ['langchain', 'storage']
store: BaseStore<string, SerializedType>
keyEncoder: (key: K) => string
valueSerializer: (value: V) => SerializedType
valueDeserializer: (value: SerializedType) => V
constructor(fields: {
store: BaseStore<string, SerializedType>
keyEncoder: (key: K) => string
valueSerializer: (value: V) => SerializedType
valueDeserializer: (value: SerializedType) => V
}) {
super(fields)
this.store = fields.store
this.keyEncoder = fields.keyEncoder
this.valueSerializer = fields.valueSerializer
this.valueDeserializer = fields.valueDeserializer
}
async mget(keys: K[]): Promise<(V | undefined)[]> {
const encodedKeys = keys.map(this.keyEncoder)
const values = await this.store.mget(encodedKeys)
return values.map((value) => {
if (value === undefined) {
return undefined
}
return this.valueDeserializer(value)
})
}
async mset(keyValuePairs: [K, V][]): Promise<void> {
const encodedPairs: [string, SerializedType][] = keyValuePairs.map(([key, value]) => [
this.keyEncoder(key),
this.valueSerializer(value)
])
return this.store.mset(encodedPairs)
}
async mdelete(keys: K[]): Promise<void> {
const encodedKeys = keys.map(this.keyEncoder)
return this.store.mdelete(encodedKeys)
}
async *yieldKeys(prefix?: string | undefined): AsyncGenerator<string | K> {
yield* this.store.yieldKeys(prefix)
}
}
export function createDocumentStoreFromByteStore(store: BaseStore<string, Uint8Array>) {
const encoder = new TextEncoder()
const decoder = new TextDecoder()
return new EncoderBackedStore({
store,
keyEncoder: (key: string) => key,
valueSerializer: (doc: Document) => encoder.encode(JSON.stringify({ pageContent: doc.pageContent, metadata: doc.metadata })),
valueDeserializer: (bytes: Uint8Array) => new Document(JSON.parse(decoder.decode(bytes)))
})
}
module.exports = { nodeClass: RedisEmbeddingsCache }
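The `EncoderBackedStore` above boils down to two things: namespaced keys and a JSON-to-bytes round-trip for values. A self-contained sketch of that round-trip, using an in-memory `Map` as a hypothetical stand-in for the Redis-backed byte store (the `embeddings:` namespace is an assumed example, mirroring `options?.namespace`):

```typescript
// Sketch of the key/value encoding used by EncoderBackedStore above.
// The Map-based byte store is a hypothetical stand-in for Redis.
const enc = new TextEncoder()
const dec = new TextDecoder()

const byteStore = new Map<string, Uint8Array>()
const namespace = 'embeddings:' // assumed namespace for illustration

async function msetVectors(pairs: [string, number[]][]): Promise<void> {
    for (const [text, vector] of pairs) {
        // valueSerializer: JSON -> UTF-8 bytes
        byteStore.set(namespace + text, enc.encode(JSON.stringify(vector)))
    }
}

async function mgetVectors(texts: string[]): Promise<(number[] | undefined)[]> {
    return texts.map((text) => {
        const bytes = byteStore.get(namespace + text)
        // valueDeserializer: UTF-8 bytes -> JSON
        return bytes ? JSON.parse(dec.decode(bytes)) : undefined
    })
}
```

Cache misses surface as `undefined`, which is exactly what `embedDocuments` above checks for when collecting `missingIndicies`.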

View File

@ -180,7 +180,6 @@ class SqlDatabaseChain_Chains implements INode {
if (shouldStreamResponse) {
streamResponse(sseStreamer, chatId, e.message)
}
// streamResponse(options.socketIO && options.socketIOClientId, e.message, options.socketIO, options.socketIOClientId)
return formatResponse(e.message)
}
}

View File

@ -1,5 +1,4 @@
import { Redis, RedisConfigNodejs } from '@upstash/redis'
import { isEqual } from 'lodash'
import { Redis } from '@upstash/redis'
import { BufferMemory, BufferMemoryInput } from 'langchain/memory'
import { UpstashRedisChatMessageHistory } from '@langchain/community/stores/message/upstash_redis'
import { mapStoredMessageToChatMessage, AIMessage, HumanMessage, StoredMessage, BaseMessage } from '@langchain/core/messages'
@ -13,24 +12,6 @@ import {
} from '../../../src/utils'
import { ICommonObject } from '../../../src/Interface'
let redisClientSingleton: Redis
let redisClientOption: RedisConfigNodejs
const getRedisClientbyOption = (option: RedisConfigNodejs) => {
if (!redisClientSingleton) {
// if client doesn't exists
redisClientSingleton = new Redis(option)
redisClientOption = option
return redisClientSingleton
} else if (redisClientSingleton && !isEqual(option, redisClientOption)) {
// if client exists but option changed
redisClientSingleton = new Redis(option)
redisClientOption = option
return redisClientSingleton
}
return redisClientSingleton
}
class UpstashRedisBackedChatMemory_Memory implements INode {
label: string
name: string
@ -109,7 +90,7 @@ const initalizeUpstashRedis = async (nodeData: INodeData, options: ICommonObject
const credentialData = await getCredentialData(nodeData.credential ?? '', options)
const upstashRestToken = getCredentialParam('upstashRestToken', credentialData, nodeData)
const client = getRedisClientbyOption({
const client = new Redis({
url: baseURL,
token: upstashRestToken
})

View File

@ -138,7 +138,14 @@ class Elasticsearch_VectorStores implements INode {
})
// end of workaround
const elasticSearchClientArgs = prepareClientArgs(endPoint, cloudId, credentialData, nodeData, similarityMeasure, indexName)
const { elasticClient, elasticSearchClientArgs } = prepareClientArgs(
endPoint,
cloudId,
credentialData,
nodeData,
similarityMeasure,
indexName
)
const vectorStore = new ElasticVectorSearch(embeddings, elasticSearchClientArgs)
try {
@ -155,9 +162,11 @@ class Elasticsearch_VectorStores implements INode {
vectorStoreName: indexName
}
})
await elasticClient.close()
return res
} else {
await vectorStore.addDocuments(finalDocs)
await elasticClient.close()
return { numAdded: finalDocs.length, addedDocs: finalDocs }
}
} catch (e) {
@ -174,7 +183,14 @@ class Elasticsearch_VectorStores implements INode {
const endPoint = getCredentialParam('endpoint', credentialData, nodeData)
const cloudId = getCredentialParam('cloudId', credentialData, nodeData)
const elasticSearchClientArgs = prepareClientArgs(endPoint, cloudId, credentialData, nodeData, similarityMeasure, indexName)
const { elasticClient, elasticSearchClientArgs } = prepareClientArgs(
endPoint,
cloudId,
credentialData,
nodeData,
similarityMeasure,
indexName
)
const vectorStore = new ElasticVectorSearch(embeddings, elasticSearchClientArgs)
try {
@ -186,8 +202,10 @@ class Elasticsearch_VectorStores implements INode {
await vectorStore.delete({ ids: keys })
await recordManager.deleteKeys(keys)
await elasticClient.close()
} else {
await vectorStore.delete({ ids })
await elasticClient.close()
}
} catch (e) {
throw new Error(e)
@ -206,8 +224,22 @@ class Elasticsearch_VectorStores implements INode {
const k = topK ? parseFloat(topK) : 4
const output = nodeData.outputs?.output as string
const elasticSearchClientArgs = prepareClientArgs(endPoint, cloudId, credentialData, nodeData, similarityMeasure, indexName)
const { elasticClient, elasticSearchClientArgs } = prepareClientArgs(
endPoint,
cloudId,
credentialData,
nodeData,
similarityMeasure,
indexName
)
const vectorStore = await ElasticVectorSearch.fromExistingIndex(embeddings, elasticSearchClientArgs)
const originalSimilaritySearchVectorWithScore = vectorStore.similaritySearchVectorWithScore
vectorStore.similaritySearchVectorWithScore = async (query: number[], k: number, filter?: any) => {
const results = await originalSimilaritySearchVectorWithScore.call(vectorStore, query, k, filter)
await elasticClient.close()
return results
}
if (output === 'retriever') {
return vectorStore.asRetriever(k)
@ -289,12 +321,17 @@ const prepareClientArgs = (
similarity: 'l2_norm'
}
}
const elasticClient = new Client(elasticSearchClientOptions)
const elasticSearchClientArgs: ElasticClientArgs = {
client: new Client(elasticSearchClientOptions),
client: elasticClient,
indexName: indexName,
vectorSearchOptions: vectorSearchOptions
}
return elasticSearchClientArgs
return {
elasticClient,
elasticSearchClientArgs
}
}
module.exports = { nodeClass: Elasticsearch_VectorStores }
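The change above — returning the raw `elasticClient` alongside the client args so each call site can `close()` it — is one instance of the per-call connection lifecycle this commit applies across stores (Elasticsearch `close()`, Redis `quit()`, TypeORM `initialize()`/`destroy()`). A generic sketch of that pattern, with a hypothetical `Closable` type not taken from the commit; note that a `try/finally` also releases the connection when the work throws, which the success-path-only `close()` calls above do not guarantee:

```typescript
// Per-call connection lifecycle: create, use, always close.
interface Closable {
    close(): Promise<void>
}

async function withClient<C extends Closable, T>(
    create: () => C,
    work: (client: C) => Promise<T>
): Promise<T> {
    const client = create()
    try {
        return await work(client)
    } finally {
        // Runs on success and on error, so connections are never leaked.
        await client.close()
    }
}
```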

View File

@ -1,5 +1,5 @@
import { flatten, isEqual } from 'lodash'
import { Pinecone, PineconeConfiguration } from '@pinecone-database/pinecone'
import { flatten } from 'lodash'
import { Pinecone } from '@pinecone-database/pinecone'
import { PineconeStoreParams, PineconeStore } from '@langchain/pinecone'
import { Embeddings } from '@langchain/core/embeddings'
import { Document } from '@langchain/core/documents'
@ -9,23 +9,6 @@ import { FLOWISE_CHATID, getBaseClasses, getCredentialData, getCredentialParam }
import { addMMRInputParams, howToUseFileUpload, resolveVectorStoreOrRetriever } from '../VectorStoreUtils'
import { index } from '../../../src/indexing'
let pineconeClientSingleton: Pinecone
let pineconeClientOption: PineconeConfiguration
const getPineconeClient = (option: PineconeConfiguration) => {
if (!pineconeClientSingleton) {
// if client doesn't exists
pineconeClientSingleton = new Pinecone(option)
pineconeClientOption = option
return pineconeClientSingleton
} else if (pineconeClientSingleton && !isEqual(option, pineconeClientOption)) {
// if client exists but option changed
pineconeClientSingleton = new Pinecone(option)
return pineconeClientSingleton
}
return pineconeClientSingleton
}
class Pinecone_VectorStores implements INode {
label: string
name: string
@ -155,7 +138,7 @@ class Pinecone_VectorStores implements INode {
const credentialData = await getCredentialData(nodeData.credential ?? '', options)
const pineconeApiKey = getCredentialParam('pineconeApiKey', credentialData, nodeData)
const client = getPineconeClient({ apiKey: pineconeApiKey })
const client = new Pinecone({ apiKey: pineconeApiKey })
const pineconeIndex = client.Index(_index)
@ -211,7 +194,7 @@ class Pinecone_VectorStores implements INode {
const credentialData = await getCredentialData(nodeData.credential ?? '', options)
const pineconeApiKey = getCredentialParam('pineconeApiKey', credentialData, nodeData)
const client = getPineconeClient({ apiKey: pineconeApiKey })
const client = new Pinecone({ apiKey: pineconeApiKey })
const pineconeIndex = client.Index(_index)
@ -253,7 +236,7 @@ class Pinecone_VectorStores implements INode {
const credentialData = await getCredentialData(nodeData.credential ?? '', options)
const pineconeApiKey = getCredentialParam('pineconeApiKey', credentialData, nodeData)
const client = getPineconeClient({ apiKey: pineconeApiKey })
const client = new Pinecone({ apiKey: pineconeApiKey })
const pineconeIndex = client.Index(index)

View File

@ -7,7 +7,7 @@ import { howToUseFileUpload } from '../VectorStoreUtils'
import { VectorStore } from '@langchain/core/vectorstores'
import { VectorStoreDriver } from './driver/Base'
import { TypeORMDriver } from './driver/TypeORM'
import { PGVectorDriver } from './driver/PGVector'
// import { PGVectorDriver } from './driver/PGVector'
import { getContentColumnName, getDatabase, getHost, getPort, getTableName } from './utils'
const serverCredentialsExists = !!process.env.POSTGRES_VECTORSTORE_USER && !!process.env.POSTGRES_VECTORSTORE_PASSWORD
@ -91,7 +91,7 @@ class Postgres_VectorStores implements INode {
additionalParams: true,
optional: true
},
{
/*{
label: 'Driver',
name: 'driver',
type: 'options',
@ -109,7 +109,7 @@ class Postgres_VectorStores implements INode {
],
optional: true,
additionalParams: true
},
},*/
{
label: 'Distance Strategy',
name: 'distanceStrategy',
@ -300,14 +300,15 @@ class Postgres_VectorStores implements INode {
}
static getDriverFromConfig(nodeData: INodeData, options: ICommonObject): VectorStoreDriver {
switch (nodeData.inputs?.driver) {
/*switch (nodeData.inputs?.driver) {
case 'typeorm':
return new TypeORMDriver(nodeData, options)
case 'pgvector':
return new PGVectorDriver(nodeData, options)
default:
return new TypeORMDriver(nodeData, options)
}
}*/
return new TypeORMDriver(nodeData, options)
}
}

View File

@ -1,3 +1,7 @@
/*
* Temporarily disabled due to open connections accumulating without being released
* Use TypeORM instead
import { VectorStoreDriver } from './Base'
import { FLOWISE_CHATID } from '../../../../src'
import { DistanceStrategy, PGVectorStore, PGVectorStoreArgs } from '@langchain/community/vectorstores/pgvector'
@ -120,3 +124,4 @@ export class PGVectorDriver extends VectorStoreDriver {
return instance
}
}
*/

View File

@ -51,7 +51,9 @@ export class TypeORMDriver extends VectorStoreDriver {
}
async instanciate(metadataFilters?: any) {
return this.adaptInstance(await TypeORMVectorStore.fromDataSource(this.getEmbeddings(), await this.getArgs()), metadataFilters)
// @ts-ignore
const instance = new TypeORMVectorStore(this.getEmbeddings(), await this.getArgs())
return this.adaptInstance(instance, metadataFilters)
}
async fromDocuments(documents: Document[]) {
@ -77,7 +79,8 @@ export class TypeORMDriver extends VectorStoreDriver {
[ERROR]: uncaughtException: Illegal invocation TypeError: Illegal invocation at Socket.ref (node:net:1524:18) at Connection.ref (.../node_modules/pg/lib/connection.js:183:17) at Client.ref (.../node_modules/pg/lib/client.js:591:21) at BoundPool._pulseQueue (/node_modules/pg-pool/index.js:148:28) at .../node_modules/pg-pool/index.js:184:37 at process.processTicksAndRejections (node:internal/process/task_queues:77:11)
*/
instance.similaritySearchVectorWithScore = async (query: number[], k: number, filter?: any) => {
return await TypeORMDriver.similaritySearchVectorWithScore(
await instance.appDataSource.initialize()
const res = await TypeORMDriver.similaritySearchVectorWithScore(
query,
k,
tableName,
@ -85,6 +88,8 @@ export class TypeORMDriver extends VectorStoreDriver {
filter ?? metadataFilters,
this.computedOperatorString
)
await instance.appDataSource.destroy()
return res
}
instance.delete = async (params: { ids: string[] }): Promise<void> => {
@ -92,9 +97,12 @@ export class TypeORMDriver extends VectorStoreDriver {
if (ids?.length) {
try {
await instance.appDataSource.initialize()
await instance.appDataSource.getRepository(instance.documentEntity).delete(ids)
} catch (e) {
console.error('Failed to delete')
} finally {
await instance.appDataSource.destroy()
}
}
}
@ -102,7 +110,10 @@ export class TypeORMDriver extends VectorStoreDriver {
const baseAddVectorsFn = instance.addVectors.bind(instance)
instance.addVectors = async (vectors, documents) => {
return baseAddVectorsFn(vectors, this.sanitizeDocuments(documents))
await instance.appDataSource.initialize()
const res = await baseAddVectorsFn(vectors, this.sanitizeDocuments(documents))
await instance.appDataSource.destroy()
return res
}
return instance

View File

@ -1,32 +1,11 @@
import { flatten, isEqual } from 'lodash'
import { createClient, SearchOptions, RedisClientOptions } from 'redis'
import { flatten } from 'lodash'
import { createClient, SearchOptions } from 'redis'
import { Embeddings } from '@langchain/core/embeddings'
import { RedisVectorStore, RedisVectorStoreConfig } from '@langchain/community/vectorstores/redis'
import { Document } from '@langchain/core/documents'
import { ICommonObject, INode, INodeData, INodeOutputsValue, INodeParams, IndexingResult } from '../../../src/Interface'
import { getBaseClasses, getCredentialData, getCredentialParam } from '../../../src/utils'
import { escapeAllStrings, escapeSpecialChars, unEscapeSpecialChars } from './utils'
let redisClientSingleton: ReturnType<typeof createClient>
let redisClientOption: RedisClientOptions
const getRedisClient = async (option: RedisClientOptions) => {
if (!redisClientSingleton) {
// if client doesn't exists
redisClientSingleton = createClient(option)
await redisClientSingleton.connect()
redisClientOption = option
return redisClientSingleton
} else if (redisClientSingleton && !isEqual(option, redisClientOption)) {
// if client exists but option changed
redisClientSingleton.quit()
redisClientSingleton = createClient(option)
await redisClientSingleton.connect()
redisClientOption = option
return redisClientSingleton
}
return redisClientSingleton
}
import { escapeSpecialChars, unEscapeSpecialChars } from './utils'
class Redis_VectorStores implements INode {
label: string
@ -163,13 +142,13 @@ class Redis_VectorStores implements INode {
for (let i = 0; i < flattenDocs.length; i += 1) {
if (flattenDocs[i] && flattenDocs[i].pageContent) {
const document = new Document(flattenDocs[i])
escapeAllStrings(document.metadata)
finalDocs.push(document)
}
}
try {
const redisClient = await getRedisClient({ url: redisUrl })
const redisClient = createClient({ url: redisUrl })
await redisClient.connect()
const storeConfig: RedisVectorStoreConfig = {
redisClient: redisClient,
@ -203,6 +182,8 @@ class Redis_VectorStores implements INode {
)
}
await redisClient.quit()
return { numAdded: finalDocs.length, addedDocs: finalDocs }
} catch (e) {
throw new Error(e)
@ -231,7 +212,7 @@ class Redis_VectorStores implements INode {
redisUrl = 'redis://' + username + ':' + password + '@' + host + ':' + portStr
}
const redisClient = await getRedisClient({ url: redisUrl })
const redisClient = createClient({ url: redisUrl })
const storeConfig: RedisVectorStoreConfig = {
redisClient: redisClient,
@ -246,7 +227,19 @@ class Redis_VectorStores implements INode {
// Avoid Illegal invocation error
vectorStore.similaritySearchVectorWithScore = async (query: number[], k: number, filter?: any) => {
return await similaritySearchVectorWithScore(query, k, indexName, metadataKey, vectorKey, contentKey, redisClient, filter)
await redisClient.connect()
const results = await similaritySearchVectorWithScore(
query,
k,
indexName,
metadataKey,
vectorKey,
contentKey,
redisClient,
filter
)
await redisClient.quit()
return results
}
if (output === 'retriever') {

View File

@ -125,7 +125,6 @@
"redis": "^4.6.7",
"replicate": "^0.31.1",
"sanitize-filename": "^1.6.3",
"socket.io": "^4.6.1",
"srt-parser-2": "^1.2.3",
"typeorm": "^0.3.6",
"weaviate-ts-client": "^1.1.0",

View File

@ -406,12 +406,9 @@ export interface IStateWithMessages extends ICommonObject {
}
export interface IServerSideEventStreamer {
streamEvent(chatId: string, data: string): void
streamStartEvent(chatId: string, data: any): void
streamTokenEvent(chatId: string, data: string): void
streamCustomEvent(chatId: string, eventType: string, data: any): void
streamSourceDocumentsEvent(chatId: string, data: any): void
streamUsedToolsEvent(chatId: string, data: any): void
streamFileAnnotationsEvent(chatId: string, data: any): void
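With `streamStartEvent` removed, anything that implements the remaining methods satisfies the interface. A minimal recording implementation of just the methods visible in this hunk (a test double for illustration, not the server's real SSE streamer; the full interface may declare more methods than shown here):

```typescript
// Test double covering only the IServerSideEventStreamer methods
// visible in the hunk above.
interface IServerSideEventStreamer {
    streamEvent(chatId: string, data: string): void
    streamTokenEvent(chatId: string, data: string): void
    streamCustomEvent(chatId: string, eventType: string, data: any): void
    streamSourceDocumentsEvent(chatId: string, data: any): void
    streamUsedToolsEvent(chatId: string, data: any): void
    streamFileAnnotationsEvent(chatId: string, data: any): void
}

class RecordingStreamer implements IServerSideEventStreamer {
    events: { chatId: string; type: string; data: any }[] = []
    private record(chatId: string, type: string, data: any) {
        this.events.push({ chatId, type, data })
    }
    streamEvent(chatId: string, data: string) { this.record(chatId, 'event', data) }
    streamTokenEvent(chatId: string, data: string) { this.record(chatId, 'token', data) }
    streamCustomEvent(chatId: string, eventType: string, data: any) { this.record(chatId, eventType, data) }
    streamSourceDocumentsEvent(chatId: string, data: any) { this.record(chatId, 'sourceDocuments', data) }
    streamUsedToolsEvent(chatId: string, data: any) { this.record(chatId, 'usedTools', data) }
    streamFileAnnotationsEvent(chatId: string, data: any) { this.record(chatId, 'fileAnnotations', data) }
}
```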

View File

@ -28,8 +28,6 @@ PORT=3000
# FLOWISE_PASSWORD=1234
# FLOWISE_FILE_SIZE_LIMIT=50mb
# DISABLE_CHATFLOW_REUSE=true
# DEBUG=true
# LOG_PATH=/your_log_path/.flowise/logs
# LOG_LEVEL=info (error | warn | info | verbose | debug)
@ -77,3 +75,20 @@ PORT=3000
# GLOBAL_AGENT_HTTP_PROXY=CorporateHttpProxyUrl
# GLOBAL_AGENT_HTTPS_PROXY=CorporateHttpsProxyUrl
# GLOBAL_AGENT_NO_PROXY=ExceptionHostsToBypassProxyIfNeeded
######################
# QUEUE CONFIGURATION
######################
# MODE=queue #(queue | main)
# QUEUE_NAME=flowise-queue
# QUEUE_REDIS_EVENT_STREAM_MAX_LEN=100000
# WORKER_CONCURRENCY=100000
# REDIS_URL=
# REDIS_HOST=localhost
# REDIS_PORT=6379
# REDIS_USERNAME=
# REDIS_PASSWORD=
# REDIS_TLS=
# REDIS_CERT=
# REDIS_KEY=
# REDIS_CA=
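These variables feed an ioredis-style connection: `REDIS_URL` wins when set, otherwise the host/port/auth/TLS fields are assembled, with the `REDIS_CERT`/`REDIS_KEY`/`REDIS_CA` values base64-decoded into buffers. A pure-function sketch of that assembly (no ioredis import; the shape mirrors what `CachePool` does elsewhere in this commit):

```typescript
// Sketch: build ioredis-style options from the queue env vars above.
interface RedisOptions {
    host: string
    port: number
    username?: string
    password?: string
    tls?: { cert?: Buffer; key?: Buffer; ca?: Buffer }
}

function redisOptionsFromEnv(env: Record<string, string | undefined>): RedisOptions {
    // TLS material is supplied base64-encoded and decoded into Buffers.
    const b64 = (v?: string) => (v ? Buffer.from(v, 'base64') : undefined)
    return {
        host: env.REDIS_HOST || 'localhost',
        port: parseInt(env.REDIS_PORT || '6379', 10),
        username: env.REDIS_USERNAME || undefined,
        password: env.REDIS_PASSWORD || undefined,
        tls:
            env.REDIS_TLS === 'true'
                ? { cert: b64(env.REDIS_CERT), key: b64(env.REDIS_KEY), ca: b64(env.REDIS_CA) }
                : undefined
    }
}
```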

View File

@ -26,6 +26,8 @@
"nuke": "rimraf dist node_modules .turbo",
"start:windows": "cd bin && run start",
"start:default": "cd bin && ./run start",
"start-worker:windows": "cd bin && run worker",
"start-worker:default": "cd bin && ./run worker",
"dev": "tsc-watch --noClear -p ./tsconfig.json --onSuccess \"pnpm start\"",
"oclif-dev": "run-script-os",
"oclif-dev:windows": "cd bin && dev start",
@ -55,7 +57,7 @@
"license": "SEE LICENSE IN LICENSE.md",
"dependencies": {
"@aws-sdk/client-secrets-manager": "^3.699.0",
"@oclif/core": "^1.13.10",
"@oclif/core": "4.0.7",
"@opentelemetry/api": "^1.3.0",
"@opentelemetry/auto-instrumentations-node": "^0.52.0",
"@opentelemetry/core": "1.27.0",
@ -74,6 +76,8 @@
"@types/uuid": "^9.0.7",
"async-mutex": "^0.4.0",
"axios": "1.6.2",
"bull-board": "^2.1.3",
"bullmq": "^5.13.2",
"content-disposition": "0.5.4",
"cors": "^2.8.5",
"crypto-js": "^4.1.1",
@ -97,10 +101,10 @@
"pg": "^8.11.1",
"posthog-node": "^3.5.0",
"prom-client": "^15.1.3",
"rate-limit-redis": "^4.2.0",
"reflect-metadata": "^0.1.13",
"s3-streamlogger": "^1.11.0",
"sanitize-html": "^2.11.0",
"socket.io": "^4.6.1",
"sqlite3": "^5.1.6",
"typeorm": "^0.3.6",
"uuid": "^9.0.1",

View File

@ -0,0 +1,45 @@
/**
* This pool is to keep track of abort controllers mapped to chatflowid_chatid
*/
export class AbortControllerPool {
abortControllers: Record<string, AbortController> = {}
/**
* Add to the pool
* @param {string} id
* @param {AbortController} abortController
*/
add(id: string, abortController: AbortController) {
this.abortControllers[id] = abortController
}
/**
* Remove from the pool
* @param {string} id
*/
remove(id: string) {
if (Object.prototype.hasOwnProperty.call(this.abortControllers, id)) {
delete this.abortControllers[id]
}
}
/**
* Get the abort controller
* @param {string} id
*/
get(id: string) {
return this.abortControllers[id]
}
/**
* Abort
* @param {string} id
*/
abort(id: string) {
const abortController = this.abortControllers[id]
if (abortController) {
abortController.abort()
this.remove(id)
}
}
}
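A usage sketch for the pool above: a prediction handler registers an `AbortController` under a `chatflowid_chatid` key, the long-running work observes its signal, and a stop request aborts and evicts it. The pool is re-stated compactly here so the example is self-contained; the id value is illustrative:

```typescript
// Compact restatement of AbortControllerPool for a runnable example.
class AbortControllerPool {
    abortControllers: Record<string, AbortController> = {}
    add(id: string, abortController: AbortController) {
        this.abortControllers[id] = abortController
    }
    remove(id: string) {
        delete this.abortControllers[id]
    }
    abort(id: string) {
        this.abortControllers[id]?.abort()
        this.remove(id)
    }
}

const pool = new AbortControllerPool()
const controller = new AbortController()

// Keyed by chatflowid_chatid, per the comment in the commit.
pool.add('chatflow1_chat1', controller)

// A long-running prediction observes the signal...
controller.signal.addEventListener('abort', () => {
    // ...and stops streaming tokens here.
})

// Stopping the message aborts and evicts the controller.
pool.abort('chatflow1_chat1')
```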

View File

@ -1,19 +1,51 @@
import { IActiveCache } from './Interface'
import { IActiveCache, MODE } from './Interface'
import Redis from 'ioredis'
/**
* This pool is to keep track of in-memory cache used for LLM and Embeddings
*/
export class CachePool {
private redisClient: Redis | null = null
activeLLMCache: IActiveCache = {}
activeEmbeddingCache: IActiveCache = {}
constructor() {
if (process.env.MODE === MODE.QUEUE) {
if (process.env.REDIS_URL) {
this.redisClient = new Redis(process.env.REDIS_URL)
} else {
this.redisClient = new Redis({
host: process.env.REDIS_HOST || 'localhost',
port: parseInt(process.env.REDIS_PORT || '6379'),
username: process.env.REDIS_USERNAME || undefined,
password: process.env.REDIS_PASSWORD || undefined,
tls:
process.env.REDIS_TLS === 'true'
? {
cert: process.env.REDIS_CERT ? Buffer.from(process.env.REDIS_CERT, 'base64') : undefined,
key: process.env.REDIS_KEY ? Buffer.from(process.env.REDIS_KEY, 'base64') : undefined,
ca: process.env.REDIS_CA ? Buffer.from(process.env.REDIS_CA, 'base64') : undefined
}
: undefined
})
}
}
}
/**
* Add to the llm cache pool
* @param {string} chatflowid
* @param {Map<any, any>} value
*/
addLLMCache(chatflowid: string, value: Map<any, any>) {
this.activeLLMCache[chatflowid] = value
async addLLMCache(chatflowid: string, value: Map<any, any>) {
if (process.env.MODE === MODE.QUEUE) {
if (this.redisClient) {
const serializedValue = JSON.stringify(Array.from(value.entries()))
await this.redisClient.set(`llmCache:${chatflowid}`, serializedValue)
}
} else {
this.activeLLMCache[chatflowid] = value
}
}
/**
@ -21,24 +53,60 @@ export class CachePool {
* @param {string} chatflowid
* @param {Map<any, any>} value
*/
addEmbeddingCache(chatflowid: string, value: Map<any, any>) {
this.activeEmbeddingCache[chatflowid] = value
async addEmbeddingCache(chatflowid: string, value: Map<any, any>) {
if (process.env.MODE === MODE.QUEUE) {
if (this.redisClient) {
const serializedValue = JSON.stringify(Array.from(value.entries()))
await this.redisClient.set(`embeddingCache:${chatflowid}`, serializedValue)
}
} else {
this.activeEmbeddingCache[chatflowid] = value
}
}
/**
* Get item from llm cache pool
* @param {string} chatflowid
*/
getLLMCache(chatflowid: string): Map<any, any> | undefined {
return this.activeLLMCache[chatflowid]
async getLLMCache(chatflowid: string): Promise<Map<any, any> | undefined> {
if (process.env.MODE === MODE.QUEUE) {
if (this.redisClient) {
const serializedValue = await this.redisClient.get(`llmCache:${chatflowid}`)
if (serializedValue) {
return new Map(JSON.parse(serializedValue))
}
}
} else {
return this.activeLLMCache[chatflowid]
}
return undefined
}
/**
* Get item from embedding cache pool
* @param {string} chatflowid
*/
getEmbeddingCache(chatflowid: string): Map<any, any> | undefined {
return this.activeEmbeddingCache[chatflowid]
async getEmbeddingCache(chatflowid: string): Promise<Map<any, any> | undefined> {
if (process.env.MODE === MODE.QUEUE) {
if (this.redisClient) {
const serializedValue = await this.redisClient.get(`embeddingCache:${chatflowid}`)
if (serializedValue) {
return new Map(JSON.parse(serializedValue))
}
}
} else {
return this.activeEmbeddingCache[chatflowid]
}
return undefined
}
/**
* Close Redis connection if applicable
*/
async close() {
if (this.redisClient) {
await this.redisClient.quit()
}
}
}
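The Redis path above persists a `Map` by serializing its entries to JSON. The round-trip in isolation, with a plain `Map` standing in for the Redis client (hypothetical stand-in; the commit uses ioredis `set`/`get`):

```typescript
// Map <-> JSON round-trip used by addLLMCache / getLLMCache above.
const fakeRedis = new Map<string, string>()

async function setCache(key: string, value: Map<any, any>): Promise<void> {
    // Maps are not directly JSON-serializable; their entries are.
    fakeRedis.set(key, JSON.stringify(Array.from(value.entries())))
}

async function getCache(key: string): Promise<Map<any, any> | undefined> {
    const serialized = fakeRedis.get(key)
    return serialized ? new Map(JSON.parse(serialized)) : undefined
}
```

One caveat of this design: entries that are not JSON-serializable (functions, class instances) will not survive the round-trip, so only plain data should be cached this way.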

View File

@ -1,59 +0,0 @@
import { ICommonObject } from 'flowise-components'
import { IActiveChatflows, INodeData, IReactFlowNode } from './Interface'
import logger from './utils/logger'
/**
* This pool is to keep track of active chatflow pools
* so we can prevent building langchain flow all over again
*/
export class ChatflowPool {
activeChatflows: IActiveChatflows = {}
/**
* Add to the pool
* @param {string} chatflowid
* @param {INodeData} endingNodeData
* @param {IReactFlowNode[]} startingNodes
* @param {ICommonObject} overrideConfig
*/
add(
chatflowid: string,
endingNodeData: INodeData | undefined,
startingNodes: IReactFlowNode[],
overrideConfig?: ICommonObject,
chatId?: string
) {
this.activeChatflows[chatflowid] = {
startingNodes,
endingNodeData,
inSync: true
}
if (overrideConfig) this.activeChatflows[chatflowid].overrideConfig = overrideConfig
if (chatId) this.activeChatflows[chatflowid].chatId = chatId
logger.info(`[server]: Chatflow ${chatflowid} added into ChatflowPool`)
}
/**
* Update to the pool
* @param {string} chatflowid
* @param {boolean} inSync
*/
updateInSync(chatflowid: string, inSync: boolean) {
if (Object.prototype.hasOwnProperty.call(this.activeChatflows, chatflowid)) {
this.activeChatflows[chatflowid].inSync = inSync
logger.info(`[server]: Chatflow ${chatflowid} updated inSync=${inSync} in ChatflowPool`)
}
}
/**
* Remove from the pool
* @param {string} chatflowid
*/
async remove(chatflowid: string) {
if (Object.prototype.hasOwnProperty.call(this.activeChatflows, chatflowid)) {
delete this.activeChatflows[chatflowid]
logger.info(`[server]: Chatflow ${chatflowid} removed from ChatflowPool`)
}
}
}

View File

@ -1,5 +1,9 @@
import { ICommonObject } from 'flowise-components'
import { DocumentStore } from './database/entities/DocumentStore'
import { DataSource } from 'typeorm'
import { IComponentNodes } from './Interface'
import { Telemetry } from './utils/telemetry'
import { CachePool } from './CachePool'
export enum DocumentStoreStatus {
EMPTY_SYNC = 'EMPTY',
@ -112,6 +116,38 @@ export interface IDocumentStoreWhereUsed {
name: string
}
export interface IUpsertQueueAppServer {
appDataSource: DataSource
componentNodes: IComponentNodes
telemetry: Telemetry
cachePool?: CachePool
}
export interface IExecuteDocStoreUpsert extends IUpsertQueueAppServer {
storeId: string
totalItems: IDocumentStoreUpsertData[]
files: Express.Multer.File[]
isRefreshAPI: boolean
}
export interface IExecutePreviewLoader extends Omit<IUpsertQueueAppServer, 'telemetry'> {
data: IDocumentStoreLoaderForPreview
isPreviewOnly: boolean
telemetry?: Telemetry
}
export interface IExecuteProcessLoader extends IUpsertQueueAppServer {
data: IDocumentStoreLoaderForPreview
docLoaderId: string
isProcessWithoutUpsert: boolean
}
export interface IExecuteVectorStoreInsert extends IUpsertQueueAppServer {
data: ICommonObject
isStrictSave: boolean
isVectorStoreInsert: boolean
}
const getFileName = (fileBase64: string) => {
let fileNames = []
if (fileBase64.startsWith('FILE-STORAGE::')) {

View File

@ -1,4 +1,15 @@
import { IAction, ICommonObject, IFileUpload, INode, INodeData as INodeDataFromComponent, INodeParams } from 'flowise-components'
import {
IAction,
ICommonObject,
IFileUpload,
INode,
INodeData as INodeDataFromComponent,
INodeParams,
IServerSideEventStreamer
} from 'flowise-components'
import { DataSource } from 'typeorm'
import { CachePool } from './CachePool'
import { Telemetry } from './utils/telemetry'
export type MessageType = 'apiMessage' | 'userMessage'
@ -6,6 +17,11 @@ export type ChatflowType = 'CHATFLOW' | 'MULTIAGENT' | 'ASSISTANT'
export type AssistantType = 'CUSTOM' | 'OPENAI' | 'AZURE'
export enum MODE {
QUEUE = 'queue',
MAIN = 'main'
}
export enum ChatType {
INTERNAL = 'INTERNAL',
EXTERNAL = 'EXTERNAL'
@ -28,6 +44,7 @@ export interface IChatFlow {
isPublic?: boolean
apikeyid?: string
analytic?: string
speechToText?: string
chatbotConfig?: string
followUpPrompts?: string
apiConfig?: string
@ -226,6 +243,7 @@ export interface IncomingInput {
leadEmail?: string
history?: IMessage[]
action?: IAction
streaming?: boolean
}
export interface IActiveChatflows {
@ -290,6 +308,34 @@ export interface ICustomTemplate {
usecases?: string
}
export interface IFlowConfig {
chatflowid: string
chatId: string
sessionId: string
chatHistory: IMessage[]
apiMessageId: string
overrideConfig?: ICommonObject
}
export interface IPredictionQueueAppServer {
appDataSource: DataSource
componentNodes: IComponentNodes
sseStreamer: IServerSideEventStreamer
telemetry: Telemetry
cachePool: CachePool
}
export interface IExecuteFlowParams extends IPredictionQueueAppServer {
incomingInput: IncomingInput
chatflow: IChatFlow
chatId: string
baseURL: string
isInternal: boolean
signal?: AbortController
files?: Express.Multer.File[]
isUpsert?: boolean
}
export interface INodeOverrides {
[key: string]: {
label: string

View File

@ -0,0 +1,201 @@
import { Command, Flags } from '@oclif/core'
import path from 'path'
import dotenv from 'dotenv'
import logger from '../utils/logger'
dotenv.config({ path: path.join(__dirname, '..', '..', '.env'), override: true })
enum EXIT_CODE {
SUCCESS = 0,
FAILED = 1
}
export abstract class BaseCommand extends Command {
static flags = {
FLOWISE_USERNAME: Flags.string(),
FLOWISE_PASSWORD: Flags.string(),
FLOWISE_FILE_SIZE_LIMIT: Flags.string(),
PORT: Flags.string(),
CORS_ORIGINS: Flags.string(),
IFRAME_ORIGINS: Flags.string(),
DEBUG: Flags.string(),
BLOB_STORAGE_PATH: Flags.string(),
APIKEY_STORAGE_TYPE: Flags.string(),
APIKEY_PATH: Flags.string(),
LOG_PATH: Flags.string(),
LOG_LEVEL: Flags.string(),
TOOL_FUNCTION_BUILTIN_DEP: Flags.string(),
TOOL_FUNCTION_EXTERNAL_DEP: Flags.string(),
NUMBER_OF_PROXIES: Flags.string(),
DATABASE_TYPE: Flags.string(),
DATABASE_PATH: Flags.string(),
DATABASE_PORT: Flags.string(),
DATABASE_HOST: Flags.string(),
DATABASE_NAME: Flags.string(),
DATABASE_USER: Flags.string(),
DATABASE_PASSWORD: Flags.string(),
DATABASE_SSL: Flags.string(),
DATABASE_SSL_KEY_BASE64: Flags.string(),
LANGCHAIN_TRACING_V2: Flags.string(),
LANGCHAIN_ENDPOINT: Flags.string(),
LANGCHAIN_API_KEY: Flags.string(),
LANGCHAIN_PROJECT: Flags.string(),
DISABLE_FLOWISE_TELEMETRY: Flags.string(),
MODEL_LIST_CONFIG_JSON: Flags.string(),
STORAGE_TYPE: Flags.string(),
S3_STORAGE_BUCKET_NAME: Flags.string(),
S3_STORAGE_ACCESS_KEY_ID: Flags.string(),
S3_STORAGE_SECRET_ACCESS_KEY: Flags.string(),
S3_STORAGE_REGION: Flags.string(),
S3_ENDPOINT_URL: Flags.string(),
S3_FORCE_PATH_STYLE: Flags.string(),
SHOW_COMMUNITY_NODES: Flags.string(),
SECRETKEY_STORAGE_TYPE: Flags.string(),
SECRETKEY_PATH: Flags.string(),
FLOWISE_SECRETKEY_OVERWRITE: Flags.string(),
SECRETKEY_AWS_ACCESS_KEY: Flags.string(),
SECRETKEY_AWS_SECRET_KEY: Flags.string(),
SECRETKEY_AWS_REGION: Flags.string(),
DISABLED_NODES: Flags.string(),
MODE: Flags.string(),
WORKER_CONCURRENCY: Flags.string(),
QUEUE_NAME: Flags.string(),
QUEUE_REDIS_EVENT_STREAM_MAX_LEN: Flags.string(),
REDIS_URL: Flags.string(),
REDIS_HOST: Flags.string(),
REDIS_PORT: Flags.string(),
REDIS_USERNAME: Flags.string(),
REDIS_PASSWORD: Flags.string(),
REDIS_TLS: Flags.string(),
REDIS_CERT: Flags.string(),
REDIS_KEY: Flags.string(),
REDIS_CA: Flags.string()
}
protected async stopProcess() {
// Overridden by child classes
}
protected onTerminate() {
return async () => {
try {
// Force shutdown after a timeout if the app gets stuck removing pools
setTimeout(async () => {
logger.info('Flowise was forced to shut down after 30 secs')
await this.failExit()
}, 30000)
await this.stopProcess()
} catch (error) {
logger.error('There was an error shutting down Flowise...', error)
}
}
}
protected async gracefullyExit() {
process.exit(EXIT_CODE.SUCCESS)
}
protected async failExit() {
process.exit(EXIT_CODE.FAILED)
}
async init(): Promise<void> {
await super.init()
process.on('SIGTERM', this.onTerminate())
process.on('SIGINT', this.onTerminate())
// Prevent throw new Error from crashing the app
// TODO: Get rid of this and send proper error message to ui
process.on('uncaughtException', (err) => {
logger.error('uncaughtException: ', err)
})
process.on('unhandledRejection', (err) => {
logger.error('unhandledRejection: ', err)
})
const { flags } = await this.parse(BaseCommand)
if (flags.PORT) process.env.PORT = flags.PORT
if (flags.CORS_ORIGINS) process.env.CORS_ORIGINS = flags.CORS_ORIGINS
if (flags.IFRAME_ORIGINS) process.env.IFRAME_ORIGINS = flags.IFRAME_ORIGINS
if (flags.DEBUG) process.env.DEBUG = flags.DEBUG
if (flags.NUMBER_OF_PROXIES) process.env.NUMBER_OF_PROXIES = flags.NUMBER_OF_PROXIES
if (flags.SHOW_COMMUNITY_NODES) process.env.SHOW_COMMUNITY_NODES = flags.SHOW_COMMUNITY_NODES
if (flags.DISABLED_NODES) process.env.DISABLED_NODES = flags.DISABLED_NODES
// Authorization
if (flags.FLOWISE_USERNAME) process.env.FLOWISE_USERNAME = flags.FLOWISE_USERNAME
if (flags.FLOWISE_PASSWORD) process.env.FLOWISE_PASSWORD = flags.FLOWISE_PASSWORD
if (flags.APIKEY_STORAGE_TYPE) process.env.APIKEY_STORAGE_TYPE = flags.APIKEY_STORAGE_TYPE
if (flags.APIKEY_PATH) process.env.APIKEY_PATH = flags.APIKEY_PATH
// API Configuration
if (flags.FLOWISE_FILE_SIZE_LIMIT) process.env.FLOWISE_FILE_SIZE_LIMIT = flags.FLOWISE_FILE_SIZE_LIMIT
// Credentials
if (flags.SECRETKEY_STORAGE_TYPE) process.env.SECRETKEY_STORAGE_TYPE = flags.SECRETKEY_STORAGE_TYPE
if (flags.SECRETKEY_PATH) process.env.SECRETKEY_PATH = flags.SECRETKEY_PATH
if (flags.FLOWISE_SECRETKEY_OVERWRITE) process.env.FLOWISE_SECRETKEY_OVERWRITE = flags.FLOWISE_SECRETKEY_OVERWRITE
if (flags.SECRETKEY_AWS_ACCESS_KEY) process.env.SECRETKEY_AWS_ACCESS_KEY = flags.SECRETKEY_AWS_ACCESS_KEY
if (flags.SECRETKEY_AWS_SECRET_KEY) process.env.SECRETKEY_AWS_SECRET_KEY = flags.SECRETKEY_AWS_SECRET_KEY
if (flags.SECRETKEY_AWS_REGION) process.env.SECRETKEY_AWS_REGION = flags.SECRETKEY_AWS_REGION
// Logs
if (flags.LOG_PATH) process.env.LOG_PATH = flags.LOG_PATH
if (flags.LOG_LEVEL) process.env.LOG_LEVEL = flags.LOG_LEVEL
// Tool functions
if (flags.TOOL_FUNCTION_BUILTIN_DEP) process.env.TOOL_FUNCTION_BUILTIN_DEP = flags.TOOL_FUNCTION_BUILTIN_DEP
if (flags.TOOL_FUNCTION_EXTERNAL_DEP) process.env.TOOL_FUNCTION_EXTERNAL_DEP = flags.TOOL_FUNCTION_EXTERNAL_DEP
// Database config
if (flags.DATABASE_TYPE) process.env.DATABASE_TYPE = flags.DATABASE_TYPE
if (flags.DATABASE_PATH) process.env.DATABASE_PATH = flags.DATABASE_PATH
if (flags.DATABASE_PORT) process.env.DATABASE_PORT = flags.DATABASE_PORT
if (flags.DATABASE_HOST) process.env.DATABASE_HOST = flags.DATABASE_HOST
if (flags.DATABASE_NAME) process.env.DATABASE_NAME = flags.DATABASE_NAME
if (flags.DATABASE_USER) process.env.DATABASE_USER = flags.DATABASE_USER
if (flags.DATABASE_PASSWORD) process.env.DATABASE_PASSWORD = flags.DATABASE_PASSWORD
if (flags.DATABASE_SSL) process.env.DATABASE_SSL = flags.DATABASE_SSL
if (flags.DATABASE_SSL_KEY_BASE64) process.env.DATABASE_SSL_KEY_BASE64 = flags.DATABASE_SSL_KEY_BASE64
// Langsmith tracing
if (flags.LANGCHAIN_TRACING_V2) process.env.LANGCHAIN_TRACING_V2 = flags.LANGCHAIN_TRACING_V2
if (flags.LANGCHAIN_ENDPOINT) process.env.LANGCHAIN_ENDPOINT = flags.LANGCHAIN_ENDPOINT
if (flags.LANGCHAIN_API_KEY) process.env.LANGCHAIN_API_KEY = flags.LANGCHAIN_API_KEY
if (flags.LANGCHAIN_PROJECT) process.env.LANGCHAIN_PROJECT = flags.LANGCHAIN_PROJECT
// Telemetry
if (flags.DISABLE_FLOWISE_TELEMETRY) process.env.DISABLE_FLOWISE_TELEMETRY = flags.DISABLE_FLOWISE_TELEMETRY
// Model list config
if (flags.MODEL_LIST_CONFIG_JSON) process.env.MODEL_LIST_CONFIG_JSON = flags.MODEL_LIST_CONFIG_JSON
// Storage
if (flags.STORAGE_TYPE) process.env.STORAGE_TYPE = flags.STORAGE_TYPE
if (flags.BLOB_STORAGE_PATH) process.env.BLOB_STORAGE_PATH = flags.BLOB_STORAGE_PATH
if (flags.S3_STORAGE_BUCKET_NAME) process.env.S3_STORAGE_BUCKET_NAME = flags.S3_STORAGE_BUCKET_NAME
if (flags.S3_STORAGE_ACCESS_KEY_ID) process.env.S3_STORAGE_ACCESS_KEY_ID = flags.S3_STORAGE_ACCESS_KEY_ID
if (flags.S3_STORAGE_SECRET_ACCESS_KEY) process.env.S3_STORAGE_SECRET_ACCESS_KEY = flags.S3_STORAGE_SECRET_ACCESS_KEY
if (flags.S3_STORAGE_REGION) process.env.S3_STORAGE_REGION = flags.S3_STORAGE_REGION
if (flags.S3_ENDPOINT_URL) process.env.S3_ENDPOINT_URL = flags.S3_ENDPOINT_URL
if (flags.S3_FORCE_PATH_STYLE) process.env.S3_FORCE_PATH_STYLE = flags.S3_FORCE_PATH_STYLE
// Queue
if (flags.MODE) process.env.MODE = flags.MODE
if (flags.REDIS_URL) process.env.REDIS_URL = flags.REDIS_URL
if (flags.REDIS_HOST) process.env.REDIS_HOST = flags.REDIS_HOST
if (flags.REDIS_PORT) process.env.REDIS_PORT = flags.REDIS_PORT
if (flags.REDIS_USERNAME) process.env.REDIS_USERNAME = flags.REDIS_USERNAME
if (flags.REDIS_PASSWORD) process.env.REDIS_PASSWORD = flags.REDIS_PASSWORD
if (flags.REDIS_TLS) process.env.REDIS_TLS = flags.REDIS_TLS
if (flags.REDIS_CERT) process.env.REDIS_CERT = flags.REDIS_CERT
if (flags.REDIS_KEY) process.env.REDIS_KEY = flags.REDIS_KEY
if (flags.REDIS_CA) process.env.REDIS_CA = flags.REDIS_CA
if (flags.WORKER_CONCURRENCY) process.env.WORKER_CONCURRENCY = flags.WORKER_CONCURRENCY
if (flags.QUEUE_NAME) process.env.QUEUE_NAME = flags.QUEUE_NAME
if (flags.QUEUE_REDIS_EVENT_STREAM_MAX_LEN) process.env.QUEUE_REDIS_EVENT_STREAM_MAX_LEN = flags.QUEUE_REDIS_EVENT_STREAM_MAX_LEN
}
}
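The long `if (flags.X) process.env.X = flags.X` ladder above gives a CLI flag, when present, precedence over whatever the `.env` file or shell already set. A minimal sketch of that precedence rule (the `flags` object here is hypothetical; in Flowise it comes from oclif's `this.parse()`):

```typescript
// Hypothetical parsed flags; Flowise obtains these from oclif's this.parse()
const flags: Record<string, string | undefined> = { PORT: '4000' }

process.env.PORT = '3000' // value loaded from .env or the shell
if (flags.PORT) process.env.PORT = flags.PORT // CLI flag takes precedence

console.log(process.env.PORT) // '4000'
```

Flags that are absent leave the existing environment untouched, which is why every assignment is guarded by an `if`.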


@@ -1,181 +1,33 @@
import { Command, Flags } from '@oclif/core'
import path from 'path'
import * as Server from '../index'
import * as DataSource from '../DataSource'
import dotenv from 'dotenv'
import logger from '../utils/logger'
import { BaseCommand } from './base'
dotenv.config({ path: path.join(__dirname, '..', '..', '.env'), override: true })
export default class Start extends BaseCommand {
async run(): Promise<void> {
logger.info('Starting Flowise...')
await DataSource.init()
await Server.start()
}
enum EXIT_CODE {
SUCCESS = 0,
FAILED = 1
}
let processExitCode = EXIT_CODE.SUCCESS
export default class Start extends Command {
static args = []
static flags = {
FLOWISE_USERNAME: Flags.string(),
FLOWISE_PASSWORD: Flags.string(),
FLOWISE_FILE_SIZE_LIMIT: Flags.string(),
PORT: Flags.string(),
CORS_ORIGINS: Flags.string(),
IFRAME_ORIGINS: Flags.string(),
DEBUG: Flags.string(),
BLOB_STORAGE_PATH: Flags.string(),
APIKEY_STORAGE_TYPE: Flags.string(),
APIKEY_PATH: Flags.string(),
LOG_PATH: Flags.string(),
LOG_LEVEL: Flags.string(),
TOOL_FUNCTION_BUILTIN_DEP: Flags.string(),
TOOL_FUNCTION_EXTERNAL_DEP: Flags.string(),
NUMBER_OF_PROXIES: Flags.string(),
DISABLE_CHATFLOW_REUSE: Flags.string(),
DATABASE_TYPE: Flags.string(),
DATABASE_PATH: Flags.string(),
DATABASE_PORT: Flags.string(),
DATABASE_HOST: Flags.string(),
DATABASE_NAME: Flags.string(),
DATABASE_USER: Flags.string(),
DATABASE_PASSWORD: Flags.string(),
DATABASE_SSL: Flags.string(),
DATABASE_SSL_KEY_BASE64: Flags.string(),
LANGCHAIN_TRACING_V2: Flags.string(),
LANGCHAIN_ENDPOINT: Flags.string(),
LANGCHAIN_API_KEY: Flags.string(),
LANGCHAIN_PROJECT: Flags.string(),
DISABLE_FLOWISE_TELEMETRY: Flags.string(),
MODEL_LIST_CONFIG_JSON: Flags.string(),
STORAGE_TYPE: Flags.string(),
S3_STORAGE_BUCKET_NAME: Flags.string(),
S3_STORAGE_ACCESS_KEY_ID: Flags.string(),
S3_STORAGE_SECRET_ACCESS_KEY: Flags.string(),
S3_STORAGE_REGION: Flags.string(),
S3_ENDPOINT_URL: Flags.string(),
S3_FORCE_PATH_STYLE: Flags.string(),
SHOW_COMMUNITY_NODES: Flags.string(),
SECRETKEY_STORAGE_TYPE: Flags.string(),
SECRETKEY_PATH: Flags.string(),
FLOWISE_SECRETKEY_OVERWRITE: Flags.string(),
SECRETKEY_AWS_ACCESS_KEY: Flags.string(),
SECRETKEY_AWS_SECRET_KEY: Flags.string(),
SECRETKEY_AWS_REGION: Flags.string(),
DISABLED_NODES: Flags.string()
async catch(error: Error) {
if (error.stack) logger.error(error.stack)
await new Promise((resolve) => {
setTimeout(resolve, 1000)
})
await this.failExit()
}
async stopProcess() {
logger.info('Shutting down Flowise...')
try {
// Shut down the app after timeout if it ever stuck removing pools
setTimeout(() => {
logger.info('Flowise was forced to shut down after 30 secs')
process.exit(processExitCode)
}, 30000)
// Removing pools
logger.info(`Shutting down Flowise...`)
const serverApp = Server.getInstance()
if (serverApp) await serverApp.stopApp()
} catch (error) {
logger.error('There was an error shutting down Flowise...', error)
await this.failExit()
}
process.exit(processExitCode)
}
async run(): Promise<void> {
process.on('SIGTERM', this.stopProcess)
process.on('SIGINT', this.stopProcess)
// Prevent throw new Error from crashing the app
// TODO: Get rid of this and send proper error message to ui
process.on('uncaughtException', (err) => {
logger.error('uncaughtException: ', err)
})
process.on('unhandledRejection', (err) => {
logger.error('unhandledRejection: ', err)
})
const { flags } = await this.parse(Start)
if (flags.PORT) process.env.PORT = flags.PORT
if (flags.CORS_ORIGINS) process.env.CORS_ORIGINS = flags.CORS_ORIGINS
if (flags.IFRAME_ORIGINS) process.env.IFRAME_ORIGINS = flags.IFRAME_ORIGINS
if (flags.DEBUG) process.env.DEBUG = flags.DEBUG
if (flags.NUMBER_OF_PROXIES) process.env.NUMBER_OF_PROXIES = flags.NUMBER_OF_PROXIES
if (flags.DISABLE_CHATFLOW_REUSE) process.env.DISABLE_CHATFLOW_REUSE = flags.DISABLE_CHATFLOW_REUSE
if (flags.SHOW_COMMUNITY_NODES) process.env.SHOW_COMMUNITY_NODES = flags.SHOW_COMMUNITY_NODES
if (flags.DISABLED_NODES) process.env.DISABLED_NODES = flags.DISABLED_NODES
// Authorization
if (flags.FLOWISE_USERNAME) process.env.FLOWISE_USERNAME = flags.FLOWISE_USERNAME
if (flags.FLOWISE_PASSWORD) process.env.FLOWISE_PASSWORD = flags.FLOWISE_PASSWORD
if (flags.APIKEY_STORAGE_TYPE) process.env.APIKEY_STORAGE_TYPE = flags.APIKEY_STORAGE_TYPE
if (flags.APIKEY_PATH) process.env.APIKEY_PATH = flags.APIKEY_PATH
// API Configuration
if (flags.FLOWISE_FILE_SIZE_LIMIT) process.env.FLOWISE_FILE_SIZE_LIMIT = flags.FLOWISE_FILE_SIZE_LIMIT
// Credentials
if (flags.SECRETKEY_STORAGE_TYPE) process.env.SECRETKEY_STORAGE_TYPE = flags.SECRETKEY_STORAGE_TYPE
if (flags.SECRETKEY_PATH) process.env.SECRETKEY_PATH = flags.SECRETKEY_PATH
if (flags.FLOWISE_SECRETKEY_OVERWRITE) process.env.FLOWISE_SECRETKEY_OVERWRITE = flags.FLOWISE_SECRETKEY_OVERWRITE
if (flags.SECRETKEY_AWS_ACCESS_KEY) process.env.SECRETKEY_AWS_ACCESS_KEY = flags.SECRETKEY_AWS_ACCESS_KEY
if (flags.SECRETKEY_AWS_SECRET_KEY) process.env.SECRETKEY_AWS_SECRET_KEY = flags.SECRETKEY_AWS_SECRET_KEY
if (flags.SECRETKEY_AWS_REGION) process.env.SECRETKEY_AWS_REGION = flags.SECRETKEY_AWS_REGION
// Logs
if (flags.LOG_PATH) process.env.LOG_PATH = flags.LOG_PATH
if (flags.LOG_LEVEL) process.env.LOG_LEVEL = flags.LOG_LEVEL
// Tool functions
if (flags.TOOL_FUNCTION_BUILTIN_DEP) process.env.TOOL_FUNCTION_BUILTIN_DEP = flags.TOOL_FUNCTION_BUILTIN_DEP
if (flags.TOOL_FUNCTION_EXTERNAL_DEP) process.env.TOOL_FUNCTION_EXTERNAL_DEP = flags.TOOL_FUNCTION_EXTERNAL_DEP
// Database config
if (flags.DATABASE_TYPE) process.env.DATABASE_TYPE = flags.DATABASE_TYPE
if (flags.DATABASE_PATH) process.env.DATABASE_PATH = flags.DATABASE_PATH
if (flags.DATABASE_PORT) process.env.DATABASE_PORT = flags.DATABASE_PORT
if (flags.DATABASE_HOST) process.env.DATABASE_HOST = flags.DATABASE_HOST
if (flags.DATABASE_NAME) process.env.DATABASE_NAME = flags.DATABASE_NAME
if (flags.DATABASE_USER) process.env.DATABASE_USER = flags.DATABASE_USER
if (flags.DATABASE_PASSWORD) process.env.DATABASE_PASSWORD = flags.DATABASE_PASSWORD
if (flags.DATABASE_SSL) process.env.DATABASE_SSL = flags.DATABASE_SSL
if (flags.DATABASE_SSL_KEY_BASE64) process.env.DATABASE_SSL_KEY_BASE64 = flags.DATABASE_SSL_KEY_BASE64
// Langsmith tracing
if (flags.LANGCHAIN_TRACING_V2) process.env.LANGCHAIN_TRACING_V2 = flags.LANGCHAIN_TRACING_V2
if (flags.LANGCHAIN_ENDPOINT) process.env.LANGCHAIN_ENDPOINT = flags.LANGCHAIN_ENDPOINT
if (flags.LANGCHAIN_API_KEY) process.env.LANGCHAIN_API_KEY = flags.LANGCHAIN_API_KEY
if (flags.LANGCHAIN_PROJECT) process.env.LANGCHAIN_PROJECT = flags.LANGCHAIN_PROJECT
// Telemetry
if (flags.DISABLE_FLOWISE_TELEMETRY) process.env.DISABLE_FLOWISE_TELEMETRY = flags.DISABLE_FLOWISE_TELEMETRY
// Model list config
if (flags.MODEL_LIST_CONFIG_JSON) process.env.MODEL_LIST_CONFIG_JSON = flags.MODEL_LIST_CONFIG_JSON
// Storage
if (flags.STORAGE_TYPE) process.env.STORAGE_TYPE = flags.STORAGE_TYPE
if (flags.BLOB_STORAGE_PATH) process.env.BLOB_STORAGE_PATH = flags.BLOB_STORAGE_PATH
if (flags.S3_STORAGE_BUCKET_NAME) process.env.S3_STORAGE_BUCKET_NAME = flags.S3_STORAGE_BUCKET_NAME
if (flags.S3_STORAGE_ACCESS_KEY_ID) process.env.S3_STORAGE_ACCESS_KEY_ID = flags.S3_STORAGE_ACCESS_KEY_ID
if (flags.S3_STORAGE_SECRET_ACCESS_KEY) process.env.S3_STORAGE_SECRET_ACCESS_KEY = flags.S3_STORAGE_SECRET_ACCESS_KEY
if (flags.S3_STORAGE_REGION) process.env.S3_STORAGE_REGION = flags.S3_STORAGE_REGION
if (flags.S3_ENDPOINT_URL) process.env.S3_ENDPOINT_URL = flags.S3_ENDPOINT_URL
if (flags.S3_FORCE_PATH_STYLE) process.env.S3_FORCE_PATH_STYLE = flags.S3_FORCE_PATH_STYLE
await (async () => {
try {
logger.info('Starting Flowise...')
await DataSource.init()
await Server.start()
} catch (error) {
logger.error('There was an error starting Flowise...', error)
processExitCode = EXIT_CODE.FAILED
// @ts-ignore
process.emit('SIGINT')
}
})()
await this.gracefullyExit()
}
}


@@ -0,0 +1,103 @@
import logger from '../utils/logger'
import { QueueManager } from '../queue/QueueManager'
import { BaseCommand } from './base'
import { getDataSource } from '../DataSource'
import { Telemetry } from '../utils/telemetry'
import { NodesPool } from '../NodesPool'
import { CachePool } from '../CachePool'
import { QueueEvents, QueueEventsListener } from 'bullmq'
import { AbortControllerPool } from '../AbortControllerPool'
interface CustomListener extends QueueEventsListener {
abort: (args: { id: string }, id: string) => void
}
export default class Worker extends BaseCommand {
predictionWorkerId: string
upsertionWorkerId: string
async run(): Promise<void> {
logger.info('Starting Flowise Worker...')
const { appDataSource, telemetry, componentNodes, cachePool, abortControllerPool } = await this.prepareData()
const queueManager = QueueManager.getInstance()
queueManager.setupAllQueues({
componentNodes,
telemetry,
cachePool,
appDataSource,
abortControllerPool
})
/** Prediction */
const predictionQueue = queueManager.getQueue('prediction')
const predictionWorker = predictionQueue.createWorker()
this.predictionWorkerId = predictionWorker.id
logger.info(`Prediction Worker ${this.predictionWorkerId} created`)
const predictionQueueName = predictionQueue.getQueueName()
const queueEvents = new QueueEvents(predictionQueueName, { connection: queueManager.getConnection() })
queueEvents.on<CustomListener>('abort', async ({ id }: { id: string }) => {
abortControllerPool.abort(id)
})
/** Upsertion */
const upsertionQueue = queueManager.getQueue('upsert')
const upsertionWorker = upsertionQueue.createWorker()
this.upsertionWorkerId = upsertionWorker.id
logger.info(`Upsertion Worker ${this.upsertionWorkerId} created`)
// Keep the process running
process.stdin.resume()
}
async prepareData() {
// Init database
const appDataSource = getDataSource()
await appDataSource.initialize()
await appDataSource.runMigrations({ transaction: 'each' })
// Initialize abortcontroller pool
const abortControllerPool = new AbortControllerPool()
// Init telemetry
const telemetry = new Telemetry()
// Initialize nodes pool
const nodesPool = new NodesPool()
await nodesPool.initialize()
// Initialize cache pool
const cachePool = new CachePool()
return { appDataSource, telemetry, componentNodes: nodesPool.componentNodes, cachePool, abortControllerPool }
}
async catch(error: Error) {
if (error.stack) logger.error(error.stack)
await new Promise((resolve) => {
setTimeout(resolve, 1000)
})
await this.failExit()
}
async stopProcess() {
try {
const queueManager = QueueManager.getInstance()
const predictionWorker = queueManager.getQueue('prediction').getWorker()
logger.info(`Shutting down Flowise Prediction Worker ${this.predictionWorkerId}...`)
await predictionWorker.close()
const upsertWorker = queueManager.getQueue('upsert').getWorker()
logger.info(`Shutting down Flowise Upsertion Worker ${this.upsertionWorkerId}...`)
await upsertWorker.close()
} catch (error) {
logger.error('There was an error shutting down Flowise Worker...', error)
await this.failExit()
}
await this.gracefullyExit()
}
}
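In queue mode the main server enqueues jobs while worker processes like the one above consume them, so both sides must point at the same Redis instance and queue. A hedged example of the shared environment (variable names taken from this diff; values are placeholders):

```shell
# Shared by the server and every worker process (placeholder values)
MODE=queue
REDIS_HOST=localhost
REDIS_PORT=6379
QUEUE_NAME=flowise-queue
WORKER_CONCURRENCY=10000
QUEUE_REDIS_EVENT_STREAM_MAX_LEN=10000
```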


@@ -2,7 +2,7 @@ import { NextFunction, Request, Response } from 'express'
import { StatusCodes } from 'http-status-codes'
import apiKeyService from '../../services/apikey'
import { ChatFlow } from '../../database/entities/ChatFlow'
import { updateRateLimiter } from '../../utils/rateLimit'
import { RateLimiterManager } from '../../utils/rateLimit'
import { InternalFlowiseError } from '../../errors/internalFlowiseError'
import { ChatflowType } from '../../Interface'
import chatflowsService from '../../services/chatflows'
@@ -130,7 +130,8 @@ const updateChatflow = async (req: Request, res: Response, next: NextFunction) =
Object.assign(updateChatFlow, body)
updateChatFlow.id = chatflow.id
updateRateLimiter(updateChatFlow)
const rateLimiterManager = RateLimiterManager.getInstance()
await rateLimiterManager.updateRateLimiter(updateChatFlow)
const apiResponse = await chatflowsService.updateChatflow(chatflow, updateChatFlow)
return res.json(apiResponse)


@@ -4,15 +4,8 @@ import documentStoreService from '../../services/documentstore'
import { DocumentStore } from '../../database/entities/DocumentStore'
import { InternalFlowiseError } from '../../errors/internalFlowiseError'
import { DocumentStoreDTO } from '../../Interface'
import { getRateLimiter } from '../../utils/rateLimit'
const getRateLimiterMiddleware = async (req: Request, res: Response, next: NextFunction) => {
try {
return getRateLimiter(req, res, next)
} catch (error) {
next(error)
}
}
import { getRunningExpressApp } from '../../utils/getRunningExpressApp'
import { FLOWISE_COUNTER_STATUS, FLOWISE_METRIC_COUNTERS } from '../../Interface.Metrics'
const createDocumentStore = async (req: Request, res: Response, next: NextFunction) => {
try {
@@ -90,8 +83,14 @@ const getDocumentStoreFileChunks = async (req: Request, res: Response, next: Nex
`Error: documentStoreController.getDocumentStoreFileChunks - fileId not provided!`
)
}
const appDataSource = getRunningExpressApp().AppDataSource
const page = req.params.pageNo ? parseInt(req.params.pageNo) : 1
const apiResponse = await documentStoreService.getDocumentStoreFileChunks(req.params.storeId, req.params.fileId, page)
const apiResponse = await documentStoreService.getDocumentStoreFileChunks(
appDataSource,
req.params.storeId,
req.params.fileId,
page
)
return res.json(apiResponse)
} catch (error) {
next(error)
@@ -171,6 +170,7 @@ const editDocumentStoreFileChunk = async (req: Request, res: Response, next: Nex
const saveProcessingLoader = async (req: Request, res: Response, next: NextFunction) => {
try {
const appServer = getRunningExpressApp()
if (typeof req.body === 'undefined') {
throw new InternalFlowiseError(
StatusCodes.PRECONDITION_FAILED,
@@ -178,7 +178,7 @@ const saveProcessingLoader = async (req: Request, res: Response, next: NextFunct
)
}
const body = req.body
const apiResponse = await documentStoreService.saveProcessingLoader(body)
const apiResponse = await documentStoreService.saveProcessingLoader(appServer.AppDataSource, body)
return res.json(apiResponse)
} catch (error) {
next(error)
@@ -201,7 +201,7 @@ const processLoader = async (req: Request, res: Response, next: NextFunction) =>
}
const docLoaderId = req.params.loaderId
const body = req.body
const apiResponse = await documentStoreService.processLoader(body, docLoaderId)
const apiResponse = await documentStoreService.processLoaderMiddleware(body, docLoaderId)
return res.json(apiResponse)
} catch (error) {
next(error)
@@ -264,7 +264,7 @@ const previewFileChunks = async (req: Request, res: Response, next: NextFunction
}
const body = req.body
body.preview = true
const apiResponse = await documentStoreService.previewChunks(body)
const apiResponse = await documentStoreService.previewChunksMiddleware(body)
return res.json(apiResponse)
} catch (error) {
next(error)
@@ -286,9 +286,15 @@ const insertIntoVectorStore = async (req: Request, res: Response, next: NextFunc
throw new Error('Error: documentStoreController.insertIntoVectorStore - body not provided!')
}
const body = req.body
const apiResponse = await documentStoreService.insertIntoVectorStore(body)
const apiResponse = await documentStoreService.insertIntoVectorStoreMiddleware(body)
getRunningExpressApp().metricsProvider?.incrementCounter(FLOWISE_METRIC_COUNTERS.VECTORSTORE_UPSERT, {
status: FLOWISE_COUNTER_STATUS.SUCCESS
})
return res.json(DocumentStoreDTO.fromEntity(apiResponse))
} catch (error) {
getRunningExpressApp().metricsProvider?.incrementCounter(FLOWISE_METRIC_COUNTERS.VECTORSTORE_UPSERT, {
status: FLOWISE_COUNTER_STATUS.FAILURE
})
next(error)
}
}
@@ -327,7 +333,9 @@ const saveVectorStoreConfig = async (req: Request, res: Response, next: NextFunc
throw new Error('Error: documentStoreController.saveVectorStoreConfig - body not provided!')
}
const body = req.body
const apiResponse = await documentStoreService.saveVectorStoreConfig(body)
const appDataSource = getRunningExpressApp().AppDataSource
const componentNodes = getRunningExpressApp().nodesPool.componentNodes
const apiResponse = await documentStoreService.saveVectorStoreConfig(appDataSource, componentNodes, body)
return res.json(apiResponse)
} catch (error) {
next(error)
@@ -388,8 +396,14 @@ const upsertDocStoreMiddleware = async (req: Request, res: Response, next: NextF
const body = req.body
const files = (req.files as Express.Multer.File[]) || []
const apiResponse = await documentStoreService.upsertDocStoreMiddleware(req.params.id, body, files)
getRunningExpressApp().metricsProvider?.incrementCounter(FLOWISE_METRIC_COUNTERS.VECTORSTORE_UPSERT, {
status: FLOWISE_COUNTER_STATUS.SUCCESS
})
return res.json(apiResponse)
} catch (error) {
getRunningExpressApp().metricsProvider?.incrementCounter(FLOWISE_METRIC_COUNTERS.VECTORSTORE_UPSERT, {
status: FLOWISE_COUNTER_STATUS.FAILURE
})
next(error)
}
}
@@ -404,8 +418,14 @@ const refreshDocStoreMiddleware = async (req: Request, res: Response, next: Next
}
const body = req.body
const apiResponse = await documentStoreService.refreshDocStoreMiddleware(req.params.id, body)
getRunningExpressApp().metricsProvider?.incrementCounter(FLOWISE_METRIC_COUNTERS.VECTORSTORE_UPSERT, {
status: FLOWISE_COUNTER_STATUS.SUCCESS
})
return res.json(apiResponse)
} catch (error) {
getRunningExpressApp().metricsProvider?.incrementCounter(FLOWISE_METRIC_COUNTERS.VECTORSTORE_UPSERT, {
status: FLOWISE_COUNTER_STATUS.FAILURE
})
next(error)
}
}
@@ -470,7 +490,6 @@ export default {
queryVectorStore,
deleteVectorStoreFromStore,
updateVectorStoreConfigOnly,
getRateLimiterMiddleware,
upsertDocStoreMiddleware,
refreshDocStoreMiddleware,
saveProcessingLoader,


@@ -2,6 +2,7 @@ import { Request, Response, NextFunction } from 'express'
import { utilBuildChatflow } from '../../utils/buildChatflow'
import { getRunningExpressApp } from '../../utils/getRunningExpressApp'
import { getErrorMessage } from '../../errors/utils'
import { MODE } from '../../Interface'
// Send input message and get prediction result (Internal)
const createInternalPrediction = async (req: Request, res: Response, next: NextFunction) => {
@@ -11,7 +12,7 @@ const createInternalPrediction = async (req: Request, res: Response, next: NextF
return
} else {
const apiResponse = await utilBuildChatflow(req, true)
return res.json(apiResponse)
if (apiResponse) return res.json(apiResponse)
}
} catch (error) {
next(error)
@@ -22,6 +23,7 @@ const createInternalPrediction = async (req: Request, res: Response, next: NextF
const createAndStreamInternalPrediction = async (req: Request, res: Response, next: NextFunction) => {
const chatId = req.body.chatId
const sseStreamer = getRunningExpressApp().sseStreamer
try {
sseStreamer.addClient(chatId, res)
res.setHeader('Content-Type', 'text/event-stream')
@@ -30,6 +32,10 @@ const createAndStreamInternalPrediction = async (req: Request, res: Response, ne
res.setHeader('X-Accel-Buffering', 'no') //nginx config: https://serverfault.com/a/801629
res.flushHeaders()
if (process.env.MODE === MODE.QUEUE) {
getRunningExpressApp().redisSubscriber.subscribe(chatId)
}
const apiResponse = await utilBuildChatflow(req, true)
sseStreamer.streamMetadataEvent(apiResponse.chatId, apiResponse)
} catch (error) {


@@ -1,5 +1,5 @@
import { Request, Response, NextFunction } from 'express'
import { getRateLimiter } from '../../utils/rateLimit'
import { RateLimiterManager } from '../../utils/rateLimit'
import chatflowsService from '../../services/chatflows'
import logger from '../../utils/logger'
import predictionsServices from '../../services/predictions'
@@ -8,6 +8,7 @@ import { StatusCodes } from 'http-status-codes'
import { getRunningExpressApp } from '../../utils/getRunningExpressApp'
import { v4 as uuidv4 } from 'uuid'
import { getErrorMessage } from '../../errors/utils'
import { MODE } from '../../Interface'
// Send input message and get prediction result (External)
const createPrediction = async (req: Request, res: Response, next: NextFunction) => {
@@ -55,6 +56,7 @@ const createPrediction = async (req: Request, res: Response, next: NextFunction)
const isStreamingRequested = req.body.streaming === 'true' || req.body.streaming === true
if (streamable?.isStreaming && isStreamingRequested) {
const sseStreamer = getRunningExpressApp().sseStreamer
let chatId = req.body.chatId
if (!req.body.chatId) {
chatId = req.body.chatId ?? req.body.overrideConfig?.sessionId ?? uuidv4()
@@ -68,6 +70,10 @@ const createPrediction = async (req: Request, res: Response, next: NextFunction)
res.setHeader('X-Accel-Buffering', 'no') //nginx config: https://serverfault.com/a/801629
res.flushHeaders()
if (process.env.MODE === MODE.QUEUE) {
getRunningExpressApp().redisSubscriber.subscribe(chatId)
}
const apiResponse = await predictionsServices.buildChatflow(req)
sseStreamer.streamMetadataEvent(apiResponse.chatId, apiResponse)
} catch (error) {
@@ -96,7 +102,7 @@ const createPrediction = async (req: Request, res: Response, next: NextFunction)
const getRateLimiterMiddleware = async (req: Request, res: Response, next: NextFunction) => {
try {
return getRateLimiter(req, res, next)
return RateLimiterManager.getInstance().getRateLimiter()(req, res, next)
} catch (error) {
next(error)
}
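The replacement `RateLimiterManager.getInstance().getRateLimiter()(req, res, next)` chains a singleton lookup with a middleware factory: `getRateLimiter()` returns an Express handler, which is then invoked immediately. A minimal sketch of that shape (the class name mirrors the diff, but the body is illustrative, not the real implementation):

```typescript
// Illustrative singleton + middleware-factory shape; not the Flowise implementation
type Handler = (req: any, res: any, next: () => void) => void

class MiniRateLimiterManager {
    private static instance: MiniRateLimiterManager

    // Lazily create and reuse a single shared instance
    static getInstance(): MiniRateLimiterManager {
        if (!this.instance) this.instance = new MiniRateLimiterManager()
        return this.instance
    }

    // Returns a middleware function rather than acting as one itself
    getRateLimiter(): Handler {
        return (_req, _res, next) => next()
    }
}

// Same call shape as the diff: resolve singleton, build middleware, invoke it
MiniRateLimiterManager.getInstance().getRateLimiter()({}, {}, () => console.log('passed'))
```

The extra `()` is easy to misread: the first call builds the handler, the second runs it against the current request.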


@@ -1,10 +1,10 @@
import { Request, Response, NextFunction } from 'express'
import vectorsService from '../../services/vectors'
import { getRateLimiter } from '../../utils/rateLimit'
import { RateLimiterManager } from '../../utils/rateLimit'
const getRateLimiterMiddleware = async (req: Request, res: Response, next: NextFunction) => {
try {
return getRateLimiter(req, res, next)
return RateLimiterManager.getInstance().getRateLimiter()(req, res, next)
} catch (error) {
next(error)
}


@@ -4,17 +4,16 @@ import path from 'path'
import cors from 'cors'
import http from 'http'
import basicAuth from 'express-basic-auth'
import { Server } from 'socket.io'
import { DataSource } from 'typeorm'
import { IChatFlow } from './Interface'
import { MODE } from './Interface'
import { getNodeModulesPackagePath, getEncryptionKey } from './utils'
import logger, { expressRequestLogger } from './utils/logger'
import { getDataSource } from './DataSource'
import { NodesPool } from './NodesPool'
import { ChatFlow } from './database/entities/ChatFlow'
import { ChatflowPool } from './ChatflowPool'
import { CachePool } from './CachePool'
import { initializeRateLimiter } from './utils/rateLimit'
import { AbortControllerPool } from './AbortControllerPool'
import { RateLimiterManager } from './utils/rateLimit'
import { getAPIKeys } from './utils/apiKey'
import { sanitizeMiddleware, getCorsOptions, getAllowedIframeOrigins } from './utils/XSS'
import { Telemetry } from './utils/telemetry'
@@ -25,14 +24,13 @@ import { validateAPIKey } from './utils/validateKey'
import { IMetricsProvider } from './Interface.Metrics'
import { Prometheus } from './metrics/Prometheus'
import { OpenTelemetry } from './metrics/OpenTelemetry'
import { QueueManager } from './queue/QueueManager'
import { RedisEventSubscriber } from './queue/RedisEventSubscriber'
import { WHITELIST_URLS } from './utils/constants'
import 'global-agent/bootstrap'
declare global {
namespace Express {
interface Request {
io?: Server
}
namespace Multer {
interface File {
bucket: string
@@ -53,12 +51,15 @@ declare global {
export class App {
app: express.Application
nodesPool: NodesPool
chatflowPool: ChatflowPool
abortControllerPool: AbortControllerPool
cachePool: CachePool
telemetry: Telemetry
rateLimiterManager: RateLimiterManager
AppDataSource: DataSource = getDataSource()
sseStreamer: SSEStreamer
metricsProvider: IMetricsProvider
queueManager: QueueManager
redisSubscriber: RedisEventSubscriber
constructor() {
this.app = express()
@@ -77,8 +78,8 @@ export class App {
this.nodesPool = new NodesPool()
await this.nodesPool.initialize()
// Initialize chatflow pool
this.chatflowPool = new ChatflowPool()
// Initialize abort controllers pool
this.abortControllerPool = new AbortControllerPool()
// Initialize API keys
await getAPIKeys()
@@ -87,21 +88,39 @@ export class App {
await getEncryptionKey()
// Initialize Rate Limit
const AllChatFlow: IChatFlow[] = await getAllChatFlow()
await initializeRateLimiter(AllChatFlow)
this.rateLimiterManager = RateLimiterManager.getInstance()
await this.rateLimiterManager.initializeRateLimiters(await getDataSource().getRepository(ChatFlow).find())
// Initialize cache pool
this.cachePool = new CachePool()
// Initialize telemetry
this.telemetry = new Telemetry()
// Initialize SSE Streamer
this.sseStreamer = new SSEStreamer()
// Init Queues
if (process.env.MODE === MODE.QUEUE) {
this.queueManager = QueueManager.getInstance()
this.queueManager.setupAllQueues({
componentNodes: this.nodesPool.componentNodes,
telemetry: this.telemetry,
cachePool: this.cachePool,
appDataSource: this.AppDataSource,
abortControllerPool: this.abortControllerPool
})
this.redisSubscriber = new RedisEventSubscriber(this.sseStreamer)
await this.redisSubscriber.connect()
}
logger.info('📦 [server]: Data Source has been initialized!')
} catch (error) {
logger.error('❌ [server]: Error during Data Source initialization:', error)
}
}
async config(socketIO?: Server) {
async config() {
// Limit is needed to allow sending/receiving base64 encoded string
const flowise_file_size_limit = process.env.FLOWISE_FILE_SIZE_LIMIT || '50mb'
this.app.use(express.json({ limit: flowise_file_size_limit }))
@@ -133,12 +152,6 @@ export class App {
// Add the sanitizeMiddleware to guard against XSS
this.app.use(sanitizeMiddleware)
// Make io accessible to our router on req.io
this.app.use((req, res, next) => {
req.io = socketIO
next()
})
const whitelistURLs = WHITELIST_URLS
const URL_CASE_INSENSITIVE_REGEX: RegExp = /\/api\/v1\//i
const URL_CASE_SENSITIVE_REGEX: RegExp = /\/api\/v1\//
@@ -227,7 +240,6 @@ }
}
this.app.use('/api/v1', flowiseApiV1Router)
this.sseStreamer = new SSEStreamer(this.app)
// ----------------------------------------
// Configure number of proxies in Host Environment
@@ -239,6 +251,10 @@
})
})
if (process.env.MODE === MODE.QUEUE) {
this.app.use('/admin/queues', this.queueManager.getBullBoardRouter())
}
// ----------------------------------------
// Serve UI static
// ----------------------------------------
@@ -262,6 +278,9 @@
try {
const removePromises: any[] = []
removePromises.push(this.telemetry.flush())
if (this.queueManager) {
removePromises.push(this.redisSubscriber.disconnect())
}
await Promise.all(removePromises)
} catch (e) {
logger.error(`❌[server]: Flowise Server shut down error: ${e}`)
@@ -271,10 +290,6 @@
let serverApp: App | undefined
export async function getAllChatFlow(): Promise<IChatFlow[]> {
return await getDataSource().getRepository(ChatFlow).find()
}
export async function start(): Promise<void> {
serverApp = new App()
@@ -282,12 +297,8 @@
const port = parseInt(process.env.PORT || '', 10) || 3000
const server = http.createServer(serverApp.app)
const io = new Server(server, {
cors: getCorsOptions()
})
await serverApp.initDatabase()
await serverApp.config(io)
await serverApp.config()
server.listen(port, host, () => {
logger.info(`⚡️ [server]: Flowise Server is listening at ${host ? 'http://' + host : ''}:${port}`)


@@ -0,0 +1,81 @@
import { Queue, Worker, Job, QueueEvents, RedisOptions } from 'bullmq'
import { v4 as uuidv4 } from 'uuid'
import logger from '../utils/logger'
const QUEUE_REDIS_EVENT_STREAM_MAX_LEN = process.env.QUEUE_REDIS_EVENT_STREAM_MAX_LEN
? parseInt(process.env.QUEUE_REDIS_EVENT_STREAM_MAX_LEN)
: 10000
const WORKER_CONCURRENCY = process.env.WORKER_CONCURRENCY ? parseInt(process.env.WORKER_CONCURRENCY) : 100000
export abstract class BaseQueue {
protected queue: Queue
protected queueEvents: QueueEvents
protected connection: RedisOptions
private worker: Worker
constructor(queueName: string, connection: RedisOptions) {
this.connection = connection
this.queue = new Queue(queueName, {
connection: this.connection,
streams: { events: { maxLen: QUEUE_REDIS_EVENT_STREAM_MAX_LEN } }
})
this.queueEvents = new QueueEvents(queueName, { connection: this.connection })
}
abstract processJob(data: any): Promise<any>
abstract getQueueName(): string
abstract getQueue(): Queue
public getWorker(): Worker {
return this.worker
}
public async addJob(jobData: any): Promise<Job> {
const jobId = jobData.id || uuidv4()
return await this.queue.add(jobId, jobData, { removeOnFail: true })
}
public createWorker(concurrency: number = WORKER_CONCURRENCY): Worker {
this.worker = new Worker(
this.queue.name,
async (job: Job) => {
const start = new Date().getTime()
logger.info(`Processing job ${job.id} in ${this.queue.name} at ${new Date().toISOString()}`)
const result = await this.processJob(job.data)
const end = new Date().getTime()
logger.info(`Completed job ${job.id} in ${this.queue.name} at ${new Date().toISOString()} (${end - start}ms)`)
return result
},
{
connection: this.connection,
concurrency
}
)
return this.worker
}
public async getJobs(): Promise<Job[]> {
return await this.queue.getJobs()
}
public async getJobCounts(): Promise<{ [index: string]: number }> {
return await this.queue.getJobCounts()
}
public async getJobByName(jobName: string): Promise<Job> {
const jobs = await this.queue.getJobs()
const job = jobs.find((job) => job.name === jobName)
if (!job) throw new Error(`Job name ${jobName} not found`)
return job
}
public getQueueEvents(): QueueEvents {
return this.queueEvents
}
public async clearQueue(): Promise<void> {
await this.queue.obliterate({ force: true })
}
}
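The two env-derived defaults at the top of this file (event-stream max length 10000, worker concurrency 100000) follow the same parse-or-fallback pattern. A minimal sketch of that pattern as a hypothetical helper (`envInt` is not part of the commit); unlike the inline `parseInt` calls above, it also falls back when the variable is set but not numeric:

```typescript
// Hypothetical helper mirroring the env parsing above; `envInt` is an
// illustration only. It falls back when the value is unset *or* not a
// number, whereas the inline parseInt calls would yield NaN for a
// non-numeric value.
function envInt(value: string | undefined, fallback: number): number {
    const parsed = value !== undefined ? parseInt(value, 10) : NaN
    return Number.isNaN(parsed) ? fallback : parsed
}
```

In the file above this would back both `QUEUE_REDIS_EVENT_STREAM_MAX_LEN` (default 10000) and `WORKER_CONCURRENCY` (default 100000).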


@ -0,0 +1,64 @@
import { DataSource } from 'typeorm'
import { executeFlow } from '../utils/buildChatflow'
import { IComponentNodes, IExecuteFlowParams } from '../Interface'
import { Telemetry } from '../utils/telemetry'
import { CachePool } from '../CachePool'
import { RedisEventPublisher } from './RedisEventPublisher'
import { AbortControllerPool } from '../AbortControllerPool'
import { BaseQueue } from './BaseQueue'
import { RedisOptions } from 'bullmq'
interface PredictionQueueOptions {
appDataSource: DataSource
telemetry: Telemetry
cachePool: CachePool
componentNodes: IComponentNodes
abortControllerPool: AbortControllerPool
}
export class PredictionQueue extends BaseQueue {
private componentNodes: IComponentNodes
private telemetry: Telemetry
private cachePool: CachePool
private appDataSource: DataSource
private abortControllerPool: AbortControllerPool
private redisPublisher: RedisEventPublisher
private queueName: string
constructor(name: string, connection: RedisOptions, options: PredictionQueueOptions) {
super(name, connection)
this.queueName = name
this.componentNodes = options.componentNodes || {}
this.telemetry = options.telemetry
this.cachePool = options.cachePool
this.appDataSource = options.appDataSource
this.abortControllerPool = options.abortControllerPool
this.redisPublisher = new RedisEventPublisher()
this.redisPublisher.connect()
}
public getQueueName() {
return this.queueName
}
public getQueue() {
return this.queue
}
async processJob(data: IExecuteFlowParams) {
if (this.appDataSource) data.appDataSource = this.appDataSource
if (this.telemetry) data.telemetry = this.telemetry
if (this.cachePool) data.cachePool = this.cachePool
if (this.componentNodes) data.componentNodes = this.componentNodes
if (this.redisPublisher) data.sseStreamer = this.redisPublisher
if (this.abortControllerPool) {
const abortControllerId = `${data.chatflow.id}_${data.chatId}`
const signal = new AbortController()
this.abortControllerPool.add(abortControllerId, signal)
data.signal = signal
}
return await executeFlow(data)
}
}
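The abort-key scheme in `processJob` above keys one `AbortController` per chatflow/chat pair. A sketch under stated assumptions: the `Map` stands in for the `AbortControllerPool`, whose real API is not shown in this diff, and the function names are illustrative only.

```typescript
// Hypothetical pool sketch: one AbortController per chatflow/chat pair,
// keyed by `${chatflowId}_${chatId}` as in processJob above.
const controllers = new Map<string, AbortController>()

function registerAbortController(chatflowId: string, chatId: string): AbortController {
    const controller = new AbortController()
    controllers.set(`${chatflowId}_${chatId}`, controller)
    return controller
}

function abort(chatflowId: string, chatId: string): void {
    controllers.get(`${chatflowId}_${chatId}`)?.abort()
}
```

Note that the diff stores the controller itself in `data.signal`, despite the variable name `signal`.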


@ -0,0 +1,127 @@
import { BaseQueue } from './BaseQueue'
import { PredictionQueue } from './PredictionQueue'
import { UpsertQueue } from './UpsertQueue'
import { IComponentNodes } from '../Interface'
import { Telemetry } from '../utils/telemetry'
import { CachePool } from '../CachePool'
import { DataSource } from 'typeorm'
import { AbortControllerPool } from '../AbortControllerPool'
import { QueueEventsProducer, RedisOptions } from 'bullmq'
import { createBullBoard } from 'bull-board'
import { BullMQAdapter } from 'bull-board/bullMQAdapter'
import { Express } from 'express'
const QUEUE_NAME = process.env.QUEUE_NAME || 'flowise-queue'
type QUEUE_TYPE = 'prediction' | 'upsert'
export class QueueManager {
private static instance: QueueManager
private queues: Map<string, BaseQueue> = new Map()
private connection: RedisOptions
private bullBoardRouter?: Express
private predictionQueueEventsProducer?: QueueEventsProducer
private constructor() {
let tlsOpts = undefined
if (process.env.REDIS_URL && process.env.REDIS_URL.startsWith('rediss://')) {
tlsOpts = {
rejectUnauthorized: false
}
} else if (process.env.REDIS_TLS === 'true') {
tlsOpts = {
cert: process.env.REDIS_CERT ? Buffer.from(process.env.REDIS_CERT, 'base64') : undefined,
key: process.env.REDIS_KEY ? Buffer.from(process.env.REDIS_KEY, 'base64') : undefined,
ca: process.env.REDIS_CA ? Buffer.from(process.env.REDIS_CA, 'base64') : undefined
}
}
this.connection = {
url: process.env.REDIS_URL || undefined,
host: process.env.REDIS_HOST || 'localhost',
port: parseInt(process.env.REDIS_PORT || '6379'),
username: process.env.REDIS_USERNAME || undefined,
password: process.env.REDIS_PASSWORD || undefined,
tls: tlsOpts
}
}
public static getInstance(): QueueManager {
if (!QueueManager.instance) {
QueueManager.instance = new QueueManager()
}
return QueueManager.instance
}
public registerQueue(name: string, queue: BaseQueue) {
this.queues.set(name, queue)
}
public getConnection() {
return this.connection
}
public getQueue(name: QUEUE_TYPE): BaseQueue {
const queue = this.queues.get(name)
if (!queue) throw new Error(`Queue ${name} not found`)
return queue
}
public getPredictionQueueEventsProducer(): QueueEventsProducer {
if (!this.predictionQueueEventsProducer) throw new Error('Prediction queue events producer not found')
return this.predictionQueueEventsProducer
}
public getBullBoardRouter(): Express {
if (!this.bullBoardRouter) throw new Error('BullBoard router not found')
return this.bullBoardRouter
}
public async getAllJobCounts(): Promise<{ [queueName: string]: { [status: string]: number } }> {
const counts: { [queueName: string]: { [status: string]: number } } = {}
for (const [name, queue] of this.queues) {
counts[name] = await queue.getJobCounts()
}
return counts
}
public setupAllQueues({
componentNodes,
telemetry,
cachePool,
appDataSource,
abortControllerPool
}: {
componentNodes: IComponentNodes
telemetry: Telemetry
cachePool: CachePool
appDataSource: DataSource
abortControllerPool: AbortControllerPool
}) {
const predictionQueueName = `${QUEUE_NAME}-prediction`
const predictionQueue = new PredictionQueue(predictionQueueName, this.connection, {
componentNodes,
telemetry,
cachePool,
appDataSource,
abortControllerPool
})
this.registerQueue('prediction', predictionQueue)
this.predictionQueueEventsProducer = new QueueEventsProducer(predictionQueue.getQueueName(), {
connection: this.connection
})
const upsertionQueueName = `${QUEUE_NAME}-upsertion`
const upsertionQueue = new UpsertQueue(upsertionQueueName, this.connection, {
componentNodes,
telemetry,
cachePool,
appDataSource
})
this.registerQueue('upsert', upsertionQueue)
const bullboard = createBullBoard([new BullMQAdapter(predictionQueue.getQueue()), new BullMQAdapter(upsertionQueue.getQueue())])
this.bullBoardRouter = bullboard.router
}
}
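The TLS branch in the constructor above can be read as a small pure function: a `rediss://` URL enables TLS with verification relaxed, otherwise `REDIS_TLS=true` enables TLS with optional base64-encoded certificate material. A hypothetical extraction (`buildTlsOptions` is not part of the commit):

```typescript
type TlsOptions = { rejectUnauthorized?: boolean; cert?: Buffer; key?: Buffer; ca?: Buffer }

// Hypothetical extraction of the TLS derivation in the QueueManager
// constructor above; behavior mirrors the diff, names are illustrative.
function buildTlsOptions(env: Record<string, string | undefined>): TlsOptions | undefined {
    if (env.REDIS_URL && env.REDIS_URL.startsWith('rediss://')) {
        return { rejectUnauthorized: false }
    }
    if (env.REDIS_TLS === 'true') {
        return {
            cert: env.REDIS_CERT ? Buffer.from(env.REDIS_CERT, 'base64') : undefined,
            key: env.REDIS_KEY ? Buffer.from(env.REDIS_KEY, 'base64') : undefined,
            ca: env.REDIS_CA ? Buffer.from(env.REDIS_CA, 'base64') : undefined
        }
    }
    return undefined
}
```

Plain `redis://` connections with none of these variables set get no `tls` option at all.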


@ -0,0 +1,262 @@
import { IServerSideEventStreamer } from 'flowise-components'
import { createClient } from 'redis'
export class RedisEventPublisher implements IServerSideEventStreamer {
private redisPublisher: ReturnType<typeof createClient>
constructor() {
if (process.env.REDIS_URL) {
this.redisPublisher = createClient({
url: process.env.REDIS_URL
})
} else {
this.redisPublisher = createClient({
username: process.env.REDIS_USERNAME || undefined,
password: process.env.REDIS_PASSWORD || undefined,
socket: {
host: process.env.REDIS_HOST || 'localhost',
port: parseInt(process.env.REDIS_PORT || '6379'),
tls: process.env.REDIS_TLS === 'true',
cert: process.env.REDIS_CERT ? Buffer.from(process.env.REDIS_CERT, 'base64') : undefined,
key: process.env.REDIS_KEY ? Buffer.from(process.env.REDIS_KEY, 'base64') : undefined,
ca: process.env.REDIS_CA ? Buffer.from(process.env.REDIS_CA, 'base64') : undefined
}
})
}
}
async connect() {
await this.redisPublisher.connect()
}
streamCustomEvent(chatId: string, eventType: string, data: any) {
try {
this.redisPublisher.publish(
chatId,
JSON.stringify({
chatId,
eventType,
data
})
)
} catch (error) {
console.error('Error streaming custom event:', error)
}
}
streamStartEvent(chatId: string, data: string) {
try {
this.redisPublisher.publish(
chatId,
JSON.stringify({
chatId,
eventType: 'start',
data
})
)
} catch (error) {
console.error('Error streaming start event:', error)
}
}
streamTokenEvent(chatId: string, data: string) {
try {
this.redisPublisher.publish(
chatId,
JSON.stringify({
chatId,
eventType: 'token',
data
})
)
} catch (error) {
console.error('Error streaming token event:', error)
}
}
streamSourceDocumentsEvent(chatId: string, data: any) {
try {
this.redisPublisher.publish(
chatId,
JSON.stringify({
chatId,
eventType: 'sourceDocuments',
data
})
)
} catch (error) {
console.error('Error streaming sourceDocuments event:', error)
}
}
streamArtifactsEvent(chatId: string, data: any) {
try {
this.redisPublisher.publish(
chatId,
JSON.stringify({
chatId,
eventType: 'artifacts',
data
})
)
} catch (error) {
console.error('Error streaming artifacts event:', error)
}
}
streamUsedToolsEvent(chatId: string, data: any) {
try {
this.redisPublisher.publish(
chatId,
JSON.stringify({
chatId,
eventType: 'usedTools',
data
})
)
} catch (error) {
console.error('Error streaming usedTools event:', error)
}
}
streamFileAnnotationsEvent(chatId: string, data: any) {
try {
this.redisPublisher.publish(
chatId,
JSON.stringify({
chatId,
eventType: 'fileAnnotations',
data
})
)
} catch (error) {
console.error('Error streaming fileAnnotations event:', error)
}
}
streamToolEvent(chatId: string, data: any): void {
try {
this.redisPublisher.publish(
chatId,
JSON.stringify({
chatId,
eventType: 'tool',
data
})
)
} catch (error) {
console.error('Error streaming tool event:', error)
}
}
streamAgentReasoningEvent(chatId: string, data: any): void {
try {
this.redisPublisher.publish(
chatId,
JSON.stringify({
chatId,
eventType: 'agentReasoning',
data
})
)
} catch (error) {
console.error('Error streaming agentReasoning event:', error)
}
}
streamNextAgentEvent(chatId: string, data: any): void {
try {
this.redisPublisher.publish(
chatId,
JSON.stringify({
chatId,
eventType: 'nextAgent',
data
})
)
} catch (error) {
console.error('Error streaming nextAgent event:', error)
}
}
streamActionEvent(chatId: string, data: any): void {
try {
this.redisPublisher.publish(
chatId,
JSON.stringify({
chatId,
eventType: 'action',
data
})
)
} catch (error) {
console.error('Error streaming action event:', error)
}
}
streamAbortEvent(chatId: string): void {
try {
this.redisPublisher.publish(
chatId,
JSON.stringify({
chatId,
eventType: 'abort',
data: '[DONE]'
})
)
} catch (error) {
console.error('Error streaming abort event:', error)
}
}
streamEndEvent(_: string) {
// placeholder for future use
}
streamErrorEvent(chatId: string, msg: string) {
try {
this.redisPublisher.publish(
chatId,
JSON.stringify({
chatId,
eventType: 'error',
data: msg
})
)
} catch (error) {
console.error('Error streaming error event:', error)
}
}
streamMetadataEvent(chatId: string, apiResponse: any) {
try {
const metadataJson: any = {}
if (apiResponse.chatId) {
metadataJson['chatId'] = apiResponse.chatId
}
if (apiResponse.chatMessageId) {
metadataJson['chatMessageId'] = apiResponse.chatMessageId
}
if (apiResponse.question) {
metadataJson['question'] = apiResponse.question
}
if (apiResponse.sessionId) {
metadataJson['sessionId'] = apiResponse.sessionId
}
if (apiResponse.memoryType) {
metadataJson['memoryType'] = apiResponse.memoryType
}
if (Object.keys(metadataJson).length > 0) {
this.streamCustomEvent(chatId, 'metadata', metadataJson)
}
} catch (error) {
console.error('Error streaming metadata event:', error)
}
}
async disconnect() {
if (this.redisPublisher) {
await this.redisPublisher.quit()
}
}
}
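Every `stream*Event` method above publishes the same envelope: the Redis channel is the `chatId` and the payload is one JSON string carrying the event type and data. A minimal sketch of that envelope (the helper name is illustrative, not from the commit):

```typescript
// The message shape published by every stream*Event method above; the
// subscriber on the other side JSON.parses this exact envelope.
function buildEventPayload(chatId: string, eventType: string, data: unknown): string {
    return JSON.stringify({ chatId, eventType, data })
}
```

For example, `streamAbortEvent` publishes this envelope with `eventType: 'abort'` and `data: '[DONE]'`.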


@ -0,0 +1,108 @@
import { createClient } from 'redis'
import { SSEStreamer } from '../utils/SSEStreamer'
export class RedisEventSubscriber {
private redisSubscriber: ReturnType<typeof createClient>
private sseStreamer: SSEStreamer
private subscribedChannels: Set<string> = new Set()
constructor(sseStreamer: SSEStreamer) {
if (process.env.REDIS_URL) {
this.redisSubscriber = createClient({
url: process.env.REDIS_URL
})
} else {
this.redisSubscriber = createClient({
username: process.env.REDIS_USERNAME || undefined,
password: process.env.REDIS_PASSWORD || undefined,
socket: {
host: process.env.REDIS_HOST || 'localhost',
port: parseInt(process.env.REDIS_PORT || '6379'),
tls: process.env.REDIS_TLS === 'true',
cert: process.env.REDIS_CERT ? Buffer.from(process.env.REDIS_CERT, 'base64') : undefined,
key: process.env.REDIS_KEY ? Buffer.from(process.env.REDIS_KEY, 'base64') : undefined,
ca: process.env.REDIS_CA ? Buffer.from(process.env.REDIS_CA, 'base64') : undefined
}
})
}
this.sseStreamer = sseStreamer
}
async connect() {
await this.redisSubscriber.connect()
}
subscribe(channel: string) {
// Subscribe to the Redis channel for job events
if (!this.redisSubscriber) {
throw new Error('Redis subscriber not connected.')
}
// Check if already subscribed
if (this.subscribedChannels.has(channel)) {
return // Prevent duplicate subscription
}
this.redisSubscriber.subscribe(channel, (message) => {
this.handleEvent(message)
})
// Mark the channel as subscribed
this.subscribedChannels.add(channel)
}
private handleEvent(message: string) {
// Parse the message from Redis
const event = JSON.parse(message)
const { eventType, chatId, data } = event
// Stream the event to the client
switch (eventType) {
case 'start':
this.sseStreamer.streamStartEvent(chatId, data)
break
case 'token':
this.sseStreamer.streamTokenEvent(chatId, data)
break
case 'sourceDocuments':
this.sseStreamer.streamSourceDocumentsEvent(chatId, data)
break
case 'artifacts':
this.sseStreamer.streamArtifactsEvent(chatId, data)
break
case 'usedTools':
this.sseStreamer.streamUsedToolsEvent(chatId, data)
break
case 'fileAnnotations':
this.sseStreamer.streamFileAnnotationsEvent(chatId, data)
break
case 'tool':
this.sseStreamer.streamToolEvent(chatId, data)
break
case 'agentReasoning':
this.sseStreamer.streamAgentReasoningEvent(chatId, data)
break
case 'nextAgent':
this.sseStreamer.streamNextAgentEvent(chatId, data)
break
case 'action':
this.sseStreamer.streamActionEvent(chatId, data)
break
case 'abort':
this.sseStreamer.streamAbortEvent(chatId)
break
case 'error':
this.sseStreamer.streamErrorEvent(chatId, data)
break
case 'metadata':
this.sseStreamer.streamMetadataEvent(chatId, data)
break
}
}
async disconnect() {
if (this.redisSubscriber) {
await this.redisSubscriber.quit()
}
}
}
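The `switch` in `handleEvent` above maps each `eventType` to one `SSEStreamer` call and silently ignores anything else. A hypothetical table-driven variant of the same dispatch (a sketch, not the commit's code):

```typescript
// Hypothetical table-driven variant of handleEvent above: each eventType
// maps to one handler; unknown event types fall through, as in the switch.
type EventHandler = (chatId: string, data: any) => void

function makeDispatcher(handlers: Record<string, EventHandler>): (message: string) => void {
    return (message) => {
        const { eventType, chatId, data } = JSON.parse(message)
        handlers[eventType]?.(chatId, data)
    }
}
```

Passing the thirteen handlers of the switch as the record reproduces the subscriber's behavior one channel message at a time.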


@ -0,0 +1,85 @@
import { DataSource } from 'typeorm'
import {
IComponentNodes,
IExecuteDocStoreUpsert,
IExecuteFlowParams,
IExecutePreviewLoader,
IExecuteProcessLoader,
IExecuteVectorStoreInsert
} from '../Interface'
import { Telemetry } from '../utils/telemetry'
import { CachePool } from '../CachePool'
import { BaseQueue } from './BaseQueue'
import { executeUpsert } from '../utils/upsertVector'
import { executeDocStoreUpsert, insertIntoVectorStore, previewChunks, processLoader } from '../services/documentstore'
import { RedisOptions } from 'bullmq'
import logger from '../utils/logger'
interface UpsertQueueOptions {
appDataSource: DataSource
telemetry: Telemetry
cachePool: CachePool
componentNodes: IComponentNodes
}
export class UpsertQueue extends BaseQueue {
private componentNodes: IComponentNodes
private telemetry: Telemetry
private cachePool: CachePool
private appDataSource: DataSource
private queueName: string
constructor(name: string, connection: RedisOptions, options: UpsertQueueOptions) {
super(name, connection)
this.queueName = name
this.componentNodes = options.componentNodes || {}
this.telemetry = options.telemetry
this.cachePool = options.cachePool
this.appDataSource = options.appDataSource
}
public getQueueName() {
return this.queueName
}
public getQueue() {
return this.queue
}
async processJob(
data: IExecuteFlowParams | IExecuteDocStoreUpsert | IExecuteProcessLoader | IExecuteVectorStoreInsert | IExecutePreviewLoader
) {
if (this.appDataSource) data.appDataSource = this.appDataSource
if (this.telemetry) data.telemetry = this.telemetry
if (this.cachePool) data.cachePool = this.cachePool
if (this.componentNodes) data.componentNodes = this.componentNodes
// document-store/loader/preview
if (Object.prototype.hasOwnProperty.call(data, 'isPreviewOnly')) {
logger.info('Previewing loader...')
return await previewChunks(data as IExecutePreviewLoader)
}
// document-store/loader/process/:loaderId
if (Object.prototype.hasOwnProperty.call(data, 'isProcessWithoutUpsert')) {
logger.info('Processing loader...')
return await processLoader(data as IExecuteProcessLoader)
}
// document-store/vectorstore/insert/:loaderId
if (Object.prototype.hasOwnProperty.call(data, 'isVectorStoreInsert')) {
logger.info('Inserting vector store...')
return await insertIntoVectorStore(data as IExecuteVectorStoreInsert)
}
// document-store/upsert/:storeId
if (Object.prototype.hasOwnProperty.call(data, 'storeId')) {
logger.info('Upserting to vector store via document loader...')
return await executeDocStoreUpsert(data as IExecuteDocStoreUpsert)
}
// upsert-vector/:chatflowid
logger.info('Upserting to vector store via chatflow...')
return await executeUpsert(data as IExecuteFlowParams)
}
}
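`processJob` above routes each upsert job by the first marker property it finds. A sketch of that routing as a pure function, returning the name of the operation the real code delegates to:

```typescript
// Sketch of the marker-property routing in UpsertQueue.processJob above;
// the returned strings name the document-store operations it delegates to.
function routeUpsertJob(data: Record<string, unknown>): string {
    if (Object.prototype.hasOwnProperty.call(data, 'isPreviewOnly')) return 'previewChunks'
    if (Object.prototype.hasOwnProperty.call(data, 'isProcessWithoutUpsert')) return 'processLoader'
    if (Object.prototype.hasOwnProperty.call(data, 'isVectorStoreInsert')) return 'insertIntoVectorStore'
    if (Object.prototype.hasOwnProperty.call(data, 'storeId')) return 'executeDocStoreUpsert'
    return 'executeUpsert'
}
```

Order matters here: a job carrying both `isPreviewOnly` and `storeId` is treated as a preview, and a plain chatflow upsert is the fallback.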


@ -1,6 +1,6 @@
import { DeleteResult, FindOptionsWhere } from 'typeorm'
import { StatusCodes } from 'http-status-codes'
import { ChatMessageRatingType, ChatType, IChatMessage } from '../../Interface'
import { ChatMessageRatingType, ChatType, IChatMessage, MODE } from '../../Interface'
import { utilGetChatMessage } from '../../utils/getChatMessage'
import { utilAddChatMessage } from '../../utils/addChatMesage'
import { getRunningExpressApp } from '../../utils/getRunningExpressApp'
@ -160,16 +160,15 @@ const removeChatMessagesByMessageIds = async (
const abortChatMessage = async (chatId: string, chatflowid: string) => {
try {
const appServer = getRunningExpressApp()
const id = `${chatflowid}_${chatId}`
const endingNodeData = appServer.chatflowPool.activeChatflows[`${chatflowid}_${chatId}`]?.endingNodeData as any
if (endingNodeData && endingNodeData.signal) {
try {
endingNodeData.signal.abort()
await appServer.chatflowPool.remove(`${chatflowid}_${chatId}`)
} catch (e) {
logger.error(`[server]: Error aborting chat message for ${chatflowid}, chatId ${chatId}: ${e}`)
}
if (process.env.MODE === MODE.QUEUE) {
await appServer.queueManager.getPredictionQueueEventsProducer().publishEvent({
eventName: 'abort',
id
})
} else {
appServer.abortControllerPool.abort(id)
}
} catch (error) {
throw new InternalFlowiseError(


@ -267,12 +267,6 @@ const updateChatflow = async (chatflow: ChatFlow, updateChatFlow: ChatFlow): Pro
await _checkAndUpdateDocumentStoreUsage(newDbChatflow)
const dbResponse = await appServer.AppDataSource.getRepository(ChatFlow).save(newDbChatflow)
// chatFlowPool is initialized only when a flow is opened
// if the user attempts to rename/update category without opening any flow, chatFlowPool will be undefined
if (appServer.chatflowPool) {
// Update chatflowpool inSync to false, to build flow from scratch again because data has been changed
appServer.chatflowPool.updateInSync(chatflow.id, false)
}
return dbResponse
} catch (error) {
throw new InternalFlowiseError(


@ -18,6 +18,7 @@ import {
addLoaderSource,
ChatType,
DocumentStoreStatus,
IComponentNodes,
IDocumentStoreFileChunkPagedResponse,
IDocumentStoreLoader,
IDocumentStoreLoaderFile,
@ -25,8 +26,13 @@ import {
IDocumentStoreRefreshData,
IDocumentStoreUpsertData,
IDocumentStoreWhereUsed,
IExecuteDocStoreUpsert,
IExecuteProcessLoader,
IExecuteVectorStoreInsert,
INodeData,
IOverrideConfig
MODE,
IOverrideConfig,
IExecutePreviewLoader
} from '../../Interface'
import { DocumentStoreFileChunk } from '../../database/entities/DocumentStoreFileChunk'
import { v4 as uuidv4 } from 'uuid'
@ -38,12 +44,12 @@ import { StatusCodes } from 'http-status-codes'
import { getErrorMessage } from '../../errors/utils'
import { ChatFlow } from '../../database/entities/ChatFlow'
import { Document } from '@langchain/core/documents'
import { App } from '../../index'
import { UpsertHistory } from '../../database/entities/UpsertHistory'
import { cloneDeep, omit } from 'lodash'
import { FLOWISE_COUNTER_STATUS, FLOWISE_METRIC_COUNTERS } from '../../Interface.Metrics'
import { DOCUMENTSTORE_TOOL_DESCRIPTION_PROMPT_GENERATOR } from '../../utils/prompt'
import { INPUT_PARAMS_TYPE } from '../../utils/constants'
import { DataSource } from 'typeorm'
import { Telemetry } from '../../utils/telemetry'
import { INPUT_PARAMS_TYPE, OMIT_QUEUE_JOB_DATA } from '../../utils/constants'
const DOCUMENT_STORE_BASE_FOLDER = 'docustore'
@ -185,10 +191,9 @@ const getUsedChatflowNames = async (entity: DocumentStore) => {
}
// Get chunks for a specific loader or store
const getDocumentStoreFileChunks = async (storeId: string, docId: string, pageNo: number = 1) => {
const getDocumentStoreFileChunks = async (appDataSource: DataSource, storeId: string, docId: string, pageNo: number = 1) => {
try {
const appServer = getRunningExpressApp()
const entity = await appServer.AppDataSource.getRepository(DocumentStore).findOneBy({
const entity = await appDataSource.getRepository(DocumentStore).findOneBy({
id: storeId
})
if (!entity) {
@ -230,10 +235,10 @@ const getDocumentStoreFileChunks = async (storeId: string, docId: string, pageNo
if (docId === 'all') {
whereCondition = { storeId: storeId }
}
const count = await appServer.AppDataSource.getRepository(DocumentStoreFileChunk).count({
const count = await appDataSource.getRepository(DocumentStoreFileChunk).count({
where: whereCondition
})
const chunksWithCount = await appServer.AppDataSource.getRepository(DocumentStoreFileChunk).find({
const chunksWithCount = await appDataSource.getRepository(DocumentStoreFileChunk).find({
skip,
take,
where: whereCondition,
@ -326,7 +331,7 @@ const deleteDocumentStoreFileChunk = async (storeId: string, docId: string, chun
found.totalChars -= tbdChunk.pageContent.length
entity.loaders = JSON.stringify(loaders)
await appServer.AppDataSource.getRepository(DocumentStore).save(entity)
return getDocumentStoreFileChunks(storeId, docId)
return getDocumentStoreFileChunks(appServer.AppDataSource, storeId, docId)
} catch (error) {
throw new InternalFlowiseError(
StatusCodes.INTERNAL_SERVER_ERROR,
@ -338,6 +343,8 @@ const deleteDocumentStoreFileChunk = async (storeId: string, docId: string, chun
const deleteVectorStoreFromStore = async (storeId: string) => {
try {
const appServer = getRunningExpressApp()
const componentNodes = appServer.nodesPool.componentNodes
const entity = await appServer.AppDataSource.getRepository(DocumentStore).findOneBy({
id: storeId
})
@ -370,7 +377,7 @@ const deleteVectorStoreFromStore = async (storeId: string) => {
// Get Record Manager Instance
const recordManagerConfig = JSON.parse(entity.recordManagerConfig)
const recordManagerObj = await _createRecordManagerObject(
appServer,
componentNodes,
{ recordManagerName: recordManagerConfig.name, recordManagerConfig: recordManagerConfig.config },
options
)
@ -378,7 +385,7 @@ const deleteVectorStoreFromStore = async (storeId: string) => {
// Get Embeddings Instance
const embeddingConfig = JSON.parse(entity.embeddingConfig)
const embeddingObj = await _createEmbeddingsObject(
appServer,
componentNodes,
{ embeddingName: embeddingConfig.name, embeddingConfig: embeddingConfig.config },
options
)
@ -386,7 +393,7 @@ const deleteVectorStoreFromStore = async (storeId: string) => {
// Get Vector Store Node Data
const vectorStoreConfig = JSON.parse(entity.vectorStoreConfig)
const vStoreNodeData = _createVectorStoreNodeData(
appServer,
componentNodes,
{ vectorStoreName: vectorStoreConfig.name, vectorStoreConfig: vectorStoreConfig.config },
embeddingObj,
recordManagerObj
@ -394,7 +401,7 @@ const deleteVectorStoreFromStore = async (storeId: string) => {
// Get Vector Store Instance
const vectorStoreObj = await _createVectorStoreObject(
appServer,
componentNodes,
{ vectorStoreName: vectorStoreConfig.name, vectorStoreConfig: vectorStoreConfig.config },
vStoreNodeData
)
@ -440,7 +447,7 @@ const editDocumentStoreFileChunk = async (storeId: string, docId: string, chunkI
await appServer.AppDataSource.getRepository(DocumentStoreFileChunk).save(editChunk)
entity.loaders = JSON.stringify(loaders)
await appServer.AppDataSource.getRepository(DocumentStore).save(entity)
return getDocumentStoreFileChunks(storeId, docId)
return getDocumentStoreFileChunks(appServer.AppDataSource, storeId, docId)
} catch (error) {
throw new InternalFlowiseError(
StatusCodes.INTERNAL_SERVER_ERROR,
@ -449,7 +456,6 @@ const editDocumentStoreFileChunk = async (storeId: string, docId: string, chunkI
}
}
// Update documentStore
const updateDocumentStore = async (documentStore: DocumentStore, updatedDocumentStore: DocumentStore) => {
try {
const appServer = getRunningExpressApp()
@ -484,12 +490,11 @@ const _saveFileToStorage = async (fileBase64: string, entity: DocumentStore) =>
}
}
const _splitIntoChunks = async (data: IDocumentStoreLoaderForPreview) => {
const _splitIntoChunks = async (appDataSource: DataSource, componentNodes: IComponentNodes, data: IDocumentStoreLoaderForPreview) => {
try {
const appServer = getRunningExpressApp()
let splitterInstance = null
if (data.splitterId && data.splitterConfig && Object.keys(data.splitterConfig).length > 0) {
const nodeInstanceFilePath = appServer.nodesPool.componentNodes[data.splitterId].filePath as string
const nodeInstanceFilePath = componentNodes[data.splitterId].filePath as string
const nodeModule = await import(nodeInstanceFilePath)
const newNodeInstance = new nodeModule.nodeClass()
let nodeData = {
@ -499,7 +504,7 @@ const _splitIntoChunks = async (data: IDocumentStoreLoaderForPreview) => {
splitterInstance = await newNodeInstance.init(nodeData)
}
if (!data.loaderId) return []
const nodeInstanceFilePath = appServer.nodesPool.componentNodes[data.loaderId].filePath as string
const nodeInstanceFilePath = componentNodes[data.loaderId].filePath as string
const nodeModule = await import(nodeInstanceFilePath)
// doc loader configs
const nodeData = {
@ -509,7 +514,7 @@ const _splitIntoChunks = async (data: IDocumentStoreLoaderForPreview) => {
}
const options: ICommonObject = {
chatflowid: uuidv4(),
appDataSource: appServer.AppDataSource,
appDataSource,
databaseEntities,
logger
}
@ -524,7 +529,7 @@ const _splitIntoChunks = async (data: IDocumentStoreLoaderForPreview) => {
}
}
const _normalizeFilePaths = async (data: IDocumentStoreLoaderForPreview, entity: DocumentStore | null) => {
const _normalizeFilePaths = async (appDataSource: DataSource, data: IDocumentStoreLoaderForPreview, entity: DocumentStore | null) => {
const keys = Object.getOwnPropertyNames(data.loaderConfig)
let rehydrated = false
for (let i = 0; i < keys.length; i++) {
@ -538,8 +543,7 @@ const _normalizeFilePaths = async (data: IDocumentStoreLoaderForPreview, entity:
let documentStoreEntity: DocumentStore | null = entity
if (input.startsWith('FILE-STORAGE::')) {
if (!documentStoreEntity) {
const appServer = getRunningExpressApp()
documentStoreEntity = await appServer.AppDataSource.getRepository(DocumentStore).findOneBy({
documentStoreEntity = await appDataSource.getRepository(DocumentStore).findOneBy({
id: data.storeId
})
if (!documentStoreEntity) {
@ -573,7 +577,43 @@ const _normalizeFilePaths = async (data: IDocumentStoreLoaderForPreview, entity:
data.rehydrated = rehydrated
}
const previewChunks = async (data: IDocumentStoreLoaderForPreview) => {
const previewChunksMiddleware = async (data: IDocumentStoreLoaderForPreview) => {
try {
const appServer = getRunningExpressApp()
const appDataSource = appServer.AppDataSource
const componentNodes = appServer.nodesPool.componentNodes
const executeData: IExecutePreviewLoader = {
appDataSource,
componentNodes,
data,
isPreviewOnly: true
}
if (process.env.MODE === MODE.QUEUE) {
const upsertQueue = appServer.queueManager.getQueue('upsert')
const job = await upsertQueue.addJob(omit(executeData, OMIT_QUEUE_JOB_DATA))
logger.debug(`[server]: Job added to queue: ${job.id}`)
const queueEvents = upsertQueue.getQueueEvents()
const result = await job.waitUntilFinished(queueEvents)
if (!result) {
throw new Error('Job execution failed')
}
return result
}
return await previewChunks(executeData)
} catch (error) {
throw new InternalFlowiseError(
StatusCodes.INTERNAL_SERVER_ERROR,
`Error: documentStoreServices.previewChunksMiddleware - ${getErrorMessage(error)}`
)
}
}
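`previewChunksMiddleware` here and `processLoaderMiddleware` below share one pattern: build an execute-params object, then either enqueue it and wait for a worker (QUEUE mode) or run the same function in-process. A hypothetical distillation of that branch; the real code decides by comparing `process.env.MODE` to `MODE.QUEUE`, and its enqueue path awaits `job.waitUntilFinished(queueEvents)`:

```typescript
// Hypothetical distillation of the shared middleware pattern above; the
// falsy-result check mirrors the `if (!result) throw` in both middlewares.
async function runInlineOrQueued<T>(
    useQueue: boolean,
    inline: () => Promise<T>,
    enqueue: () => Promise<T>
): Promise<T> {
    const result = await (useQueue ? enqueue() : inline())
    if (!result) throw new Error('Job execution failed')
    return result
}
```

Because the worker serializes the job payload through Redis, the middlewares strip non-serializable fields first (the `omit(executeData, OMIT_QUEUE_JOB_DATA)` call above).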
export const previewChunks = async ({ appDataSource, componentNodes, data }: IExecutePreviewLoader) => {
try {
if (data.preview) {
if (
@ -585,9 +625,9 @@ const previewChunks = async (data: IDocumentStoreLoaderForPreview) => {
}
}
if (!data.rehydrated) {
await _normalizeFilePaths(data, null)
await _normalizeFilePaths(appDataSource, data, null)
}
let docs = await _splitIntoChunks(data)
let docs = await _splitIntoChunks(appDataSource, componentNodes, data)
const totalChunks = docs.length
// if -1, return all chunks
if (data.previewChunkCount === -1) data.previewChunkCount = totalChunks
@ -605,10 +645,9 @@ const previewChunks = async (data: IDocumentStoreLoaderForPreview) => {
}
}
const saveProcessingLoader = async (data: IDocumentStoreLoaderForPreview): Promise<IDocumentStoreLoader> => {
const saveProcessingLoader = async (appDataSource: DataSource, data: IDocumentStoreLoaderForPreview): Promise<IDocumentStoreLoader> => {
try {
const appServer = getRunningExpressApp()
const entity = await appServer.AppDataSource.getRepository(DocumentStore).findOneBy({
const entity = await appDataSource.getRepository(DocumentStore).findOneBy({
id: data.storeId
})
if (!entity) {
@ -670,7 +709,7 @@ const saveProcessingLoader = async (data: IDocumentStoreLoaderForPreview): Promi
existingLoaders.push(loader)
entity.loaders = JSON.stringify(existingLoaders)
}
await appServer.AppDataSource.getRepository(DocumentStore).save(entity)
await appDataSource.getRepository(DocumentStore).save(entity)
const newLoaders = JSON.parse(entity.loaders)
const newLoader = newLoaders.find((ldr: IDocumentStoreLoader) => ldr.id === newDocLoaderId)
if (!newLoader) {
@ -686,21 +725,51 @@ const saveProcessingLoader = async (data: IDocumentStoreLoaderForPreview): Promi
}
}
const processLoader = async (data: IDocumentStoreLoaderForPreview, docLoaderId: string) => {
export const processLoader = async ({ appDataSource, componentNodes, data, docLoaderId }: IExecuteProcessLoader) => {
const entity = await appDataSource.getRepository(DocumentStore).findOneBy({
id: data.storeId
})
if (!entity) {
throw new InternalFlowiseError(
StatusCodes.NOT_FOUND,
`Error: documentStoreServices.processLoader - Document store ${data.storeId} not found`
)
}
await _saveChunksToStorage(appDataSource, componentNodes, data, entity, docLoaderId)
return getDocumentStoreFileChunks(appDataSource, data.storeId as string, docLoaderId)
}
const processLoaderMiddleware = async (data: IDocumentStoreLoaderForPreview, docLoaderId: string) => {
try {
const appServer = getRunningExpressApp()
const entity = await appServer.AppDataSource.getRepository(DocumentStore).findOneBy({
id: data.storeId
})
if (!entity) {
throw new InternalFlowiseError(
StatusCodes.NOT_FOUND,
`Error: documentStoreServices.processLoader - Document store ${data.storeId} not found`
)
const appDataSource = appServer.AppDataSource
const componentNodes = appServer.nodesPool.componentNodes
const telemetry = appServer.telemetry
const executeData: IExecuteProcessLoader = {
appDataSource,
componentNodes,
data,
docLoaderId,
isProcessWithoutUpsert: true,
telemetry
}
// this method will run async, will have to be moved to a worker thread
await _saveChunksToStorage(data, entity, docLoaderId)
return getDocumentStoreFileChunks(data.storeId as string, docLoaderId)
if (process.env.MODE === MODE.QUEUE) {
const upsertQueue = appServer.queueManager.getQueue('upsert')
const job = await upsertQueue.addJob(omit(executeData, OMIT_QUEUE_JOB_DATA))
logger.debug(`[server]: Job added to queue: ${job.id}`)
const queueEvents = upsertQueue.getQueueEvents()
const result = await job.waitUntilFinished(queueEvents)
if (!result) {
throw new Error('Job execution failed')
}
return result
}
return await processLoader(executeData)
} catch (error) {
throw new InternalFlowiseError(
StatusCodes.INTERNAL_SERVER_ERROR,
@ -709,16 +778,26 @@ const processLoader = async (data: IDocumentStoreLoaderForPreview, docLoaderId:
}
}
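
The middleware above follows a dispatch pattern this commit repeats in several services: when the server runs in queue mode, the payload (minus non-serializable fields, stripped via `omit(executeData, OMIT_QUEUE_JOB_DATA)`) is pushed onto a BullMQ queue and the caller blocks on `job.waitUntilFinished(queueEvents)`; otherwise the executor runs in-process. A minimal sketch of that control flow, using a hypothetical in-memory `enqueue` helper instead of a real Redis-backed queue:

```typescript
// Executor: the unit of work, runnable in-process or inside a worker.
type Executor<T, R> = (data: T) => Promise<R>

// Hypothetical in-memory stand-in for queue.addJob + job.waitUntilFinished.
// A real BullMQ queue serializes the payload to Redis and a separate worker
// process runs the executor; here we just defer to a resolved promise.
async function enqueue<T, R>(execute: Executor<T, R>, data: T): Promise<R> {
    return Promise.resolve().then(() => execute(data))
}

// Dispatch: queue mode blocks until the worker reports a result,
// otherwise the executor runs directly in the API process.
async function dispatch<T, R>(mode: string | undefined, execute: Executor<T, R>, data: T): Promise<R> {
    if (mode === 'QUEUE') {
        const result = await enqueue(execute, data)
        if (!result) throw new Error('Job execution failed')
        return result
    }
    return execute(data)
}

// Example executor standing in for processLoader.
const double: Executor<number, number> = async (n) => n * 2

dispatch(undefined, double, 21).then((r) => console.log(r)) // prints 42
```

Stripping the data source, component-node pool, and telemetry handles before queueing matters because BullMQ job data must be JSON-serializable; the worker re-injects its own instances on the other side.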
const _saveChunksToStorage = async (data: IDocumentStoreLoaderForPreview, entity: DocumentStore, newLoaderId: string) => {
const _saveChunksToStorage = async (
appDataSource: DataSource,
componentNodes: IComponentNodes,
data: IDocumentStoreLoaderForPreview,
entity: DocumentStore,
newLoaderId: string
) => {
const re = new RegExp('^data.*;base64', 'i')
try {
const appServer = getRunningExpressApp()
//step 1: restore the full paths, if any
await _normalizeFilePaths(data, entity)
await _normalizeFilePaths(appDataSource, data, entity)
//step 2: split the file into chunks
const response = await previewChunks(data)
const response = await previewChunks({
appDataSource,
componentNodes,
data,
isPreviewOnly: false
})
//step 3: remove all files associated with the loader
const existingLoaders = JSON.parse(entity.loaders)
@ -786,7 +865,7 @@ const _saveChunksToStorage = async (data: IDocumentStoreLoaderForPreview, entity
}
//step 7: remove all previous chunks
await appServer.AppDataSource.getRepository(DocumentStoreFileChunk).delete({ docId: newLoaderId })
await appDataSource.getRepository(DocumentStoreFileChunk).delete({ docId: newLoaderId })
if (response.chunks) {
//step 8: now save the new chunks
const totalChars = response.chunks.reduce((acc, chunk) => {
@ -804,8 +883,8 @@ const _saveChunksToStorage = async (data: IDocumentStoreLoaderForPreview, entity
pageContent: chunk.pageContent,
metadata: JSON.stringify(chunk.metadata)
}
const dChunk = appServer.AppDataSource.getRepository(DocumentStoreFileChunk).create(docChunk)
await appServer.AppDataSource.getRepository(DocumentStoreFileChunk).save(dChunk)
const dChunk = appDataSource.getRepository(DocumentStoreFileChunk).create(docChunk)
await appDataSource.getRepository(DocumentStoreFileChunk).save(dChunk)
})
// update the loader with the new metrics
loader.totalChunks = response.totalChunks
@ -818,7 +897,7 @@ const _saveChunksToStorage = async (data: IDocumentStoreLoaderForPreview, entity
entity.loaders = JSON.stringify(existingLoaders)
//step 9: update the entity in the database
await appServer.AppDataSource.getRepository(DocumentStore).save(entity)
await appDataSource.getRepository(DocumentStore).save(entity)
return
} catch (error) {
@ -917,10 +996,9 @@ const updateVectorStoreConfigOnly = async (data: ICommonObject) => {
)
}
}
const saveVectorStoreConfig = async (data: ICommonObject, isStrictSave = true) => {
const saveVectorStoreConfig = async (appDataSource: DataSource, data: ICommonObject, isStrictSave = true) => {
try {
const appServer = getRunningExpressApp()
const entity = await appServer.AppDataSource.getRepository(DocumentStore).findOneBy({
const entity = await appDataSource.getRepository(DocumentStore).findOneBy({
id: data.storeId
})
if (!entity) {
@ -971,7 +1049,7 @@ const saveVectorStoreConfig = async (data: ICommonObject, isStrictSave = true) =
// this also means that the store is not yet sync'ed to vector store
entity.status = DocumentStoreStatus.SYNC
}
await appServer.AppDataSource.getRepository(DocumentStore).save(entity)
await appDataSource.getRepository(DocumentStore).save(entity)
return entity
} catch (error) {
throw new InternalFlowiseError(
@ -981,15 +1059,19 @@ const saveVectorStoreConfig = async (data: ICommonObject, isStrictSave = true) =
}
}
const insertIntoVectorStore = async (data: ICommonObject, isStrictSave = true) => {
export const insertIntoVectorStore = async ({
appDataSource,
componentNodes,
telemetry,
data,
isStrictSave
}: IExecuteVectorStoreInsert) => {
try {
const appServer = getRunningExpressApp()
const entity = await saveVectorStoreConfig(data, isStrictSave)
const entity = await saveVectorStoreConfig(appDataSource, data, isStrictSave)
entity.status = DocumentStoreStatus.UPSERTING
await appServer.AppDataSource.getRepository(DocumentStore).save(entity)
await appDataSource.getRepository(DocumentStore).save(entity)
// TODO: to be moved into a worker thread...
const indexResult = await _insertIntoVectorStoreWorkerThread(data, isStrictSave)
const indexResult = await _insertIntoVectorStoreWorkerThread(appDataSource, componentNodes, telemetry, data, isStrictSave)
return indexResult
} catch (error) {
throw new InternalFlowiseError(
@ -999,16 +1081,60 @@ const insertIntoVectorStore = async (data: ICommonObject, isStrictSave = true) =
}
}
const _insertIntoVectorStoreWorkerThread = async (data: ICommonObject, isStrictSave = true) => {
const insertIntoVectorStoreMiddleware = async (data: ICommonObject, isStrictSave = true) => {
try {
const appServer = getRunningExpressApp()
const entity = await saveVectorStoreConfig(data, isStrictSave)
const appDataSource = appServer.AppDataSource
const componentNodes = appServer.nodesPool.componentNodes
const telemetry = appServer.telemetry
const executeData: IExecuteVectorStoreInsert = {
appDataSource,
componentNodes,
telemetry,
data,
isStrictSave,
isVectorStoreInsert: true
}
if (process.env.MODE === MODE.QUEUE) {
const upsertQueue = appServer.queueManager.getQueue('upsert')
const job = await upsertQueue.addJob(omit(executeData, OMIT_QUEUE_JOB_DATA))
logger.debug(`[server]: Job added to queue: ${job.id}`)
const queueEvents = upsertQueue.getQueueEvents()
const result = await job.waitUntilFinished(queueEvents)
if (!result) {
throw new Error('Job execution failed')
}
return result
} else {
return await insertIntoVectorStore(executeData)
}
} catch (error) {
throw new InternalFlowiseError(
StatusCodes.INTERNAL_SERVER_ERROR,
`Error: documentStoreServices.insertIntoVectorStoreMiddleware - ${getErrorMessage(error)}`
)
}
}
const _insertIntoVectorStoreWorkerThread = async (
appDataSource: DataSource,
componentNodes: IComponentNodes,
telemetry: Telemetry,
data: ICommonObject,
isStrictSave = true
) => {
try {
const entity = await saveVectorStoreConfig(appDataSource, data, isStrictSave)
let upsertHistory: Record<string, any> = {}
const chatflowid = data.storeId // fake chatflowid because this is not tied to any chatflow
const options: ICommonObject = {
chatflowid,
appDataSource: appServer.AppDataSource,
appDataSource,
databaseEntities,
logger
}
@ -1017,14 +1143,14 @@ const _insertIntoVectorStoreWorkerThread = async (data: ICommonObject, isStrictS
// Get Record Manager Instance
if (data.recordManagerName && data.recordManagerConfig) {
recordManagerObj = await _createRecordManagerObject(appServer, data, options, upsertHistory)
recordManagerObj = await _createRecordManagerObject(componentNodes, data, options, upsertHistory)
}
// Get Embeddings Instance
const embeddingObj = await _createEmbeddingsObject(appServer, data, options, upsertHistory)
const embeddingObj = await _createEmbeddingsObject(componentNodes, data, options, upsertHistory)
// Get Vector Store Node Data
const vStoreNodeData = _createVectorStoreNodeData(appServer, data, embeddingObj, recordManagerObj)
const vStoreNodeData = _createVectorStoreNodeData(componentNodes, data, embeddingObj, recordManagerObj)
// Prepare docs for upserting
const filterOptions: ICommonObject = {
@ -1033,7 +1159,7 @@ const _insertIntoVectorStoreWorkerThread = async (data: ICommonObject, isStrictS
if (data.docId) {
filterOptions['docId'] = data.docId
}
const chunks = await appServer.AppDataSource.getRepository(DocumentStoreFileChunk).find({
const chunks = await appDataSource.getRepository(DocumentStoreFileChunk).find({
where: filterOptions
})
const docs: Document[] = chunks.map((chunk: DocumentStoreFileChunk) => {
@ -1045,7 +1171,7 @@ const _insertIntoVectorStoreWorkerThread = async (data: ICommonObject, isStrictS
vStoreNodeData.inputs.document = docs
// Get Vector Store Instance
const vectorStoreObj = await _createVectorStoreObject(appServer, data, vStoreNodeData, upsertHistory)
const vectorStoreObj = await _createVectorStoreObject(componentNodes, data, vStoreNodeData, upsertHistory)
const indexResult = await vectorStoreObj.vectorStoreMethods.upsert(vStoreNodeData, options)
// Save to DB
@ -1056,20 +1182,19 @@ const _insertIntoVectorStoreWorkerThread = async (data: ICommonObject, isStrictS
result.chatflowid = chatflowid
const newUpsertHistory = new UpsertHistory()
Object.assign(newUpsertHistory, result)
const upsertHistoryItem = appServer.AppDataSource.getRepository(UpsertHistory).create(newUpsertHistory)
await appServer.AppDataSource.getRepository(UpsertHistory).save(upsertHistoryItem)
const upsertHistoryItem = appDataSource.getRepository(UpsertHistory).create(newUpsertHistory)
await appDataSource.getRepository(UpsertHistory).save(upsertHistoryItem)
}
await appServer.telemetry.sendTelemetry('vector_upserted', {
await telemetry.sendTelemetry('vector_upserted', {
version: await getAppVersion(),
chatlowId: chatflowid,
type: ChatType.INTERNAL,
flowGraph: omit(indexResult['result'], ['totalKeys', 'addedDocs'])
})
appServer.metricsProvider?.incrementCounter(FLOWISE_METRIC_COUNTERS.VECTORSTORE_UPSERT, { status: FLOWISE_COUNTER_STATUS.SUCCESS })
entity.status = DocumentStoreStatus.UPSERTED
await appServer.AppDataSource.getRepository(DocumentStore).save(entity)
await appDataSource.getRepository(DocumentStore).save(entity)
return indexResult ?? { result: 'Successfully Upserted' }
} catch (error) {
@ -1123,6 +1248,8 @@ const getRecordManagerProviders = async () => {
const queryVectorStore = async (data: ICommonObject) => {
try {
const appServer = getRunningExpressApp()
const componentNodes = appServer.nodesPool.componentNodes
const entity = await appServer.AppDataSource.getRepository(DocumentStore).findOneBy({
id: data.storeId
})
@ -1147,7 +1274,7 @@ const queryVectorStore = async (data: ICommonObject) => {
const embeddingConfig = JSON.parse(entity.embeddingConfig)
data.embeddingName = embeddingConfig.name
data.embeddingConfig = embeddingConfig.config
let embeddingObj = await _createEmbeddingsObject(appServer, data, options)
let embeddingObj = await _createEmbeddingsObject(componentNodes, data, options)
const vsConfig = JSON.parse(entity.vectorStoreConfig)
data.vectorStoreName = vsConfig.name
@ -1156,10 +1283,10 @@ const queryVectorStore = async (data: ICommonObject) => {
data.vectorStoreConfig = { ...vsConfig.config, ...data.inputs }
}
const vStoreNodeData = _createVectorStoreNodeData(appServer, data, embeddingObj, undefined)
const vStoreNodeData = _createVectorStoreNodeData(componentNodes, data, embeddingObj, undefined)
// Get Vector Store Instance
const vectorStoreObj = await _createVectorStoreObject(appServer, data, vStoreNodeData)
const vectorStoreObj = await _createVectorStoreObject(componentNodes, data, vStoreNodeData)
const retriever = await vectorStoreObj.init(vStoreNodeData, '', options)
if (!retriever) {
throw new InternalFlowiseError(StatusCodes.INTERNAL_SERVER_ERROR, `Failed to create retriever`)
@ -1208,13 +1335,13 @@ const queryVectorStore = async (data: ICommonObject) => {
}
const _createEmbeddingsObject = async (
appServer: App,
componentNodes: IComponentNodes,
data: ICommonObject,
options: ICommonObject,
upsertHistory?: Record<string, any>
): Promise<any> => {
// prepare embedding node data
const embeddingComponent = appServer.nodesPool.componentNodes[data.embeddingName]
const embeddingComponent = componentNodes[data.embeddingName]
const embeddingNodeData: any = {
inputs: { ...data.embeddingConfig },
outputs: { output: 'document' },
@ -1243,13 +1370,13 @@ const _createEmbeddingsObject = async (
}
const _createRecordManagerObject = async (
appServer: App,
componentNodes: IComponentNodes,
data: ICommonObject,
options: ICommonObject,
upsertHistory?: Record<string, any>
) => {
// prepare record manager node data
const recordManagerComponent = appServer.nodesPool.componentNodes[data.recordManagerName]
const recordManagerComponent = componentNodes[data.recordManagerName]
const rmNodeData: any = {
inputs: { ...data.recordManagerConfig },
id: `${recordManagerComponent.name}_0`,
@ -1276,8 +1403,8 @@ const _createRecordManagerObject = async (
return recordManagerObj
}
const _createVectorStoreNodeData = (appServer: App, data: ICommonObject, embeddingObj: any, recordManagerObj?: any) => {
const vectorStoreComponent = appServer.nodesPool.componentNodes[data.vectorStoreName]
const _createVectorStoreNodeData = (componentNodes: IComponentNodes, data: ICommonObject, embeddingObj: any, recordManagerObj?: any) => {
const vectorStoreComponent = componentNodes[data.vectorStoreName]
const vStoreNodeData: any = {
id: `${vectorStoreComponent.name}_0`,
inputs: { ...data.vectorStoreConfig },
@ -1306,25 +1433,27 @@ const _createVectorStoreNodeData = (appServer: App, data: ICommonObject, embeddi
}
const _createVectorStoreObject = async (
appServer: App,
componentNodes: IComponentNodes,
data: ICommonObject,
vStoreNodeData: INodeData,
upsertHistory?: Record<string, any>
) => {
const vStoreNodeInstanceFilePath = appServer.nodesPool.componentNodes[data.vectorStoreName].filePath as string
const vStoreNodeInstanceFilePath = componentNodes[data.vectorStoreName].filePath as string
const vStoreNodeModule = await import(vStoreNodeInstanceFilePath)
const vStoreNodeInstance = new vStoreNodeModule.nodeClass()
if (upsertHistory) upsertHistory['flowData'] = saveUpsertFlowData(vStoreNodeData, upsertHistory)
return vStoreNodeInstance
}
const upsertDocStoreMiddleware = async (
const upsertDocStore = async (
appDataSource: DataSource,
componentNodes: IComponentNodes,
telemetry: Telemetry,
storeId: string,
data: IDocumentStoreUpsertData,
files: Express.Multer.File[] = [],
isRefreshExisting = false
) => {
const appServer = getRunningExpressApp()
const docId = data.docId
let metadata = {}
if (data.metadata) {
@ -1342,7 +1471,7 @@ const upsertDocStoreMiddleware = async (
const newRecordManager = typeof data.recordManager === 'string' ? JSON.parse(data.recordManager) : data.recordManager
const getComponentLabelFromName = (nodeName: string) => {
const component = Object.values(appServer.nodesPool.componentNodes).find((node) => node.name === nodeName)
const component = Object.values(componentNodes).find((node) => node.name === nodeName)
return component?.label || ''
}
@ -1365,7 +1494,7 @@ const upsertDocStoreMiddleware = async (
// Step 1: Get existing loader
if (docId) {
const entity = await appServer.AppDataSource.getRepository(DocumentStore).findOneBy({ id: storeId })
const entity = await appDataSource.getRepository(DocumentStore).findOneBy({ id: storeId })
if (!entity) {
throw new InternalFlowiseError(StatusCodes.NOT_FOUND, `Document store ${storeId} not found`)
}
@ -1527,8 +1656,15 @@ const upsertDocStoreMiddleware = async (
}
try {
const newLoader = await saveProcessingLoader(processData)
const result = await processLoader(processData, newLoader.id || '')
const newLoader = await saveProcessingLoader(appDataSource, processData)
const result = await processLoader({
appDataSource,
componentNodes,
data: processData,
docLoaderId: newLoader.id || '',
isProcessWithoutUpsert: false,
telemetry
})
const newDocId = result.docId
const insertData = {
@ -1542,10 +1678,74 @@ const upsertDocStoreMiddleware = async (
recordManagerConfig
}
const res = await insertIntoVectorStore(insertData, false)
const res = await insertIntoVectorStore({
appDataSource,
componentNodes,
telemetry,
data: insertData,
isStrictSave: false,
isVectorStoreInsert: true
})
res.docId = newDocId
return res
} catch (error) {
throw new InternalFlowiseError(
StatusCodes.INTERNAL_SERVER_ERROR,
`Error: documentStoreServices.upsertDocStore - ${getErrorMessage(error)}`
)
}
}
export const executeDocStoreUpsert = async ({
appDataSource,
componentNodes,
telemetry,
storeId,
totalItems,
files,
isRefreshAPI
}: IExecuteDocStoreUpsert) => {
const results = []
for (const item of totalItems) {
const res = await upsertDocStore(appDataSource, componentNodes, telemetry, storeId, item, files, isRefreshAPI)
results.push(res)
}
return isRefreshAPI ? results : results[0]
}
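
`executeDocStoreUpsert` above runs the items sequentially; only the refresh API receives the full result array, while a single-item upsert unwraps to `results[0]`. The same shape in isolation (the `upsertOne` callback is a hypothetical stand-in for `upsertDocStore`):

```typescript
// Fan-out sketch: refresh processes every item and returns all results;
// a plain upsert has exactly one item and unwraps it.
async function runUpserts<T, R>(
    items: T[],
    upsertOne: (item: T) => Promise<R>,
    isRefreshAPI: boolean
): Promise<R | R[]> {
    const results: R[] = []
    for (const item of items) {
        // Sequential on purpose: each upsert may touch the same store entity.
        results.push(await upsertOne(item))
    }
    return isRefreshAPI ? results : results[0]
}
```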
const upsertDocStoreMiddleware = async (storeId: string, data: IDocumentStoreUpsertData, files: Express.Multer.File[] = []) => {
const appServer = getRunningExpressApp()
const componentNodes = appServer.nodesPool.componentNodes
const appDataSource = appServer.AppDataSource
const telemetry = appServer.telemetry
try {
const executeData: IExecuteDocStoreUpsert = {
appDataSource,
componentNodes,
telemetry,
storeId,
totalItems: [data],
files,
isRefreshAPI: false
}
if (process.env.MODE === MODE.QUEUE) {
const upsertQueue = appServer.queueManager.getQueue('upsert')
const job = await upsertQueue.addJob(omit(executeData, OMIT_QUEUE_JOB_DATA))
logger.debug(`[server]: Job added to queue: ${job.id}`)
const queueEvents = upsertQueue.getQueueEvents()
const result = await job.waitUntilFinished(queueEvents)
if (!result) {
throw new Error('Job execution failed')
}
return result
} else {
return await executeDocStoreUpsert(executeData)
}
} catch (error) {
throw new InternalFlowiseError(
StatusCodes.INTERNAL_SERVER_ERROR,
@ -1556,9 +1756,11 @@ const upsertDocStoreMiddleware = async (
const refreshDocStoreMiddleware = async (storeId: string, data?: IDocumentStoreRefreshData) => {
const appServer = getRunningExpressApp()
const componentNodes = appServer.nodesPool.componentNodes
const appDataSource = appServer.AppDataSource
const telemetry = appServer.telemetry
try {
const results = []
let totalItems: IDocumentStoreUpsertData[] = []
if (!data || !data.items || data.items.length === 0) {
@ -1577,12 +1779,31 @@ const refreshDocStoreMiddleware = async (storeId: string, data?: IDocumentStoreR
totalItems = data.items
}
for (const item of totalItems) {
const res = await upsertDocStoreMiddleware(storeId, item, [], true)
results.push(res)
const executeData: IExecuteDocStoreUpsert = {
appDataSource,
componentNodes,
telemetry,
storeId,
totalItems,
files: [],
isRefreshAPI: true
}
return results
if (process.env.MODE === MODE.QUEUE) {
const upsertQueue = appServer.queueManager.getQueue('upsert')
const job = await upsertQueue.addJob(omit(executeData, OMIT_QUEUE_JOB_DATA))
logger.debug(`[server]: Job added to queue: ${job.id}`)
const queueEvents = upsertQueue.getQueueEvents()
const result = await job.waitUntilFinished(queueEvents)
if (!result) {
throw new Error('Job execution failed')
}
return result
} else {
return await executeDocStoreUpsert(executeData)
}
} catch (error) {
throw new InternalFlowiseError(
StatusCodes.INTERNAL_SERVER_ERROR,
@ -1799,13 +2020,13 @@ export default {
getUsedChatflowNames,
getDocumentStoreFileChunks,
updateDocumentStore,
previewChunks,
previewChunksMiddleware,
saveProcessingLoader,
processLoader,
processLoaderMiddleware,
deleteDocumentStoreFileChunk,
editDocumentStoreFileChunk,
getDocumentLoaders,
insertIntoVectorStore,
insertIntoVectorStoreMiddleware,
getEmbeddingProviders,
getVectorStoreProviders,
getRecordManagerProviders,


@ -95,7 +95,6 @@ const buildAndInitTool = async (chatflowid: string, _chatId?: string, _apiMessag
const flowDataObj: ICommonObject = { chatflowid, chatId }
const reactFlowNodeData: INodeData = await resolveVariables(
appServer.AppDataSource,
nodeToExecute.data,
reactFlowNodes,
'',


@ -1,4 +1,3 @@
import express from 'express'
import { Response } from 'express'
import { IServerSideEventStreamer } from 'flowise-components'
@ -13,11 +12,6 @@ type Client = {
export class SSEStreamer implements IServerSideEventStreamer {
clients: { [id: string]: Client } = {}
app: express.Application
constructor(app: express.Application) {
this.app = app
}
addExternalClient(chatId: string, res: Response) {
this.clients[chatId] = { clientType: 'EXTERNAL', response: res, started: false }
@ -40,18 +34,6 @@ export class SSEStreamer implements IServerSideEventStreamer {
}
}
// Send SSE message to a specific client
streamEvent(chatId: string, data: string) {
const client = this.clients[chatId]
if (client) {
const clientResponse = {
event: 'start',
data: data
}
client.response.write('message:\ndata:' + JSON.stringify(clientResponse) + '\n\n')
}
}
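
A side note on the method removed above: its first line, `message:`, is not a recognized Server-Sent Events field (clients silently ignore unknown field names; the event name belongs in an `event:` field, and each field needs its own `field: value` line followed by a blank line to terminate the frame). A hedged sketch of a well-formed frame writer, not part of this PR:

```typescript
// Format one SSE frame: an optional event name, then one `data:` line per
// payload line, terminated by a blank line so the client dispatches it.
function formatSSEFrame(data: string, event?: string): string {
    const lines: string[] = []
    if (event) lines.push(`event: ${event}`)
    for (const part of data.split('\n')) {
        lines.push(`data: ${part}`)
    }
    return lines.join('\n') + '\n\n'
}

// e.g. client.response.write(formatSSEFrame(JSON.stringify(payload), 'start'))
```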
streamCustomEvent(chatId: string, eventType: string, data: any) {
const client = this.clients[chatId]
if (client) {


@ -1,3 +1,4 @@
import { DataSource } from 'typeorm'
import { ChatMessage } from '../database/entities/ChatMessage'
import { IChatMessage } from '../Interface'
import { getRunningExpressApp } from '../utils/getRunningExpressApp'
@ -6,14 +7,14 @@ import { getRunningExpressApp } from '../utils/getRunningExpressApp'
* Method that add chat messages.
* @param {Partial<IChatMessage>} chatMessage
*/
export const utilAddChatMessage = async (chatMessage: Partial<IChatMessage>): Promise<ChatMessage> => {
const appServer = getRunningExpressApp()
export const utilAddChatMessage = async (chatMessage: Partial<IChatMessage>, appDataSource?: DataSource): Promise<ChatMessage> => {
const dataSource = appDataSource ?? getRunningExpressApp().AppDataSource
const newChatMessage = new ChatMessage()
Object.assign(newChatMessage, chatMessage)
if (!newChatMessage.createdDate) {
newChatMessage.createdDate = new Date()
}
const chatmessage = await appServer.AppDataSource.getRepository(ChatMessage).create(newChatMessage)
const dbResponse = await appServer.AppDataSource.getRepository(ChatMessage).save(chatmessage)
const chatmessage = await dataSource.getRepository(ChatMessage).create(newChatMessage)
const dbResponse = await dataSource.getRepository(ChatMessage).save(chatmessage)
return dbResponse
}
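
`utilAddChatMessage` now takes an optional `DataSource` so queue workers can pass their own connection, while HTTP callers keep the old behavior through the `?? getRunningExpressApp().AppDataSource` fallback. The same fallback-injection pattern in isolation (names here are illustrative, not from the PR):

```typescript
// Prefer an explicitly injected dependency; otherwise fall back to the
// process-wide default. Workers inject their own, the server omits it.
interface Repo {
    save(msg: string): string
}

const defaultRepo: Repo = { save: (msg) => `default:${msg}` }

function addMessage(msg: string, repo?: Repo): string {
    const r = repo ?? defaultRepo
    return r.save(msg)
}
```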


@ -19,144 +19,77 @@ import { StatusCodes } from 'http-status-codes'
import { v4 as uuidv4 } from 'uuid'
import { StructuredTool } from '@langchain/core/tools'
import { BaseMessage, HumanMessage, AIMessage, AIMessageChunk, ToolMessage } from '@langchain/core/messages'
import {
IChatFlow,
IComponentNodes,
IDepthQueue,
IReactFlowNode,
IReactFlowObject,
IReactFlowEdge,
IMessage,
IncomingInput
} from '../Interface'
import {
buildFlow,
getStartingNodes,
getEndingNodes,
constructGraphs,
databaseEntities,
getSessionChatHistory,
getMemorySessionId,
clearSessionMemory,
getAPIOverrideConfig
} from '../utils'
import { getRunningExpressApp } from './getRunningExpressApp'
import { IChatFlow, IComponentNodes, IDepthQueue, IReactFlowNode, IReactFlowEdge, IMessage, IncomingInput, IFlowConfig } from '../Interface'
import { databaseEntities, clearSessionMemory, getAPIOverrideConfig } from '../utils'
import { replaceInputsWithConfig, resolveVariables } from '.'
import { InternalFlowiseError } from '../errors/internalFlowiseError'
import { getErrorMessage } from '../errors/utils'
import logger from './logger'
import { Variable } from '../database/entities/Variable'
import { DataSource } from 'typeorm'
import { CachePool } from '../CachePool'
/**
* Build Agent Graph
* @param {IChatFlow} chatflow
* @param {string} chatId
* @param {string} sessionId
* @param {ICommonObject} incomingInput
* @param {boolean} isInternal
* @param {string} baseURL
*/
export const buildAgentGraph = async (
chatflow: IChatFlow,
chatId: string,
apiMessageId: string,
sessionId: string,
incomingInput: IncomingInput,
isInternal: boolean,
baseURL?: string,
sseStreamer?: IServerSideEventStreamer,
shouldStreamResponse?: boolean,
uploadedFilesContent?: string
): Promise<any> => {
export const buildAgentGraph = async ({
agentflow,
flowConfig,
incomingInput,
nodes,
edges,
initializedNodes,
endingNodeIds,
startingNodeIds,
depthQueue,
chatHistory,
uploadedFilesContent,
appDataSource,
componentNodes,
sseStreamer,
shouldStreamResponse,
cachePool,
baseURL,
signal
}: {
agentflow: IChatFlow
flowConfig: IFlowConfig
incomingInput: IncomingInput
nodes: IReactFlowNode[]
edges: IReactFlowEdge[]
initializedNodes: IReactFlowNode[]
endingNodeIds: string[]
startingNodeIds: string[]
depthQueue: IDepthQueue
chatHistory: IMessage[]
uploadedFilesContent: string
appDataSource: DataSource
componentNodes: IComponentNodes
sseStreamer: IServerSideEventStreamer
shouldStreamResponse: boolean
cachePool: CachePool
baseURL: string
signal?: AbortController
}): Promise<any> => {
try {
const appServer = getRunningExpressApp()
const chatflowid = chatflow.id
/*** Get chatflows and prepare data ***/
const flowData = chatflow.flowData
const parsedFlowData: IReactFlowObject = JSON.parse(flowData)
const nodes = parsedFlowData.nodes
const edges = parsedFlowData.edges
/*** Get Ending Node with Directed Graph ***/
const { graph, nodeDependencies } = constructGraphs(nodes, edges)
const directedGraph = graph
const endingNodes = getEndingNodes(nodeDependencies, directedGraph, nodes)
/*** Get Starting Nodes with Reversed Graph ***/
const constructedObj = constructGraphs(nodes, edges, { isReversed: true })
const nonDirectedGraph = constructedObj.graph
let startingNodeIds: string[] = []
let depthQueue: IDepthQueue = {}
const endingNodeIds = endingNodes.map((n) => n.id)
for (const endingNodeId of endingNodeIds) {
const resx = getStartingNodes(nonDirectedGraph, endingNodeId)
startingNodeIds.push(...resx.startingNodeIds)
depthQueue = Object.assign(depthQueue, resx.depthQueue)
}
startingNodeIds = [...new Set(startingNodeIds)]
/*** Get Memory Node for Chat History ***/
let chatHistory: IMessage[] = []
const agentMemoryList = ['agentMemory', 'sqliteAgentMemory', 'postgresAgentMemory', 'mySQLAgentMemory']
const memoryNode = nodes.find((node) => agentMemoryList.includes(node.data.name))
if (memoryNode) {
chatHistory = await getSessionChatHistory(
chatflowid,
getMemorySessionId(memoryNode, incomingInput, chatId, isInternal),
memoryNode,
appServer.nodesPool.componentNodes,
appServer.AppDataSource,
databaseEntities,
logger,
incomingInput.history
)
}
/*** Get API Config ***/
const availableVariables = await appServer.AppDataSource.getRepository(Variable).find()
const { nodeOverrides, variableOverrides, apiOverrideStatus } = getAPIOverrideConfig(chatflow)
// Initialize nodes like ChatModels, Tools, etc.
const reactFlowNodes: IReactFlowNode[] = await buildFlow({
startingNodeIds,
reactFlowNodes: nodes,
reactFlowEdges: edges,
apiMessageId,
graph,
depthQueue,
componentNodes: appServer.nodesPool.componentNodes,
question: incomingInput.question,
uploadedFilesContent,
chatHistory,
chatId,
sessionId,
chatflowid,
appDataSource: appServer.AppDataSource,
overrideConfig: incomingInput?.overrideConfig,
apiOverrideStatus,
nodeOverrides,
availableVariables,
variableOverrides,
cachePool: appServer.cachePool,
isUpsert: false,
uploads: incomingInput.uploads,
baseURL
})
const chatflowid = flowConfig.chatflowid
const chatId = flowConfig.chatId
const sessionId = flowConfig.sessionId
const analytic = agentflow.analytic
const uploads = incomingInput.uploads
const options = {
chatId,
sessionId,
chatflowid,
logger,
analytic: chatflow.analytic,
appDataSource: appServer.AppDataSource,
databaseEntities: databaseEntities,
cachePool: appServer.cachePool,
uploads: incomingInput.uploads,
analytic,
appDataSource,
databaseEntities,
cachePool,
uploads,
baseURL,
signal: new AbortController()
signal: signal ?? new AbortController()
}
let streamResults
@ -171,9 +104,9 @@ export const buildAgentGraph = async (
let totalUsedTools: IUsedTool[] = []
let totalArtifacts: ICommonObject[] = []
const workerNodes = reactFlowNodes.filter((node) => node.data.name === 'worker')
const supervisorNodes = reactFlowNodes.filter((node) => node.data.name === 'supervisor')
const seqAgentNodes = reactFlowNodes.filter((node) => node.data.category === 'Sequential Agents')
const workerNodes = initializedNodes.filter((node) => node.data.name === 'worker')
const supervisorNodes = initializedNodes.filter((node) => node.data.name === 'supervisor')
const seqAgentNodes = initializedNodes.filter((node) => node.data.category === 'Sequential Agents')
const mapNameToLabel: Record<string, { label: string; nodeName: string }> = {}
@ -189,11 +122,12 @@ export const buildAgentGraph = async (
try {
if (!seqAgentNodes.length) {
streamResults = await compileMultiAgentsGraph({
chatflow,
agentflow,
appDataSource,
mapNameToLabel,
reactFlowNodes,
reactFlowNodes: initializedNodes,
workerNodeIds: endingNodeIds,
componentNodes: appServer.nodesPool.componentNodes,
componentNodes,
options,
startingNodeIds,
question: incomingInput.question,
@ -208,10 +142,11 @@ export const buildAgentGraph = async (
isSequential = true
streamResults = await compileSeqAgentsGraph({
depthQueue,
chatflow,
reactFlowNodes,
agentflow,
appDataSource,
reactFlowNodes: initializedNodes,
reactFlowEdges: edges,
componentNodes: appServer.nodesPool.componentNodes,
componentNodes,
options,
question: incomingInput.question,
prependHistoryMessages: incomingInput.history,
@ -275,7 +210,7 @@ export const buildAgentGraph = async (
)
inputEdges.forEach((edge) => {
const parentNode = reactFlowNodes.find((nd) => nd.id === edge.source)
const parentNode = initializedNodes.find((nd) => nd.id === edge.source)
if (parentNode) {
if (parentNode.data.name.includes('seqCondition')) {
const newMessages = messages.slice(0, -1)
@ -366,7 +301,7 @@ export const buildAgentGraph = async (
// If last message is an AI Message with tool calls, that means the last node was interrupted
if (lastMessageRaw.tool_calls && lastMessageRaw.tool_calls.length > 0) {
// The last node that got interrupted
const node = reactFlowNodes.find((node) => node.id === lastMessageRaw.additional_kwargs.nodeId)
const node = initializedNodes.find((node) => node.id === lastMessageRaw.additional_kwargs.nodeId)
// Find the next tool node that is connected to the interrupted node, to get the approve/reject button text
const tooNodeId = edges.find(
@ -374,7 +309,7 @@ export const buildAgentGraph = async (
edge.target.includes('seqToolNode') &&
edge.source === (lastMessageRaw.additional_kwargs && lastMessageRaw.additional_kwargs.nodeId)
)?.target
const connectedToolNode = reactFlowNodes.find((node) => node.id === tooNodeId)
const connectedToolNode = initializedNodes.find((node) => node.id === tooNodeId)
// Map raw tool calls to used tools, to be shown on interrupted message
const mappedToolCalls = lastMessageRaw.tool_calls.map((toolCall) => {
@ -449,7 +384,7 @@ export const buildAgentGraph = async (
}
} catch (e) {
// clear agent memory because checkpoints were saved during runtime
await clearSessionMemory(nodes, appServer.nodesPool.componentNodes, chatId, appServer.AppDataSource, sessionId)
await clearSessionMemory(nodes, componentNodes, chatId, appDataSource, sessionId)
if (getErrorMessage(e).includes('Aborted')) {
if (shouldStreamResponse && sseStreamer) {
sseStreamer.streamAbortEvent(chatId)
@ -466,7 +401,8 @@ export const buildAgentGraph = async (
}
type MultiAgentsGraphParams = {
chatflow: IChatFlow
agentflow: IChatFlow
appDataSource: DataSource
mapNameToLabel: Record<string, { label: string; nodeName: string }>
reactFlowNodes: IReactFlowNode[]
workerNodeIds: string[]
@ -484,13 +420,13 @@ type MultiAgentsGraphParams = {
const compileMultiAgentsGraph = async (params: MultiAgentsGraphParams) => {
const {
chatflow,
agentflow,
appDataSource,
mapNameToLabel,
reactFlowNodes,
workerNodeIds,
componentNodes,
options,
startingNodeIds,
prependHistoryMessages = [],
chatHistory = [],
overrideConfig = {},
@ -501,7 +437,6 @@ const compileMultiAgentsGraph = async (params: MultiAgentsGraphParams) => {
let question = params.question
const appServer = getRunningExpressApp()
const channels: ITeamState = {
messages: {
value: (x: BaseMessage[], y: BaseMessage[]) => x.concat(y),
@ -522,8 +457,8 @@ const compileMultiAgentsGraph = async (params: MultiAgentsGraphParams) => {
const workerNodes = reactFlowNodes.filter((node) => workerNodeIds.includes(node.data.id))
/*** Get API Config ***/
const availableVariables = await appServer.AppDataSource.getRepository(Variable).find()
const { nodeOverrides, variableOverrides, apiOverrideStatus } = getAPIOverrideConfig(chatflow)
const availableVariables = await appDataSource.getRepository(Variable).find()
const { nodeOverrides, variableOverrides, apiOverrideStatus } = getAPIOverrideConfig(agentflow)
let supervisorWorkers: { [key: string]: IMultiAgentNode[] } = {}
@@ -537,7 +472,6 @@ const compileMultiAgentsGraph = async (params: MultiAgentsGraphParams) => {
if (overrideConfig && apiOverrideStatus)
flowNodeData = replaceInputsWithConfig(flowNodeData, overrideConfig, nodeOverrides, variableOverrides)
flowNodeData = await resolveVariables(
appServer.AppDataSource,
flowNodeData,
reactFlowNodes,
question,
@@ -579,7 +513,6 @@ const compileMultiAgentsGraph = async (params: MultiAgentsGraphParams) => {
if (overrideConfig && apiOverrideStatus)
flowNodeData = replaceInputsWithConfig(flowNodeData, overrideConfig, nodeOverrides, variableOverrides)
flowNodeData = await resolveVariables(
appServer.AppDataSource,
flowNodeData,
reactFlowNodes,
question,
@@ -626,15 +559,7 @@ const compileMultiAgentsGraph = async (params: MultiAgentsGraphParams) => {
//@ts-ignore
workflowGraph.addEdge(START, supervisorResult.name)
// Add agentflow to pool
;(workflowGraph as any).signal = options.signal
appServer.chatflowPool.add(
`${chatflow.id}_${options.chatId}`,
workflowGraph as any,
reactFlowNodes.filter((node) => startingNodeIds.includes(node.id)),
overrideConfig
)
// Get memory
let memory = supervisorResult?.checkpointMemory
@@ -685,7 +610,8 @@ const compileMultiAgentsGraph = async (params: MultiAgentsGraphParams) => {
type SeqAgentsGraphParams = {
depthQueue: IDepthQueue
chatflow: IChatFlow
agentflow: IChatFlow
appDataSource: DataSource
reactFlowNodes: IReactFlowNode[]
reactFlowEdges: IReactFlowEdge[]
componentNodes: IComponentNodes
@@ -702,7 +628,8 @@ type SeqAgentsGraphParams = {
const compileSeqAgentsGraph = async (params: SeqAgentsGraphParams) => {
const {
depthQueue,
chatflow,
agentflow,
appDataSource,
reactFlowNodes,
reactFlowEdges,
componentNodes,
@@ -717,8 +644,6 @@ const compileSeqAgentsGraph = async (params: SeqAgentsGraphParams) => {
let question = params.question
const appServer = getRunningExpressApp()
let channels: ISeqAgentsState = {
messages: {
value: (x: BaseMessage[], y: BaseMessage[]) => x.concat(y),
@@ -761,8 +686,8 @@ const compileSeqAgentsGraph = async (params: SeqAgentsGraphParams) => {
let interruptToolNodeNames = []
/*** Get API Config ***/
const availableVariables = await appServer.AppDataSource.getRepository(Variable).find()
const { nodeOverrides, variableOverrides, apiOverrideStatus } = getAPIOverrideConfig(chatflow)
const availableVariables = await appDataSource.getRepository(Variable).find()
const { nodeOverrides, variableOverrides, apiOverrideStatus } = getAPIOverrideConfig(agentflow)
const initiateNode = async (node: IReactFlowNode) => {
const nodeInstanceFilePath = componentNodes[node.data.name].filePath as string
@@ -773,7 +698,6 @@ const compileSeqAgentsGraph = async (params: SeqAgentsGraphParams) => {
if (overrideConfig && apiOverrideStatus)
flowNodeData = replaceInputsWithConfig(flowNodeData, overrideConfig, nodeOverrides, variableOverrides)
flowNodeData = await resolveVariables(
appServer.AppDataSource,
flowNodeData,
reactFlowNodes,
question,
@@ -1059,14 +983,8 @@ const compileSeqAgentsGraph = async (params: SeqAgentsGraphParams) => {
routeMessage
)
}
/*** Add agentflow to pool ***/
;(seqGraph as any).signal = options.signal
appServer.chatflowPool.add(
`${chatflow.id}_${options.chatId}`,
seqGraph as any,
reactFlowNodes.filter((node) => startAgentNodes.map((nd) => nd.id).includes(node.id)),
overrideConfig
)
/*** Get memory ***/
const startNode = reactFlowNodes.find((node: IReactFlowNode) => node.data.name === 'seqStart')

File diff suppressed because it is too large


@@ -20,6 +20,8 @@ export const WHITELIST_URLS = [
'/api/v1/metrics'
]
export const OMIT_QUEUE_JOB_DATA = ['componentNodes', 'appDataSource', 'sseStreamer', 'telemetry', 'cachePool']
export const INPUT_PARAMS_TYPE = [
'asyncOptions',
'options',
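In queue mode, BullMQ persists job data as JSON in Redis, so the fields named in `OMIT_QUEUE_JOB_DATA` above (live handles such as the data source, SSE streamer, telemetry client, and cache pool) have to be stripped before a job is enqueued, as `upsertVector` later does with lodash's `omit`. A minimal sketch of that stripping, using a hypothetical `omitKeys` helper in place of lodash:

```typescript
// Sketch: drop non-JSON-serializable fields before enqueueing a BullMQ job.
const OMIT_QUEUE_JOB_DATA = ['componentNodes', 'appDataSource', 'sseStreamer', 'telemetry', 'cachePool']

function omitKeys(obj: Record<string, unknown>, keys: string[]): Record<string, unknown> {
    const clone: Record<string, unknown> = { ...obj }
    for (const key of keys) delete clone[key]
    return clone
}

// Trimmed-down stand-in for the execute payload built by upsertVector
const executeData: Record<string, unknown> = {
    chatflow: { id: 'abc' },
    question: 'hello',
    appDataSource: { manager: {} }, // live DB handle, not JSON-serializable
    sseStreamer: { stream: () => {} } // per-request streamer, not JSON-serializable
}

const jobData = omitKeys(executeData, OMIT_QUEUE_JOB_DATA)
// jobData now carries only the JSON-safe fields (chatflow, question)
```

The worker process rebuilds the live handles from its own pools when it picks the job up, which is why only plain data needs to cross Redis.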


@@ -560,7 +560,6 @@ export const buildFlow = async ({
if (isUpsert) upsertHistory['flowData'] = saveUpsertFlowData(flowNodeData, upsertHistory)
const reactFlowNodeData: INodeData = await resolveVariables(
appDataSource,
flowNodeData,
flowNodes,
question,
@@ -762,10 +761,9 @@ export const clearSessionMemory = async (
}
const getGlobalVariable = async (
appDataSource: DataSource,
overrideConfig?: ICommonObject,
availableVariables: IVariable[] = [],
variableOverrides?: ICommonObject[]
variableOverrides: ICommonObject[] = []
) => {
// override variables defined in overrideConfig
// nodeData.inputs.vars is an Object, check each property and override the variable
@@ -826,13 +824,12 @@ const getGlobalVariable = async (
* @returns {string}
*/
export const getVariableValue = async (
appDataSource: DataSource,
paramValue: string | object,
reactFlowNodes: IReactFlowNode[],
question: string,
chatHistory: IMessage[],
isAcceptVariable = false,
flowData?: ICommonObject,
flowConfig?: ICommonObject,
uploadedFilesContent?: string,
availableVariables: IVariable[] = [],
variableOverrides: ICommonObject[] = []
@@ -877,7 +874,7 @@
}
if (variableFullPath.startsWith('$vars.')) {
const vars = await getGlobalVariable(appDataSource, flowData, availableVariables, variableOverrides)
const vars = await getGlobalVariable(flowConfig, availableVariables, variableOverrides)
const variableValue = get(vars, variableFullPath.replace('$vars.', ''))
if (variableValue != null) {
variableDict[`{{${variableFullPath}}}`] = variableValue
@@ -885,8 +882,8 @@
}
}
if (variableFullPath.startsWith('$flow.') && flowData) {
const variableValue = get(flowData, variableFullPath.replace('$flow.', ''))
if (variableFullPath.startsWith('$flow.') && flowConfig) {
const variableValue = get(flowConfig, variableFullPath.replace('$flow.', ''))
if (variableValue != null) {
variableDict[`{{${variableFullPath}}}`] = variableValue
returnVal = returnVal.split(`{{${variableFullPath}}}`).join(variableValue)
@@ -980,12 +977,11 @@
* @returns {INodeData}
*/
export const resolveVariables = async (
appDataSource: DataSource,
reactFlowNodeData: INodeData,
reactFlowNodes: IReactFlowNode[],
question: string,
chatHistory: IMessage[],
flowData?: ICommonObject,
flowConfig?: ICommonObject,
uploadedFilesContent?: string,
availableVariables: IVariable[] = [],
variableOverrides: ICommonObject[] = []
@@ -1000,13 +996,12 @@ export const resolveVariables = async (
const resolvedInstances = []
for (const param of paramValue) {
const resolvedInstance = await getVariableValue(
appDataSource,
param,
reactFlowNodes,
question,
chatHistory,
undefined,
flowData,
flowConfig,
uploadedFilesContent,
availableVariables,
variableOverrides
@@ -1017,13 +1012,12 @@
} else {
const isAcceptVariable = reactFlowNodeData.inputParams.find((param) => param.name === key)?.acceptVariable ?? false
const resolvedInstance = await getVariableValue(
appDataSource,
paramValue,
reactFlowNodes,
question,
chatHistory,
isAcceptVariable,
flowData,
flowConfig,
uploadedFilesContent,
availableVariables,
variableOverrides


@@ -1,55 +1,174 @@
import { NextFunction, Request, Response } from 'express'
import { rateLimit, RateLimitRequestHandler } from 'express-rate-limit'
import { IChatFlow } from '../Interface'
import { IChatFlow, MODE } from '../Interface'
import { Mutex } from 'async-mutex'
import { RedisStore } from 'rate-limit-redis'
import Redis from 'ioredis'
import { QueueEvents, QueueEventsListener, QueueEventsProducer } from 'bullmq'
let rateLimiters: Record<string, RateLimitRequestHandler> = {}
const rateLimiterMutex = new Mutex()
interface CustomListener extends QueueEventsListener {
updateRateLimiter: (args: { limitDuration: number; limitMax: number; limitMsg: string; id: string }) => void
}
async function addRateLimiter(id: string, duration: number, limit: number, message: string) {
const release = await rateLimiterMutex.acquire()
try {
rateLimiters[id] = rateLimit({
windowMs: duration * 1000,
max: limit,
handler: (_, res) => {
res.status(429).send(message)
const QUEUE_NAME = 'ratelimit'
const QUEUE_EVENT_NAME = 'updateRateLimiter'
export class RateLimiterManager {
private rateLimiters: Record<string, RateLimitRequestHandler> = {}
private rateLimiterMutex: Mutex = new Mutex()
private redisClient: Redis
private static instance: RateLimiterManager
private queueEventsProducer: QueueEventsProducer
private queueEvents: QueueEvents
constructor() {
if (process.env.MODE === MODE.QUEUE) {
if (process.env.REDIS_URL) {
this.redisClient = new Redis(process.env.REDIS_URL)
} else {
this.redisClient = new Redis({
host: process.env.REDIS_HOST || 'localhost',
port: parseInt(process.env.REDIS_PORT || '6379'),
username: process.env.REDIS_USERNAME || undefined,
password: process.env.REDIS_PASSWORD || undefined,
tls:
process.env.REDIS_TLS === 'true'
? {
cert: process.env.REDIS_CERT ? Buffer.from(process.env.REDIS_CERT, 'base64') : undefined,
key: process.env.REDIS_KEY ? Buffer.from(process.env.REDIS_KEY, 'base64') : undefined,
ca: process.env.REDIS_CA ? Buffer.from(process.env.REDIS_CA, 'base64') : undefined
}
: undefined
})
}
})
} finally {
release()
this.queueEventsProducer = new QueueEventsProducer(QUEUE_NAME, { connection: this.getConnection() })
this.queueEvents = new QueueEvents(QUEUE_NAME, { connection: this.getConnection() })
}
}
getConnection() {
let tlsOpts = undefined
if (process.env.REDIS_URL && process.env.REDIS_URL.startsWith('rediss://')) {
tlsOpts = {
rejectUnauthorized: false
}
} else if (process.env.REDIS_TLS === 'true') {
tlsOpts = {
cert: process.env.REDIS_CERT ? Buffer.from(process.env.REDIS_CERT, 'base64') : undefined,
key: process.env.REDIS_KEY ? Buffer.from(process.env.REDIS_KEY, 'base64') : undefined,
ca: process.env.REDIS_CA ? Buffer.from(process.env.REDIS_CA, 'base64') : undefined
}
}
return {
url: process.env.REDIS_URL || undefined,
host: process.env.REDIS_HOST || 'localhost',
port: parseInt(process.env.REDIS_PORT || '6379'),
username: process.env.REDIS_USERNAME || undefined,
password: process.env.REDIS_PASSWORD || undefined,
tls: tlsOpts
}
}
public static getInstance(): RateLimiterManager {
if (!RateLimiterManager.instance) {
RateLimiterManager.instance = new RateLimiterManager()
}
return RateLimiterManager.instance
}
public async addRateLimiter(id: string, duration: number, limit: number, message: string): Promise<void> {
const release = await this.rateLimiterMutex.acquire()
try {
if (process.env.MODE === MODE.QUEUE) {
this.rateLimiters[id] = rateLimit({
windowMs: duration * 1000,
max: limit,
standardHeaders: true,
legacyHeaders: false,
message,
store: new RedisStore({
prefix: `rl:${id}`,
// @ts-expect-error - Known issue: the `call` function is not present in @types/ioredis
sendCommand: (...args: string[]) => this.redisClient.call(...args)
})
})
} else {
this.rateLimiters[id] = rateLimit({
windowMs: duration * 1000,
max: limit,
message
})
}
} finally {
release()
}
}
public removeRateLimiter(id: string): void {
if (this.rateLimiters[id]) {
delete this.rateLimiters[id]
}
}
public getRateLimiter(): (req: Request, res: Response, next: NextFunction) => void {
return (req: Request, res: Response, next: NextFunction) => {
const id = req.params.id
if (!this.rateLimiters[id]) return next()
const idRateLimiter = this.rateLimiters[id]
return idRateLimiter(req, res, next)
}
}
public async updateRateLimiter(chatFlow: IChatFlow, isInitialized?: boolean): Promise<void> {
if (!chatFlow.apiConfig) return
const apiConfig = JSON.parse(chatFlow.apiConfig)
const rateLimit: { limitDuration: number; limitMax: number; limitMsg: string; status?: boolean } = apiConfig.rateLimit
if (!rateLimit) return
const { limitDuration, limitMax, limitMsg, status } = rateLimit
if (!isInitialized && process.env.MODE === MODE.QUEUE && this.queueEventsProducer) {
await this.queueEventsProducer.publishEvent({
eventName: QUEUE_EVENT_NAME,
limitDuration,
limitMax,
limitMsg,
id: chatFlow.id
})
} else {
if (status === false) {
this.removeRateLimiter(chatFlow.id)
} else if (limitMax && limitDuration && limitMsg) {
await this.addRateLimiter(chatFlow.id, limitDuration, limitMax, limitMsg)
}
}
}
public async initializeRateLimiters(chatflows: IChatFlow[]): Promise<void> {
await Promise.all(
chatflows.map(async (chatFlow) => {
await this.updateRateLimiter(chatFlow, true)
})
)
if (process.env.MODE === MODE.QUEUE && this.queueEvents) {
this.queueEvents.on<CustomListener>(
QUEUE_EVENT_NAME,
async ({
limitDuration,
limitMax,
limitMsg,
id
}: {
limitDuration: number
limitMax: number
limitMsg: string
id: string
}) => {
await this.addRateLimiter(id, limitDuration, limitMax, limitMsg)
}
)
}
}
}
function removeRateLimit(id: string) {
if (rateLimiters[id]) {
delete rateLimiters[id]
}
}
export function getRateLimiter(req: Request, res: Response, next: NextFunction) {
const id = req.params.id
if (!rateLimiters[id]) return next()
const idRateLimiter = rateLimiters[id]
return idRateLimiter(req, res, next)
}
export async function updateRateLimiter(chatFlow: IChatFlow) {
if (!chatFlow.apiConfig) return
const apiConfig = JSON.parse(chatFlow.apiConfig)
const rateLimit: { limitDuration: number; limitMax: number; limitMsg: string; status?: boolean } = apiConfig.rateLimit
if (!rateLimit) return
const { limitDuration, limitMax, limitMsg, status } = rateLimit
if (status === false) removeRateLimit(chatFlow.id)
else if (limitMax && limitDuration && limitMsg) await addRateLimiter(chatFlow.id, limitDuration, limitMax, limitMsg)
}
export async function initializeRateLimiter(chatFlowPool: IChatFlow[]) {
await Promise.all(
chatFlowPool.map(async (chatFlow) => {
await updateRateLimiter(chatFlow)
})
)
}
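The `getConnection` helper in the new `RateLimiterManager` derives Redis TLS options from environment variables with a specific precedence: a `rediss://` URL forces TLS with `rejectUnauthorized: false`, otherwise `REDIS_TLS=true` enables certificate-based TLS from base64-encoded env vars. A condensed, hypothetical sketch of that precedence as a pure function:

```typescript
// Sketch of the TLS-option resolution used when connecting to Redis.
// Assumes certs are supplied base64-encoded, as in the diff above.
interface TlsOptions {
    rejectUnauthorized?: boolean
    cert?: Buffer
    key?: Buffer
    ca?: Buffer
}

function resolveTls(env: Record<string, string | undefined>): TlsOptions | undefined {
    // A rediss:// URL wins over everything else
    if (env.REDIS_URL?.startsWith('rediss://')) {
        return { rejectUnauthorized: false }
    }
    // Otherwise opt in explicitly via REDIS_TLS
    if (env.REDIS_TLS === 'true') {
        return {
            cert: env.REDIS_CERT ? Buffer.from(env.REDIS_CERT, 'base64') : undefined,
            key: env.REDIS_KEY ? Buffer.from(env.REDIS_KEY, 'base64') : undefined,
            ca: env.REDIS_CA ? Buffer.from(env.REDIS_CA, 'base64') : undefined
        }
    }
    return undefined
}

const tls = resolveTls({ REDIS_URL: 'rediss://example:6380', REDIS_TLS: 'true' })
// tls -> { rejectUnauthorized: false }
```

Centralizing this keeps the BullMQ `QueueEventsProducer`/`QueueEvents` pair and the `rate-limit-redis` store on identical connection settings, which matters because the rate-limit update events must reach every server instance.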


@@ -23,7 +23,7 @@ import {
getAPIOverrideConfig
} from '../utils'
import { validateChatflowAPIKey } from './validateKey'
import { IncomingInput, INodeDirectedGraph, IReactFlowObject, ChatType } from '../Interface'
import { IncomingInput, INodeDirectedGraph, IReactFlowObject, ChatType, IExecuteFlowParams, MODE } from '../Interface'
import { ChatFlow } from '../database/entities/ChatFlow'
import { getRunningExpressApp } from '../utils/getRunningExpressApp'
import { UpsertHistory } from '../database/entities/UpsertHistory'
@@ -33,17 +33,182 @@ import { getErrorMessage } from '../errors/utils'
import { v4 as uuidv4 } from 'uuid'
import { FLOWISE_COUNTER_STATUS, FLOWISE_METRIC_COUNTERS } from '../Interface.Metrics'
import { Variable } from '../database/entities/Variable'
import { OMIT_QUEUE_JOB_DATA } from './constants'
export const executeUpsert = async ({
componentNodes,
incomingInput,
chatflow,
chatId,
appDataSource,
telemetry,
cachePool,
isInternal,
files
}: IExecuteFlowParams) => {
const question = incomingInput.question
const overrideConfig = incomingInput.overrideConfig ?? {}
let stopNodeId = incomingInput?.stopNodeId ?? ''
const chatHistory: IMessage[] = []
const isUpsert = true
const chatflowid = chatflow.id
const apiMessageId = uuidv4()
if (files?.length) {
const overrideConfig: ICommonObject = { ...incomingInput }
for (const file of files) {
const fileNames: string[] = []
const fileBuffer = await getFileFromUpload(file.path ?? file.key)
// Address file name with special characters: https://github.com/expressjs/multer/issues/1104
file.originalname = Buffer.from(file.originalname, 'latin1').toString('utf8')
const storagePath = await addArrayFilesToStorage(file.mimetype, fileBuffer, file.originalname, fileNames, chatflowid)
const fileInputFieldFromMimeType = mapMimeTypeToInputField(file.mimetype)
const fileExtension = path.extname(file.originalname)
const fileInputFieldFromExt = mapExtToInputField(fileExtension)
let fileInputField = 'txtFile'
if (fileInputFieldFromExt !== 'txtFile') {
fileInputField = fileInputFieldFromExt
} else if (fileInputFieldFromMimeType !== 'txtFile') {
fileInputField = fileInputFieldFromMimeType
}
if (overrideConfig[fileInputField]) {
const existingFileInputField = overrideConfig[fileInputField].replace('FILE-STORAGE::', '')
const existingFileInputFieldArray = JSON.parse(existingFileInputField)
const newFileInputField = storagePath.replace('FILE-STORAGE::', '')
const newFileInputFieldArray = JSON.parse(newFileInputField)
const updatedFieldArray = existingFileInputFieldArray.concat(newFileInputFieldArray)
overrideConfig[fileInputField] = `FILE-STORAGE::${JSON.stringify(updatedFieldArray)}`
} else {
overrideConfig[fileInputField] = storagePath
}
await removeSpecificFileFromUpload(file.path ?? file.key)
}
if (overrideConfig.vars && typeof overrideConfig.vars === 'string') {
overrideConfig.vars = JSON.parse(overrideConfig.vars)
}
incomingInput = {
...incomingInput,
question: '',
overrideConfig,
stopNodeId,
chatId
}
}
/*** Get chatflows and prepare data ***/
const flowData = chatflow.flowData
const parsedFlowData: IReactFlowObject = JSON.parse(flowData)
const nodes = parsedFlowData.nodes
const edges = parsedFlowData.edges
/*** Get session ID ***/
const memoryNode = findMemoryNode(nodes, edges)
let sessionId = getMemorySessionId(memoryNode, incomingInput, chatId, isInternal)
/*** Find the 1 final vector store will be upserted ***/
const vsNodes = nodes.filter((node) => node.data.category === 'Vector Stores')
const vsNodesWithFileUpload = vsNodes.filter((node) => node.data.inputs?.fileUpload)
if (vsNodesWithFileUpload.length > 1) {
throw new InternalFlowiseError(StatusCodes.INTERNAL_SERVER_ERROR, 'Multiple vector store nodes with fileUpload enabled')
} else if (vsNodesWithFileUpload.length === 1 && !stopNodeId) {
stopNodeId = vsNodesWithFileUpload[0].data.id
}
/*** Check if multiple vector store nodes exist, and if stopNodeId is specified ***/
if (vsNodes.length > 1 && !stopNodeId) {
throw new InternalFlowiseError(
StatusCodes.INTERNAL_SERVER_ERROR,
'There are multiple vector nodes, please provide stopNodeId in body request'
)
} else if (vsNodes.length === 1 && !stopNodeId) {
stopNodeId = vsNodes[0].data.id
} else if (!vsNodes.length && !stopNodeId) {
throw new InternalFlowiseError(StatusCodes.NOT_FOUND, 'No vector node found')
}
/*** Get Starting Nodes with Reversed Graph ***/
const { graph } = constructGraphs(nodes, edges, { isReversed: true })
const nodeIds = getAllConnectedNodes(graph, stopNodeId)
const filteredGraph: INodeDirectedGraph = {}
for (const key of nodeIds) {
if (Object.prototype.hasOwnProperty.call(graph, key)) {
filteredGraph[key] = graph[key]
}
}
const { startingNodeIds, depthQueue } = getStartingNodes(filteredGraph, stopNodeId)
/*** Get API Config ***/
const availableVariables = await appDataSource.getRepository(Variable).find()
const { nodeOverrides, variableOverrides, apiOverrideStatus } = getAPIOverrideConfig(chatflow)
const upsertedResult = await buildFlow({
startingNodeIds,
reactFlowNodes: nodes,
reactFlowEdges: edges,
apiMessageId,
graph: filteredGraph,
depthQueue,
componentNodes,
question,
chatHistory,
chatId,
sessionId,
chatflowid,
appDataSource,
overrideConfig,
apiOverrideStatus,
nodeOverrides,
availableVariables,
variableOverrides,
cachePool,
isUpsert,
stopNodeId
})
// Save to DB
if (upsertedResult['flowData'] && upsertedResult['result']) {
const result = cloneDeep(upsertedResult)
result['flowData'] = JSON.stringify(result['flowData'])
result['result'] = JSON.stringify(omit(result['result'], ['totalKeys', 'addedDocs']))
result.chatflowid = chatflowid
const newUpsertHistory = new UpsertHistory()
Object.assign(newUpsertHistory, result)
const upsertHistory = appDataSource.getRepository(UpsertHistory).create(newUpsertHistory)
await appDataSource.getRepository(UpsertHistory).save(upsertHistory)
}
await telemetry.sendTelemetry('vector_upserted', {
version: await getAppVersion(),
chatlowId: chatflowid,
type: isInternal ? ChatType.INTERNAL : ChatType.EXTERNAL,
flowGraph: getTelemetryFlowObj(nodes, edges),
stopNodeId
})
return upsertedResult['result'] ?? { result: 'Successfully Upserted' }
}
/**
* Upsert documents
* @param {Request} req
* @param {boolean} isInternal
*/
export const upsertVector = async (req: Request, isInternal: boolean = false) => {
const appServer = getRunningExpressApp()
try {
const appServer = getRunningExpressApp()
const chatflowid = req.params.id
let incomingInput: IncomingInput = req.body
// Check if chatflow exists
const chatflow = await appServer.AppDataSource.getRepository(ChatFlow).findOneBy({
id: chatflowid
})
@@ -51,6 +216,12 @@ export const upsertVector = async (req: Request, isInternal: boolean = false) =>
throw new InternalFlowiseError(StatusCodes.NOT_FOUND, `Chatflow ${chatflowid} not found`)
}
const httpProtocol = req.get('x-forwarded-proto') || req.protocol
const baseURL = `${httpProtocol}://${req.get('host')}`
const incomingInput: IncomingInput = req.body
const chatId = incomingInput.chatId ?? incomingInput.overrideConfig?.sessionId ?? uuidv4()
const files = (req.files as Express.Multer.File[]) || []
if (!isInternal) {
const isKeyValidated = await validateChatflowAPIKey(req, chatflow)
if (!isKeyValidated) {
@@ -58,168 +229,50 @@ export const upsertVector = async (req: Request, isInternal: boolean = false) =>
}
}
const files = (req.files as Express.Multer.File[]) || []
if (files.length) {
const overrideConfig: ICommonObject = { ...req.body }
for (const file of files) {
const fileNames: string[] = []
const fileBuffer = await getFileFromUpload(file.path ?? file.key)
// Address file name with special characters: https://github.com/expressjs/multer/issues/1104
file.originalname = Buffer.from(file.originalname, 'latin1').toString('utf8')
const storagePath = await addArrayFilesToStorage(file.mimetype, fileBuffer, file.originalname, fileNames, chatflowid)
const fileInputFieldFromMimeType = mapMimeTypeToInputField(file.mimetype)
const fileExtension = path.extname(file.originalname)
const fileInputFieldFromExt = mapExtToInputField(fileExtension)
let fileInputField = 'txtFile'
if (fileInputFieldFromExt !== 'txtFile') {
fileInputField = fileInputFieldFromExt
} else if (fileInputFieldFromMimeType !== 'txtFile') {
fileInputField = fileInputFieldFromMimeType
}
if (overrideConfig[fileInputField]) {
const existingFileInputField = overrideConfig[fileInputField].replace('FILE-STORAGE::', '')
const existingFileInputFieldArray = JSON.parse(existingFileInputField)
const newFileInputField = storagePath.replace('FILE-STORAGE::', '')
const newFileInputFieldArray = JSON.parse(newFileInputField)
const updatedFieldArray = existingFileInputFieldArray.concat(newFileInputFieldArray)
overrideConfig[fileInputField] = `FILE-STORAGE::${JSON.stringify(updatedFieldArray)}`
} else {
overrideConfig[fileInputField] = storagePath
}
await removeSpecificFileFromUpload(file.path ?? file.key)
}
if (overrideConfig.vars && typeof overrideConfig.vars === 'string') {
overrideConfig.vars = JSON.parse(overrideConfig.vars)
}
incomingInput = {
question: req.body.question ?? 'hello',
overrideConfig,
stopNodeId: req.body.stopNodeId
}
if (req.body.chatId) {
incomingInput.chatId = req.body.chatId
}
}
/*** Get chatflows and prepare data ***/
const flowData = chatflow.flowData
const parsedFlowData: IReactFlowObject = JSON.parse(flowData)
const nodes = parsedFlowData.nodes
const edges = parsedFlowData.edges
const apiMessageId = req.body.apiMessageId ?? uuidv4()
let stopNodeId = incomingInput?.stopNodeId ?? ''
let chatHistory: IMessage[] = []
let chatId = incomingInput.chatId ?? ''
let isUpsert = true
// Get session ID
const memoryNode = findMemoryNode(nodes, edges)
let sessionId = getMemorySessionId(memoryNode, incomingInput, chatId, isInternal)
const vsNodes = nodes.filter((node) => node.data.category === 'Vector Stores')
// Get StopNodeId for vector store which has fileUpload
const vsNodesWithFileUpload = vsNodes.filter((node) => node.data.inputs?.fileUpload)
if (vsNodesWithFileUpload.length > 1) {
throw new InternalFlowiseError(StatusCodes.INTERNAL_SERVER_ERROR, 'Multiple vector store nodes with fileUpload enabled')
} else if (vsNodesWithFileUpload.length === 1 && !stopNodeId) {
stopNodeId = vsNodesWithFileUpload[0].data.id
}
// Check if multiple vector store nodes exist, and if stopNodeId is specified
if (vsNodes.length > 1 && !stopNodeId) {
throw new InternalFlowiseError(
StatusCodes.INTERNAL_SERVER_ERROR,
'There are multiple vector nodes, please provide stopNodeId in body request'
)
} else if (vsNodes.length === 1 && !stopNodeId) {
stopNodeId = vsNodes[0].data.id
} else if (!vsNodes.length && !stopNodeId) {
throw new InternalFlowiseError(StatusCodes.NOT_FOUND, 'No vector node found')
}
const { graph } = constructGraphs(nodes, edges, { isReversed: true })
const nodeIds = getAllConnectedNodes(graph, stopNodeId)
const filteredGraph: INodeDirectedGraph = {}
for (const key of nodeIds) {
if (Object.prototype.hasOwnProperty.call(graph, key)) {
filteredGraph[key] = graph[key]
}
}
const { startingNodeIds, depthQueue } = getStartingNodes(filteredGraph, stopNodeId)
/*** Get API Config ***/
const availableVariables = await appServer.AppDataSource.getRepository(Variable).find()
const { nodeOverrides, variableOverrides, apiOverrideStatus } = getAPIOverrideConfig(chatflow)
const upsertedResult = await buildFlow({
startingNodeIds,
reactFlowNodes: nodes,
reactFlowEdges: edges,
apiMessageId,
graph: filteredGraph,
depthQueue,
const executeData: IExecuteFlowParams = {
componentNodes: appServer.nodesPool.componentNodes,
question: incomingInput.question,
chatHistory,
incomingInput,
chatflow,
chatId,
sessionId: sessionId ?? '',
chatflowid,
appDataSource: appServer.AppDataSource,
overrideConfig: incomingInput?.overrideConfig,
apiOverrideStatus,
nodeOverrides,
availableVariables,
variableOverrides,
telemetry: appServer.telemetry,
cachePool: appServer.cachePool,
isUpsert,
stopNodeId
})
const startingNodes = nodes.filter((nd) => startingNodeIds.includes(nd.data.id))
await appServer.chatflowPool.add(chatflowid, undefined, startingNodes, incomingInput?.overrideConfig, chatId)
// Save to DB
if (upsertedResult['flowData'] && upsertedResult['result']) {
const result = cloneDeep(upsertedResult)
result['flowData'] = JSON.stringify(result['flowData'])
result['result'] = JSON.stringify(omit(result['result'], ['totalKeys', 'addedDocs']))
result.chatflowid = chatflowid
const newUpsertHistory = new UpsertHistory()
Object.assign(newUpsertHistory, result)
const upsertHistory = appServer.AppDataSource.getRepository(UpsertHistory).create(newUpsertHistory)
await appServer.AppDataSource.getRepository(UpsertHistory).save(upsertHistory)
sseStreamer: appServer.sseStreamer,
baseURL,
isInternal,
files,
isUpsert: true
}
await appServer.telemetry.sendTelemetry('vector_upserted', {
version: await getAppVersion(),
chatlowId: chatflowid,
type: isInternal ? ChatType.INTERNAL : ChatType.EXTERNAL,
flowGraph: getTelemetryFlowObj(nodes, edges),
stopNodeId
})
appServer.metricsProvider?.incrementCounter(FLOWISE_METRIC_COUNTERS.VECTORSTORE_UPSERT, { status: FLOWISE_COUNTER_STATUS.SUCCESS })
if (process.env.MODE === MODE.QUEUE) {
const upsertQueue = appServer.queueManager.getQueue('upsert')
return upsertedResult['result'] ?? { result: 'Successfully Upserted' }
const job = await upsertQueue.addJob(omit(executeData, OMIT_QUEUE_JOB_DATA))
logger.debug(`[server]: Job added to queue: ${job.id}`)
const queueEvents = upsertQueue.getQueueEvents()
const result = await job.waitUntilFinished(queueEvents)
if (!result) {
throw new Error('Job execution failed')
}
appServer.metricsProvider?.incrementCounter(FLOWISE_METRIC_COUNTERS.VECTORSTORE_UPSERT, {
status: FLOWISE_COUNTER_STATUS.SUCCESS
})
return result
} else {
const result = await executeUpsert(executeData)
appServer.metricsProvider?.incrementCounter(FLOWISE_METRIC_COUNTERS.VECTORSTORE_UPSERT, {
status: FLOWISE_COUNTER_STATUS.SUCCESS
})
return result
}
} catch (e) {
logger.error('[server]: Error:', e)
appServer.metricsProvider?.incrementCounter(FLOWISE_METRIC_COUNTERS.VECTORSTORE_UPSERT, { status: FLOWISE_COUNTER_STATUS.FAILURE })
if (e instanceof InternalFlowiseError && e.statusCode === StatusCodes.UNAUTHORIZED) {
throw e
} else {


@@ -56,7 +56,6 @@
"rehype-raw": "^7.0.0",
"remark-gfm": "^3.0.1",
"remark-math": "^5.1.1",
"socket.io-client": "^4.6.1",
"uuid": "^9.0.1",
"yup": "^0.32.9"
},


@@ -14,10 +14,6 @@ export default defineConfig(async ({ mode }) => {
'^/api(/|$).*': {
target: `http://${serverHost}:${serverPort}`,
changeOrigin: true
},
'/socket.io': {
target: `http://${serverHost}:${serverPort}`,
changeOrigin: true
}
}
}

File diff suppressed because one or more lines are too long