+## 📚 Table of Contents
+
+- [⚡ Quick Start](#-quick-start)
+- [🐳 Docker](#-docker)
+- [👨‍💻 Developers](#-developers)
+- [🌱 Env Variables](#-env-variables)
+- [📖 Documentation](#-documentation)
+- [🌐 Self Host](#-self-host)
+- [☁️ Flowise Cloud](#️-flowise-cloud)
+- [🙋 Support](#-support)
+- [🙌 Contributing](#-contributing)
+- [📄 License](#-license)
+
## ⚡ Quick Start
Download and Install [NodeJS](https://nodejs.org/en/download) >= 18.15.0
@@ -31,12 +48,6 @@ Download and Install [NodeJS](https://nodejs.org/en/download) >= 18.15.0
npx flowise start
```
- With username & password
-
- ```bash
- npx flowise start --FLOWISE_USERNAME=user --FLOWISE_PASSWORD=1234
- ```
-
3. Open [http://localhost:3000](http://localhost:3000)
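Flowise listens on port 3000 by default. If that port is taken, you can point it elsewhere via the `PORT` environment variable (see `.env.example`) — a minimal sketch, assuming your shell passes the variable through to the CLI:

```bash
# Start Flowise on an alternative port (PORT is read from the environment)
PORT=3001 npx flowise start
```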
## 🐳 Docker
@@ -53,9 +64,11 @@ Download and Install [NodeJS](https://nodejs.org/en/download) >= 18.15.0
### Docker Image
1. Build the image locally:
+
```bash
docker build --no-cache -t flowise .
```
+
2. Run image:
```bash
@@ -63,6 +76,7 @@ Download and Install [NodeJS](https://nodejs.org/en/download) >= 18.15.0
```
3. Stop image:
+
```bash
docker stop flowise
```
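To keep flows, credentials, and logs across container restarts, you can mount the Flowise data directory as a volume — a sketch assuming the image stores its data under `/root/.flowise`, as the compose files in this repo do:

```bash
# Run the locally built image with persisted data
docker run -d --name flowise -p 3000:3000 \
    -v ~/.flowise:/root/.flowise \
    flowise
```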
@@ -85,13 +99,13 @@ Flowise has 3 different modules in a single mono repository.
### Setup
-1. Clone the repository
+1. Clone the repository:
```bash
git clone https://github.com/FlowiseAI/Flowise.git
```
-2. Go into repository folder
+2. Go into repository folder:
```bash
cd Flowise
@@ -111,10 +125,24 @@ Flowise has 3 different modules in a single mono repository.
Exit code 134 (JavaScript heap out of memory)
- If you get this error when running the above `build` script, try increasing the Node.js heap size and run the script again:
+ If you get this error when running the above `build` script, try increasing the Node.js heap size and run the script again:
- export NODE_OPTIONS="--max-old-space-size=4096"
- pnpm build
+ ```bash
+ # macOS / Linux / Git Bash
+ export NODE_OPTIONS="--max-old-space-size=4096"
+
+ # Windows PowerShell
+ $env:NODE_OPTIONS="--max-old-space-size=4096"
+
+ # Windows CMD
+ set NODE_OPTIONS=--max-old-space-size=4096
+ ```
+
+ Then run:
+
+ ```bash
+ pnpm build
+ ```
@@ -130,7 +158,7 @@ Flowise has 3 different modules in a single mono repository.
- Create `.env` file and specify the `VITE_PORT` (refer to `.env.example`) in `packages/ui`
- Create `.env` file and specify the `PORT` (refer to `.env.example`) in `packages/server`
- - Run
+ - Run:
```bash
pnpm dev
@@ -138,22 +166,13 @@ Flowise has 3 different modules in a single mono repository.
Any code changes will reload the app automatically on [http://localhost:8080](http://localhost:8080)
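For reference, a typical pair of dev `.env` files might look like the following (values are illustrative; refer to each package's `.env.example`):

```bash
# packages/ui/.env
VITE_PORT=8080

# packages/server/.env
PORT=3000
```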
-## 🔒 Authentication
-
-To enable app level authentication, add `FLOWISE_USERNAME` and `FLOWISE_PASSWORD` to the `.env` file in `packages/server`:
-
-```
-FLOWISE_USERNAME=user
-FLOWISE_PASSWORD=1234
-```
-
## 🌱 Env Variables
-Flowise support different environment variables to configure your instance. You can specify the following variables in the `.env` file inside `packages/server` folder. Read [more](https://github.com/FlowiseAI/Flowise/blob/main/CONTRIBUTING.md#-env-variables)
+Flowise supports different environment variables to configure your instance. You can specify the following variables in the `.env` file inside the `packages/server` folder. Read [more](https://github.com/FlowiseAI/Flowise/blob/main/CONTRIBUTING.md#-env-variables).
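For example, a minimal `packages/server/.env` could pin the port and data locations (values illustrative, mirroring `docker/.env.example`):

```bash
PORT=3000
DATABASE_PATH=/root/.flowise
SECRETKEY_PATH=/root/.flowise
LOG_PATH=/root/.flowise/logs
BLOB_STORAGE_PATH=/root/.flowise/storage
```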
## 📖 Documentation
-[Flowise Docs](https://docs.flowiseai.com/)
+You can view the Flowise Docs [here](https://docs.flowiseai.com/).
## 🌐 Self Host
@@ -171,6 +190,10 @@ Deploy Flowise self-hosted in your existing infrastructure, we support various [
[](https://railway.app/template/pn4G8S?referralCode=WVNPD9)
+ - [Northflank](https://northflank.com/stacks/deploy-flowiseai)
+
+ [](https://northflank.com/stacks/deploy-flowiseai)
+
- [Render](https://docs.flowiseai.com/configuration/deployment/render)
[](https://docs.flowiseai.com/configuration/deployment/render)
@@ -195,11 +218,11 @@ Deploy Flowise self-hosted in your existing infrastructure, we support various [
## ☁️ Flowise Cloud
-[Get Started with Flowise Cloud](https://flowiseai.com/)
+Get Started with [Flowise Cloud](https://flowiseai.com/).
## 🙋 Support
-Feel free to ask any questions, raise problems, and request new features in [discussion](https://github.com/FlowiseAI/Flowise/discussions)
+Feel free to ask any questions, raise problems, and request new features in [Discussions](https://github.com/FlowiseAI/Flowise/discussions).
## 🙌 Contributing
@@ -207,9 +230,10 @@ Thanks go to these awesome contributors
-
+
+
+See [Contributing Guide](CONTRIBUTING.md). Reach out to us on [Discord](https://discord.gg/jbaHfsRVBW) if you have any questions or issues.
-See [contributing guide](CONTRIBUTING.md). Reach out to us at [Discord](https://discord.gg/jbaHfsRVBW) if you have any questions or issues.
[](https://star-history.com/#FlowiseAI/Flowise&Date)
## 📄 License
diff --git a/SECURITY.md b/SECURITY.md
index 8d7455de9..6d8a12c2d 100644
--- a/SECURITY.md
+++ b/SECURITY.md
@@ -1,40 +1,38 @@
-### Responsible Disclosure Policy
+### Responsible Disclosure Policy
-At Flowise, we prioritize security and continuously work to safeguard our systems. However, vulnerabilities can still exist. If you identify a security issue, please report it to us so we can address it promptly. Your cooperation helps us better protect our platform and users.
+At Flowise, we prioritize security and continuously work to safeguard our systems. However, vulnerabilities can still exist. If you identify a security issue, please report it to us so we can address it promptly. Your cooperation helps us better protect our platform and users.
-### Vulnerabilities
+### Out-of-Scope Vulnerabilities
-The following types of issues are some of the most common vulnerabilities:
+- Clickjacking on pages without sensitive actions
+- CSRF on unauthenticated/logout/login pages
+- Attacks requiring MITM (Man-in-the-Middle) or physical device access
+- Social engineering attacks
+- Activities that cause service disruption (DoS)
+- Content spoofing and text injection without a valid attack vector
+- Email spoofing
+- Absence of DNSSEC, CAA, CSP headers
+- Missing Secure or HTTP-only flag on non-sensitive cookies
+- Dead links
+- User enumeration
-- Clickjacking on pages without sensitive actions
-- CSRF on unauthenticated/logout/login pages
-- Attacks requiring MITM (Man-in-the-Middle) or physical device access
-- Social engineering attacks
-- Activities that cause service disruption (DoS)
-- Content spoofing and text injection without a valid attack vector
-- Email spoofing
-- Absence of DNSSEC, CAA, CSP headers
-- Missing Secure or HTTP-only flag on non-sensitive cookies
-- Deadlinks
-- User enumeration
+### Reporting Guidelines
-### Reporting Guidelines
+- Submit your findings to https://github.com/FlowiseAI/Flowise/security
+- Provide clear details to help us reproduce and fix the issue quickly.
-- Submit your findings to https://github.com/FlowiseAI/Flowise/security
-- Provide clear details to help us reproduce and fix the issue quickly.
+### Disclosure Guidelines
-### Disclosure Guidelines
+- Do not publicly disclose vulnerabilities until we have assessed, resolved, and notified affected users.
+- If you plan to present your research (e.g., at a conference or in a blog), share a draft with us at least **30 days in advance** for review.
+- Avoid including:
+ - Data from any Flowise customer projects
+ - Flowise user/customer information
+ - Details about Flowise employees, contractors, or partners
-- Do not publicly disclose vulnerabilities until we have assessed, resolved, and notified affected users.
-- If you plan to present your research (e.g., at a conference or in a blog), share a draft with us at least **30 days in advance** for review.
-- Avoid including:
- - Data from any Flowise customer projects
- - Flowise user/customer information
- - Details about Flowise employees, contractors, or partners
+### Response to Reports
-### Response to Reports
+- We will acknowledge your report within **5 business days** and provide an estimated resolution timeline.
+- Your report will be kept **confidential**, and your details will not be shared without your consent.
-- We will acknowledge your report within **5 business days** and provide an estimated resolution timeline.
-- Your report will be kept **confidential**, and your details will not be shared without your consent.
-
-We appreciate your efforts in helping us maintain a secure platform and look forward to working together to resolve any issues responsibly.
+We appreciate your efforts in helping us maintain a secure platform and look forward to working together to resolve any issues responsibly.
diff --git a/docker/.env.example b/docker/.env.example
index 56ac56a80..2240edeb8 100644
--- a/docker/.env.example
+++ b/docker/.env.example
@@ -1,16 +1,12 @@
PORT=3000
+
+# APIKEY_PATH=/your_apikey_path/.flowise # (will be deprecated by end of 2025)
+
+############################################################################################################
+############################################## DATABASE ####################################################
+############################################################################################################
+
DATABASE_PATH=/root/.flowise
-APIKEY_PATH=/root/.flowise
-SECRETKEY_PATH=/root/.flowise
-LOG_PATH=/root/.flowise/logs
-BLOB_STORAGE_PATH=/root/.flowise/storage
-
-# APIKEY_STORAGE_TYPE=json (json | db)
-
-# NUMBER_OF_PROXIES= 1
-# CORS_ORIGINS=*
-# IFRAME_ORIGINS=*
-
# DATABASE_TYPE=postgres
# DATABASE_PORT=5432
# DATABASE_HOST=""
@@ -18,36 +14,43 @@ BLOB_STORAGE_PATH=/root/.flowise/storage
# DATABASE_USER=root
# DATABASE_PASSWORD=mypassword
# DATABASE_SSL=true
+# DATABASE_REJECT_UNAUTHORIZED=true
# DATABASE_SSL_KEY_BASE64=
+
+############################################################################################################
+############################################## SECRET KEYS #################################################
+############################################################################################################
+
# SECRETKEY_STORAGE_TYPE=local #(local | aws)
-# SECRETKEY_PATH=/your_api_key_path/.flowise
-# FLOWISE_SECRETKEY_OVERWRITE=myencryptionkey
+SECRETKEY_PATH=/root/.flowise
+# FLOWISE_SECRETKEY_OVERWRITE=myencryptionkey # (if you want to overwrite the secret key)
# SECRETKEY_AWS_ACCESS_KEY=
# SECRETKEY_AWS_SECRET_KEY=
# SECRETKEY_AWS_REGION=us-west-2
+# SECRETKEY_AWS_NAME=FlowiseEncryptionKey
-# FLOWISE_USERNAME=user
-# FLOWISE_PASSWORD=1234
-# FLOWISE_SECRETKEY_OVERWRITE=myencryptionkey
-# FLOWISE_FILE_SIZE_LIMIT=50mb
+
+############################################################################################################
+############################################## LOGGING #####################################################
+############################################################################################################
# DEBUG=true
-# LOG_LEVEL=info (error | warn | info | verbose | debug)
+LOG_PATH=/root/.flowise/logs
+# LOG_LEVEL=info #(error | warn | info | verbose | debug)
+# LOG_SANITIZE_BODY_FIELDS=password,pwd,pass,secret,token,apikey,api_key,accesstoken,access_token,refreshtoken,refresh_token,clientsecret,client_secret,privatekey,private_key,secretkey,secret_key,auth,authorization,credential,credentials
+# LOG_SANITIZE_HEADER_FIELDS=authorization,x-api-key,x-auth-token,cookie
# TOOL_FUNCTION_BUILTIN_DEP=crypto,fs
# TOOL_FUNCTION_EXTERNAL_DEP=moment,lodash
+# ALLOW_BUILTIN_DEP=false
-# LANGCHAIN_TRACING_V2=true
-# LANGCHAIN_ENDPOINT=https://api.smith.langchain.com
-# LANGCHAIN_API_KEY=your_api_key
-# LANGCHAIN_PROJECT=your_project
-# Uncomment the following line to enable model list config, load the list of models from your local config file
-# see https://raw.githubusercontent.com/FlowiseAI/Flowise/main/packages/components/models.json for the format
-# MODEL_LIST_CONFIG_JSON=/your_model_list_config_file_path
+############################################################################################################
+############################################## STORAGE #####################################################
+############################################################################################################
# STORAGE_TYPE=local (local | s3 | gcs)
-# BLOB_STORAGE_PATH=/your_storage_path/.flowise/storage
+BLOB_STORAGE_PATH=/root/.flowise/storage
# S3_STORAGE_BUCKET_NAME=flowise
# S3_STORAGE_ACCESS_KEY_ID=
# S3_STORAGE_SECRET_ACCESS_KEY=
@@ -59,12 +62,70 @@ BLOB_STORAGE_PATH=/root/.flowise/storage
# GOOGLE_CLOUD_STORAGE_BUCKET_NAME=
# GOOGLE_CLOUD_UNIFORM_BUCKET_ACCESS=true
-# SHOW_COMMUNITY_NODES=true
-# DISABLED_NODES=bufferMemory,chatOpenAI (comma separated list of node names to disable)
-######################
-# METRICS COLLECTION
-#######################
+############################################################################################################
+############################################## SETTINGS ####################################################
+############################################################################################################
+
+# NUMBER_OF_PROXIES=1
+# CORS_ORIGINS=*
+# IFRAME_ORIGINS=*
+# FLOWISE_FILE_SIZE_LIMIT=50mb
+# SHOW_COMMUNITY_NODES=true
+# DISABLE_FLOWISE_TELEMETRY=true
+# DISABLED_NODES=bufferMemory,chatOpenAI (comma separated list of node names to disable)
+# Uncomment the following line to enable model list config, load the list of models from your local config file
+# see https://raw.githubusercontent.com/FlowiseAI/Flowise/main/packages/components/models.json for the format
+# MODEL_LIST_CONFIG_JSON=/your_model_list_config_file_path
+
+
+############################################################################################################
+############################################ AUTH PARAMETERS ###############################################
+############################################################################################################
+
+# APP_URL=http://localhost:3000
+
+# SMTP_HOST=smtp.host.com
+# SMTP_PORT=465
+# SMTP_USER=smtp_user
+# SMTP_PASSWORD=smtp_password
+# SMTP_SECURE=true
+# ALLOW_UNAUTHORIZED_CERTS=false
+# SENDER_EMAIL=team@example.com
+
+JWT_AUTH_TOKEN_SECRET='AABBCCDDAABBCCDDAABBCCDDAABBCCDDAABBCCDD'
+JWT_REFRESH_TOKEN_SECRET='AABBCCDDAABBCCDDAABBCCDDAABBCCDDAABBCCDD'
+JWT_ISSUER='ISSUER'
+JWT_AUDIENCE='AUDIENCE'
+JWT_TOKEN_EXPIRY_IN_MINUTES=360
+JWT_REFRESH_TOKEN_EXPIRY_IN_MINUTES=43200
+# EXPIRE_AUTH_TOKENS_ON_RESTART=true # (if you need to expire all tokens on app restart)
+# EXPRESS_SESSION_SECRET=flowise
+# SECURE_COOKIES=
+
+# INVITE_TOKEN_EXPIRY_IN_HOURS=24
+# PASSWORD_RESET_TOKEN_EXPIRY_IN_MINS=15
+# PASSWORD_SALT_HASH_ROUNDS=10
+# TOKEN_HASH_SECRET='popcorn'
+
+# WORKSPACE_INVITE_TEMPLATE_PATH=/path/to/custom/workspace_invite.hbs
+
+
+############################################################################################################
+############################################# ENTERPRISE ###################################################
+############################################################################################################
+
+# LICENSE_URL=
+# FLOWISE_EE_LICENSE_KEY=
+# OFFLINE=
+
+
+############################################################################################################
+########################################### METRICS COLLECTION #############################################
+############################################################################################################
+
+# POSTHOG_PUBLIC_API_KEY=your_posthog_public_api_key
+
# ENABLE_METRICS=false
# METRICS_PROVIDER=prometheus # prometheus | open_telemetry
# METRICS_INCLUDE_NODE_METRICS=true # default is true
@@ -75,15 +136,21 @@ BLOB_STORAGE_PATH=/root/.flowise/storage
# METRICS_OPEN_TELEMETRY_PROTOCOL=http # http | grpc | proto (default is http)
# METRICS_OPEN_TELEMETRY_DEBUG=true # default is false
-# Uncomment the following lines to enable global agent proxy
-# see https://www.npmjs.com/package/global-agent for more details
+
+############################################################################################################
+############################################### PROXY ######################################################
+############################################################################################################
+
+# Uncomment the following lines to enable global agent proxy, see https://www.npmjs.com/package/global-agent for more details
# GLOBAL_AGENT_HTTP_PROXY=CorporateHttpProxyUrl
# GLOBAL_AGENT_HTTPS_PROXY=CorporateHttpsProxyUrl
# GLOBAL_AGENT_NO_PROXY=ExceptionHostsToBypassProxyIfNeeded
-######################
-# QUEUE CONFIGURATION
-#######################
+
+############################################################################################################
+########################################### QUEUE CONFIGURATION ############################################
+############################################################################################################
+
# MODE=queue #(queue | main)
# QUEUE_NAME=flowise-queue
# QUEUE_REDIS_EVENT_STREAM_MAX_LEN=100000
@@ -100,4 +167,14 @@ BLOB_STORAGE_PATH=/root/.flowise/storage
# REDIS_KEY=
# REDIS_CA=
# REDIS_KEEP_ALIVE=
-# ENABLE_BULLMQ_DASHBOARD=
\ No newline at end of file
+# ENABLE_BULLMQ_DASHBOARD=
+
+
+############################################################################################################
+############################################## SECURITY ####################################################
+############################################################################################################
+
+# HTTP_DENY_LIST=
+# CUSTOM_MCP_SECURITY_CHECK=true
+# CUSTOM_MCP_PROTOCOL=sse #(stdio | sse)
+# TRUST_PROXY=true #(true | false | 1 | loopback | linklocal | uniquelocal | IP addresses | loopback, IP addresses)
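The JWT secrets above ship as placeholder values; before exposing an instance, it is worth swapping in random ones. One way to generate them, assuming `openssl` is available:

```bash
# Print fresh random values for the two JWT secrets defined in this file
echo "JWT_AUTH_TOKEN_SECRET='$(openssl rand -hex 32)'"
echo "JWT_REFRESH_TOKEN_SECRET='$(openssl rand -hex 32)'"
```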
diff --git a/docker/README.md b/docker/README.md
index 35d03142d..bcadd93d5 100644
--- a/docker/README.md
+++ b/docker/README.md
@@ -9,28 +9,43 @@ Starts Flowise from [DockerHub Image](https://hub.docker.com/r/flowiseai/flowise
3. Open [http://localhost:3000](http://localhost:3000)
4. You can bring the containers down by `docker compose stop`
-## 🔒 Authentication
-
-1. Create `.env` file and specify the `PORT`, `FLOWISE_USERNAME`, and `FLOWISE_PASSWORD` (refer to `.env.example`)
-2. Pass `FLOWISE_USERNAME` and `FLOWISE_PASSWORD` to the `docker-compose.yml` file:
- ```
- environment:
- - PORT=${PORT}
- - FLOWISE_USERNAME=${FLOWISE_USERNAME}
- - FLOWISE_PASSWORD=${FLOWISE_PASSWORD}
- ```
-3. `docker compose up -d`
-4. Open [http://localhost:3000](http://localhost:3000)
-5. You can bring the containers down by `docker compose stop`
-
## 🌱 Env Variables
-If you like to persist your data (flows, logs, apikeys, credentials), set these variables in the `.env` file inside `docker` folder:
+If you would like to persist your data (flows, logs, credentials, storage), set these variables in the `.env` file inside the `docker` folder:
- DATABASE_PATH=/root/.flowise
-- APIKEY_PATH=/root/.flowise
- LOG_PATH=/root/.flowise/logs
- SECRETKEY_PATH=/root/.flowise
- BLOB_STORAGE_PATH=/root/.flowise/storage
-Flowise also support different environment variables to configure your instance. Read [more](https://docs.flowiseai.com/environment-variables)
+Flowise also supports different environment variables to configure your instance. Read [more](https://docs.flowiseai.com/configuration/environment-variables).
+
+## Queue Mode:
+
+### Building from source:
+
+You can build the images for worker and main from scratch with:
+
+```
+docker compose -f docker-compose-queue-source.yml up -d
+```
+
+Monitor Health:
+
+```
+docker compose -f docker-compose-queue-source.yml ps
+```
+
+### From pre-built images:
+
+You can also use the pre-built images:
+
+```
+docker compose -f docker-compose-queue-prebuilt.yml up -d
+```
+
+Monitor Health:
+
+```
+docker compose -f docker-compose-queue-prebuilt.yml ps
+```
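Once a queue stack is up, tailing the worker's logs is a quick way to confirm jobs are being picked up — standard `docker compose` usage, shown here for the pre-built stack:

```bash
docker compose -f docker-compose-queue-prebuilt.yml logs -f flowise-worker
```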
diff --git a/docker/docker-compose-queue-prebuilt.yml b/docker/docker-compose-queue-prebuilt.yml
new file mode 100644
index 000000000..6d6941590
--- /dev/null
+++ b/docker/docker-compose-queue-prebuilt.yml
@@ -0,0 +1,316 @@
+version: '3.1'
+
+services:
+ redis:
+ image: redis:alpine
+ container_name: flowise-redis
+ ports:
+ - '6379:6379'
+ volumes:
+ - redis_data:/data
+ networks:
+ - flowise-net
+ restart: always
+
+ flowise:
+ image: flowiseai/flowise:latest
+ container_name: flowise-main
+ restart: always
+ ports:
+ - '${PORT:-3000}:${PORT:-3000}'
+ volumes:
+ - ~/.flowise:/root/.flowise
+ environment:
+ # --- Essential Flowise Vars ---
+ - PORT=${PORT:-3000}
+ - DATABASE_PATH=${DATABASE_PATH:-/root/.flowise}
+ - DATABASE_TYPE=${DATABASE_TYPE}
+ - DATABASE_PORT=${DATABASE_PORT}
+ - DATABASE_HOST=${DATABASE_HOST}
+ - DATABASE_NAME=${DATABASE_NAME}
+ - DATABASE_USER=${DATABASE_USER}
+ - DATABASE_PASSWORD=${DATABASE_PASSWORD}
+ - DATABASE_SSL=${DATABASE_SSL}
+ - DATABASE_SSL_KEY_BASE64=${DATABASE_SSL_KEY_BASE64}
+
+ # SECRET KEYS
+ - SECRETKEY_STORAGE_TYPE=${SECRETKEY_STORAGE_TYPE}
+ - SECRETKEY_PATH=${SECRETKEY_PATH}
+ - FLOWISE_SECRETKEY_OVERWRITE=${FLOWISE_SECRETKEY_OVERWRITE}
+ - SECRETKEY_AWS_ACCESS_KEY=${SECRETKEY_AWS_ACCESS_KEY}
+ - SECRETKEY_AWS_SECRET_KEY=${SECRETKEY_AWS_SECRET_KEY}
+ - SECRETKEY_AWS_REGION=${SECRETKEY_AWS_REGION}
+ - SECRETKEY_AWS_NAME=${SECRETKEY_AWS_NAME}
+
+ # LOGGING
+ - DEBUG=${DEBUG}
+ - LOG_PATH=${LOG_PATH}
+ - LOG_LEVEL=${LOG_LEVEL}
+ - LOG_SANITIZE_BODY_FIELDS=${LOG_SANITIZE_BODY_FIELDS}
+ - LOG_SANITIZE_HEADER_FIELDS=${LOG_SANITIZE_HEADER_FIELDS}
+
+ # CUSTOM TOOL/FUNCTION DEPENDENCIES
+ - TOOL_FUNCTION_BUILTIN_DEP=${TOOL_FUNCTION_BUILTIN_DEP}
+ - TOOL_FUNCTION_EXTERNAL_DEP=${TOOL_FUNCTION_EXTERNAL_DEP}
+ - ALLOW_BUILTIN_DEP=${ALLOW_BUILTIN_DEP}
+
+ # STORAGE
+ - STORAGE_TYPE=${STORAGE_TYPE}
+ - BLOB_STORAGE_PATH=${BLOB_STORAGE_PATH}
+ - S3_STORAGE_BUCKET_NAME=${S3_STORAGE_BUCKET_NAME}
+ - S3_STORAGE_ACCESS_KEY_ID=${S3_STORAGE_ACCESS_KEY_ID}
+ - S3_STORAGE_SECRET_ACCESS_KEY=${S3_STORAGE_SECRET_ACCESS_KEY}
+ - S3_STORAGE_REGION=${S3_STORAGE_REGION}
+ - S3_ENDPOINT_URL=${S3_ENDPOINT_URL}
+ - S3_FORCE_PATH_STYLE=${S3_FORCE_PATH_STYLE}
+ - GOOGLE_CLOUD_STORAGE_CREDENTIAL=${GOOGLE_CLOUD_STORAGE_CREDENTIAL}
+ - GOOGLE_CLOUD_STORAGE_PROJ_ID=${GOOGLE_CLOUD_STORAGE_PROJ_ID}
+ - GOOGLE_CLOUD_STORAGE_BUCKET_NAME=${GOOGLE_CLOUD_STORAGE_BUCKET_NAME}
+ - GOOGLE_CLOUD_UNIFORM_BUCKET_ACCESS=${GOOGLE_CLOUD_UNIFORM_BUCKET_ACCESS}
+
+ # SETTINGS
+ - NUMBER_OF_PROXIES=${NUMBER_OF_PROXIES}
+ - CORS_ORIGINS=${CORS_ORIGINS}
+ - IFRAME_ORIGINS=${IFRAME_ORIGINS}
+ - FLOWISE_FILE_SIZE_LIMIT=${FLOWISE_FILE_SIZE_LIMIT}
+ - SHOW_COMMUNITY_NODES=${SHOW_COMMUNITY_NODES}
+ - DISABLE_FLOWISE_TELEMETRY=${DISABLE_FLOWISE_TELEMETRY}
+ - DISABLED_NODES=${DISABLED_NODES}
+ - MODEL_LIST_CONFIG_JSON=${MODEL_LIST_CONFIG_JSON}
+
+ # AUTH PARAMETERS
+ - APP_URL=${APP_URL}
+ - JWT_AUTH_TOKEN_SECRET=${JWT_AUTH_TOKEN_SECRET}
+ - JWT_REFRESH_TOKEN_SECRET=${JWT_REFRESH_TOKEN_SECRET}
+ - JWT_ISSUER=${JWT_ISSUER}
+ - JWT_AUDIENCE=${JWT_AUDIENCE}
+ - JWT_TOKEN_EXPIRY_IN_MINUTES=${JWT_TOKEN_EXPIRY_IN_MINUTES}
+ - JWT_REFRESH_TOKEN_EXPIRY_IN_MINUTES=${JWT_REFRESH_TOKEN_EXPIRY_IN_MINUTES}
+ - EXPIRE_AUTH_TOKENS_ON_RESTART=${EXPIRE_AUTH_TOKENS_ON_RESTART}
+ - EXPRESS_SESSION_SECRET=${EXPRESS_SESSION_SECRET}
+ - PASSWORD_RESET_TOKEN_EXPIRY_IN_MINS=${PASSWORD_RESET_TOKEN_EXPIRY_IN_MINS}
+ - PASSWORD_SALT_HASH_ROUNDS=${PASSWORD_SALT_HASH_ROUNDS}
+ - TOKEN_HASH_SECRET=${TOKEN_HASH_SECRET}
+ - SECURE_COOKIES=${SECURE_COOKIES}
+
+ # EMAIL
+ - SMTP_HOST=${SMTP_HOST}
+ - SMTP_PORT=${SMTP_PORT}
+ - SMTP_USER=${SMTP_USER}
+ - SMTP_PASSWORD=${SMTP_PASSWORD}
+ - SMTP_SECURE=${SMTP_SECURE}
+ - ALLOW_UNAUTHORIZED_CERTS=${ALLOW_UNAUTHORIZED_CERTS}
+ - SENDER_EMAIL=${SENDER_EMAIL}
+
+ # ENTERPRISE
+ - LICENSE_URL=${LICENSE_URL}
+ - FLOWISE_EE_LICENSE_KEY=${FLOWISE_EE_LICENSE_KEY}
+ - OFFLINE=${OFFLINE}
+ - INVITE_TOKEN_EXPIRY_IN_HOURS=${INVITE_TOKEN_EXPIRY_IN_HOURS}
+ - WORKSPACE_INVITE_TEMPLATE_PATH=${WORKSPACE_INVITE_TEMPLATE_PATH}
+
+ # METRICS COLLECTION
+ - POSTHOG_PUBLIC_API_KEY=${POSTHOG_PUBLIC_API_KEY}
+ - ENABLE_METRICS=${ENABLE_METRICS}
+ - METRICS_PROVIDER=${METRICS_PROVIDER}
+ - METRICS_INCLUDE_NODE_METRICS=${METRICS_INCLUDE_NODE_METRICS}
+ - METRICS_SERVICE_NAME=${METRICS_SERVICE_NAME}
+ - METRICS_OPEN_TELEMETRY_METRIC_ENDPOINT=${METRICS_OPEN_TELEMETRY_METRIC_ENDPOINT}
+ - METRICS_OPEN_TELEMETRY_PROTOCOL=${METRICS_OPEN_TELEMETRY_PROTOCOL}
+ - METRICS_OPEN_TELEMETRY_DEBUG=${METRICS_OPEN_TELEMETRY_DEBUG}
+
+ # PROXY
+ - GLOBAL_AGENT_HTTP_PROXY=${GLOBAL_AGENT_HTTP_PROXY}
+ - GLOBAL_AGENT_HTTPS_PROXY=${GLOBAL_AGENT_HTTPS_PROXY}
+ - GLOBAL_AGENT_NO_PROXY=${GLOBAL_AGENT_NO_PROXY}
+
+ # --- Queue Configuration (Main Instance) ---
+ - MODE=${MODE:-queue}
+ - QUEUE_NAME=${QUEUE_NAME:-flowise-queue}
+ - QUEUE_REDIS_EVENT_STREAM_MAX_LEN=${QUEUE_REDIS_EVENT_STREAM_MAX_LEN}
+ - WORKER_CONCURRENCY=${WORKER_CONCURRENCY}
+ - REMOVE_ON_AGE=${REMOVE_ON_AGE}
+ - REMOVE_ON_COUNT=${REMOVE_ON_COUNT}
+ - REDIS_URL=${REDIS_URL:-redis://redis:6379}
+ - REDIS_HOST=${REDIS_HOST}
+ - REDIS_PORT=${REDIS_PORT}
+ - REDIS_USERNAME=${REDIS_USERNAME}
+ - REDIS_PASSWORD=${REDIS_PASSWORD}
+ - REDIS_TLS=${REDIS_TLS}
+ - REDIS_CERT=${REDIS_CERT}
+ - REDIS_KEY=${REDIS_KEY}
+ - REDIS_CA=${REDIS_CA}
+ - REDIS_KEEP_ALIVE=${REDIS_KEEP_ALIVE}
+ - ENABLE_BULLMQ_DASHBOARD=${ENABLE_BULLMQ_DASHBOARD}
+
+ # SECURITY
+ - CUSTOM_MCP_SECURITY_CHECK=${CUSTOM_MCP_SECURITY_CHECK}
+ - CUSTOM_MCP_PROTOCOL=${CUSTOM_MCP_PROTOCOL}
+ - HTTP_DENY_LIST=${HTTP_DENY_LIST}
+ - TRUST_PROXY=${TRUST_PROXY}
+ healthcheck:
+ test: ['CMD', 'curl', '-f', 'http://localhost:${PORT:-3000}/api/v1/ping']
+ interval: 10s
+ timeout: 5s
+ retries: 5
+ start_period: 30s
+ entrypoint: /bin/sh -c "sleep 3; flowise start"
+ depends_on:
+ - redis
+ networks:
+ - flowise-net
+
+ flowise-worker:
+ image: flowiseai/flowise-worker:latest
+ container_name: flowise-worker
+ restart: always
+ volumes:
+ - ~/.flowise:/root/.flowise
+ environment:
+ # --- Essential Flowise Vars ---
+ - WORKER_PORT=${WORKER_PORT:-5566}
+ - DATABASE_PATH=${DATABASE_PATH:-/root/.flowise}
+ - DATABASE_TYPE=${DATABASE_TYPE}
+ - DATABASE_PORT=${DATABASE_PORT}
+ - DATABASE_HOST=${DATABASE_HOST}
+ - DATABASE_NAME=${DATABASE_NAME}
+ - DATABASE_USER=${DATABASE_USER}
+ - DATABASE_PASSWORD=${DATABASE_PASSWORD}
+ - DATABASE_SSL=${DATABASE_SSL}
+ - DATABASE_SSL_KEY_BASE64=${DATABASE_SSL_KEY_BASE64}
+
+ # SECRET KEYS
+ - SECRETKEY_STORAGE_TYPE=${SECRETKEY_STORAGE_TYPE}
+ - SECRETKEY_PATH=${SECRETKEY_PATH}
+ - FLOWISE_SECRETKEY_OVERWRITE=${FLOWISE_SECRETKEY_OVERWRITE}
+ - SECRETKEY_AWS_ACCESS_KEY=${SECRETKEY_AWS_ACCESS_KEY}
+ - SECRETKEY_AWS_SECRET_KEY=${SECRETKEY_AWS_SECRET_KEY}
+ - SECRETKEY_AWS_REGION=${SECRETKEY_AWS_REGION}
+ - SECRETKEY_AWS_NAME=${SECRETKEY_AWS_NAME}
+
+ # LOGGING
+ - DEBUG=${DEBUG}
+ - LOG_PATH=${LOG_PATH}
+ - LOG_LEVEL=${LOG_LEVEL}
+ - LOG_SANITIZE_BODY_FIELDS=${LOG_SANITIZE_BODY_FIELDS}
+ - LOG_SANITIZE_HEADER_FIELDS=${LOG_SANITIZE_HEADER_FIELDS}
+
+ # CUSTOM TOOL/FUNCTION DEPENDENCIES
+ - TOOL_FUNCTION_BUILTIN_DEP=${TOOL_FUNCTION_BUILTIN_DEP}
+ - TOOL_FUNCTION_EXTERNAL_DEP=${TOOL_FUNCTION_EXTERNAL_DEP}
+ - ALLOW_BUILTIN_DEP=${ALLOW_BUILTIN_DEP}
+
+ # STORAGE
+ - STORAGE_TYPE=${STORAGE_TYPE}
+ - BLOB_STORAGE_PATH=${BLOB_STORAGE_PATH}
+ - S3_STORAGE_BUCKET_NAME=${S3_STORAGE_BUCKET_NAME}
+ - S3_STORAGE_ACCESS_KEY_ID=${S3_STORAGE_ACCESS_KEY_ID}
+ - S3_STORAGE_SECRET_ACCESS_KEY=${S3_STORAGE_SECRET_ACCESS_KEY}
+ - S3_STORAGE_REGION=${S3_STORAGE_REGION}
+ - S3_ENDPOINT_URL=${S3_ENDPOINT_URL}
+ - S3_FORCE_PATH_STYLE=${S3_FORCE_PATH_STYLE}
+ - GOOGLE_CLOUD_STORAGE_CREDENTIAL=${GOOGLE_CLOUD_STORAGE_CREDENTIAL}
+ - GOOGLE_CLOUD_STORAGE_PROJ_ID=${GOOGLE_CLOUD_STORAGE_PROJ_ID}
+ - GOOGLE_CLOUD_STORAGE_BUCKET_NAME=${GOOGLE_CLOUD_STORAGE_BUCKET_NAME}
+ - GOOGLE_CLOUD_UNIFORM_BUCKET_ACCESS=${GOOGLE_CLOUD_UNIFORM_BUCKET_ACCESS}
+
+ # SETTINGS
+ - NUMBER_OF_PROXIES=${NUMBER_OF_PROXIES}
+ - CORS_ORIGINS=${CORS_ORIGINS}
+ - IFRAME_ORIGINS=${IFRAME_ORIGINS}
+ - FLOWISE_FILE_SIZE_LIMIT=${FLOWISE_FILE_SIZE_LIMIT}
+ - SHOW_COMMUNITY_NODES=${SHOW_COMMUNITY_NODES}
+ - DISABLE_FLOWISE_TELEMETRY=${DISABLE_FLOWISE_TELEMETRY}
+ - DISABLED_NODES=${DISABLED_NODES}
+ - MODEL_LIST_CONFIG_JSON=${MODEL_LIST_CONFIG_JSON}
+
+ # AUTH PARAMETERS
+ - APP_URL=${APP_URL}
+ - JWT_AUTH_TOKEN_SECRET=${JWT_AUTH_TOKEN_SECRET}
+ - JWT_REFRESH_TOKEN_SECRET=${JWT_REFRESH_TOKEN_SECRET}
+ - JWT_ISSUER=${JWT_ISSUER}
+ - JWT_AUDIENCE=${JWT_AUDIENCE}
+ - JWT_TOKEN_EXPIRY_IN_MINUTES=${JWT_TOKEN_EXPIRY_IN_MINUTES}
+ - JWT_REFRESH_TOKEN_EXPIRY_IN_MINUTES=${JWT_REFRESH_TOKEN_EXPIRY_IN_MINUTES}
+ - EXPIRE_AUTH_TOKENS_ON_RESTART=${EXPIRE_AUTH_TOKENS_ON_RESTART}
+ - EXPRESS_SESSION_SECRET=${EXPRESS_SESSION_SECRET}
+ - PASSWORD_RESET_TOKEN_EXPIRY_IN_MINS=${PASSWORD_RESET_TOKEN_EXPIRY_IN_MINS}
+ - PASSWORD_SALT_HASH_ROUNDS=${PASSWORD_SALT_HASH_ROUNDS}
+ - TOKEN_HASH_SECRET=${TOKEN_HASH_SECRET}
+ - SECURE_COOKIES=${SECURE_COOKIES}
+
+ # EMAIL
+ - SMTP_HOST=${SMTP_HOST}
+ - SMTP_PORT=${SMTP_PORT}
+ - SMTP_USER=${SMTP_USER}
+ - SMTP_PASSWORD=${SMTP_PASSWORD}
+ - SMTP_SECURE=${SMTP_SECURE}
+ - ALLOW_UNAUTHORIZED_CERTS=${ALLOW_UNAUTHORIZED_CERTS}
+ - SENDER_EMAIL=${SENDER_EMAIL}
+
+ # ENTERPRISE
+ - LICENSE_URL=${LICENSE_URL}
+ - FLOWISE_EE_LICENSE_KEY=${FLOWISE_EE_LICENSE_KEY}
+ - OFFLINE=${OFFLINE}
+ - INVITE_TOKEN_EXPIRY_IN_HOURS=${INVITE_TOKEN_EXPIRY_IN_HOURS}
+ - WORKSPACE_INVITE_TEMPLATE_PATH=${WORKSPACE_INVITE_TEMPLATE_PATH}
+
+ # METRICS COLLECTION
+ - POSTHOG_PUBLIC_API_KEY=${POSTHOG_PUBLIC_API_KEY}
+ - ENABLE_METRICS=${ENABLE_METRICS}
+ - METRICS_PROVIDER=${METRICS_PROVIDER}
+ - METRICS_INCLUDE_NODE_METRICS=${METRICS_INCLUDE_NODE_METRICS}
+ - METRICS_SERVICE_NAME=${METRICS_SERVICE_NAME}
+ - METRICS_OPEN_TELEMETRY_METRIC_ENDPOINT=${METRICS_OPEN_TELEMETRY_METRIC_ENDPOINT}
+ - METRICS_OPEN_TELEMETRY_PROTOCOL=${METRICS_OPEN_TELEMETRY_PROTOCOL}
+ - METRICS_OPEN_TELEMETRY_DEBUG=${METRICS_OPEN_TELEMETRY_DEBUG}
+
+ # PROXY
+ - GLOBAL_AGENT_HTTP_PROXY=${GLOBAL_AGENT_HTTP_PROXY}
+ - GLOBAL_AGENT_HTTPS_PROXY=${GLOBAL_AGENT_HTTPS_PROXY}
+ - GLOBAL_AGENT_NO_PROXY=${GLOBAL_AGENT_NO_PROXY}
+
+ # --- Queue Configuration (Worker Instance) ---
+ - MODE=${MODE:-queue}
+ - QUEUE_NAME=${QUEUE_NAME:-flowise-queue}
+ - QUEUE_REDIS_EVENT_STREAM_MAX_LEN=${QUEUE_REDIS_EVENT_STREAM_MAX_LEN}
+ - WORKER_CONCURRENCY=${WORKER_CONCURRENCY}
+ - REMOVE_ON_AGE=${REMOVE_ON_AGE}
+ - REMOVE_ON_COUNT=${REMOVE_ON_COUNT}
+ - REDIS_URL=${REDIS_URL:-redis://redis:6379}
+ - REDIS_HOST=${REDIS_HOST}
+ - REDIS_PORT=${REDIS_PORT}
+ - REDIS_USERNAME=${REDIS_USERNAME}
+ - REDIS_PASSWORD=${REDIS_PASSWORD}
+ - REDIS_TLS=${REDIS_TLS}
+ - REDIS_CERT=${REDIS_CERT}
+ - REDIS_KEY=${REDIS_KEY}
+ - REDIS_CA=${REDIS_CA}
+ - REDIS_KEEP_ALIVE=${REDIS_KEEP_ALIVE}
+ - ENABLE_BULLMQ_DASHBOARD=${ENABLE_BULLMQ_DASHBOARD}
+
+ # SECURITY
+ - CUSTOM_MCP_SECURITY_CHECK=${CUSTOM_MCP_SECURITY_CHECK}
+ - CUSTOM_MCP_PROTOCOL=${CUSTOM_MCP_PROTOCOL}
+ - HTTP_DENY_LIST=${HTTP_DENY_LIST}
+ - TRUST_PROXY=${TRUST_PROXY}
+ healthcheck:
+ test: ['CMD', 'curl', '-f', 'http://localhost:${WORKER_PORT:-5566}/healthz']
+ interval: 10s
+ timeout: 5s
+ retries: 5
+ start_period: 30s
+ entrypoint: /bin/sh -c "node /app/healthcheck/healthcheck.js & sleep 5 && pnpm run start-worker"
+ depends_on:
+ - redis
+ - flowise
+ networks:
+ - flowise-net
+
+volumes:
+ redis_data:
+ driver: local
+
+networks:
+ flowise-net:
+ driver: bridge
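Both services above define a `healthcheck`, so their health status can be queried directly — a quick check using the container names from this file:

```bash
docker inspect --format '{{.State.Health.Status}}' flowise-main
docker inspect --format '{{.State.Health.Status}}' flowise-worker
```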
diff --git a/docker/docker-compose-queue-source.yml b/docker/docker-compose-queue-source.yml
new file mode 100644
index 000000000..a95608e5c
--- /dev/null
+++ b/docker/docker-compose-queue-source.yml
@@ -0,0 +1,71 @@
+version: '3.1'
+
+services:
+ redis:
+ image: redis:alpine
+ container_name: flowise-redis
+ ports:
+ - '6379:6379'
+ volumes:
+ - redis_data:/data
+ networks:
+ - flowise-net
+
+ flowise:
+ container_name: flowise-main
+ build:
+ context: .. # Build using the Dockerfile in the root directory
+ dockerfile: docker/Dockerfile
+ ports:
+ - '${PORT}:${PORT}'
+ volumes:
+ # Mount local .flowise to container's default location
+ - ../.flowise:/root/.flowise
+ environment:
+ # --- Essential Flowise Vars ---
+ - PORT=${PORT:-3000}
+ - DATABASE_PATH=/root/.flowise
+ - SECRETKEY_PATH=/root/.flowise
+ - LOG_PATH=/root/.flowise/logs
+ - BLOB_STORAGE_PATH=/root/.flowise/storage
+ # --- Queue Vars (Main Instance) ---
+ - MODE=queue
+ - QUEUE_NAME=flowise-queue # Ensure this matches worker
+ - REDIS_URL=redis://redis:6379 # Use service name 'redis'
+ depends_on:
+ - redis
+ networks:
+ - flowise-net
+
+ flowise-worker:
+ container_name: flowise-worker
+ build:
+ context: .. # Build context is still the root
+ dockerfile: docker/worker/Dockerfile # Ensure this path is correct
+ volumes:
+ # Mount same local .flowise to worker
+ - ../.flowise:/root/.flowise
+ environment:
+ # --- Essential Flowise Vars ---
+ - WORKER_PORT=${WORKER_PORT:-5566} # Port for worker healthcheck
+ - DATABASE_PATH=/root/.flowise
+ - SECRETKEY_PATH=/root/.flowise
+ - LOG_PATH=/root/.flowise/logs
+ - BLOB_STORAGE_PATH=/root/.flowise/storage
+            # --- Queue Vars (Worker Instance) ---
+            - MODE=queue
+            - QUEUE_NAME=flowise-queue # Ensure this matches the main server
+ - REDIS_URL=redis://redis:6379 # Use service name 'redis'
+ depends_on:
+ - redis
+ - flowise
+ networks:
+ - flowise-net
+
+volumes:
+ redis_data:
+ driver: local
+
+networks:
+ flowise-net:
+ driver: bridge
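Because this file builds from source, the images need to be rebuilt after local code changes — for example:

```bash
docker compose -f docker-compose-queue-source.yml build
docker compose -f docker-compose-queue-source.yml up -d
```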
diff --git a/docker/docker-compose.yml b/docker/docker-compose.yml
index 3e5584863..e43283b15 100644
--- a/docker/docker-compose.yml
+++ b/docker/docker-compose.yml
@@ -2,16 +2,12 @@ version: '3.1'
services:
flowise:
- image: flowiseai/flowise
+ image: flowiseai/flowise:latest
restart: always
environment:
- PORT=${PORT}
- - CORS_ORIGINS=${CORS_ORIGINS}
- - IFRAME_ORIGINS=${IFRAME_ORIGINS}
- - FLOWISE_USERNAME=${FLOWISE_USERNAME}
- - FLOWISE_PASSWORD=${FLOWISE_PASSWORD}
- - FLOWISE_FILE_SIZE_LIMIT=${FLOWISE_FILE_SIZE_LIMIT}
- - DEBUG=${DEBUG}
+
+ # DATABASE
- DATABASE_PATH=${DATABASE_PATH}
- DATABASE_TYPE=${DATABASE_TYPE}
- DATABASE_PORT=${DATABASE_PORT}
@@ -21,35 +17,122 @@ services:
- DATABASE_PASSWORD=${DATABASE_PASSWORD}
- DATABASE_SSL=${DATABASE_SSL}
- DATABASE_SSL_KEY_BASE64=${DATABASE_SSL_KEY_BASE64}
- - APIKEY_STORAGE_TYPE=${APIKEY_STORAGE_TYPE}
- - APIKEY_PATH=${APIKEY_PATH}
+
+ # SECRET KEYS
+ - SECRETKEY_STORAGE_TYPE=${SECRETKEY_STORAGE_TYPE}
- SECRETKEY_PATH=${SECRETKEY_PATH}
- FLOWISE_SECRETKEY_OVERWRITE=${FLOWISE_SECRETKEY_OVERWRITE}
- - LOG_LEVEL=${LOG_LEVEL}
+ - SECRETKEY_AWS_ACCESS_KEY=${SECRETKEY_AWS_ACCESS_KEY}
+ - SECRETKEY_AWS_SECRET_KEY=${SECRETKEY_AWS_SECRET_KEY}
+ - SECRETKEY_AWS_REGION=${SECRETKEY_AWS_REGION}
+ - SECRETKEY_AWS_NAME=${SECRETKEY_AWS_NAME}
+
+ # LOGGING
+ - DEBUG=${DEBUG}
- LOG_PATH=${LOG_PATH}
+ - LOG_LEVEL=${LOG_LEVEL}
+ - LOG_SANITIZE_BODY_FIELDS=${LOG_SANITIZE_BODY_FIELDS}
+ - LOG_SANITIZE_HEADER_FIELDS=${LOG_SANITIZE_HEADER_FIELDS}
+
+ # CUSTOM TOOL/FUNCTION DEPENDENCIES
+ - TOOL_FUNCTION_BUILTIN_DEP=${TOOL_FUNCTION_BUILTIN_DEP}
+ - TOOL_FUNCTION_EXTERNAL_DEP=${TOOL_FUNCTION_EXTERNAL_DEP}
+ - ALLOW_BUILTIN_DEP=${ALLOW_BUILTIN_DEP}
+
+ # STORAGE
+ - STORAGE_TYPE=${STORAGE_TYPE}
- BLOB_STORAGE_PATH=${BLOB_STORAGE_PATH}
+ - S3_STORAGE_BUCKET_NAME=${S3_STORAGE_BUCKET_NAME}
+ - S3_STORAGE_ACCESS_KEY_ID=${S3_STORAGE_ACCESS_KEY_ID}
+ - S3_STORAGE_SECRET_ACCESS_KEY=${S3_STORAGE_SECRET_ACCESS_KEY}
+ - S3_STORAGE_REGION=${S3_STORAGE_REGION}
+ - S3_ENDPOINT_URL=${S3_ENDPOINT_URL}
+ - S3_FORCE_PATH_STYLE=${S3_FORCE_PATH_STYLE}
+ - GOOGLE_CLOUD_STORAGE_CREDENTIAL=${GOOGLE_CLOUD_STORAGE_CREDENTIAL}
+ - GOOGLE_CLOUD_STORAGE_PROJ_ID=${GOOGLE_CLOUD_STORAGE_PROJ_ID}
+ - GOOGLE_CLOUD_STORAGE_BUCKET_NAME=${GOOGLE_CLOUD_STORAGE_BUCKET_NAME}
+ - GOOGLE_CLOUD_UNIFORM_BUCKET_ACCESS=${GOOGLE_CLOUD_UNIFORM_BUCKET_ACCESS}
+
+ # SETTINGS
+ - NUMBER_OF_PROXIES=${NUMBER_OF_PROXIES}
+ - CORS_ORIGINS=${CORS_ORIGINS}
+ - IFRAME_ORIGINS=${IFRAME_ORIGINS}
+ - FLOWISE_FILE_SIZE_LIMIT=${FLOWISE_FILE_SIZE_LIMIT}
+ - SHOW_COMMUNITY_NODES=${SHOW_COMMUNITY_NODES}
+ - DISABLE_FLOWISE_TELEMETRY=${DISABLE_FLOWISE_TELEMETRY}
+ - DISABLED_NODES=${DISABLED_NODES}
- MODEL_LIST_CONFIG_JSON=${MODEL_LIST_CONFIG_JSON}
+
+ # AUTH PARAMETERS
+ - APP_URL=${APP_URL}
+ - JWT_AUTH_TOKEN_SECRET=${JWT_AUTH_TOKEN_SECRET}
+ - JWT_REFRESH_TOKEN_SECRET=${JWT_REFRESH_TOKEN_SECRET}
+ - JWT_ISSUER=${JWT_ISSUER}
+ - JWT_AUDIENCE=${JWT_AUDIENCE}
+ - JWT_TOKEN_EXPIRY_IN_MINUTES=${JWT_TOKEN_EXPIRY_IN_MINUTES}
+ - JWT_REFRESH_TOKEN_EXPIRY_IN_MINUTES=${JWT_REFRESH_TOKEN_EXPIRY_IN_MINUTES}
+ - EXPIRE_AUTH_TOKENS_ON_RESTART=${EXPIRE_AUTH_TOKENS_ON_RESTART}
+ - EXPRESS_SESSION_SECRET=${EXPRESS_SESSION_SECRET}
+ - PASSWORD_RESET_TOKEN_EXPIRY_IN_MINS=${PASSWORD_RESET_TOKEN_EXPIRY_IN_MINS}
+ - PASSWORD_SALT_HASH_ROUNDS=${PASSWORD_SALT_HASH_ROUNDS}
+ - TOKEN_HASH_SECRET=${TOKEN_HASH_SECRET}
+ - SECURE_COOKIES=${SECURE_COOKIES}
+
+ # EMAIL
+ - SMTP_HOST=${SMTP_HOST}
+ - SMTP_PORT=${SMTP_PORT}
+ - SMTP_USER=${SMTP_USER}
+ - SMTP_PASSWORD=${SMTP_PASSWORD}
+ - SMTP_SECURE=${SMTP_SECURE}
+ - ALLOW_UNAUTHORIZED_CERTS=${ALLOW_UNAUTHORIZED_CERTS}
+ - SENDER_EMAIL=${SENDER_EMAIL}
+
+ # ENTERPRISE
+ - LICENSE_URL=${LICENSE_URL}
+ - FLOWISE_EE_LICENSE_KEY=${FLOWISE_EE_LICENSE_KEY}
+ - OFFLINE=${OFFLINE}
+ - INVITE_TOKEN_EXPIRY_IN_HOURS=${INVITE_TOKEN_EXPIRY_IN_HOURS}
+ - WORKSPACE_INVITE_TEMPLATE_PATH=${WORKSPACE_INVITE_TEMPLATE_PATH}
+
+ # METRICS COLLECTION
+ - POSTHOG_PUBLIC_API_KEY=${POSTHOG_PUBLIC_API_KEY}
+ - ENABLE_METRICS=${ENABLE_METRICS}
+ - METRICS_PROVIDER=${METRICS_PROVIDER}
+ - METRICS_INCLUDE_NODE_METRICS=${METRICS_INCLUDE_NODE_METRICS}
+ - METRICS_SERVICE_NAME=${METRICS_SERVICE_NAME}
+ - METRICS_OPEN_TELEMETRY_METRIC_ENDPOINT=${METRICS_OPEN_TELEMETRY_METRIC_ENDPOINT}
+ - METRICS_OPEN_TELEMETRY_PROTOCOL=${METRICS_OPEN_TELEMETRY_PROTOCOL}
+ - METRICS_OPEN_TELEMETRY_DEBUG=${METRICS_OPEN_TELEMETRY_DEBUG}
+
+ # PROXY
- GLOBAL_AGENT_HTTP_PROXY=${GLOBAL_AGENT_HTTP_PROXY}
- GLOBAL_AGENT_HTTPS_PROXY=${GLOBAL_AGENT_HTTPS_PROXY}
- GLOBAL_AGENT_NO_PROXY=${GLOBAL_AGENT_NO_PROXY}
- - DISABLED_NODES=${DISABLED_NODES}
+
+ # QUEUE CONFIGURATION
- MODE=${MODE}
- - WORKER_CONCURRENCY=${WORKER_CONCURRENCY}
- QUEUE_NAME=${QUEUE_NAME}
- QUEUE_REDIS_EVENT_STREAM_MAX_LEN=${QUEUE_REDIS_EVENT_STREAM_MAX_LEN}
+ - WORKER_CONCURRENCY=${WORKER_CONCURRENCY}
- REMOVE_ON_AGE=${REMOVE_ON_AGE}
- REMOVE_ON_COUNT=${REMOVE_ON_COUNT}
- REDIS_URL=${REDIS_URL}
- REDIS_HOST=${REDIS_HOST}
- REDIS_PORT=${REDIS_PORT}
- - REDIS_PASSWORD=${REDIS_PASSWORD}
- REDIS_USERNAME=${REDIS_USERNAME}
+ - REDIS_PASSWORD=${REDIS_PASSWORD}
- REDIS_TLS=${REDIS_TLS}
- REDIS_CERT=${REDIS_CERT}
- REDIS_KEY=${REDIS_KEY}
- REDIS_CA=${REDIS_CA}
- REDIS_KEEP_ALIVE=${REDIS_KEEP_ALIVE}
- ENABLE_BULLMQ_DASHBOARD=${ENABLE_BULLMQ_DASHBOARD}
+
+ # SECURITY
+ - CUSTOM_MCP_SECURITY_CHECK=${CUSTOM_MCP_SECURITY_CHECK}
+ - CUSTOM_MCP_PROTOCOL=${CUSTOM_MCP_PROTOCOL}
+ - HTTP_DENY_LIST=${HTTP_DENY_LIST}
+ - TRUST_PROXY=${TRUST_PROXY}
ports:
- '${PORT}:${PORT}'
healthcheck:
diff --git a/docker/worker/.env.example b/docker/worker/.env.example
new file mode 100644
index 000000000..0e4b0c0dc
--- /dev/null
+++ b/docker/worker/.env.example
@@ -0,0 +1,180 @@
+WORKER_PORT=5566
+
+# APIKEY_PATH=/your_apikey_path/.flowise # (will be deprecated by end of 2025)
+
+############################################################################################################
+############################################## DATABASE ####################################################
+############################################################################################################
+
+DATABASE_PATH=/root/.flowise
+# DATABASE_TYPE=postgres
+# DATABASE_PORT=5432
+# DATABASE_HOST=""
+# DATABASE_NAME=flowise
+# DATABASE_USER=root
+# DATABASE_PASSWORD=mypassword
+# DATABASE_SSL=true
+# DATABASE_REJECT_UNAUTHORIZED=true
+# DATABASE_SSL_KEY_BASE64=
+
+
+############################################################################################################
+############################################## SECRET KEYS #################################################
+############################################################################################################
+
+# SECRETKEY_STORAGE_TYPE=local #(local | aws)
+SECRETKEY_PATH=/root/.flowise
+# FLOWISE_SECRETKEY_OVERWRITE=myencryptionkey # (if you want to overwrite the secret key)
+# SECRETKEY_AWS_ACCESS_KEY=
+# SECRETKEY_AWS_SECRET_KEY=
+# SECRETKEY_AWS_REGION=us-west-2
+# SECRETKEY_AWS_NAME=FlowiseEncryptionKey
+
+
+############################################################################################################
+############################################## LOGGING #####################################################
+############################################################################################################
+
+# DEBUG=true
+LOG_PATH=/root/.flowise/logs
+# LOG_LEVEL=info #(error | warn | info | verbose | debug)
+# LOG_SANITIZE_BODY_FIELDS=password,pwd,pass,secret,token,apikey,api_key,accesstoken,access_token,refreshtoken,refresh_token,clientsecret,client_secret,privatekey,private_key,secretkey,secret_key,auth,authorization,credential,credentials
+# LOG_SANITIZE_HEADER_FIELDS=authorization,x-api-key,x-auth-token,cookie
+# TOOL_FUNCTION_BUILTIN_DEP=crypto,fs
+# TOOL_FUNCTION_EXTERNAL_DEP=moment,lodash
+# ALLOW_BUILTIN_DEP=false
+
+
+############################################################################################################
+############################################## STORAGE #####################################################
+############################################################################################################
+
+# STORAGE_TYPE=local (local | s3 | gcs)
+BLOB_STORAGE_PATH=/root/.flowise/storage
+# S3_STORAGE_BUCKET_NAME=flowise
+# S3_STORAGE_ACCESS_KEY_ID=
+# S3_STORAGE_SECRET_ACCESS_KEY=
+# S3_STORAGE_REGION=us-west-2
+# S3_ENDPOINT_URL=
+# S3_FORCE_PATH_STYLE=false
+# GOOGLE_CLOUD_STORAGE_CREDENTIAL=/the/keyfilename/path
+# GOOGLE_CLOUD_STORAGE_PROJ_ID=
+# GOOGLE_CLOUD_STORAGE_BUCKET_NAME=
+# GOOGLE_CLOUD_UNIFORM_BUCKET_ACCESS=true
+
+
+############################################################################################################
+############################################## SETTINGS ####################################################
+############################################################################################################
+
+# NUMBER_OF_PROXIES=1
+# CORS_ORIGINS=*
+# IFRAME_ORIGINS=*
+# FLOWISE_FILE_SIZE_LIMIT=50mb
+# SHOW_COMMUNITY_NODES=true
+# DISABLE_FLOWISE_TELEMETRY=true
+# DISABLED_NODES=bufferMemory,chatOpenAI (comma separated list of node names to disable)
+# Uncomment the following line to enable model list config, load the list of models from your local config file
+# see https://raw.githubusercontent.com/FlowiseAI/Flowise/main/packages/components/models.json for the format
+# MODEL_LIST_CONFIG_JSON=/your_model_list_config_file_path
+
+
+############################################################################################################
+############################################ AUTH PARAMETERS ###############################################
+############################################################################################################
+
+# APP_URL=http://localhost:3000
+
+# SMTP_HOST=smtp.host.com
+# SMTP_PORT=465
+# SMTP_USER=smtp_user
+# SMTP_PASSWORD=smtp_password
+# SMTP_SECURE=true
+# ALLOW_UNAUTHORIZED_CERTS=false
+# SENDER_EMAIL=team@example.com
+
+JWT_AUTH_TOKEN_SECRET='AABBCCDDAABBCCDDAABBCCDDAABBCCDDAABBCCDD'
+JWT_REFRESH_TOKEN_SECRET='AABBCCDDAABBCCDDAABBCCDDAABBCCDDAABBCCDD'
+JWT_ISSUER='ISSUER'
+JWT_AUDIENCE='AUDIENCE'
+JWT_TOKEN_EXPIRY_IN_MINUTES=360
+JWT_REFRESH_TOKEN_EXPIRY_IN_MINUTES=43200
+# EXPIRE_AUTH_TOKENS_ON_RESTART=true # (if you need to expire all tokens on app restart)
+# EXPRESS_SESSION_SECRET=flowise
+# SECURE_COOKIES=
+
+# INVITE_TOKEN_EXPIRY_IN_HOURS=24
+# PASSWORD_RESET_TOKEN_EXPIRY_IN_MINS=15
+# PASSWORD_SALT_HASH_ROUNDS=10
+# TOKEN_HASH_SECRET='popcorn'
+
+# WORKSPACE_INVITE_TEMPLATE_PATH=/path/to/custom/workspace_invite.hbs
+
+
+############################################################################################################
+############################################# ENTERPRISE ###################################################
+############################################################################################################
+
+# LICENSE_URL=
+# FLOWISE_EE_LICENSE_KEY=
+# OFFLINE=
+
+
+############################################################################################################
+########################################### METRICS COLLECTION #############################################
+############################################################################################################
+
+# POSTHOG_PUBLIC_API_KEY=your_posthog_public_api_key
+
+# ENABLE_METRICS=false
+# METRICS_PROVIDER=prometheus # prometheus | open_telemetry
+# METRICS_INCLUDE_NODE_METRICS=true # default is true
+# METRICS_SERVICE_NAME=FlowiseAI
+
+# ONLY NEEDED if METRICS_PROVIDER=open_telemetry
+# METRICS_OPEN_TELEMETRY_METRIC_ENDPOINT=http://localhost:4318/v1/metrics
+# METRICS_OPEN_TELEMETRY_PROTOCOL=http # http | grpc | proto (default is http)
+# METRICS_OPEN_TELEMETRY_DEBUG=true # default is false
+
+
+############################################################################################################
+############################################### PROXY ######################################################
+############################################################################################################
+
+# Uncomment the following lines to enable global agent proxy, see https://www.npmjs.com/package/global-agent for more details
+# GLOBAL_AGENT_HTTP_PROXY=CorporateHttpProxyUrl
+# GLOBAL_AGENT_HTTPS_PROXY=CorporateHttpsProxyUrl
+# GLOBAL_AGENT_NO_PROXY=ExceptionHostsToBypassProxyIfNeeded
+
+
+############################################################################################################
+########################################### QUEUE CONFIGURATION ############################################
+############################################################################################################
+
+# MODE=queue #(queue | main)
+# QUEUE_NAME=flowise-queue
+# QUEUE_REDIS_EVENT_STREAM_MAX_LEN=100000
+# WORKER_CONCURRENCY=100000
+# REMOVE_ON_AGE=86400
+# REMOVE_ON_COUNT=10000
+# REDIS_URL=
+# REDIS_HOST=localhost
+# REDIS_PORT=6379
+# REDIS_USERNAME=
+# REDIS_PASSWORD=
+# REDIS_TLS=
+# REDIS_CERT=
+# REDIS_KEY=
+# REDIS_CA=
+# REDIS_KEEP_ALIVE=
+# ENABLE_BULLMQ_DASHBOARD=
+
+
+############################################################################################################
+############################################## SECURITY ####################################################
+############################################################################################################
+
+# HTTP_DENY_LIST=
+# CUSTOM_MCP_SECURITY_CHECK=true
+# CUSTOM_MCP_PROTOCOL=sse #(stdio | sse)
+# TRUST_PROXY=true #(true | false | 1 | loopback | linklocal | uniquelocal | IP addresses | loopback, IP addresses)
diff --git a/docker/worker/Dockerfile b/docker/worker/Dockerfile
new file mode 100644
index 000000000..8a2c749d4
--- /dev/null
+++ b/docker/worker/Dockerfile
@@ -0,0 +1,49 @@
+FROM node:20-alpine
+
+RUN apk add --update libc6-compat python3 make g++
+# needed for pdfjs-dist
+RUN apk add --no-cache build-base cairo-dev pango-dev
+
+# Install Chromium and curl for container-level health checks
+RUN apk add --no-cache chromium curl
+
+# Install pnpm globally
+RUN npm install -g pnpm
+
+ENV PUPPETEER_SKIP_DOWNLOAD=true
+ENV PUPPETEER_EXECUTABLE_PATH=/usr/bin/chromium-browser
+
+ENV NODE_OPTIONS=--max-old-space-size=8192
+
+WORKDIR /usr/src
+
+# Copy app source
+COPY . .
+
+RUN pnpm install
+
+RUN pnpm build
+
+# --- Healthcheck Setup ---
+
+WORKDIR /app/healthcheck
+
+COPY docker/worker/healthcheck/package.json .
+
+RUN npm install --omit=dev
+
+COPY docker/worker/healthcheck/healthcheck.js .
+
+# --- End Healthcheck Setup ---
+
+# Set the main working directory back
+WORKDIR /usr/src
+
+# Environment variables for port configuration
+ENV WORKER_PORT=5566
+
+# Expose port (can be overridden by env var)
+EXPOSE ${WORKER_PORT}
+
+# Start healthcheck in background and flowise worker in foreground
+CMD ["/bin/sh", "-c", "node /app/healthcheck/healthcheck.js & sleep 5 && pnpm run start-worker"]
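To build this image by hand, run the build from the repository root so `COPY . .` and the `docker/worker/healthcheck/` paths resolve, mirroring the `context: ..` used by the compose files (the image tag is illustrative):

```bash
# From the Flowise repository root
docker build -f docker/worker/Dockerfile -t flowise-worker .
```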
diff --git a/docker/worker/README.md b/docker/worker/README.md
index 82769c1e2..b2299cd03 100644
--- a/docker/worker/README.md
+++ b/docker/worker/README.md
@@ -18,7 +18,11 @@ Here's an overview of the process:
## Setting up Worker:
-1. Copy paste the same `.env` file used to setup main server. Change the `PORT` to other available port numbers. Ex: 5566
-2. `docker compose up -d`
-3. Open [http://localhost:5566](http://localhost:5566)
+1. Navigate to the `docker/worker` folder
+2. Create a `.env` file (refer to `.env.example`) and set up all the necessary env variables under `QUEUE CONFIGURATION`. The worker's env variables must match those of the main server. Change `WORKER_PORT` to another available port to listen on for healthchecks, e.g. 5566
+3. `docker compose up -d`
4. You can bring the worker container down by `docker compose stop`
+
+## Entrypoint:
+
+Unlike the main server image, which uses `flowise start`, the worker's entrypoint is `pnpm run start-worker`. This is because the worker's [Dockerfile](./Dockerfile) builds the image from the source files via `pnpm build` instead of from the npm registry via `RUN npm install -g flowise`.
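Putting the steps together, a minimal sketch of bringing a worker up from this folder, assuming the main server and Redis are already running and reachable with the values you place in `.env`:

```bash
cd docker/worker
cp .env.example .env    # then edit the QUEUE CONFIGURATION values to match the main server
docker compose up -d
```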
diff --git a/docker/worker/docker-compose.yml b/docker/worker/docker-compose.yml
index 193d9cd0d..da9e05792 100644
--- a/docker/worker/docker-compose.yml
+++ b/docker/worker/docker-compose.yml
@@ -2,16 +2,12 @@ version: '3.1'
services:
flowise:
- image: flowiseai/flowise
+ image: flowiseai/flowise-worker:latest
restart: always
environment:
- - PORT=${PORT}
- - CORS_ORIGINS=${CORS_ORIGINS}
- - IFRAME_ORIGINS=${IFRAME_ORIGINS}
- - FLOWISE_USERNAME=${FLOWISE_USERNAME}
- - FLOWISE_PASSWORD=${FLOWISE_PASSWORD}
- - FLOWISE_FILE_SIZE_LIMIT=${FLOWISE_FILE_SIZE_LIMIT}
- - DEBUG=${DEBUG}
+ - WORKER_PORT=${WORKER_PORT:-5566}
+
+ # DATABASE
- DATABASE_PATH=${DATABASE_PATH}
- DATABASE_TYPE=${DATABASE_TYPE}
- DATABASE_PORT=${DATABASE_PORT}
@@ -21,37 +17,130 @@ services:
- DATABASE_PASSWORD=${DATABASE_PASSWORD}
- DATABASE_SSL=${DATABASE_SSL}
- DATABASE_SSL_KEY_BASE64=${DATABASE_SSL_KEY_BASE64}
- - APIKEY_STORAGE_TYPE=${APIKEY_STORAGE_TYPE}
- - APIKEY_PATH=${APIKEY_PATH}
+
+ # SECRET KEYS
+ - SECRETKEY_STORAGE_TYPE=${SECRETKEY_STORAGE_TYPE}
- SECRETKEY_PATH=${SECRETKEY_PATH}
- FLOWISE_SECRETKEY_OVERWRITE=${FLOWISE_SECRETKEY_OVERWRITE}
- - LOG_LEVEL=${LOG_LEVEL}
+ - SECRETKEY_AWS_ACCESS_KEY=${SECRETKEY_AWS_ACCESS_KEY}
+ - SECRETKEY_AWS_SECRET_KEY=${SECRETKEY_AWS_SECRET_KEY}
+ - SECRETKEY_AWS_REGION=${SECRETKEY_AWS_REGION}
+ - SECRETKEY_AWS_NAME=${SECRETKEY_AWS_NAME}
+
+ # LOGGING
+ - DEBUG=${DEBUG}
- LOG_PATH=${LOG_PATH}
+ - LOG_LEVEL=${LOG_LEVEL}
+ - LOG_SANITIZE_BODY_FIELDS=${LOG_SANITIZE_BODY_FIELDS}
+ - LOG_SANITIZE_HEADER_FIELDS=${LOG_SANITIZE_HEADER_FIELDS}
+
+ # CUSTOM TOOL/FUNCTION DEPENDENCIES
+ - TOOL_FUNCTION_BUILTIN_DEP=${TOOL_FUNCTION_BUILTIN_DEP}
+ - TOOL_FUNCTION_EXTERNAL_DEP=${TOOL_FUNCTION_EXTERNAL_DEP}
+ - ALLOW_BUILTIN_DEP=${ALLOW_BUILTIN_DEP}
+
+ # STORAGE
+ - STORAGE_TYPE=${STORAGE_TYPE}
- BLOB_STORAGE_PATH=${BLOB_STORAGE_PATH}
+ - S3_STORAGE_BUCKET_NAME=${S3_STORAGE_BUCKET_NAME}
+ - S3_STORAGE_ACCESS_KEY_ID=${S3_STORAGE_ACCESS_KEY_ID}
+ - S3_STORAGE_SECRET_ACCESS_KEY=${S3_STORAGE_SECRET_ACCESS_KEY}
+ - S3_STORAGE_REGION=${S3_STORAGE_REGION}
+ - S3_ENDPOINT_URL=${S3_ENDPOINT_URL}
+ - S3_FORCE_PATH_STYLE=${S3_FORCE_PATH_STYLE}
+ - GOOGLE_CLOUD_STORAGE_CREDENTIAL=${GOOGLE_CLOUD_STORAGE_CREDENTIAL}
+ - GOOGLE_CLOUD_STORAGE_PROJ_ID=${GOOGLE_CLOUD_STORAGE_PROJ_ID}
+ - GOOGLE_CLOUD_STORAGE_BUCKET_NAME=${GOOGLE_CLOUD_STORAGE_BUCKET_NAME}
+ - GOOGLE_CLOUD_UNIFORM_BUCKET_ACCESS=${GOOGLE_CLOUD_UNIFORM_BUCKET_ACCESS}
+
+ # SETTINGS
+ - NUMBER_OF_PROXIES=${NUMBER_OF_PROXIES}
+ - CORS_ORIGINS=${CORS_ORIGINS}
+ - IFRAME_ORIGINS=${IFRAME_ORIGINS}
+ - FLOWISE_FILE_SIZE_LIMIT=${FLOWISE_FILE_SIZE_LIMIT}
+ - SHOW_COMMUNITY_NODES=${SHOW_COMMUNITY_NODES}
+ - DISABLE_FLOWISE_TELEMETRY=${DISABLE_FLOWISE_TELEMETRY}
+ - DISABLED_NODES=${DISABLED_NODES}
- MODEL_LIST_CONFIG_JSON=${MODEL_LIST_CONFIG_JSON}
+
+ # AUTH PARAMETERS
+ - APP_URL=${APP_URL}
+ - JWT_AUTH_TOKEN_SECRET=${JWT_AUTH_TOKEN_SECRET}
+ - JWT_REFRESH_TOKEN_SECRET=${JWT_REFRESH_TOKEN_SECRET}
+ - JWT_ISSUER=${JWT_ISSUER}
+ - JWT_AUDIENCE=${JWT_AUDIENCE}
+ - JWT_TOKEN_EXPIRY_IN_MINUTES=${JWT_TOKEN_EXPIRY_IN_MINUTES}
+ - JWT_REFRESH_TOKEN_EXPIRY_IN_MINUTES=${JWT_REFRESH_TOKEN_EXPIRY_IN_MINUTES}
+ - EXPIRE_AUTH_TOKENS_ON_RESTART=${EXPIRE_AUTH_TOKENS_ON_RESTART}
+ - EXPRESS_SESSION_SECRET=${EXPRESS_SESSION_SECRET}
+ - PASSWORD_RESET_TOKEN_EXPIRY_IN_MINS=${PASSWORD_RESET_TOKEN_EXPIRY_IN_MINS}
+ - PASSWORD_SALT_HASH_ROUNDS=${PASSWORD_SALT_HASH_ROUNDS}
+ - TOKEN_HASH_SECRET=${TOKEN_HASH_SECRET}
+ - SECURE_COOKIES=${SECURE_COOKIES}
+
+ # EMAIL
+ - SMTP_HOST=${SMTP_HOST}
+ - SMTP_PORT=${SMTP_PORT}
+ - SMTP_USER=${SMTP_USER}
+ - SMTP_PASSWORD=${SMTP_PASSWORD}
+ - SMTP_SECURE=${SMTP_SECURE}
+ - ALLOW_UNAUTHORIZED_CERTS=${ALLOW_UNAUTHORIZED_CERTS}
+ - SENDER_EMAIL=${SENDER_EMAIL}
+
+ # ENTERPRISE
+ - LICENSE_URL=${LICENSE_URL}
+ - FLOWISE_EE_LICENSE_KEY=${FLOWISE_EE_LICENSE_KEY}
+ - OFFLINE=${OFFLINE}
+ - INVITE_TOKEN_EXPIRY_IN_HOURS=${INVITE_TOKEN_EXPIRY_IN_HOURS}
+ - WORKSPACE_INVITE_TEMPLATE_PATH=${WORKSPACE_INVITE_TEMPLATE_PATH}
+
+ # METRICS COLLECTION
+ - POSTHOG_PUBLIC_API_KEY=${POSTHOG_PUBLIC_API_KEY}
+ - ENABLE_METRICS=${ENABLE_METRICS}
+ - METRICS_PROVIDER=${METRICS_PROVIDER}
+ - METRICS_INCLUDE_NODE_METRICS=${METRICS_INCLUDE_NODE_METRICS}
+ - METRICS_SERVICE_NAME=${METRICS_SERVICE_NAME}
+ - METRICS_OPEN_TELEMETRY_METRIC_ENDPOINT=${METRICS_OPEN_TELEMETRY_METRIC_ENDPOINT}
+ - METRICS_OPEN_TELEMETRY_PROTOCOL=${METRICS_OPEN_TELEMETRY_PROTOCOL}
+ - METRICS_OPEN_TELEMETRY_DEBUG=${METRICS_OPEN_TELEMETRY_DEBUG}
+
+ # PROXY
- GLOBAL_AGENT_HTTP_PROXY=${GLOBAL_AGENT_HTTP_PROXY}
- GLOBAL_AGENT_HTTPS_PROXY=${GLOBAL_AGENT_HTTPS_PROXY}
- GLOBAL_AGENT_NO_PROXY=${GLOBAL_AGENT_NO_PROXY}
- - DISABLED_NODES=${DISABLED_NODES}
+
+ # QUEUE CONFIGURATION
- MODE=${MODE}
- - WORKER_CONCURRENCY=${WORKER_CONCURRENCY}
- QUEUE_NAME=${QUEUE_NAME}
- QUEUE_REDIS_EVENT_STREAM_MAX_LEN=${QUEUE_REDIS_EVENT_STREAM_MAX_LEN}
+ - WORKER_CONCURRENCY=${WORKER_CONCURRENCY}
- REMOVE_ON_AGE=${REMOVE_ON_AGE}
- REMOVE_ON_COUNT=${REMOVE_ON_COUNT}
- REDIS_URL=${REDIS_URL}
- REDIS_HOST=${REDIS_HOST}
- REDIS_PORT=${REDIS_PORT}
- - REDIS_PASSWORD=${REDIS_PASSWORD}
- REDIS_USERNAME=${REDIS_USERNAME}
+ - REDIS_PASSWORD=${REDIS_PASSWORD}
- REDIS_TLS=${REDIS_TLS}
- REDIS_CERT=${REDIS_CERT}
- REDIS_KEY=${REDIS_KEY}
- REDIS_CA=${REDIS_CA}
- REDIS_KEEP_ALIVE=${REDIS_KEEP_ALIVE}
- ENABLE_BULLMQ_DASHBOARD=${ENABLE_BULLMQ_DASHBOARD}
+
+ # SECURITY
+ - CUSTOM_MCP_SECURITY_CHECK=${CUSTOM_MCP_SECURITY_CHECK}
+ - CUSTOM_MCP_PROTOCOL=${CUSTOM_MCP_PROTOCOL}
+ - HTTP_DENY_LIST=${HTTP_DENY_LIST}
+ - TRUST_PROXY=${TRUST_PROXY}
ports:
- - '${PORT}:${PORT}'
+ - '${WORKER_PORT}:${WORKER_PORT}'
+ healthcheck:
+ test: ['CMD', 'curl', '-f', 'http://localhost:${WORKER_PORT}/healthz']
+ interval: 10s
+ timeout: 5s
+ retries: 5
+ start_period: 30s
volumes:
- ~/.flowise:/root/.flowise
- entrypoint: /bin/sh -c "sleep 3; flowise worker"
+ entrypoint: /bin/sh -c "node /app/healthcheck/healthcheck.js & sleep 5 && pnpm run start-worker"
diff --git a/docker/worker/healthcheck/healthcheck.js b/docker/worker/healthcheck/healthcheck.js
new file mode 100644
index 000000000..fcc204f7d
--- /dev/null
+++ b/docker/worker/healthcheck/healthcheck.js
@@ -0,0 +1,13 @@
+const express = require('express')
+const app = express()
+
+const port = process.env.WORKER_PORT || 5566
+
+app.get('/healthz', (req, res) => {
+ res.status(200).send('OK')
+})
+
+app.listen(port, () => {
+ // eslint-disable-next-line no-console
+ console.log(`Healthcheck server listening on port ${port}`)
+})
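This is the endpoint the compose healthchecks poll; once the worker container is running with `WORKER_PORT` published, it can also be probed by hand:

```bash
curl -f http://localhost:5566/healthz    # prints OK when the healthcheck server is up
```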
diff --git a/docker/worker/healthcheck/package.json b/docker/worker/healthcheck/package.json
new file mode 100644
index 000000000..aa7bfd6be
--- /dev/null
+++ b/docker/worker/healthcheck/package.json
@@ -0,0 +1,13 @@
+{
+ "name": "flowise-worker-healthcheck",
+ "version": "1.0.0",
+ "description": "Simple healthcheck server for Flowise worker",
+ "main": "healthcheck.js",
+ "private": true,
+ "scripts": {
+ "start": "node healthcheck.js"
+ },
+ "dependencies": {
+ "express": "^4.19.2"
+ }
+}
diff --git a/i18n/CONTRIBUTING-ZH.md b/i18n/CONTRIBUTING-ZH.md
index 45626785e..d6b019892 100644
--- a/i18n/CONTRIBUTING-ZH.md
+++ b/i18n/CONTRIBUTING-ZH.md
@@ -112,45 +112,41 @@ Flowise ๅจไธไธชๅไธ็ๅไฝๅญๅจๅบไธญๆ 3 ไธชไธๅ็ๆจกๅใ
pnpm start
```
-11. ๆไบคไปฃ็ ๅนถไปๆๅ [Flowise ไธปๅๆฏ](https://github.com/FlowiseAI/Flowise/tree/master) ็ๅๅๅๆฏไธๆไบค Pull Requestใ
+11. ๆไบคไปฃ็ ๅนถไปๆๅ [Flowise ไธปๅๆฏ](https://github.com/FlowiseAI/Flowise/tree/main) ็ๅๅๅๆฏไธๆไบค Pull Requestใ
## ๐ฑ ็ฏๅขๅ้
Flowise ๆฏๆไธๅ็็ฏๅขๅ้ๆฅ้ ็ฝฎๆจ็ๅฎไพใๆจๅฏไปฅๅจ `packages/server` ๆไปถๅคนไธญ็ `.env` ๆไปถไธญๆๅฎไปฅไธๅ้ใ้ ่ฏป[ๆดๅคไฟกๆฏ](https://docs.flowiseai.com/environment-variables)
-| ๅ้ๅ | ๆ่ฟฐ | ็ฑปๅ | ้ป่ฎคๅผ |
-| ---------------------------- | ------------------------------------------------------- | ----------------------------------------------- | ----------------------------------- | --- |
-| PORT | Flowise ่ฟ่ก็ HTTP ็ซฏๅฃ | ๆฐๅญ | 3000 |
-| FLOWISE_USERNAME | ็ปๅฝ็จๆทๅ | ๅญ็ฌฆไธฒ | |
-| FLOWISE_PASSWORD | ็ปๅฝๅฏ็ | ๅญ็ฌฆไธฒ | |
-| FLOWISE_FILE_SIZE_LIMIT | ไธไผ ๆไปถๅคงๅฐ้ๅถ | ๅญ็ฌฆไธฒ | 50mb | |
-| DEBUG | ๆๅฐ็ปไปถ็ๆฅๅฟ | ๅธๅฐๅผ | |
-| LOG_PATH | ๅญๅจๆฅๅฟๆไปถ็ไฝ็ฝฎ | ๅญ็ฌฆไธฒ | `your-path/Flowise/logs` |
-| LOG_LEVEL | ๆฅๅฟ็ไธๅ็บงๅซ | ๆไธพๅญ็ฌฆไธฒ: `error`, `info`, `verbose`, `debug` | `info` |
-| APIKEY_STORAGE_TYPE | ๅญๅจ API ๅฏ้ฅ็ๅญๅจ็ฑปๅ | ๆไธพๅญ็ฌฆไธฒ: `json`, `db` | `json` |
-| APIKEY_PATH | ๅญๅจ API ๅฏ้ฅ็ไฝ็ฝฎ, ๅฝ`APIKEY_STORAGE_TYPE`ๆฏ`json` | ๅญ็ฌฆไธฒ | `your-path/Flowise/packages/server` |
-| TOOL_FUNCTION_BUILTIN_DEP | ็จไบๅทฅๅ ทๅฝๆฐ็ NodeJS ๅ ็ฝฎๆจกๅ | ๅญ็ฌฆไธฒ | |
-| TOOL_FUNCTION_EXTERNAL_DEP | ็จไบๅทฅๅ ทๅฝๆฐ็ๅค้จๆจกๅ | ๅญ็ฌฆไธฒ | |
-| DATABASE_TYPE | ๅญๅจ flowise ๆฐๆฎ็ๆฐๆฎๅบ็ฑปๅ | ๆไธพๅญ็ฌฆไธฒ: `sqlite`, `mysql`, `postgres` | `sqlite` |
-| DATABASE_PATH | ๆฐๆฎๅบไฟๅญ็ไฝ็ฝฎ๏ผๅฝ DATABASE_TYPE ๆฏ sqlite ๆถ๏ผ | ๅญ็ฌฆไธฒ | `your-home-dir/.flowise` |
-| DATABASE_HOST | ไธปๆบ URL ๆ IP ๅฐๅ๏ผๅฝ DATABASE_TYPE ไธๆฏ sqlite ๆถ๏ผ | ๅญ็ฌฆไธฒ | |
-| DATABASE_PORT | ๆฐๆฎๅบ็ซฏๅฃ๏ผๅฝ DATABASE_TYPE ไธๆฏ sqlite ๆถ๏ผ | ๅญ็ฌฆไธฒ | |
-| DATABASE_USERNAME | ๆฐๆฎๅบ็จๆทๅ๏ผๅฝ DATABASE_TYPE ไธๆฏ sqlite ๆถ๏ผ | ๅญ็ฌฆไธฒ | |
-| DATABASE_PASSWORD | ๆฐๆฎๅบๅฏ็ ๏ผๅฝ DATABASE_TYPE ไธๆฏ sqlite ๆถ๏ผ | ๅญ็ฌฆไธฒ | |
-| DATABASE_NAME | ๆฐๆฎๅบๅ็งฐ๏ผๅฝ DATABASE_TYPE ไธๆฏ sqlite ๆถ๏ผ | ๅญ็ฌฆไธฒ | |
-| SECRETKEY_PATH | ไฟๅญๅ ๅฏๅฏ้ฅ๏ผ็จไบๅ ๅฏ/่งฃๅฏๅญๆฎ๏ผ็ไฝ็ฝฎ | ๅญ็ฌฆไธฒ | `your-path/Flowise/packages/server` |
-| FLOWISE_SECRETKEY_OVERWRITE | ๅ ๅฏๅฏ้ฅ็จไบๆฟไปฃๅญๅจๅจ SECRETKEY_PATH ไธญ็ๅฏ้ฅ | ๅญ็ฌฆไธฒ |
-| MODEL_LIST_CONFIG_JSON | ๅ ่ฝฝๆจกๅ็ไฝ็ฝฎ | ๅญ็ฌฆ | `/your_model_list_config_file_path` |
-| STORAGE_TYPE | ไธไผ ๆไปถ็ๅญๅจ็ฑปๅ | ๆไธพๅญ็ฌฆไธฒ: `local`, `s3` | `local` |
-| BLOB_STORAGE_PATH | ไธไผ ๆไปถๅญๅจ็ๆฌๅฐๆไปถๅคน่ทฏๅพ, ๅฝ`STORAGE_TYPE`ๆฏ`local` | ๅญ็ฌฆไธฒ | `your-home-dir/.flowise/storage` |
-| S3_STORAGE_BUCKET_NAME | S3 ๅญๅจๆไปถๅคน่ทฏๅพ, ๅฝ`STORAGE_TYPE`ๆฏ`s3` | ๅญ็ฌฆไธฒ | |
-| S3_STORAGE_ACCESS_KEY_ID | AWS ่ฎฟ้ฎๅฏ้ฅ (Access Key) | ๅญ็ฌฆไธฒ | |
-| S3_STORAGE_SECRET_ACCESS_KEY | AWS ๅฏ้ฅ (Secret Key) | ๅญ็ฌฆไธฒ | |
-| S3_STORAGE_REGION | S3 ๅญๅจๅฐๅบ | ๅญ็ฌฆไธฒ | |
-| S3_ENDPOINT_URL | S3 ็ซฏ็น URL | ๅญ็ฌฆไธฒ | |
-| S3_FORCE_PATH_STYLE | ๅฐๅ ถ่ฎพ็ฝฎไธบ true ไปฅๅผบๅถ่ฏทๆฑไฝฟ็จ่ทฏๅพๆ ทๅผๅฏปๅ | ๅธๅฐๅผ | false |
-| SHOW_COMMUNITY_NODES | ๆพ็คบ็ฑ็คพๅบๅๅปบ็่็น | ๅธๅฐๅผ | |
-| DISABLED_NODES | ไป็้ขไธญ้่่็น๏ผไปฅ้ๅทๅ้็่็นๅ็งฐๅ่กจ๏ผ | ๅญ็ฌฆไธฒ | |
+| ๅ้ๅ | ๆ่ฟฐ | ็ฑปๅ | ้ป่ฎคๅผ |
+| ------------------------------ | -------------------------------------------------------- | ----------------------------------------------- | ----------------------------------- |
+| `PORT` | Flowise ่ฟ่ก็ HTTP ็ซฏๅฃ | ๆฐๅญ | 3000 |
+| `FLOWISE_FILE_SIZE_LIMIT` | ไธไผ ๆไปถๅคงๅฐ้ๅถ | ๅญ็ฌฆไธฒ | 50mb |
+| `DEBUG` | ๆๅฐ็ปไปถ็ๆฅๅฟ | ๅธๅฐๅผ | |
+| `LOG_PATH` | ๅญๅจๆฅๅฟๆไปถ็ไฝ็ฝฎ | ๅญ็ฌฆไธฒ | `your-path/Flowise/logs` |
+| `LOG_LEVEL` | ๆฅๅฟ็ไธๅ็บงๅซ | ๆไธพๅญ็ฌฆไธฒ: `error`, `info`, `verbose`, `debug` | `info` |
+| `TOOL_FUNCTION_BUILTIN_DEP` | ็จไบๅทฅๅ ทๅฝๆฐ็ NodeJS ๅ ็ฝฎๆจกๅ | ๅญ็ฌฆไธฒ | |
+| `TOOL_FUNCTION_EXTERNAL_DEP` | ็จไบๅทฅๅ ทๅฝๆฐ็ๅค้จๆจกๅ | ๅญ็ฌฆไธฒ | |
+| `DATABASE_TYPE` | ๅญๅจ Flowise ๆฐๆฎ็ๆฐๆฎๅบ็ฑปๅ | ๆไธพๅญ็ฌฆไธฒ: `sqlite`, `mysql`, `postgres` | `sqlite` |
+| `DATABASE_PATH` | ๆฐๆฎๅบไฟๅญ็ไฝ็ฝฎ๏ผๅฝ `DATABASE_TYPE` ๆฏ sqlite ๆถ๏ผ | ๅญ็ฌฆไธฒ | `your-home-dir/.flowise` |
+| `DATABASE_HOST` | ไธปๆบ URL ๆ IP ๅฐๅ๏ผๅฝ `DATABASE_TYPE` ไธๆฏ sqlite ๆถ๏ผ | ๅญ็ฌฆไธฒ | |
+| `DATABASE_PORT` | ๆฐๆฎๅบ็ซฏๅฃ๏ผๅฝ `DATABASE_TYPE` ไธๆฏ sqlite ๆถ๏ผ | ๅญ็ฌฆไธฒ | |
+| `DATABASE_USERNAME` | ๆฐๆฎๅบ็จๆทๅ๏ผๅฝ `DATABASE_TYPE` ไธๆฏ sqlite ๆถ๏ผ | ๅญ็ฌฆไธฒ | |
+| `DATABASE_PASSWORD` | ๆฐๆฎๅบๅฏ็ ๏ผๅฝ `DATABASE_TYPE` ไธๆฏ sqlite ๆถ๏ผ | ๅญ็ฌฆไธฒ | |
+| `DATABASE_NAME` | ๆฐๆฎๅบๅ็งฐ๏ผๅฝ `DATABASE_TYPE` ไธๆฏ sqlite ๆถ๏ผ | ๅญ็ฌฆไธฒ | |
+| `SECRETKEY_PATH` | ไฟๅญๅ ๅฏๅฏ้ฅ๏ผ็จไบๅ ๅฏ/่งฃๅฏๅญๆฎ๏ผ็ไฝ็ฝฎ | ๅญ็ฌฆไธฒ | `your-path/Flowise/packages/server` |
+| `FLOWISE_SECRETKEY_OVERWRITE` | ๅ ๅฏๅฏ้ฅ็จไบๆฟไปฃๅญๅจๅจ `SECRETKEY_PATH` ไธญ็ๅฏ้ฅ | ๅญ็ฌฆไธฒ | |
+| `MODEL_LIST_CONFIG_JSON` | ๅ ่ฝฝๆจกๅ็ไฝ็ฝฎ | ๅญ็ฌฆไธฒ | `/your_model_list_config_file_path` |
+| `STORAGE_TYPE` | ไธไผ ๆไปถ็ๅญๅจ็ฑปๅ | ๆไธพๅญ็ฌฆไธฒ: `local`, `s3` | `local` |
+| `BLOB_STORAGE_PATH` | ๆฌๅฐไธไผ ๆไปถๅญๅจ่ทฏๅพ๏ผๅฝ `STORAGE_TYPE` ไธบ `local`๏ผ | ๅญ็ฌฆไธฒ | `your-home-dir/.flowise/storage` |
+| `S3_STORAGE_BUCKET_NAME` | S3 ๅญๅจๆไปถๅคน่ทฏๅพ๏ผๅฝ `STORAGE_TYPE` ไธบ `s3`๏ผ | ๅญ็ฌฆไธฒ | |
+| `S3_STORAGE_ACCESS_KEY_ID` | AWS ่ฎฟ้ฎๅฏ้ฅ (Access Key) | ๅญ็ฌฆไธฒ | |
+| `S3_STORAGE_SECRET_ACCESS_KEY` | AWS ๅฏ้ฅ (Secret Key) | ๅญ็ฌฆไธฒ | |
+| `S3_STORAGE_REGION` | S3 ๅญๅจๅฐๅบ | ๅญ็ฌฆไธฒ | |
+| `S3_ENDPOINT_URL` | S3 ็ซฏ็น URL | ๅญ็ฌฆไธฒ | |
+| `S3_FORCE_PATH_STYLE` | ่ฎพ็ฝฎไธบ true ไปฅๅผบๅถ่ฏทๆฑไฝฟ็จ่ทฏๅพๆ ทๅผๅฏปๅ | ๅธๅฐๅผ | false |
+| `SHOW_COMMUNITY_NODES` | ๆพ็คบ็ฑ็คพๅบๅๅปบ็่็น | ๅธๅฐๅผ | |
+| `DISABLED_NODES` | ไป็้ขไธญ้่่็น๏ผไปฅ้ๅทๅ้็่็นๅ็งฐๅ่กจ๏ผ | ๅญ็ฌฆไธฒ | |
ๆจไนๅฏไปฅๅจไฝฟ็จ `npx` ๆถๆๅฎ็ฏๅขๅ้ใไพๅฆ๏ผ
diff --git a/i18n/README-JA.md b/i18n/README-JA.md
index a329059ed..0ea1ae386 100644
--- a/i18n/README-JA.md
+++ b/i18n/README-JA.md
@@ -31,12 +31,6 @@
npx flowise start
```
- ใฆใผใถใผๅใจใในใฏใผใใๅ ฅๅ
-
- ```bash
- npx flowise start --FLOWISE_USERNAME=user --FLOWISE_PASSWORD=1234
- ```
-
3. [http://localhost:3000](http://localhost:3000) ใ้ใ
## ๐ณ Docker
@@ -127,15 +121,6 @@ Flowise ใซใฏใ3 ใคใฎ็ฐใชใใขใธใฅใผใซใ 1 ใคใฎ mono ใชใใธใ
ใณใผใใฎๅคๆดใฏ [http://localhost:8080](http://localhost:8080) ใซ่ชๅ็ใซใขใใชใใชใญใผใใใพใ
-## ๐ ่ช่จผ
-
-ใขใใชใฌใใซใฎ่ช่จผใๆๅนใซใใใซใฏใ `FLOWISE_USERNAME` ใจ `FLOWISE_PASSWORD` ใ `packages/server` ใฎ `.env` ใใกใคใซใซ่ฟฝๅ ใใพใ:
-
-```
-FLOWISE_USERNAME=user
-FLOWISE_PASSWORD=1234
-```
-
## ๐ฑ ็ฐๅขๅคๆฐ
Flowise ใฏใใคใณในใฟใณในใ่จญๅฎใใใใใฎใใพใใพใช็ฐๅขๅคๆฐใใตใใผใใใฆใใพใใ`packages/server` ใใฉใซใๅ ใฎ `.env` ใใกใคใซใงไปฅไธใฎๅคๆฐใๆๅฎใใใใจใใงใใใ[็ถใ](https://github.com/FlowiseAI/Flowise/blob/main/CONTRIBUTING.md#-env-variables)ใ่ชญใ
@@ -197,9 +182,9 @@ Flowise ใฏใใคใณในใฟใณในใ่จญๅฎใใใใใฎใใพใใพใช็ฐๅขๅค
-[ใณใณใใชใใฅใผใใฃใณใฐใฌใคใ](CONTRIBUTING.md)ใๅ็ งใใฆใใ ใใใ่ณชๅใๅ้กใใใใฐใ[Discord](https://discord.gg/jbaHfsRVBW) ใพใงใ้ฃ็ตกใใ ใใใ
+[ใณใณใใชใใฅใผใใฃใณใฐใฌใคใ](../CONTRIBUTING.md)ใๅ็ งใใฆใใ ใใใ่ณชๅใๅ้กใใใใฐใ[Discord](https://discord.gg/jbaHfsRVBW) ใพใงใ้ฃ็ตกใใ ใใใ
[](https://star-history.com/#FlowiseAI/Flowise&Date)
## ๐ ใฉใคใปใณใน
-ใใฎใชใใธใใชใฎใฝใผในใณใผใใฏใ[Apache License Version 2.0](LICENSE.md)ใฎไธใงๅฉ็จๅฏ่ฝใงใใ
+ใใฎใชใใธใใชใฎใฝใผในใณใผใใฏใ[Apache License Version 2.0](../LICENSE.md)ใฎไธใงๅฉ็จๅฏ่ฝใงใใ
diff --git a/i18n/README-KR.md b/i18n/README-KR.md
index c02b0b066..7caaa01a4 100644
--- a/i18n/README-KR.md
+++ b/i18n/README-KR.md
@@ -31,12 +31,6 @@
npx flowise start
```
- ์ฌ์ฉ์ ์ด๋ฆ๊ณผ ๋น๋ฐ๋ฒํธ๋ก ์์ํ๊ธฐ
-
- ```bash
- npx flowise start --FLOWISE_USERNAME=user --FLOWISE_PASSWORD=1234
- ```
-
3. [http://localhost:3000](http://localhost:3000) URL ์ด๊ธฐ
## ๐ณ ๋์ปค(Docker)๋ฅผ ํ์ฉํ์ฌ ์์ํ๊ธฐ
@@ -127,15 +121,6 @@ Flowise๋ ๋จ์ผ ๋ฆฌํฌ์งํ ๋ฆฌ์ 3๊ฐ์ ์๋ก ๋ค๋ฅธ ๋ชจ๋์ด ์์ต๋
์ฝ๋๊ฐ ๋ณ๊ฒฝ๋๋ฉด [http://localhost:8080](http://localhost:8080)์์ ์๋์ผ๋ก ์ ํ๋ฆฌ์ผ์ด์ ์ ์๋ก๊ณ ์นจ ํฉ๋๋ค.
-## ๐ ์ธ์ฆ
-
-์ ํ๋ฆฌ์ผ์ด์ ์์ค์ ์ธ์ฆ์ ์ฌ์ฉํ๋ ค๋ฉด `packages/server`์ `.env` ํ์ผ์ `FLOWISE_USERNAME` ๋ฐ `FLOWISE_PASSWORD`๋ฅผ ์ถ๊ฐํฉ๋๋ค:
-
-```
-FLOWISE_USERNAME=user
-FLOWISE_PASSWORD=1234
-```
-
## ๐ฑ ํ๊ฒฝ ๋ณ์
Flowise๋ ์ธ์คํด์ค ๊ตฌ์ฑ์ ์ํ ๋ค์ํ ํ๊ฒฝ ๋ณ์๋ฅผ ์ง์ํฉ๋๋ค. `packages/server` ํด๋ ๋ด `.env` ํ์ผ์ ๋ค์ํ ํ๊ฒฝ ๋ณ์๋ฅผ ์ง์ ํ ์ ์์ต๋๋ค. [์์ธํ ๋ณด๊ธฐ](https://github.com/FlowiseAI/Flowise/blob/main/CONTRIBUTING.md#-env-variables)
@@ -197,9 +182,9 @@ Flowise๋ ์ธ์คํด์ค ๊ตฌ์ฑ์ ์ํ ๋ค์ํ ํ๊ฒฝ ๋ณ์๋ฅผ ์ง์ํฉ๋
-[contributing guide](CONTRIBUTING.md)๋ฅผ ์ดํด๋ณด์ธ์. ๋์ค์ฝ๋ [Discord](https://discord.gg/jbaHfsRVBW) ์ฑ๋์์๋ ์ด์๋ ์ง์์๋ต์ ์งํํ์ค ์ ์์ต๋๋ค.
+[contributing guide](../CONTRIBUTING.md)๋ฅผ ์ดํด๋ณด์ธ์. ๋์ค์ฝ๋ [Discord](https://discord.gg/jbaHfsRVBW) ์ฑ๋์์๋ ์ด์๋ ์ง์์๋ต์ ์งํํ์ค ์ ์์ต๋๋ค.
[](https://star-history.com/#FlowiseAI/Flowise&Date)
## ๐ ๋ผ์ด์ผ์ค
-๋ณธ ๋ฆฌํฌ์งํ ๋ฆฌ์ ์์ค์ฝ๋๋ [Apache License Version 2.0](LICENSE.md) ๋ผ์ด์ผ์ค๊ฐ ์ ์ฉ๋ฉ๋๋ค.
+๋ณธ ๋ฆฌํฌ์งํ ๋ฆฌ์ ์์ค์ฝ๋๋ [Apache License Version 2.0](../LICENSE.md) ๋ผ์ด์ผ์ค๊ฐ ์ ์ฉ๋ฉ๋๋ค.
diff --git a/i18n/README-TW.md b/i18n/README-TW.md
index f051e844e..c8fbfedbb 100644
--- a/i18n/README-TW.md
+++ b/i18n/README-TW.md
@@ -13,7 +13,7 @@
[English](../README.md) | ็น้ซไธญๆ | [็ฎไฝไธญๆ](./README-ZH.md) | [ๆฅๆฌ่ช](./README-JA.md) | [ํ๊ตญ์ด](./README-KR.md)
-
ๅฏ่ฆๅๅปบๆง AI/LLM ๆต็จ
+
ๅฏ่ฆๅๅปบ็ฝฎ AI/LLM ๆต็จ
@@ -31,28 +31,22 @@
npx flowise start
```
- ไฝฟ็จ็จๆถๅๅๅฏ็ขผ
-
- ```bash
- npx flowise start --FLOWISE_USERNAME=user --FLOWISE_PASSWORD=1234
- ```
-
3. ๆ้ [http://localhost:3000](http://localhost:3000)
## ๐ณ Docker
### Docker Compose
-1. ๅ ้ Flowise ้ ็ฎ
-2. ้ฒๅ ฅ้ ็ฎๆ น็ฎ้็ `docker` ๆไปถๅคพ
-3. ่ค่ฃฝ `.env.example` ๆไปถ๏ผ็ฒ่ฒผๅฐ็ธๅไฝ็ฝฎ๏ผไธฆ้ๅฝๅ็บ `.env` ๆไปถ
+1. ่ค่ฃฝ Flowise ๅฐๆก
+2. ้ฒๅ ฅๅฐๆกๆ น็ฎ้็ `docker` ่ณๆๅคพ
+3. ่ค่ฃฝ `.env.example` ๆไปถ๏ผ่ฒผๅฐ็ธๅไฝ็ฝฎ๏ผไธฆ้ๆฐๅฝๅ็บ `.env` ๆไปถ
4. `docker compose up -d`
5. ๆ้ [http://localhost:3000](http://localhost:3000)
-6. ๆจๅฏไปฅ้้ `docker compose stop` ๅๆญขๅฎนๅจ
+6. ๆจๅฏไปฅ้้ `docker compose stop` ๅๆญขๅฎนๅจ
### Docker ๆ ๅ
-1. ๆฌๅฐๆงๅปบๆ ๅ๏ผ
+1. ๆฌๅฐๅปบ็ฝฎๆ ๅ๏ผ
```bash
docker build --no-cache -t flowise .
```
@@ -69,7 +63,7 @@
## ๐จโ๐ป ้็ผ่
-Flowise ๅจๅฎๅ mono ๅญๅฒๅบซไธญๆ 3 ๅไธๅ็ๆจกๅกใ
+Flowise ๅจๅฎๅ mono ๅฒๅญๅบซไธญๆ 3 ๅไธๅ็ๆจก็ตใ
- `server`: ๆไพ API ้่ผฏ็ Node ๅพ็ซฏ
- `ui`: React ๅ็ซฏ
@@ -85,33 +79,33 @@ Flowise ๅจๅฎๅ mono ๅญๅฒๅบซไธญๆ 3 ๅไธๅ็ๆจกๅกใ
### ่จญ็ฝฎ
-1. ๅ ้ๅญๅฒๅบซ
+1. ่ค่ฃฝๅฒๅญๅบซ
```bash
git clone https://github.com/FlowiseAI/Flowise.git
```
-2. ้ฒๅ ฅๅญๅฒๅบซๆไปถๅคพ
+2. ้ฒๅ ฅๅฒๅญๅบซๆไปถๅคพ
```bash
cd Flowise
```
-3. ๅฎ่ฃๆๆๆจกๅก็ๆๆไพ่ณด้ ๏ผ
+3. ๅฎ่ฃๆๆๆจก็ต็ๆๆไพ่ณด้ ๏ผ
```bash
pnpm install
```
-4. ๆงๅปบๆๆไปฃ็ขผ๏ผ
+4. ๅปบ็ฝฎๆๆ็จๅผ็ขผ๏ผ
```bash
pnpm build
```
- ้ๅบไปฃ็ขผ 134๏ผJavaScript ๅ ๅ งๅญไธ่ถณ๏ผ
- ๅฆๆๅจ้่กไธ่ฟฐ `build` ่ ณๆฌๆ้ๅฐๆญค้ฏ่ชค๏ผ่ซๅ่ฉฆๅขๅ Node.js ๅ ๅคงๅฐไธฆ้ๆฐ้่ก่ ณๆฌ๏ผ
+ Exit code 134๏ผJavaScript heap out of memory๏ผ
+ ๅฆๆๅจ้่กไธ่ฟฐ `build` ่ ณๆฌๆ้ๅฐๆญค้ฏ่ชค๏ผ่ซๅ่ฉฆๅขๅ Node.js ไธญ็ Heap ่จๆถ้ซๅคงๅฐไธฆ้ๆฐ้่ก่ ณๆฌ๏ผ
export NODE_OPTIONS="--max-old-space-size=4096"
pnpm build
@@ -124,9 +118,9 @@ Flowise ๅจๅฎๅ mono ๅญๅฒๅบซไธญๆ 3 ๅไธๅ็ๆจกๅกใ
pnpm start
```
- ๆจ็พๅจๅฏไปฅ่จชๅ [http://localhost:3000](http://localhost:3000)
+ ๆจ็พๅจๅฏไปฅ้ๅ [http://localhost:3000](http://localhost:3000)
-6. ๅฐๆผ้็ผๆงๅปบ๏ผ
+6. ๅฐๆผ้็ผๅปบ็ฝฎ๏ผ
- ๅจ `packages/ui` ไธญๅตๅปบ `.env` ๆไปถไธฆๆๅฎ `VITE_PORT`๏ผๅ่ `.env.example`๏ผ
- ๅจ `packages/server` ไธญๅตๅปบ `.env` ๆไปถไธฆๆๅฎ `PORT`๏ผๅ่ `.env.example`๏ผ
@@ -136,28 +130,19 @@ Flowise ๅจๅฎๅ mono ๅญๅฒๅบซไธญๆ 3 ๅไธๅ็ๆจกๅกใ
pnpm dev
```
- ไปปไฝไปฃ็ขผๆดๆน้ฝๆ่ชๅ้ๆฐๅ ่ผๆ็จ็จๅบ [http://localhost:8080](http://localhost:8080)
+ ไปปไฝ็จๅผ็ขผๆดๆน้ฝๆ่ชๅ้ๆฐๅ ่ผๆ็จ็จๅผ [http://localhost:8080](http://localhost:8080)
-## ๐ ่ช่ญ
+## ๐ฑ ็ฐๅข่ฎๆธ
-่ฆๅ็จๆ็จ็ดๅฅ็่บซไปฝ้ฉ่ญ๏ผ่ซๅจ `packages/server` ไธญ็ `.env` ๆไปถไธญๆทปๅ `FLOWISE_USERNAME` ๅ `FLOWISE_PASSWORD`๏ผ
-
-```
-FLOWISE_USERNAME=user
-FLOWISE_PASSWORD=1234
-```
-
-## ๐ฑ ็ฐๅข่ฎ้
-
-Flowise ๆฏๆไธๅ็็ฐๅข่ฎ้ไพ้ ็ฝฎๆจ็ๅฏฆไพใๆจๅฏไปฅๅจ `packages/server` ๆไปถๅคพไธญ็ `.env` ๆไปถไธญๆๅฎไปฅไธ่ฎ้ใ้ฑ่ฎ [ๆดๅค](https://github.com/FlowiseAI/Flowise/blob/main/CONTRIBUTING.md#-env-variables)
+Flowise ๆฏๆไธๅ็็ฐๅข่ฎๆธไพ้ ็ฝฎๆจ็ๅฏฆไพใๆจๅฏไปฅๅจ `packages/server` ๆไปถๅคพไธญ็ `.env` ๆไปถไธญๆๅฎไปฅไธ่ฎๆธใ้ฑ่ฎ [ๆดๅค](https://github.com/FlowiseAI/Flowise/blob/main/CONTRIBUTING.md#-env-variables)
## ๐ ๆๆช
[Flowise ๆๆช](https://docs.flowiseai.com/)
-## ๐ ่ชๆๆ็ฎก
+## ๐ ่ช่กๆถ่จญ
-ๅจๆจ็พๆ็ๅบ็ค่จญๆฝไธญ้จ็ฝฒ Flowise ่ชๆๆ็ฎก๏ผๆๅๆฏๆๅ็จฎ [้จ็ฝฒ](https://docs.flowiseai.com/configuration/deployment)
+ๅจๆจ็พๆ็ๅบ็ค่จญๆฝไธญ้จ็ฝฒ Flowise๏ผๆๅๆฏๆๅ็จฎ่ช่กๆถ่จญ้ธ้ [้จ็ฝฒ](https://docs.flowiseai.com/configuration/deployment)
- [AWS](https://docs.flowiseai.com/configuration/deployment/aws)
- [Azure](https://docs.flowiseai.com/configuration/deployment/azure)
@@ -193,9 +178,9 @@ Flowise ๆฏๆไธๅ็็ฐๅข่ฎ้ไพ้ ็ฝฎๆจ็ๅฏฆไพใๆจๅฏไปฅๅจ `package
-## โ๏ธ Flowise ้ฒ
+## โ๏ธ Flowise ้ฒ็ซฏๅนณๅฐ
-[้ๅงไฝฟ็จ Flowise ้ฒ](https://flowiseai.com/)
+[้ๅงไฝฟ็จ Flowise ้ฒ็ซฏๅนณๅฐ](https://flowiseai.com/)
## ๐ ๆฏๆ
@@ -209,9 +194,9 @@ Flowise ๆฏๆไธๅ็็ฐๅข่ฎ้ไพ้ ็ฝฎๆจ็ๅฏฆไพใๆจๅฏไปฅๅจ `package
-่ซๅ้ฑ [่ฒข็ปๆๅ](CONTRIBUTING.md)ใๅฆๆๆจๆไปปไฝๅ้กๆๅ้ก๏ผ่ซ้้ [Discord](https://discord.gg/jbaHfsRVBW) ่ๆๅ่ฏ็นซใ
+่ซๅ้ฑ [่ฒข็ปๆๅ](../CONTRIBUTING.md)ใๅฆๆๆจๆไปปไฝๅ้กๆๅ้ก๏ผ่ซ้้ [Discord](https://discord.gg/jbaHfsRVBW) ่ๆๅ่ฏ็นซใ
[](https://star-history.com/#FlowiseAI/Flowise&Date)
## ๐ ่จฑๅฏ่ญ
-ๆญคๅญๅฒๅบซไธญ็ๆบไปฃ็ขผๆ นๆ [Apache ่จฑๅฏ่ญ็ๆฌ 2.0](LICENSE.md) ๆไพใ
+ๆญคๅฒๅญๅบซไธญ็ๅๅง็ขผๆ นๆ [Apache 2.0 ๆๆฌๆขๆฌพ](../LICENSE.md) ๆๆฌไฝฟ็จใ
diff --git a/i18n/README-ZH.md b/i18n/README-ZH.md
index 5f313fb32..d744d7392 100644
--- a/i18n/README-ZH.md
+++ b/i18n/README-ZH.md
@@ -31,12 +31,6 @@
npx flowise start
```
- ไฝฟ็จ็จๆทๅๅๅฏ็
-
- ```bash
- npx flowise start --FLOWISE_USERNAME=user --FLOWISE_PASSWORD=1234
- ```
-
3. ๆๅผ [http://localhost:3000](http://localhost:3000)
## ๐ณ Docker
@@ -127,15 +121,6 @@ Flowise ๅจไธไธชๅไธ็ไปฃ็ ๅบไธญๆ 3 ไธชไธๅ็ๆจกๅใ
ไปปไฝไปฃ็ ๆดๆน้ฝไผ่ชๅจ้ๆฐๅ ่ฝฝๅบ็จ็จๅบ๏ผ่ฎฟ้ฎ [http://localhost:8080](http://localhost:8080)
-## ๐ ่ฎค่ฏ
-
-่ฆๅฏ็จๅบ็จ็จๅบ็บง่บซไปฝ้ช่ฏ๏ผๅจ `packages/server` ็ `.env` ๆไปถไธญๆทปๅ `FLOWISE_USERNAME` ๅ `FLOWISE_PASSWORD`๏ผ
-
-```
-FLOWISE_USERNAME=user
-FLOWISE_PASSWORD=1234
-```
-
## ๐ฑ ็ฏๅขๅ้
Flowise ๆฏๆไธๅ็็ฏๅขๅ้ๆฅ้ ็ฝฎๆจ็ๅฎไพใๆจๅฏไปฅๅจ `packages/server` ๆไปถๅคนไธญ็ `.env` ๆไปถไธญๆๅฎไปฅไธๅ้ใไบ่งฃๆดๅคไฟกๆฏ๏ผ่ฏท้ ่ฏป[ๆๆกฃ](https://github.com/FlowiseAI/Flowise/blob/main/CONTRIBUTING.md#-env-variables)
@@ -197,8 +182,8 @@ Flowise ๆฏๆไธๅ็็ฏๅขๅ้ๆฅ้ ็ฝฎๆจ็ๅฎไพใๆจๅฏไปฅๅจ `package
-ๅ่ง[่ดก็ฎๆๅ](CONTRIBUTING.md)ใๅฆๆๆจๆไปปไฝ้ฎ้ขๆ้ฎ้ข๏ผ่ฏทๅจ[Discord](https://discord.gg/jbaHfsRVBW)ไธไธๆไปฌ่็ณปใ
+ๅ่ง[่ดก็ฎๆๅ](CONTRIBUTING-ZH.md)ใๅฆๆๆจๆไปปไฝ้ฎ้ขๆ้ฎ้ข๏ผ่ฏทๅจ[Discord](https://discord.gg/jbaHfsRVBW)ไธไธๆไปฌ่็ณปใ
## ๐ ่ฎธๅฏ่ฏ
-ๆญคไปฃ็ ๅบไธญ็ๆบไปฃ็ ๅจ[Apache License Version 2.0 ่ฎธๅฏ่ฏ](LICENSE.md)ไธๆไพใ
+ๆญคไปฃ็ ๅบไธญ็ๆบไปฃ็ ๅจ[Apache License Version 2.0 ่ฎธๅฏ่ฏ](../LICENSE.md)ไธๆไพใ
diff --git a/metrics/otel/compose.yaml b/metrics/otel/compose.yaml
index 4567588ff..974081979 100644
--- a/metrics/otel/compose.yaml
+++ b/metrics/otel/compose.yaml
@@ -1,15 +1,17 @@
-version: "2"
+version: '2'
services:
- otel-collector:
- image: otel/opentelemetry-collector-contrib
- command: ["--config=/etc/otelcol-contrib/config.yaml", "--feature-gates=-exporter.datadogexporter.DisableAPMStats", "${OTELCOL_ARGS}"]
- volumes:
- - ./otel.config.yml:/etc/otelcol-contrib/config.yaml
- ports:
- - 1888:1888 # pprof extension
- - 8888:8888 # Prometheus metrics exposed by the Collector
- - 8889:8889 # Prometheus exporter metrics
- - 13133:13133 # health_check extension
- - 4317:4317 # OTLP gRPC receiver
- - 4318:4318 # OTLP http receiver
- - 55679:55679 # zpages extension
+ otel-collector:
+ read_only: true
+ image: otel/opentelemetry-collector-contrib
+ command:
+ ['--config=/etc/otelcol-contrib/config.yaml', '--feature-gates=-exporter.datadogexporter.DisableAPMStats', '${OTELCOL_ARGS}']
+ volumes:
+ - ./otel.config.yml:/etc/otelcol-contrib/config.yaml
+ ports:
+ - 1888:1888 # pprof extension
+ - 8888:8888 # Prometheus metrics exposed by the Collector
+ - 8889:8889 # Prometheus exporter metrics
+ - 13133:13133 # health_check extension
+ - 4317:4317 # OTLP gRPC receiver
+ - 4318:4318 # OTLP http receiver
+ - 55679:55679 # zpages extension
diff --git a/package.json b/package.json
index f7855fef5..9ee93d127 100644
--- a/package.json
+++ b/package.json
@@ -1,6 +1,6 @@
{
"name": "flowise",
- "version": "3.0.0",
+ "version": "3.0.11",
"private": true,
"homepage": "https://flowiseai.com",
"workspaces": [
@@ -20,6 +20,10 @@
"start-worker": "run-script-os",
"start-worker:windows": "cd packages/server/bin && run worker",
"start-worker:default": "cd packages/server/bin && ./run worker",
+ "user": "run-script-os",
+ "user:windows": "cd packages/server/bin && run user",
+ "user:default": "cd packages/server/bin && ./run user",
+ "test": "turbo run test",
"clean": "pnpm --filter \"./packages/**\" clean",
"nuke": "pnpm --filter \"./packages/**\" nuke && rimraf node_modules .turbo",
"format": "prettier --write \"**/*.{ts,tsx,md}\"",
@@ -62,20 +66,26 @@
"sqlite3"
],
"overrides": {
- "axios": "1.7.9",
+ "axios": "1.12.0",
"body-parser": "2.0.2",
"braces": "3.0.3",
"cross-spawn": "7.0.6",
+ "form-data": "4.0.4",
"glob-parent": "6.0.2",
"http-proxy-middleware": "3.0.3",
"json5": "2.2.3",
"nth-check": "2.1.1",
"path-to-regexp": "0.1.12",
"prismjs": "1.29.0",
+ "rollup": "4.45.0",
"semver": "7.7.1",
"set-value": "4.1.0",
+ "solid-js": "1.9.7",
+ "tar-fs": "3.1.0",
"unset-value": "2.0.1",
- "webpack-dev-middleware": "7.4.2"
+ "webpack-dev-middleware": "7.4.2",
+ "ws": "8.18.3",
+ "xlsx": "https://cdn.sheetjs.com/xlsx-0.20.3/xlsx-0.20.3.tgz"
}
},
"engines": {
@@ -85,7 +95,7 @@
"resolutions": {
"@google/generative-ai": "^0.24.0",
"@grpc/grpc-js": "^1.10.10",
- "@langchain/core": "0.3.37",
+ "@langchain/core": "0.3.61",
"@qdrant/openapi-typescript-fetch": "1.2.6",
"openai": "4.96.0",
"protobufjs": "7.4.0"
diff --git a/packages/api-documentation/package.json b/packages/api-documentation/package.json
index 780920f7c..849a10a29 100644
--- a/packages/api-documentation/package.json
+++ b/packages/api-documentation/package.json
@@ -1,6 +1,6 @@
{
"name": "flowise-api",
- "version": "1.0.2",
+ "version": "1.0.3",
"description": "Flowise API documentation server",
"scripts": {
"build": "tsc",
diff --git a/packages/api-documentation/src/yml/swagger.yml b/packages/api-documentation/src/yml/swagger.yml
index 21b8f1dd0..00cf975a1 100644
--- a/packages/api-documentation/src/yml/swagger.yml
+++ b/packages/api-documentation/src/yml/swagger.yml
@@ -1216,15 +1216,18 @@ paths:
security:
- bearerAuth: []
operationId: createPrediction
- summary: Create a new prediction
- description: Create a new prediction
+ summary: Send message to flow and get AI response
+ description: |
+ Send a message to your flow and receive an AI-generated response. This is the primary endpoint for interacting with your flows and assistants.
+ **Authentication**: API key may be required depending on flow settings.
parameters:
- in: path
name: id
required: true
schema:
type: string
- description: Chatflow ID
+ description: Flow ID - the unique identifier of your flow
+ example: 'your-flow-id'
requestBody:
content:
application/json:
@@ -1236,24 +1239,36 @@ paths:
properties:
question:
type: string
- description: Question to ask during the prediction process
+ description: Question/message to send to the flow
+ example: 'Analyze this uploaded file and summarize its contents'
files:
type: array
items:
type: string
format: binary
- description: Files to be uploaded
- modelName:
+ description: Files to be uploaded (images, audio, documents, etc.)
+ streaming:
+ type: boolean
+ description: Enable streaming responses
+ default: false
+ overrideConfig:
type: string
- nullable: true
- example: ''
- description: Other override configurations
+ description: JSON string of configuration overrides
+ example: '{"sessionId":"user-123","temperature":0.7}'
+ history:
+ type: string
+ description: JSON string of conversation history
+ example: '[{"role":"userMessage","content":"Hello"},{"role":"apiMessage","content":"Hi there!"}]'
+ humanInput:
+ type: string
+ description: JSON string of human input for resuming execution
+ example: '{"type":"proceed","feedback":"Continue with the plan"}'
required:
- question
required: true
responses:
'200':
- description: Prediction created successfully
+ description: Successful prediction response
content:
application/json:
schema:
@@ -1261,45 +1276,106 @@ paths:
properties:
text:
type: string
- description: The result of the prediction
+ description: The AI-generated response text
+ example: 'Artificial intelligence (AI) is a branch of computer science that focuses on creating systems capable of performing tasks that typically require human intelligence.'
json:
type: object
- description: The result of the prediction in JSON format if available
+ description: The result in JSON format if available (for structured outputs)
+ nullable: true
question:
type: string
- description: The question asked during the prediction process
+ description: The original question/message sent to the flow
+ example: 'What is artificial intelligence?'
chatId:
type: string
- description: The chat ID associated with the prediction
+ description: Unique identifier for the chat session
+ example: 'chat-12345'
chatMessageId:
type: string
- description: The chat message ID associated with the prediction
+ description: Unique identifier for this specific message
+ example: 'msg-67890'
sessionId:
type: string
- description: The session ID associated with the prediction
+ description: Session identifier for conversation continuity
+ example: 'user-session-123'
+ nullable: true
memoryType:
type: string
- description: The memory type associated with the prediction
+ description: Type of memory used for conversation context
+ example: 'Buffer Memory'
+ nullable: true
sourceDocuments:
type: array
+ description: Documents retrieved from vector store (if RAG is enabled)
items:
$ref: '#/components/schemas/Document'
+ nullable: true
usedTools:
type: array
+ description: Tools that were invoked during the response generation
items:
$ref: '#/components/schemas/UsedTool'
- fileAnnotations:
- type: array
- items:
- $ref: '#/components/schemas/FileAnnotation'
+ nullable: true
'400':
- description: Invalid input provided
+ description: Bad Request - Invalid input provided or request format is incorrect
+ content:
+ application/json:
+ schema:
+ type: object
+ properties:
+ error:
+ type: string
+ example: 'Invalid request format. Check required fields and parameter types.'
+ '401':
+ description: Unauthorized - API key required or invalid
+ content:
+ application/json:
+ schema:
+ type: object
+ properties:
+ error:
+ type: string
+ example: 'Unauthorized access. Please verify your API key.'
'404':
- description: Chatflow not found
+ description: Not Found - Chatflow with specified ID does not exist
+ content:
+ application/json:
+ schema:
+ type: object
+ properties:
+ error:
+ type: string
+ example: 'Chatflow not found. Please verify the chatflow ID.'
+ '413':
+ description: Payload Too Large - Request payload exceeds size limits
+ content:
+ application/json:
+ schema:
+ type: object
+ properties:
+ error:
+ type: string
+ example: 'Request payload too large. Please reduce file sizes or split large requests.'
'422':
- description: Validation error
+ description: Validation Error - Request validation failed
+ content:
+ application/json:
+ schema:
+ type: object
+ properties:
+ error:
+ type: string
+ example: 'Validation failed. Check parameter requirements and data types.'
'500':
- description: Internal server error
+ description: Internal Server Error - Flow configuration or execution error
+ content:
+ application/json:
+ schema:
+ type: object
+ properties:
+ error:
+ type: string
+ example: 'Internal server error. Check flow configuration and node settings.'
/tools:
post:
tags:
@@ -2011,13 +2087,33 @@ components:
properties:
question:
type: string
- description: The question being asked
+ description: The question/message to send to the flow
+ example: 'What is artificial intelligence?'
+ form:
+ type: object
+ description: The form object to send to the flow (alternative to question for Agentflow V2)
+ additionalProperties: true
+ example:
+ title: 'Example'
+ count: 1
+ streaming:
+ type: boolean
+ description: Enable streaming responses for real-time output
+ default: false
+ example: false
overrideConfig:
type: object
- description: The configuration to override the default prediction settings (optional)
+ description: Override flow configuration and pass variables at runtime
+ additionalProperties: true
+ example:
+ sessionId: 'user-session-123'
+ temperature: 0.7
+ maxTokens: 500
+ vars:
+ user_name: 'Alice'
history:
type: array
- description: The history messages to be prepended (optional)
+ description: Previous conversation messages for context
items:
type: object
properties:
@@ -2030,8 +2126,14 @@ components:
type: string
description: The content of the message
example: 'Hello, how can I help you?'
+ example:
+ - role: 'apiMessage'
+ content: "Hello! I'm an AI assistant. How can I help you today?"
+ - role: 'userMessage'
+ content: "Hi, my name is Sarah and I'm learning about AI"
uploads:
type: array
+ description: Files to upload (images, audio, documents, etc.)
items:
type: object
properties:
@@ -2051,7 +2153,42 @@ components:
mime:
type: string
description: The MIME type of the file or resource
+ enum:
+ [
+ 'image/png',
+ 'image/jpeg',
+ 'image/jpg',
+ 'image/gif',
+ 'image/webp',
+ 'audio/mp4',
+ 'audio/webm',
+ 'audio/wav',
+ 'audio/mpeg',
+ 'audio/ogg',
+ 'audio/aac'
+ ]
example: 'image/png'
+ example:
+ - type: 'file'
+ name: 'example.png'
+ data: 'data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABgAAAAYCAYAAADgdz34AAABjElEQVRIS+2Vv0oDQRDG'
+ mime: 'image/png'
+ humanInput:
+ type: object
+ description: Return human feedback and resume execution from a stopped checkpoint
+ properties:
+ type:
+ type: string
+ enum: [proceed, reject]
+ description: Type of human input response
+ example: 'reject'
+ feedback:
+ type: string
+ description: Feedback to the last output
+ example: 'Include more emoji'
+ example:
+ type: 'reject'
+ feedback: 'Include more emoji'
Tool:
type: object
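The expanded spec documents `streaming`, `overrideConfig`, `history`, and `humanInput` on the prediction endpoint. A minimal sketch of a client call exercising the new fields, assuming a local instance on port 3000, the standard `/api/v1/prediction/{id}` route, and placeholder flow ID and API key (run inside an ES module or async function):

```ts
// Illustrative prediction request; the flow ID and API key below are placeholders.
const response = await fetch('http://localhost:3000/api/v1/prediction/your-flow-id', {
    method: 'POST',
    headers: {
        'Content-Type': 'application/json',
        Authorization: 'Bearer <your-api-key>' // only needed if the flow requires an API key
    },
    body: JSON.stringify({
        question: 'What is artificial intelligence?',
        streaming: false,
        overrideConfig: { sessionId: 'user-session-123', temperature: 0.7 },
        history: [
            { role: 'apiMessage', content: 'Hello! How can I help you today?' },
            { role: 'userMessage', content: 'Hi, I am learning about AI' }
        ]
    })
})
if (!response.ok) throw new Error(`Prediction failed: HTTP ${response.status}`)
const result = await response.json()
console.log(result.text) // the AI-generated response documented in the 200 schema
```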
diff --git a/packages/components/credentials/AgentflowApi.credential.ts b/packages/components/credentials/AgentflowApi.credential.ts
new file mode 100644
index 000000000..72f4cefe5
--- /dev/null
+++ b/packages/components/credentials/AgentflowApi.credential.ts
@@ -0,0 +1,23 @@
+import { INodeParams, INodeCredential } from '../src/Interface'
+
+class AgentflowApi implements INodeCredential {
+ label: string
+ name: string
+ version: number
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'Agentflow API'
+ this.name = 'agentflowApi'
+ this.version = 1.0
+ this.inputs = [
+ {
+                label: 'Agentflow API Key',
+ name: 'agentflowApiKey',
+ type: 'password'
+ }
+ ]
+ }
+}
+
+module.exports = { credClass: AgentflowApi }
diff --git a/packages/components/credentials/CometApi.credential.ts b/packages/components/credentials/CometApi.credential.ts
new file mode 100644
index 000000000..58ec60610
--- /dev/null
+++ b/packages/components/credentials/CometApi.credential.ts
@@ -0,0 +1,23 @@
+import { INodeCredential, INodeParams } from '../src/Interface'
+
+class CometApi implements INodeCredential {
+ label: string
+ name: string
+ version: number
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'Comet API'
+ this.name = 'cometApi'
+ this.version = 1.0
+ this.inputs = [
+ {
+ label: 'Comet API Key',
+ name: 'cometApiKey',
+ type: 'password'
+ }
+ ]
+ }
+}
+
+module.exports = { credClass: CometApi }
diff --git a/packages/components/credentials/ElevenLabsApi.credential.ts b/packages/components/credentials/ElevenLabsApi.credential.ts
new file mode 100644
index 000000000..89a8b275d
--- /dev/null
+++ b/packages/components/credentials/ElevenLabsApi.credential.ts
@@ -0,0 +1,26 @@
+import { INodeParams, INodeCredential } from '../src/Interface'
+
+class ElevenLabsApi implements INodeCredential {
+ label: string
+ name: string
+ version: number
+ description: string
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'Eleven Labs API'
+ this.name = 'elevenLabsApi'
+ this.version = 1.0
+ this.description =
+            'Sign up for an Eleven Labs account and create an API Key.'
+ this.inputs = [
+ {
+ label: 'Eleven Labs API Key',
+ name: 'elevenLabsApiKey',
+ type: 'password'
+ }
+ ]
+ }
+}
+
+module.exports = { credClass: ElevenLabsApi }
diff --git a/packages/components/credentials/GmailOAuth2.credential.ts b/packages/components/credentials/GmailOAuth2.credential.ts
new file mode 100644
index 000000000..38d23a154
--- /dev/null
+++ b/packages/components/credentials/GmailOAuth2.credential.ts
@@ -0,0 +1,63 @@
+import { INodeParams, INodeCredential } from '../src/Interface'
+const scopes = [
+ 'https://www.googleapis.com/auth/gmail.readonly',
+ 'https://www.googleapis.com/auth/gmail.compose',
+ 'https://www.googleapis.com/auth/gmail.modify',
+ 'https://www.googleapis.com/auth/gmail.labels'
+]
+
+class GmailOAuth2 implements INodeCredential {
+ label: string
+ name: string
+ version: number
+ inputs: INodeParams[]
+ description: string
+
+ constructor() {
+ this.label = 'Gmail OAuth2'
+ this.name = 'gmailOAuth2'
+ this.version = 1.0
+ this.description =
+ 'You can find the setup instructions here'
+ this.inputs = [
+ {
+ label: 'Authorization URL',
+ name: 'authorizationUrl',
+ type: 'string',
+ default: 'https://accounts.google.com/o/oauth2/v2/auth'
+ },
+ {
+ label: 'Access Token URL',
+ name: 'accessTokenUrl',
+ type: 'string',
+ default: 'https://oauth2.googleapis.com/token'
+ },
+ {
+ label: 'Client ID',
+ name: 'clientId',
+ type: 'string'
+ },
+ {
+ label: 'Client Secret',
+ name: 'clientSecret',
+ type: 'password'
+ },
+ {
+ label: 'Additional Parameters',
+ name: 'additionalParameters',
+ type: 'string',
+ default: 'access_type=offline&prompt=consent',
+ hidden: true
+ },
+ {
+ label: 'Scope',
+ name: 'scope',
+ type: 'string',
+ hidden: true,
+ default: scopes.join(' ')
+ }
+ ]
+ }
+}
+
+module.exports = { credClass: GmailOAuth2 }
diff --git a/packages/components/credentials/GoogleCalendarOAuth2.credential.ts b/packages/components/credentials/GoogleCalendarOAuth2.credential.ts
new file mode 100644
index 000000000..5792067a3
--- /dev/null
+++ b/packages/components/credentials/GoogleCalendarOAuth2.credential.ts
@@ -0,0 +1,58 @@
+import { INodeParams, INodeCredential } from '../src/Interface'
+const scopes = ['https://www.googleapis.com/auth/calendar', 'https://www.googleapis.com/auth/calendar.events']
+
+class GoogleCalendarOAuth2 implements INodeCredential {
+ label: string
+ name: string
+ version: number
+ inputs: INodeParams[]
+ description: string
+
+ constructor() {
+ this.label = 'Google Calendar OAuth2'
+ this.name = 'googleCalendarOAuth2'
+ this.version = 1.0
+ this.description =
+ 'You can find the setup instructions here'
+ this.inputs = [
+ {
+ label: 'Authorization URL',
+ name: 'authorizationUrl',
+ type: 'string',
+ default: 'https://accounts.google.com/o/oauth2/v2/auth'
+ },
+ {
+ label: 'Access Token URL',
+ name: 'accessTokenUrl',
+ type: 'string',
+ default: 'https://oauth2.googleapis.com/token'
+ },
+ {
+ label: 'Client ID',
+ name: 'clientId',
+ type: 'string'
+ },
+ {
+ label: 'Client Secret',
+ name: 'clientSecret',
+ type: 'password'
+ },
+ {
+ label: 'Additional Parameters',
+ name: 'additionalParameters',
+ type: 'string',
+ default: 'access_type=offline&prompt=consent',
+ hidden: true
+ },
+ {
+ label: 'Scope',
+ name: 'scope',
+ type: 'string',
+ hidden: true,
+ default: scopes.join(' ')
+ }
+ ]
+ }
+}
+
+module.exports = { credClass: GoogleCalendarOAuth2 }
diff --git a/packages/components/credentials/GoogleDocsOAuth2.credential.ts b/packages/components/credentials/GoogleDocsOAuth2.credential.ts
new file mode 100644
index 000000000..24cb5d6d5
--- /dev/null
+++ b/packages/components/credentials/GoogleDocsOAuth2.credential.ts
@@ -0,0 +1,62 @@
+import { INodeParams, INodeCredential } from '../src/Interface'
+const scopes = [
+ 'https://www.googleapis.com/auth/documents',
+ 'https://www.googleapis.com/auth/drive',
+ 'https://www.googleapis.com/auth/drive.file'
+]
+
+class GoogleDocsOAuth2 implements INodeCredential {
+ label: string
+ name: string
+ version: number
+ inputs: INodeParams[]
+ description: string
+
+ constructor() {
+ this.label = 'Google Docs OAuth2'
+ this.name = 'googleDocsOAuth2'
+ this.version = 1.0
+ this.description =
+ 'You can find the setup instructions here'
+ this.inputs = [
+ {
+ label: 'Authorization URL',
+ name: 'authorizationUrl',
+ type: 'string',
+ default: 'https://accounts.google.com/o/oauth2/v2/auth'
+ },
+ {
+ label: 'Access Token URL',
+ name: 'accessTokenUrl',
+ type: 'string',
+ default: 'https://oauth2.googleapis.com/token'
+ },
+ {
+ label: 'Client ID',
+ name: 'clientId',
+ type: 'string'
+ },
+ {
+ label: 'Client Secret',
+ name: 'clientSecret',
+ type: 'password'
+ },
+ {
+ label: 'Additional Parameters',
+ name: 'additionalParameters',
+ type: 'string',
+ default: 'access_type=offline&prompt=consent',
+ hidden: true
+ },
+ {
+ label: 'Scope',
+ name: 'scope',
+ type: 'string',
+ hidden: true,
+ default: scopes.join(' ')
+ }
+ ]
+ }
+}
+
+module.exports = { credClass: GoogleDocsOAuth2 }
diff --git a/packages/components/credentials/GoogleDriveOAuth2.credential.ts b/packages/components/credentials/GoogleDriveOAuth2.credential.ts
new file mode 100644
index 000000000..de027a8e4
--- /dev/null
+++ b/packages/components/credentials/GoogleDriveOAuth2.credential.ts
@@ -0,0 +1,62 @@
+import { INodeParams, INodeCredential } from '../src/Interface'
+const scopes = [
+ 'https://www.googleapis.com/auth/drive',
+ 'https://www.googleapis.com/auth/drive.appdata',
+ 'https://www.googleapis.com/auth/drive.photos.readonly'
+]
+
+class GoogleDriveOAuth2 implements INodeCredential {
+ label: string
+ name: string
+ version: number
+ inputs: INodeParams[]
+ description: string
+
+ constructor() {
+ this.label = 'Google Drive OAuth2'
+ this.name = 'googleDriveOAuth2'
+ this.version = 1.0
+ this.description =
+ 'You can find the setup instructions here'
+ this.inputs = [
+ {
+ label: 'Authorization URL',
+ name: 'authorizationUrl',
+ type: 'string',
+ default: 'https://accounts.google.com/o/oauth2/v2/auth'
+ },
+ {
+ label: 'Access Token URL',
+ name: 'accessTokenUrl',
+ type: 'string',
+ default: 'https://oauth2.googleapis.com/token'
+ },
+ {
+ label: 'Client ID',
+ name: 'clientId',
+ type: 'string'
+ },
+ {
+ label: 'Client Secret',
+ name: 'clientSecret',
+ type: 'password'
+ },
+ {
+ label: 'Additional Parameters',
+ name: 'additionalParameters',
+ type: 'string',
+ default: 'access_type=offline&prompt=consent',
+ hidden: true
+ },
+ {
+ label: 'Scope',
+ name: 'scope',
+ type: 'string',
+ hidden: true,
+ default: scopes.join(' ')
+ }
+ ]
+ }
+}
+
+module.exports = { credClass: GoogleDriveOAuth2 }
diff --git a/packages/components/credentials/GoogleSheetsOAuth2.credential.ts b/packages/components/credentials/GoogleSheetsOAuth2.credential.ts
new file mode 100644
index 000000000..3e2147922
--- /dev/null
+++ b/packages/components/credentials/GoogleSheetsOAuth2.credential.ts
@@ -0,0 +1,62 @@
+import { INodeParams, INodeCredential } from '../src/Interface'
+const scopes = [
+ 'https://www.googleapis.com/auth/drive.file',
+ 'https://www.googleapis.com/auth/spreadsheets',
+ 'https://www.googleapis.com/auth/drive.metadata'
+]
+
+class GoogleSheetsOAuth2 implements INodeCredential {
+ label: string
+ name: string
+ version: number
+ inputs: INodeParams[]
+ description: string
+
+ constructor() {
+ this.label = 'Google Sheets OAuth2'
+ this.name = 'googleSheetsOAuth2'
+ this.version = 1.0
+ this.description =
+ 'You can find the setup instructions here'
+ this.inputs = [
+ {
+ label: 'Authorization URL',
+ name: 'authorizationUrl',
+ type: 'string',
+ default: 'https://accounts.google.com/o/oauth2/v2/auth'
+ },
+ {
+ label: 'Access Token URL',
+ name: 'accessTokenUrl',
+ type: 'string',
+ default: 'https://oauth2.googleapis.com/token'
+ },
+ {
+ label: 'Client ID',
+ name: 'clientId',
+ type: 'string'
+ },
+ {
+ label: 'Client Secret',
+ name: 'clientSecret',
+ type: 'password'
+ },
+ {
+ label: 'Additional Parameters',
+ name: 'additionalParameters',
+ type: 'string',
+ default: 'access_type=offline&prompt=consent',
+ hidden: true
+ },
+ {
+ label: 'Scope',
+ name: 'scope',
+ type: 'string',
+ hidden: true,
+ default: scopes.join(' ')
+ }
+ ]
+ }
+}
+
+module.exports = { credClass: GoogleSheetsOAuth2 }
diff --git a/packages/components/credentials/MicrosoftOutlookOAuth2.credential.ts b/packages/components/credentials/MicrosoftOutlookOAuth2.credential.ts
new file mode 100644
index 000000000..0308969a4
--- /dev/null
+++ b/packages/components/credentials/MicrosoftOutlookOAuth2.credential.ts
@@ -0,0 +1,66 @@
+import { INodeParams, INodeCredential } from '../src/Interface'
+
+const scopes = [
+ 'openid',
+ 'offline_access',
+ 'Contacts.Read',
+ 'Contacts.ReadWrite',
+ 'Calendars.Read',
+ 'Calendars.Read.Shared',
+ 'Calendars.ReadWrite',
+ 'Mail.Read',
+ 'Mail.ReadWrite',
+ 'Mail.ReadWrite.Shared',
+ 'Mail.Send',
+ 'Mail.Send.Shared',
+ 'MailboxSettings.Read'
+]
+
+class MsoftOutlookOAuth2 implements INodeCredential {
+ label: string
+ name: string
+ version: number
+ description: string
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'Microsoft Outlook OAuth2'
+ this.name = 'microsoftOutlookOAuth2'
+ this.version = 1.0
+ this.description =
+ 'You can find the setup instructions here'
+ this.inputs = [
+ {
+ label: 'Authorization URL',
+ name: 'authorizationUrl',
+ type: 'string',
+ default: 'https://login.microsoftonline.com//oauth2/v2.0/authorize'
+ },
+ {
+ label: 'Access Token URL',
+ name: 'accessTokenUrl',
+ type: 'string',
+ default: 'https://login.microsoftonline.com//oauth2/v2.0/token'
+ },
+ {
+ label: 'Client ID',
+ name: 'clientId',
+ type: 'string'
+ },
+ {
+ label: 'Client Secret',
+ name: 'clientSecret',
+ type: 'password'
+ },
+ {
+ label: 'Scope',
+ name: 'scope',
+ type: 'string',
+ hidden: true,
+ default: scopes.join(' ')
+ }
+ ]
+ }
+}
+
+module.exports = { credClass: MsoftOutlookOAuth2 }
diff --git a/packages/components/credentials/MicrosoftTeamsOAuth2.credential.ts b/packages/components/credentials/MicrosoftTeamsOAuth2.credential.ts
new file mode 100644
index 000000000..ffda846ae
--- /dev/null
+++ b/packages/components/credentials/MicrosoftTeamsOAuth2.credential.ts
@@ -0,0 +1,87 @@
+import { INodeParams, INodeCredential } from '../src/Interface'
+
+// Comprehensive scopes for Microsoft Teams operations
+const scopes = [
+ // Basic authentication
+ 'openid',
+ 'offline_access',
+
+ // User permissions
+ 'User.Read',
+ 'User.ReadWrite.All',
+
+ // Teams and Groups
+ 'Group.ReadWrite.All',
+ 'Team.ReadBasic.All',
+ 'Team.Create',
+ 'TeamMember.ReadWrite.All',
+
+ // Channels
+ 'Channel.ReadBasic.All',
+ 'Channel.Create',
+ 'Channel.Delete.All',
+ 'ChannelMember.ReadWrite.All',
+
+ // Chat operations
+ 'Chat.ReadWrite',
+ 'Chat.Create',
+ 'ChatMember.ReadWrite',
+
+ // Messages
+ 'ChatMessage.Send',
+ 'ChatMessage.Read',
+ 'ChannelMessage.Send',
+ 'ChannelMessage.Read.All',
+
+ // Reactions and advanced features
+ 'TeamsActivity.Send'
+]
+
+class MsoftTeamsOAuth2 implements INodeCredential {
+ label: string
+ name: string
+ version: number
+ inputs: INodeParams[]
+ description: string
+
+ constructor() {
+ this.label = 'Microsoft Teams OAuth2'
+ this.name = 'microsoftTeamsOAuth2'
+ this.version = 1.0
+ this.description =
+ 'You can find the setup instructions here'
+ this.inputs = [
+ {
+ label: 'Authorization URL',
+ name: 'authorizationUrl',
+ type: 'string',
+ default: 'https://login.microsoftonline.com//oauth2/v2.0/authorize'
+ },
+ {
+ label: 'Access Token URL',
+ name: 'accessTokenUrl',
+ type: 'string',
+ default: 'https://login.microsoftonline.com//oauth2/v2.0/token'
+ },
+ {
+ label: 'Client ID',
+ name: 'clientId',
+ type: 'string'
+ },
+ {
+ label: 'Client Secret',
+ name: 'clientSecret',
+ type: 'password'
+ },
+ {
+ label: 'Scope',
+ name: 'scope',
+ type: 'string',
+ hidden: true,
+ default: scopes.join(' ')
+ }
+ ]
+ }
+}
+
+module.exports = { credClass: MsoftTeamsOAuth2 }
diff --git a/packages/components/credentials/OxylabsApi.credential.ts b/packages/components/credentials/OxylabsApi.credential.ts
new file mode 100644
index 000000000..4ecce3c8e
--- /dev/null
+++ b/packages/components/credentials/OxylabsApi.credential.ts
@@ -0,0 +1,30 @@
+import { INodeParams, INodeCredential } from '../src/Interface'
+
+class OxylabsApiCredential implements INodeCredential {
+ label: string
+ name: string
+ version: number
+ description: string
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'Oxylabs API'
+ this.name = 'oxylabsApi'
+ this.version = 1.0
+        this.description = 'Authenticate with your Oxylabs account username and password.'
+ this.inputs = [
+ {
+ label: 'Oxylabs Username',
+ name: 'username',
+ type: 'string'
+ },
+ {
+ label: 'Oxylabs Password',
+ name: 'password',
+ type: 'password'
+ }
+ ]
+ }
+}
+
+module.exports = { credClass: OxylabsApiCredential }
diff --git a/packages/components/credentials/SambanovaApi.credential.ts b/packages/components/credentials/SambanovaApi.credential.ts
new file mode 100644
index 000000000..60a7e13d8
--- /dev/null
+++ b/packages/components/credentials/SambanovaApi.credential.ts
@@ -0,0 +1,23 @@
+import { INodeParams, INodeCredential } from '../src/Interface'
+
+class SambanovaApi implements INodeCredential {
+ label: string
+ name: string
+ version: number
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'Sambanova API'
+ this.name = 'sambanovaApi'
+ this.version = 1.0
+ this.inputs = [
+ {
+                label: 'Sambanova API Key',
+ name: 'sambanovaApiKey',
+ type: 'password'
+ }
+ ]
+ }
+}
+
+module.exports = { credClass: SambanovaApi }
diff --git a/packages/components/credentials/TeradataBearerToken.credential.ts b/packages/components/credentials/TeradataBearerToken.credential.ts
new file mode 100644
index 000000000..d0a863041
--- /dev/null
+++ b/packages/components/credentials/TeradataBearerToken.credential.ts
@@ -0,0 +1,26 @@
+import { INodeParams, INodeCredential } from '../src/Interface'
+
+class TeradataBearerTokenCredential implements INodeCredential {
+ label: string
+ name: string
+ description: string
+ version: number
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'Teradata Bearer Token'
+ this.name = 'teradataBearerToken'
+ this.version = 1.0
+ this.description =
+            'Refer to the official guide on how to get a Teradata Bearer Token'
+ this.inputs = [
+ {
+ label: 'Token',
+ name: 'token',
+ type: 'password'
+ }
+ ]
+ }
+}
+
+module.exports = { credClass: TeradataBearerTokenCredential }
diff --git a/packages/components/credentials/TeradataTD2.credential.ts b/packages/components/credentials/TeradataTD2.credential.ts
new file mode 100644
index 000000000..ae3d8f042
--- /dev/null
+++ b/packages/components/credentials/TeradataTD2.credential.ts
@@ -0,0 +1,28 @@
+import { INodeParams, INodeCredential } from '../src/Interface'
+
+class TeradataTD2Credential implements INodeCredential {
+ label: string
+ name: string
+ version: number
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'Teradata TD2 Auth'
+ this.name = 'teradataTD2Auth'
+ this.version = 1.0
+ this.inputs = [
+ {
+ label: 'Teradata TD2 Auth Username',
+ name: 'tdUsername',
+ type: 'string'
+ },
+ {
+ label: 'Teradata TD2 Auth Password',
+ name: 'tdPassword',
+ type: 'password'
+ }
+ ]
+ }
+}
+
+module.exports = { credClass: TeradataTD2Credential }
diff --git a/packages/components/credentials/TeradataVectorStoreApi.credential.ts b/packages/components/credentials/TeradataVectorStoreApi.credential.ts
new file mode 100644
index 000000000..9f613cf6c
--- /dev/null
+++ b/packages/components/credentials/TeradataVectorStoreApi.credential.ts
@@ -0,0 +1,47 @@
+import { INodeParams, INodeCredential } from '../src/Interface'
+
+class TeradataVectorStoreApiCredentials implements INodeCredential {
+ label: string
+ name: string
+ version: number
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'Teradata Vector Store API Credentials'
+ this.name = 'teradataVectorStoreApiCredentials'
+ this.version = 1.0
+ this.inputs = [
+ {
+ label: 'Teradata Host IP',
+ name: 'tdHostIp',
+ type: 'string'
+ },
+ {
+ label: 'Username',
+ name: 'tdUsername',
+ type: 'string'
+ },
+ {
+ label: 'Password',
+ name: 'tdPassword',
+ type: 'password'
+ },
+ {
+                label: 'Vector Store Base URL',
+ name: 'baseURL',
+ description: 'Teradata Vector Store Base URL',
+ placeholder: `Base_URL`,
+ type: 'string'
+ },
+ {
+ label: 'JWT Token',
+ name: 'jwtToken',
+ type: 'password',
+ description: 'Bearer token for JWT authentication',
+ optional: true
+ }
+ ]
+ }
+}
+
+module.exports = { credClass: TeradataVectorStoreApiCredentials }
diff --git a/packages/components/evaluation/EvaluationRunTracer.ts b/packages/components/evaluation/EvaluationRunTracer.ts
new file mode 100644
index 000000000..ce286eb52
--- /dev/null
+++ b/packages/components/evaluation/EvaluationRunTracer.ts
@@ -0,0 +1,165 @@
+import { RunCollectorCallbackHandler } from '@langchain/core/tracers/run_collector'
+import { Run } from '@langchain/core/tracers/base'
+import { EvaluationRunner } from './EvaluationRunner'
+import { encoding_for_model, get_encoding } from '@dqbd/tiktoken'
+
+export class EvaluationRunTracer extends RunCollectorCallbackHandler {
+ evaluationRunId: string
+ model: string
+
+ constructor(id: string) {
+ super()
+ this.evaluationRunId = id
+ }
+
+    async persistRun(run: Run): Promise<void> {
+ return super.persistRun(run)
+ }
+
+ countPromptTokens = (encoding: any, run: Run): number => {
+ let promptTokenCount = 0
+ if (encoding) {
+ if (run.inputs?.messages?.length > 0 && run.inputs?.messages[0]?.length > 0) {
+ run.inputs.messages[0].map((message: any) => {
+ let content = message.content
+ ? message.content
+ : message.SystemMessage?.content
+ ? message.SystemMessage.content
+ : message.HumanMessage?.content
+ ? message.HumanMessage.content
+ : message.AIMessage?.content
+ ? message.AIMessage.content
+ : undefined
+ promptTokenCount += content ? encoding.encode(content).length : 0
+ })
+ }
+ if (run.inputs?.prompts?.length > 0) {
+ const content = run.inputs.prompts[0]
+ promptTokenCount += content ? encoding.encode(content).length : 0
+ }
+ }
+ return promptTokenCount
+ }
+
+ countCompletionTokens = (encoding: any, run: Run): number => {
+ let completionTokenCount = 0
+ if (encoding) {
+ if (run.outputs?.generations?.length > 0 && run.outputs?.generations[0]?.length > 0) {
+ run.outputs?.generations[0].map((chunk: any) => {
+ let content = chunk.text ? chunk.text : chunk.message?.content ? chunk.message?.content : undefined
+ completionTokenCount += content ? encoding.encode(content).length : 0
+ })
+ }
+ }
+ return completionTokenCount
+ }
+
+ extractModelName = (run: Run): string => {
+ return (
+ (run?.serialized as any)?.kwargs?.model ||
+ (run?.serialized as any)?.kwargs?.model_name ||
+ (run?.extra as any)?.metadata?.ls_model_name ||
+ (run?.extra as any)?.metadata?.fw_model_name
+ )
+ }
+
+    onLLMEnd?(run: Run): void | Promise<void> {
+ if (run.name) {
+ let provider = run.name
+ if (provider === 'BedrockChat') {
+ provider = 'awsChatBedrock'
+ }
+ EvaluationRunner.addMetrics(
+ this.evaluationRunId,
+ JSON.stringify({
+ provider: provider
+ })
+ )
+ }
+
+ let model = this.extractModelName(run)
+ if (run.outputs?.llmOutput?.tokenUsage) {
+ const tokenUsage = run.outputs?.llmOutput?.tokenUsage
+ if (tokenUsage) {
+ const metric = {
+ completionTokens: tokenUsage.completionTokens,
+ promptTokens: tokenUsage.promptTokens,
+ model: model,
+ totalTokens: tokenUsage.totalTokens
+ }
+ EvaluationRunner.addMetrics(this.evaluationRunId, JSON.stringify(metric))
+ }
+ } else if (
+ run.outputs?.generations?.length > 0 &&
+ run.outputs?.generations[0].length > 0 &&
+ run.outputs?.generations[0][0]?.message?.usage_metadata?.total_tokens
+ ) {
+ const usage_metadata = run.outputs?.generations[0][0]?.message?.usage_metadata
+ if (usage_metadata) {
+ const metric = {
+ completionTokens: usage_metadata.output_tokens,
+ promptTokens: usage_metadata.input_tokens,
+ model: model || this.model,
+ totalTokens: usage_metadata.total_tokens
+ }
+ EvaluationRunner.addMetrics(this.evaluationRunId, JSON.stringify(metric))
+ }
+ } else {
+ let encoding: any = undefined
+ let promptInputTokens = 0
+ let completionTokenCount = 0
+ try {
+ encoding = encoding_for_model(model as any)
+ promptInputTokens = this.countPromptTokens(encoding, run)
+ completionTokenCount = this.countCompletionTokens(encoding, run)
+ } catch (e) {
+ try {
+                    // tiktoken will fail for non-OpenAI model names, so fall back to 'cl100k_base'
+ encoding = get_encoding('cl100k_base')
+ promptInputTokens = this.countPromptTokens(encoding, run)
+ completionTokenCount = this.countCompletionTokens(encoding, run)
+ } catch (e) {
+ // stay silent
+ }
+ }
+ const metric = {
+ completionTokens: completionTokenCount,
+ promptTokens: promptInputTokens,
+ model: model,
+ totalTokens: promptInputTokens + completionTokenCount
+ }
+ EvaluationRunner.addMetrics(this.evaluationRunId, JSON.stringify(metric))
+ //cleanup
+ this.model = ''
+ }
+ }
+
+    async onRunUpdate(run: Run): Promise<void> {
+ const json = {
+ [run.run_type]: elapsed(run)
+ }
+ let metric = JSON.stringify(json)
+ if (metric) {
+ EvaluationRunner.addMetrics(this.evaluationRunId, metric)
+ }
+
+ if (run.run_type === 'llm') {
+ let model = this.extractModelName(run)
+ if (model) {
+ EvaluationRunner.addMetrics(this.evaluationRunId, JSON.stringify({ model: model }))
+ this.model = model
+ }
+ // OpenAI non streaming models
+ const estimatedTokenUsage = run.outputs?.llmOutput?.estimatedTokenUsage
+ if (estimatedTokenUsage && typeof estimatedTokenUsage === 'object' && Object.keys(estimatedTokenUsage).length > 0) {
+ EvaluationRunner.addMetrics(this.evaluationRunId, estimatedTokenUsage)
+ }
+ }
+ }
+}
+
+function elapsed(run: Run) {
+ if (!run.end_time) return ''
+ const elapsed = run.end_time - run.start_time
+ return `${elapsed.toFixed(2)}`
+}
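One pattern worth isolating from the tracer above: `encoding_for_model` throws for model names tiktoken does not recognize, so the code retries with the `cl100k_base` encoding before giving up. A condensed sketch of that fallback (the `countTokens` helper is illustrative, not part of the diff):

```ts
import { encoding_for_model, get_encoding, Tiktoken, TiktokenModel } from '@dqbd/tiktoken'

// Try the model-specific encoding first; fall back to cl100k_base for
// non-OpenAI model names, mirroring the catch block in onLLMEnd above.
function countTokens(model: string, text: string): number {
    let encoding: Tiktoken
    try {
        encoding = encoding_for_model(model as TiktokenModel)
    } catch {
        encoding = get_encoding('cl100k_base')
    }
    try {
        return encoding.encode(text).length
    } finally {
        encoding.free() // @dqbd/tiktoken encodings are WASM-backed and should be freed
    }
}
```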
diff --git a/packages/components/evaluation/EvaluationRunTracerLlama.ts b/packages/components/evaluation/EvaluationRunTracerLlama.ts
new file mode 100644
index 000000000..872b16e35
--- /dev/null
+++ b/packages/components/evaluation/EvaluationRunTracerLlama.ts
@@ -0,0 +1,186 @@
+import { ChatMessage, LLMEndEvent, LLMStartEvent, LLMStreamEvent, MessageContentTextDetail, RetrievalEndEvent, Settings } from 'llamaindex'
+import { EvaluationRunner } from './EvaluationRunner'
+import { additionalCallbacks, ICommonObject, INodeData } from '../src'
+import { RetrievalStartEvent } from 'llamaindex/dist/type/llm/types'
+import { AgentEndEvent, AgentStartEvent } from 'llamaindex/dist/type/agent/types'
+import { encoding_for_model } from '@dqbd/tiktoken'
+import { MessageContent } from '@langchain/core/messages'
+
+export class EvaluationRunTracerLlama {
+ evaluationRunId: string
+ static cbInit = false
+    static startTimes = new Map<string, number>()
+    static models = new Map<string, string>()
+    static tokenCounts = new Map<string, number>()
+
+ constructor(id: string) {
+ this.evaluationRunId = id
+ EvaluationRunTracerLlama.constructCallBacks()
+ }
+
+ static constructCallBacks = () => {
+ if (!EvaluationRunTracerLlama.cbInit) {
+ Settings.callbackManager.on('llm-start', (event: LLMStartEvent) => {
+ const evalID = (event as any).reason.parent?.caller?.evaluationRunId || (event as any).reason.caller?.evaluationRunId
+ if (!evalID) return
+ const model = (event as any).reason?.caller?.model
+ if (model) {
+ EvaluationRunTracerLlama.models.set(evalID, model)
+ try {
+ const encoding = encoding_for_model(model)
+ if (encoding) {
+ const { messages } = event.detail.payload
+ let tokenCount = messages.reduce((count: number, message: ChatMessage) => {
+ return count + encoding.encode(extractText(message.content)).length
+ }, 0)
+ EvaluationRunTracerLlama.tokenCounts.set(evalID + '_promptTokens', tokenCount)
+ EvaluationRunTracerLlama.tokenCounts.set(evalID + '_outputTokens', 0)
+ }
+ } catch (e) {
+ // catch the error and continue to work.
+ }
+ }
+ EvaluationRunTracerLlama.startTimes.set(evalID + '_llm', event.timeStamp)
+ })
+ Settings.callbackManager.on('llm-end', (event: LLMEndEvent) => {
+ this.calculateAndSetMetrics(event, 'llm')
+ })
+ Settings.callbackManager.on('llm-stream', (event: LLMStreamEvent) => {
+ const evalID = (event as any).reason.parent?.caller?.evaluationRunId || (event as any).reason.caller?.evaluationRunId
+ if (!evalID) return
+ const { chunk } = event.detail.payload
+ const { delta } = chunk
+ const model = (event as any).reason?.caller?.model
+ try {
+ const encoding = encoding_for_model(model)
+ if (encoding) {
+ let tokenCount = EvaluationRunTracerLlama.tokenCounts.get(evalID + '_outputTokens') || 0
+ tokenCount += encoding.encode(extractText(delta)).length
+ EvaluationRunTracerLlama.tokenCounts.set(evalID + '_outputTokens', tokenCount)
+ }
+ } catch (e) {
+ // catch the error and continue to work.
+ }
+ })
+ Settings.callbackManager.on('retrieve-start', (event: RetrievalStartEvent) => {
+ const evalID = (event as any).reason.parent?.caller?.evaluationRunId || (event as any).reason.caller?.evaluationRunId
+ if (evalID) {
+ EvaluationRunTracerLlama.startTimes.set(evalID + '_retriever', event.timeStamp)
+ }
+ })
+ Settings.callbackManager.on('retrieve-end', (event: RetrievalEndEvent) => {
+ this.calculateAndSetMetrics(event, 'retriever')
+ })
+ Settings.callbackManager.on('agent-start', (event: AgentStartEvent) => {
+ const evalID = (event as any).reason.parent?.caller?.evaluationRunId || (event as any).reason.caller?.evaluationRunId
+ if (evalID) {
+ EvaluationRunTracerLlama.startTimes.set(evalID + '_agent', event.timeStamp)
+ }
+ })
+ Settings.callbackManager.on('agent-end', (event: AgentEndEvent) => {
+ this.calculateAndSetMetrics(event, 'agent')
+ })
+ EvaluationRunTracerLlama.cbInit = true
+ }
+ }
+
+ private static calculateAndSetMetrics(event: any, label: string) {
+ const evalID = event.reason.parent?.caller?.evaluationRunId || event.reason.caller?.evaluationRunId
+ if (!evalID) return
+ const startTime = EvaluationRunTracerLlama.startTimes.get(evalID + '_' + label) as number
+ let model =
+ (event as any).reason?.caller?.model || (event as any).reason?.caller?.llm?.model || EvaluationRunTracerLlama.models.get(evalID)
+
+ if (event.detail.payload?.response?.message && model) {
+ try {
+ const encoding = encoding_for_model(model)
+ if (encoding) {
+ let tokenCount = EvaluationRunTracerLlama.tokenCounts.get(evalID + '_outputTokens') || 0
+ tokenCount += encoding.encode(event.detail.payload.response?.message?.content || '').length
+ EvaluationRunTracerLlama.tokenCounts.set(evalID + '_outputTokens', tokenCount)
+ }
+ } catch (e) {
+ // catch the error and continue to work.
+ }
+ }
+
+ // Anthropic
+ if (event.detail?.payload?.response?.raw?.usage) {
+ const usage = event.detail.payload.response.raw.usage
+ if (usage.output_tokens) {
+ const metric = {
+ completionTokens: usage.output_tokens,
+ promptTokens: usage.input_tokens,
+ model: model,
+ totalTokens: usage.input_tokens + usage.output_tokens
+ }
+ EvaluationRunner.addMetrics(evalID, JSON.stringify(metric))
+ } else if (usage.completion_tokens) {
+ const metric = {
+ completionTokens: usage.completion_tokens,
+ promptTokens: usage.prompt_tokens,
+ model: model,
+ totalTokens: usage.total_tokens
+ }
+ EvaluationRunner.addMetrics(evalID, JSON.stringify(metric))
+ }
+ } else if (event.detail?.payload?.response?.raw['amazon-bedrock-invocationMetrics']) {
+ const usage = event.detail?.payload?.response?.raw['amazon-bedrock-invocationMetrics']
+ const metric = {
+ completionTokens: usage.outputTokenCount,
+ promptTokens: usage.inputTokenCount,
+ model: event.detail?.payload?.response?.raw.model,
+ totalTokens: usage.inputTokenCount + usage.outputTokenCount
+ }
+ EvaluationRunner.addMetrics(evalID, JSON.stringify(metric))
+ } else {
+ const metric = {
+ [label]: (event.timeStamp - startTime).toFixed(2),
+ completionTokens: EvaluationRunTracerLlama.tokenCounts.get(evalID + '_outputTokens'),
+ promptTokens: EvaluationRunTracerLlama.tokenCounts.get(evalID + '_promptTokens'),
+ model: model || EvaluationRunTracerLlama.models.get(evalID) || '',
+ totalTokens:
+ (EvaluationRunTracerLlama.tokenCounts.get(evalID + '_outputTokens') || 0) +
+ (EvaluationRunTracerLlama.tokenCounts.get(evalID + '_promptTokens') || 0)
+ }
+ EvaluationRunner.addMetrics(evalID, JSON.stringify(metric))
+ }
+
+ //cleanup
+ EvaluationRunTracerLlama.startTimes.delete(evalID + '_' + label)
+ EvaluationRunTracerLlama.startTimes.delete(evalID + '_outputTokens')
+ EvaluationRunTracerLlama.startTimes.delete(evalID + '_promptTokens')
+ EvaluationRunTracerLlama.models.delete(evalID)
+ }
+
+ static async injectEvaluationMetadata(nodeData: INodeData, options: ICommonObject, callerObj: any) {
+ if (options.evaluationRunId && callerObj) {
+ // these are needed for evaluation runs
+ options.llamaIndex = true
+ await additionalCallbacks(nodeData, options)
+ Object.defineProperty(callerObj, 'evaluationRunId', {
+ enumerable: true,
+ configurable: true,
+ writable: true,
+ value: options.evaluationRunId
+ })
+ }
+ }
+}
+
+// from https://github.com/run-llama/LlamaIndexTS/blob/main/packages/core/src/llm/utils.ts
+export function extractText(message: MessageContent): string {
+ if (typeof message !== 'string' && !Array.isArray(message)) {
+ console.warn('extractText called with non-MessageContent message, this is likely a bug.')
+ return `${message}`
+ } else if (typeof message !== 'string' && Array.isArray(message)) {
+ // message is of type MessageContentDetail[] - retrieve just the text parts and concatenate them
+ // so we can pass them to the context generator
+ return message
+ .filter((c): c is MessageContentTextDetail => c.type === 'text')
+ .map((c) => c.text)
+ .join('\n\n')
+ } else {
+ return message
+ }
+}
diff --git a/packages/components/evaluation/EvaluationRunner.ts b/packages/components/evaluation/EvaluationRunner.ts
new file mode 100644
index 000000000..acde79446
--- /dev/null
+++ b/packages/components/evaluation/EvaluationRunner.ts
@@ -0,0 +1,226 @@
+import axios from 'axios'
+import { v4 as uuidv4 } from 'uuid'
+import { ICommonObject } from '../src'
+
+import { getModelConfigByModelName, MODEL_TYPE } from '../src/modelLoader'
+
+export class EvaluationRunner {
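+ // evaluationRunId -> array of JSON-stringified metric objects collected during a run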
+ static metrics = new Map()
+
+ static getCostMetrics = async (selectedProvider: string, selectedModel: string) => {
+ let modelConfig = await getModelConfigByModelName(MODEL_TYPE.CHAT, selectedProvider, selectedModel)
+ if (modelConfig) {
+ if (modelConfig['cost_values']) {
+ return modelConfig.cost_values
+ }
+ return { cost_values: modelConfig }
+ } else {
+ modelConfig = await getModelConfigByModelName(MODEL_TYPE.LLM, selectedProvider, selectedModel)
+ if (modelConfig) {
+ if (modelConfig['cost_values']) {
+ return modelConfig.cost_values
+ }
+ return { cost_values: modelConfig }
+ }
+ }
+ return undefined
+ }
+
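+ // Returns every metric recorded for the given run id, appending the provider's cost
+ // configuration when the provider and model can be resolved, then deletes the entry
+ // so stale metrics are not returned twice.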
+ static async getAndDeleteMetrics(id: string) {
+ const val = EvaluationRunner.metrics.get(id)
+ if (val) {
+ try {
+ //first lets get the provider and model
+ let selectedModel = undefined
+ let selectedProvider = undefined
+ if (val && val.length > 0) {
+ let modelName = ''
+ let providerName = ''
+ for (let i = 0; i < val.length; i++) {
+ const metric = val[i]
+ if (typeof metric === 'object') {
+ modelName = metric['model']
+ providerName = metric['provider']
+ } else {
+ const parsed = JSON.parse(metric)
+ modelName = parsed['model']
+ providerName = parsed['provider']
+ }
+
+ if (modelName) {
+ selectedModel = modelName
+ }
+ if (providerName) {
+ selectedProvider = providerName
+ }
+ }
+ }
+ if (selectedProvider && selectedModel) {
+ const modelConfig = await EvaluationRunner.getCostMetrics(selectedProvider, selectedModel)
+ if (modelConfig) {
+ val.push(JSON.stringify({ cost_values: modelConfig }))
+ }
+ }
+ } catch (error) {
+ //stay silent
+ }
+ }
+ EvaluationRunner.metrics.delete(id)
+ return val
+ }
+
+ static addMetrics(id: string, metric: string) {
+ if (EvaluationRunner.metrics.has(id)) {
+ EvaluationRunner.metrics.get(id)?.push(metric)
+ } else {
+ EvaluationRunner.metrics.set(id, [metric])
+ }
+ }
+
+ baseURL = ''
+
+ constructor(baseURL: string) {
+ this.baseURL = baseURL
+ }
+
+ getChatflowApiKey(chatflowId: string, apiKeys: { chatflowId: string; apiKey: string }[] = []) {
+ return apiKeys.find((item) => item.chatflowId === chatflowId)?.apiKey || ''
+ }
+
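+ // Builds the result scaffold from the dataset rows, then evaluates each selected
+ // chatflow against every row in sequence.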
+ public async runEvaluations(data: ICommonObject) {
+ const chatflowIds = JSON.parse(data.chatflowId)
+ const returnData: ICommonObject = {}
+ returnData.evaluationId = data.evaluationId
+ returnData.runDate = new Date()
+ returnData.rows = []
+ for (let i = 0; i < data.dataset.rows.length; i++) {
+ returnData.rows.push({
+ input: data.dataset.rows[i].input,
+ expectedOutput: data.dataset.rows[i].output,
+ itemNo: data.dataset.rows[i].sequenceNo,
+ evaluations: [],
+ status: 'pending'
+ })
+ }
+ for (let i = 0; i < chatflowIds.length; i++) {
+ const chatflowId = chatflowIds[i]
+ await this.evaluateChatflow(chatflowId, this.getChatflowApiKey(chatflowId, data.apiKeys), data, returnData)
+ }
+ return returnData
+ }
+
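+ // Runs every dataset row against one chatflow via the prediction API, capturing
+ // latency, token usage, and the model output on returnData.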
+ async evaluateChatflow(chatflowId: string, apiKey: string, data: any, returnData: any) {
+ for (let i = 0; i < data.dataset.rows.length; i++) {
+ const item = data.dataset.rows[i]
+ const uuid = uuidv4()
+
+ const headers: any = {
+ 'X-Request-ID': uuid,
+ 'X-Flowise-Evaluation': 'true'
+ }
+ if (apiKey) {
+ headers['Authorization'] = `Bearer ${apiKey}`
+ }
+ let axiosConfig = {
+ headers: headers
+ }
+ let startTime = performance.now()
+ const runData: any = {}
+ runData.chatflowId = chatflowId
+ runData.startTime = startTime
+ const postData: any = { question: item.input, evaluationRunId: uuid, evaluation: true }
+ if (data.sessionId) {
+ postData.overrideConfig = { sessionId: data.sessionId }
+ }
+ try {
+ let response = await axios.post(`${this.baseURL}/api/v1/prediction/${chatflowId}`, postData, axiosConfig)
+ let agentFlowMetrics: any[] = []
+ if (response?.data?.agentFlowExecutedData) {
+ for (let i = 0; i < response.data.agentFlowExecutedData.length; i++) {
+ const agentFlowExecutedData = response.data.agentFlowExecutedData[i]
+ const input_tokens = agentFlowExecutedData?.data?.output?.usageMetadata?.input_tokens || 0
+ const output_tokens = agentFlowExecutedData?.data?.output?.usageMetadata?.output_tokens || 0
+ const total_tokens =
+ agentFlowExecutedData?.data?.output?.usageMetadata?.total_tokens || input_tokens + output_tokens
+ const metrics: any = {
+ promptTokens: input_tokens,
+ completionTokens: output_tokens,
+ totalTokens: total_tokens,
+ provider:
+ agentFlowExecutedData.data?.input?.llmModelConfig?.llmModel ||
+ agentFlowExecutedData.data?.input?.agentModelConfig?.agentModel,
+ model:
+ agentFlowExecutedData.data?.input?.llmModelConfig?.modelName ||
+ agentFlowExecutedData.data?.input?.agentModelConfig?.modelName,
+ nodeLabel: agentFlowExecutedData?.nodeLabel,
+ nodeId: agentFlowExecutedData?.nodeId
+ }
+ if (metrics.provider && metrics.model) {
+ const modelConfig = await EvaluationRunner.getCostMetrics(metrics.provider, metrics.model)
+ if (modelConfig) {
+ metrics.cost_values = {
+ input_cost: (modelConfig.cost_values.input_cost || 0) * (input_tokens / 1000),
+ output_cost: (modelConfig.cost_values.output_cost || 0) * (output_tokens / 1000)
+ }
+ metrics.cost_values.total_cost = metrics.cost_values.input_cost + metrics.cost_values.output_cost
+ }
+ }
+ agentFlowMetrics.push(metrics)
+ }
+ }
+ const endTime = performance.now()
+ const timeTaken = (endTime - startTime).toFixed(2)
+ if (response?.data?.metrics) {
+ runData.metrics = response.data.metrics
+ runData.metrics.push({
+ apiLatency: timeTaken
+ })
+ } else {
+ runData.metrics = [
+ {
+ apiLatency: timeTaken
+ }
+ ]
+ }
+ if (agentFlowMetrics.length > 0) {
+ runData.nested_metrics = agentFlowMetrics
+ }
+ runData.status = 'complete'
+ let resultText = ''
+ if (response.data.text) resultText = response.data.text
+ else if (response.data.json) resultText = '```json\n' + JSON.stringify(response.data.json, null, 2) + '\n```'
+ else resultText = JSON.stringify(response.data, null, 2)
+
+ runData.actualOutput = resultText
+ runData.latency = timeTaken
+ runData.error = ''
+ } catch (error: any) {
+ runData.status = 'error'
+ runData.actualOutput = ''
+ runData.error = error?.response?.data?.message
+ ? error.response.data.message
+ : error?.message
+ ? error.message
+ : 'Unknown error'
+ try {
+ if (runData.error.indexOf('-') > -1) {
+ // if there is a dash, remove all content before
+ runData.error = 'Error: ' + runData.error.substr(runData.error.indexOf('-') + 1).trim()
+ }
+ } catch (error) {
+ //stay silent
+ }
+ const endTime = performance.now()
+ const timeTaken = (endTime - startTime).toFixed(2)
+ runData.metrics = [
+ {
+ apiLatency: timeTaken
+ }
+ ]
+ runData.latency = timeTaken
+ }
+ runData.uuid = uuid
+ returnData.rows[i].evaluations.push(runData)
+ }
+ return returnData
+ }
+}
diff --git a/packages/components/jest.config.js b/packages/components/jest.config.js
new file mode 100644
index 000000000..deffa4d4b
--- /dev/null
+++ b/packages/components/jest.config.js
@@ -0,0 +1,15 @@
+module.exports = {
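+ // compile and run TypeScript tests in a Node environment via ts-jest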
+ preset: 'ts-jest',
+ testEnvironment: 'node',
+ roots: ['<rootDir>/nodes'],
+ transform: {
+ '^.+\\.tsx?$': 'ts-jest'
+ },
+ testRegex: '(/__tests__/.*|(\\.|/)(test|spec))\\.tsx?$',
+ moduleFileExtensions: ['ts', 'tsx', 'js', 'jsx', 'json', 'node'],
+ verbose: true,
+ testPathIgnorePatterns: ['/node_modules/', '/dist/'],
+ moduleNameMapper: {
+ '^../../../src/(.*)$': '<rootDir>/src/$1'
+ }
+}
diff --git a/packages/components/models.json b/packages/components/models.json
index 60d0173c4..7551c6097 100644
--- a/packages/components/models.json
+++ b/packages/components/models.json
@@ -3,6 +3,41 @@
{
"name": "awsChatBedrock",
"models": [
+ {
+ "label": "anthropic.claude-sonnet-4-5-20250929-v1:0",
+ "name": "anthropic.claude-sonnet-4-5-20250929-v1:0",
+ "description": "Claude 4.5 Sonnet",
+ "input_cost": 0.000003,
+ "output_cost": 0.000015
+ },
+ {
+ "label": "anthropic.claude-haiku-4-5-20251001-v1:0",
+ "name": "anthropic.claude-haiku-4-5-20251001-v1:0",
+ "description": "Claude 4.5 Haiku",
+ "input_cost": 0.000001,
+ "output_cost": 0.000005
+ },
+ {
+ "label": "openai.gpt-oss-20b-1:0",
+ "name": "openai.gpt-oss-20b-1:0",
+ "description": "21B parameters model optimized for lower latency, local, and specialized use cases",
+ "input_cost": 0.00007,
+ "output_cost": 0.0003
+ },
+ {
+ "label": "openai.gpt-oss-120b-1:0",
+ "name": "openai.gpt-oss-120b-1:0",
+ "description": "120B parameters model optimized for production, general purpose, and high-reasoning use cases",
+ "input_cost": 0.00015,
+ "output_cost": 0.0006
+ },
+ {
+ "label": "anthropic.claude-opus-4-1-20250805-v1:0",
+ "name": "anthropic.claude-opus-4-1-20250805-v1:0",
+ "description": "Claude 4.1 Opus",
+ "input_cost": 0.000015,
+ "output_cost": 0.000075
+ },
{
"label": "anthropic.claude-sonnet-4-20250514-v1:0",
"name": "anthropic.claude-sonnet-4-20250514-v1:0",
@@ -280,6 +315,30 @@
{
"name": "azureChatOpenAI",
"models": [
+ {
+ "label": "gpt-5.1",
+ "name": "gpt-5.1",
+ "input_cost": 0.00000125,
+ "output_cost": 0.00001
+ },
+ {
+ "label": "gpt-5",
+ "name": "gpt-5",
+ "input_cost": 0.00000125,
+ "output_cost": 0.00001
+ },
+ {
+ "label": "gpt-5-mini",
+ "name": "gpt-5-mini",
+ "input_cost": 0.00000025,
+ "output_cost": 0.000002
+ },
+ {
+ "label": "gpt-5-nano",
+ "name": "gpt-5-nano",
+ "input_cost": 0.00000005,
+ "output_cost": 0.0000004
+ },
{
"label": "gpt-4.1",
"name": "gpt-4.1",
@@ -357,6 +416,18 @@
"name": "gpt-4.5-preview",
"input_cost": 0.000075,
"output_cost": 0.00015
+ },
+ {
+ "label": "gpt-4.1-mini",
+ "name": "gpt-4.1-mini",
+ "input_cost": 0.0000004,
+ "output_cost": 0.0000016
+ },
+ {
+ "label": "gpt-5-chat-latest",
+ "name": "gpt-5-chat-latest",
+ "input_cost": 0.00000125,
+ "output_cost": 0.00001
}
]
},
@@ -416,12 +487,38 @@
"name": "gpt-4-1106-preview",
"input_cost": 0.00001,
"output_cost": 0.00003
+ },
+ {
+ "label": "gpt-4.1-mini",
+ "name": "gpt-4.1-mini",
+ "input_cost": 0.0000004,
+ "output_cost": 0.0000016
+ },
+ {
+ "label": "gpt-5-chat-latest",
+ "name": "gpt-5-chat-latest",
+ "input_cost": 0.00000125,
+ "output_cost": 0.00001
}
]
},
{
"name": "chatAnthropic",
"models": [
+ {
+ "label": "claude-sonnet-4-5",
+ "name": "claude-sonnet-4-5",
+ "description": "Claude 4.5 Sonnet",
+ "input_cost": 0.000003,
+ "output_cost": 0.000015
+ },
+ {
+ "label": "claude-haiku-4-5",
+ "name": "claude-haiku-4-5",
+ "description": "Claude 4.5 Haiku",
+ "input_cost": 0.000001,
+ "output_cost": 0.000005
+ },
{
"label": "claude-sonnet-4-0",
"name": "claude-sonnet-4-0",
@@ -429,6 +526,13 @@
"input_cost": 0.000003,
"output_cost": 0.000015
},
+ {
+ "label": "claude-opus-4-1",
+ "name": "claude-opus-4-1",
+ "description": "Claude 4.1 Opus",
+ "input_cost": 0.000015,
+ "output_cost": 0.000075
+ },
{
"label": "claude-opus-4-0",
"name": "claude-opus-4-0",
@@ -524,17 +628,29 @@
"name": "chatGoogleGenerativeAI",
"models": [
{
- "label": "gemini-2.5-flash-preview-05-20",
- "name": "gemini-2.5-flash-preview-05-20",
- "input_cost": 0.15e-6,
- "output_cost": 6e-7
+ "label": "gemini-3-pro-preview",
+ "name": "gemini-3-pro-preview",
+ "input_cost": 0.00002,
+ "output_cost": 0.00012
},
{
- "label": "gemini-2.5-pro-preview-03-25",
- "name": "gemini-2.5-pro-preview-03-25",
+ "label": "gemini-2.5-pro",
+ "name": "gemini-2.5-pro",
+ "input_cost": 0.3e-6,
+ "output_cost": 0.000025
+ },
+ {
+ "label": "gemini-2.5-flash",
+ "name": "gemini-2.5-flash",
"input_cost": 1.25e-6,
"output_cost": 0.00001
},
+ {
+ "label": "gemini-2.5-flash-lite",
+ "name": "gemini-2.5-flash-lite",
+ "input_cost": 1e-7,
+ "output_cost": 4e-7
+ },
{
"label": "gemini-2.0-flash",
"name": "gemini-2.0-flash",
@@ -581,6 +697,42 @@
{
"name": "chatGoogleVertexAI",
"models": [
+ {
+ "label": "gemini-3-pro-preview",
+ "name": "gemini-3-pro-preview",
+ "input_cost": 0.00002,
+ "output_cost": 0.00012
+ },
+ {
+ "label": "gemini-2.5-pro",
+ "name": "gemini-2.5-pro",
+ "input_cost": 0.3e-6,
+ "output_cost": 0.000025
+ },
+ {
+ "label": "gemini-2.5-flash",
+ "name": "gemini-2.5-flash",
+ "input_cost": 1.25e-6,
+ "output_cost": 0.00001
+ },
+ {
+ "label": "gemini-2.5-flash-lite",
+ "name": "gemini-2.5-flash-lite",
+ "input_cost": 1e-7,
+ "output_cost": 4e-7
+ },
+ {
+ "label": "gemini-2.0-flash",
+ "name": "gemini-2.0-flash-001",
+ "input_cost": 1e-7,
+ "output_cost": 4e-7
+ },
+ {
+ "label": "gemini-2.0-flash-lite",
+ "name": "gemini-2.0-flash-lite-001",
+ "input_cost": 7.5e-8,
+ "output_cost": 3e-7
+ },
{
"label": "gemini-1.5-flash-002",
"name": "gemini-1.5-flash-002",
@@ -617,6 +769,27 @@
"input_cost": 1.25e-7,
"output_cost": 3.75e-7
},
+ {
+ "label": "claude-sonnet-4-5@20250929",
+ "name": "claude-sonnet-4-5@20250929",
+ "description": "Claude 4.5 Sonnet",
+ "input_cost": 0.000003,
+ "output_cost": 0.000015
+ },
+ {
+ "label": "claude-haiku-4-5@20251001",
+ "name": "claude-haiku-4-5@20251001",
+ "description": "Claude 4.5 Haiku",
+ "input_cost": 0.000001,
+ "output_cost": 0.000005
+ },
+ {
+ "label": "claude-opus-4-1@20250805",
+ "name": "claude-opus-4-1@20250805",
+ "description": "Claude 4.1 Opus",
+ "input_cost": 0.000015,
+ "output_cost": 0.000075
+ },
{
"label": "claude-sonnet-4@20250514",
"name": "claude-sonnet-4@20250514",
@@ -673,11 +846,63 @@
"input_cost": 2.5e-7,
"output_cost": 1.25e-6
}
+ ],
+ "regions": [
+ { "label": "us-east1", "name": "us-east1" },
+ { "label": "us-east4", "name": "us-east4" },
+ { "label": "us-central1", "name": "us-central1" },
+ { "label": "us-west1", "name": "us-west1" },
+ { "label": "europe-west4", "name": "europe-west4" },
+ { "label": "europe-west1", "name": "europe-west1" },
+ { "label": "europe-west3", "name": "europe-west3" },
+ { "label": "europe-west2", "name": "europe-west2" },
+ { "label": "asia-east1", "name": "asia-east1" },
+ { "label": "asia-southeast1", "name": "asia-southeast1" },
+ { "label": "asia-northeast1", "name": "asia-northeast1" },
+ { "label": "asia-south1", "name": "asia-south1" },
+ { "label": "australia-southeast1", "name": "australia-southeast1" },
+ { "label": "southamerica-east1", "name": "southamerica-east1" },
+ { "label": "africa-south1", "name": "africa-south1" },
+ { "label": "asia-east2", "name": "asia-east2" },
+ { "label": "asia-northeast2", "name": "asia-northeast2" },
+ { "label": "asia-northeast3", "name": "asia-northeast3" },
+ { "label": "asia-south2", "name": "asia-south2" },
+ { "label": "asia-southeast2", "name": "asia-southeast2" },
+ { "label": "australia-southeast2", "name": "australia-southeast2" },
+ { "label": "europe-central2", "name": "europe-central2" },
+ { "label": "europe-north1", "name": "europe-north1" },
+ { "label": "europe-north2", "name": "europe-north2" },
+ { "label": "europe-southwest1", "name": "europe-southwest1" },
+ { "label": "europe-west10", "name": "europe-west10" },
+ { "label": "europe-west12", "name": "europe-west12" },
+ { "label": "europe-west6", "name": "europe-west6" },
+ { "label": "europe-west8", "name": "europe-west8" },
+ { "label": "europe-west9", "name": "europe-west9" },
+ { "label": "me-central1", "name": "me-central1" },
+ { "label": "me-central2", "name": "me-central2" },
+ { "label": "me-west1", "name": "me-west1" },
+ { "label": "northamerica-northeast1", "name": "northamerica-northeast1" },
+ { "label": "northamerica-northeast2", "name": "northamerica-northeast2" },
+ { "label": "northamerica-south1", "name": "northamerica-south1" },
+ { "label": "southamerica-west1", "name": "southamerica-west1" },
+ { "label": "us-east5", "name": "us-east5" },
+ { "label": "us-south1", "name": "us-south1" },
+ { "label": "us-west2", "name": "us-west2" },
+ { "label": "us-west3", "name": "us-west3" },
+ { "label": "us-west4", "name": "us-west4" }
]
},
{
"name": "groqChat",
"models": [
+ {
+ "label": "openai/gpt-oss-20b",
+ "name": "openai/gpt-oss-20b"
+ },
+ {
+ "label": "openai/gpt-oss-120b",
+ "name": "openai/gpt-oss-120b"
+ },
{
"label": "meta-llama/llama-4-maverick-17b-128e-instruct",
"name": "meta-llama/llama-4-maverick-17b-128e-instruct"
@@ -789,6 +1014,30 @@
{
"name": "chatOpenAI",
"models": [
+ {
+ "label": "gpt-5.1",
+ "name": "gpt-5.1",
+ "input_cost": 0.00000125,
+ "output_cost": 0.00001
+ },
+ {
+ "label": "gpt-5",
+ "name": "gpt-5",
+ "input_cost": 0.00000125,
+ "output_cost": 0.00001
+ },
+ {
+ "label": "gpt-5-mini",
+ "name": "gpt-5-mini",
+ "input_cost": 0.00000025,
+ "output_cost": 0.000002
+ },
+ {
+ "label": "gpt-5-nano",
+ "name": "gpt-5-nano",
+ "input_cost": 0.00000005,
+ "output_cost": 0.0000004
+ },
{
"label": "gpt-4.1",
"name": "gpt-4.1",
@@ -1217,6 +1466,18 @@
"name": "mistral-large-2402",
"input_cost": 0.002,
"output_cost": 0.006
+ },
+ {
+ "label": "codestral-latsest",
+ "name": "codestral-latest",
+ "input_cost": 0.0002,
+ "output_cost": 0.0006
+ },
+ {
+ "label": "devstral-small-2505",
+ "name": "devstral-small-2505",
+ "input_cost": 0.0001,
+ "output_cost": 0.0003
}
]
},
@@ -1511,6 +1772,18 @@
"name": "gpt-4-32k",
"input_cost": 0.00006,
"output_cost": 0.00012
+ },
+ {
+ "label": "gpt-4.1-mini",
+ "name": "gpt-4.1-mini",
+ "input_cost": 0.0000004,
+ "output_cost": 0.0000016
+ },
+ {
+ "label": "gpt-5-chat-latest",
+ "name": "gpt-5-chat-latest",
+ "input_cost": 0.00000125,
+ "output_cost": 0.00001
}
]
},
@@ -1711,29 +1984,65 @@
"name": "googlevertexaiEmbeddings",
"models": [
{
- "label": "multimodalembedding",
- "name": "multimodalembedding"
+ "label": "gemini-embedding-001",
+ "name": "gemini-embedding-001"
},
{
"label": "text-embedding-004",
"name": "text-embedding-004"
},
+ {
+ "label": "text-embedding-005",
+ "name": "text-embedding-005"
+ },
{
"label": "text-multilingual-embedding-002",
"name": "text-multilingual-embedding-002"
- },
- {
- "label": "textembedding-gecko@001",
- "name": "textembedding-gecko@001"
- },
- {
- "label": "textembedding-gecko@latest",
- "name": "textembedding-gecko@latest"
- },
- {
- "label": "textembedding-gecko-multilingual@latest",
- "name": "textembedding-gecko-multilingual@latest"
}
+ ],
+ "regions": [
+ { "label": "us-east1", "name": "us-east1" },
+ { "label": "us-east4", "name": "us-east4" },
+ { "label": "us-central1", "name": "us-central1" },
+ { "label": "us-west1", "name": "us-west1" },
+ { "label": "europe-west4", "name": "europe-west4" },
+ { "label": "europe-west1", "name": "europe-west1" },
+ { "label": "europe-west3", "name": "europe-west3" },
+ { "label": "europe-west2", "name": "europe-west2" },
+ { "label": "asia-east1", "name": "asia-east1" },
+ { "label": "asia-southeast1", "name": "asia-southeast1" },
+ { "label": "asia-northeast1", "name": "asia-northeast1" },
+ { "label": "asia-south1", "name": "asia-south1" },
+ { "label": "australia-southeast1", "name": "australia-southeast1" },
+ { "label": "southamerica-east1", "name": "southamerica-east1" },
+ { "label": "africa-south1", "name": "africa-south1" },
+ { "label": "asia-east2", "name": "asia-east2" },
+ { "label": "asia-northeast2", "name": "asia-northeast2" },
+ { "label": "asia-northeast3", "name": "asia-northeast3" },
+ { "label": "asia-south2", "name": "asia-south2" },
+ { "label": "asia-southeast2", "name": "asia-southeast2" },
+ { "label": "australia-southeast2", "name": "australia-southeast2" },
+ { "label": "europe-central2", "name": "europe-central2" },
+ { "label": "europe-north1", "name": "europe-north1" },
+ { "label": "europe-north2", "name": "europe-north2" },
+ { "label": "europe-southwest1", "name": "europe-southwest1" },
+ { "label": "europe-west10", "name": "europe-west10" },
+ { "label": "europe-west12", "name": "europe-west12" },
+ { "label": "europe-west6", "name": "europe-west6" },
+ { "label": "europe-west8", "name": "europe-west8" },
+ { "label": "europe-west9", "name": "europe-west9" },
+ { "label": "me-central1", "name": "me-central1" },
+ { "label": "me-central2", "name": "me-central2" },
+ { "label": "me-west1", "name": "me-west1" },
+ { "label": "northamerica-northeast1", "name": "northamerica-northeast1" },
+ { "label": "northamerica-northeast2", "name": "northamerica-northeast2" },
+ { "label": "northamerica-south1", "name": "northamerica-south1" },
+ { "label": "southamerica-west1", "name": "southamerica-west1" },
+ { "label": "us-east5", "name": "us-east5" },
+ { "label": "us-south1", "name": "us-south1" },
+ { "label": "us-west2", "name": "us-west2" },
+ { "label": "us-west3", "name": "us-west3" },
+ { "label": "us-west4", "name": "us-west4" }
]
},
{
diff --git a/packages/components/nodes/agentflow/Agent/Agent.ts b/packages/components/nodes/agentflow/Agent/Agent.ts
index 849a2e3e5..b8aa80222 100644
--- a/packages/components/nodes/agentflow/Agent/Agent.ts
+++ b/packages/components/nodes/agentflow/Agent/Agent.ts
@@ -3,6 +3,7 @@ import {
ICommonObject,
IDatabaseEntity,
IHumanInput,
+ IMessage,
INode,
INodeData,
INodeOptionsValue,
@@ -15,7 +16,7 @@ import { AnalyticHandler } from '../../../src/handler'
import { DEFAULT_SUMMARIZER_TEMPLATE } from '../prompt'
import { ILLMMessage } from '../Interface.Agentflow'
import { Tool } from '@langchain/core/tools'
-import { ARTIFACTS_PREFIX, SOURCE_DOCUMENTS_PREFIX } from '../../../src/agents'
+import { ARTIFACTS_PREFIX, SOURCE_DOCUMENTS_PREFIX, TOOL_ARGS_PREFIX } from '../../../src/agents'
import { flatten } from 'lodash'
import zodToJsonSchema from 'zod-to-json-schema'
import { getErrorMessage } from '../../../src/error'
@@ -27,6 +28,15 @@ import {
replaceBase64ImagesWithFileReferences,
updateFlowState
} from '../utils'
+import {
+ convertMultiOptionsToStringArray,
+ getCredentialData,
+ getCredentialParam,
+ processTemplateVariables,
+ configureStructuredOutput
+} from '../../../src/utils'
+import { addSingleFileToStorage } from '../../../src/storageUtils'
+import fetch from 'node-fetch'
interface ITool {
agentSelectedTool: string
@@ -77,7 +87,7 @@ class Agent_Agentflow implements INode {
constructor() {
this.label = 'Agent'
this.name = 'agentAgentflow'
- this.version = 1.0
+ this.version = 2.2
this.type = 'Agent'
this.category = 'Agent Flows'
this.description = 'Dynamically choose and utilize tools during runtime, enabling multi-step reasoning'
@@ -131,6 +141,82 @@ class Agent_Agentflow implements INode {
}
]
},
+ {
+ label: 'OpenAI Built-in Tools',
+ name: 'agentToolsBuiltInOpenAI',
+ type: 'multiOptions',
+ optional: true,
+ options: [
+ {
+ label: 'Web Search',
+ name: 'web_search_preview',
+ description: 'Search the web for the latest information'
+ },
+ {
+ label: 'Code Interpreter',
+ name: 'code_interpreter',
+ description: 'Write and run Python code in a sandboxed environment'
+ },
+ {
+ label: 'Image Generation',
+ name: 'image_generation',
+ description: 'Generate images based on a text prompt'
+ }
+ ],
+ show: {
+ agentModel: 'chatOpenAI'
+ }
+ },
+ {
+ label: 'Gemini Built-in Tools',
+ name: 'agentToolsBuiltInGemini',
+ type: 'multiOptions',
+ optional: true,
+ options: [
+ {
+ label: 'URL Context',
+ name: 'urlContext',
+ description: 'Extract content from given URLs'
+ },
+ {
+ label: 'Google Search',
+ name: 'googleSearch',
+ description: 'Search real-time web content'
+ }
+ ],
+ show: {
+ agentModel: 'chatGoogleGenerativeAI'
+ }
+ },
+ {
+ label: 'Anthropic Built-in Tools',
+ name: 'agentToolsBuiltInAnthropic',
+ type: 'multiOptions',
+ optional: true,
+ options: [
+ {
+ label: 'Web Search',
+ name: 'web_search_20250305',
+ description: 'Search the web for the latest information'
+ },
+ {
+ label: 'Web Fetch',
+ name: 'web_fetch_20250910',
+ description: 'Retrieve full content from specified web pages'
+ }
+ /*
+ * Not supported yet as we need to get bash_code_execution_tool_result from content:
+ https://docs.claude.com/en/docs/agents-and-tools/tool-use/code-execution-tool#retrieve-generated-files
+ {
+ label: 'Code Interpreter',
+ name: 'code_execution_20250825',
+ description: 'Write and run Python code in a sandboxed environment'
+ }*/
+ ],
+ show: {
+ agentModel: 'chatAnthropic'
+ }
+ },
{
label: 'Tools',
name: 'agentTools',
@@ -314,6 +400,108 @@ class Agent_Agentflow implements INode {
],
default: 'userMessage'
},
+ {
+ label: 'JSON Structured Output',
+ name: 'agentStructuredOutput',
+ description: 'Instruct the Agent to give output in a JSON structured schema',
+ type: 'array',
+ optional: true,
+ acceptVariable: true,
+ array: [
+ {
+ label: 'Key',
+ name: 'key',
+ type: 'string'
+ },
+ {
+ label: 'Type',
+ name: 'type',
+ type: 'options',
+ options: [
+ {
+ label: 'String',
+ name: 'string'
+ },
+ {
+ label: 'String Array',
+ name: 'stringArray'
+ },
+ {
+ label: 'Number',
+ name: 'number'
+ },
+ {
+ label: 'Boolean',
+ name: 'boolean'
+ },
+ {
+ label: 'Enum',
+ name: 'enum'
+ },
+ {
+ label: 'JSON Array',
+ name: 'jsonArray'
+ }
+ ]
+ },
+ {
+ label: 'Enum Values',
+ name: 'enumValues',
+ type: 'string',
+ placeholder: 'value1, value2, value3',
+ description: 'Enum values. Separated by comma',
+ optional: true,
+ show: {
+ 'agentStructuredOutput[$index].type': 'enum'
+ }
+ },
+ {
+ label: 'JSON Schema',
+ name: 'jsonSchema',
+ type: 'code',
+ placeholder: `{
+ "answer": {
+ "type": "string",
+ "description": "Value of the answer"
+ },
+ "reason": {
+ "type": "string",
+ "description": "Reason for the answer"
+ },
+ "optional": {
+ "type": "boolean"
+ },
+ "count": {
+ "type": "number"
+ },
+ "children": {
+ "type": "array",
+ "items": {
+ "type": "object",
+ "properties": {
+ "value": {
+ "type": "string",
+ "description": "Value of the children's answer"
+ }
+ }
+ }
+ }
+}`,
+ description: 'JSON schema for the structured output',
+ optional: true,
+ hideCodeExecute: true,
+ show: {
+ 'agentStructuredOutput[$index].type': 'jsonArray'
+ }
+ },
+ {
+ label: 'Description',
+ name: 'description',
+ type: 'string',
+ placeholder: 'Description of the key'
+ }
+ ]
+ },
{
label: 'Update Flow State',
name: 'agentUpdateState',
@@ -427,7 +615,8 @@ class Agent_Agentflow implements INode {
return returnData
}
- const stores = await appDataSource.getRepository(databaseEntities['DocumentStore']).find()
+ const searchOptions = options.searchOptions || {}
+ const stores = await appDataSource.getRepository(databaseEntities['DocumentStore']).findBy(searchOptions)
for (const store of stores) {
if (store.status === 'UPSERTED') {
const obj = {
@@ -495,18 +684,21 @@ class Agent_Agentflow implements INode {
}
}
const toolInstance = await newToolNodeInstance.init(newNodeData, '', options)
- if (tool.agentSelectedToolRequiresHumanInput) {
- toolInstance.requiresHumanInput = true
- }
// toolInstance might return a list of tools, e.g. MCP tools
if (Array.isArray(toolInstance)) {
for (const subTool of toolInstance) {
const subToolInstance = subTool as Tool
;(subToolInstance as any).agentSelectedTool = tool.agentSelectedTool
+ if (tool.agentSelectedToolRequiresHumanInput) {
+ ;(subToolInstance as any).requiresHumanInput = true
+ }
toolsInstance.push(subToolInstance)
}
} else {
+ if (tool.agentSelectedToolRequiresHumanInput) {
+ toolInstance.requiresHumanInput = true
+ }
toolsInstance.push(toolInstance as Tool)
}
}
@@ -519,7 +711,7 @@ class Agent_Agentflow implements INode {
}
const componentNode = options.componentNodes[agentSelectedTool]
- const jsonSchema = zodToJsonSchema(tool.schema)
+ const jsonSchema = zodToJsonSchema(tool.schema as any)
if (jsonSchema.$schema) {
delete jsonSchema.$schema
}
@@ -686,12 +878,14 @@ class Agent_Agentflow implements INode {
const memoryType = nodeData.inputs?.agentMemoryType as string
const userMessage = nodeData.inputs?.agentUserMessage as string
const _agentUpdateState = nodeData.inputs?.agentUpdateState
+ const _agentStructuredOutput = nodeData.inputs?.agentStructuredOutput
const agentMessages = (nodeData.inputs?.agentMessages as unknown as ILLMMessage[]) ?? []
// Extract runtime state and history
const state = options.agentflowRuntime?.state as ICommonObject
const pastChatHistory = (options.pastChatHistory as BaseMessageLike[]) ?? []
const runtimeChatHistory = (options.agentflowRuntime?.chatHistory as BaseMessageLike[]) ?? []
+ const prependedChatHistory = options.prependedChatHistory as IMessage[]
const chatId = options.chatId as string
// Initialize the LLM model instance
@@ -710,6 +904,82 @@ class Agent_Agentflow implements INode {
const llmWithoutToolsBind = (await newLLMNodeInstance.init(newNodeData, '', options)) as BaseChatModel
let llmNodeInstance = llmWithoutToolsBind
+ const isStructuredOutput = _agentStructuredOutput && Array.isArray(_agentStructuredOutput) && _agentStructuredOutput.length > 0
+
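+ // Built-in provider tools are appended as plain descriptors rather than Tool instances;
+ // the provider executes them server-side, so there is nothing to invoke locally.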
+ const agentToolsBuiltInOpenAI = convertMultiOptionsToStringArray(nodeData.inputs?.agentToolsBuiltInOpenAI)
+ if (agentToolsBuiltInOpenAI && agentToolsBuiltInOpenAI.length > 0) {
+ for (const tool of agentToolsBuiltInOpenAI) {
+ const builtInTool: ICommonObject = {
+ type: tool
+ }
+ if (tool === 'code_interpreter') {
+ builtInTool.container = { type: 'auto' }
+ }
+ ;(toolsInstance as any).push(builtInTool)
+ ;(availableTools as any).push({
+ name: tool,
+ toolNode: {
+ label: tool,
+ name: tool
+ }
+ })
+ }
+ }
+
+ const agentToolsBuiltInGemini = convertMultiOptionsToStringArray(nodeData.inputs?.agentToolsBuiltInGemini)
+ if (agentToolsBuiltInGemini && agentToolsBuiltInGemini.length > 0) {
+ for (const tool of agentToolsBuiltInGemini) {
+ const builtInTool: ICommonObject = {
+ [tool]: {}
+ }
+ ;(toolsInstance as any).push(builtInTool)
+ ;(availableTools as any).push({
+ name: tool,
+ toolNode: {
+ label: tool,
+ name: tool
+ }
+ })
+ }
+ }
+
+ const agentToolsBuiltInAnthropic = convertMultiOptionsToStringArray(nodeData.inputs?.agentToolsBuiltInAnthropic)
+ if (agentToolsBuiltInAnthropic && agentToolsBuiltInAnthropic.length > 0) {
+ for (const tool of agentToolsBuiltInAnthropic) {
+ // split _ to get the tool name by removing the last part (date)
+ const toolName = tool.split('_').slice(0, -1).join('_')
+
+ if (tool === 'code_execution_20250825') {
+ ;(llmNodeInstance as any).clientOptions = {
+ defaultHeaders: {
+ 'anthropic-beta': ['code-execution-2025-08-25', 'files-api-2025-04-14']
+ }
+ }
+ }
+
+ if (tool === 'web_fetch_20250910') {
+ ;(llmNodeInstance as any).clientOptions = {
+ defaultHeaders: {
+ 'anthropic-beta': ['web-fetch-2025-09-10']
+ }
+ }
+ }
+
+ const builtInTool: ICommonObject = {
+ type: tool,
+ name: toolName
+ }
+ ;(toolsInstance as any).push(builtInTool)
+ ;(availableTools as any).push({
+ name: tool,
+ toolNode: {
+ label: tool,
+ name: tool
+ }
+ })
+ }
+ }
+
if (llmNodeInstance && toolsInstance.length > 0) {
if (llmNodeInstance.bindTools === undefined) {
throw new Error(`Agent needs a model that supports function calling.`)
@@ -726,11 +996,27 @@ class Agent_Agentflow implements INode {
// Use to keep track of past messages with image file references
let pastImageMessagesWithFileRef: BaseMessageLike[] = []
+ // Prepend history ONLY if it is the first node
+ if (prependedChatHistory?.length && !runtimeChatHistory.length) {
+ for (const msg of prependedChatHistory) {
+ const role: string = msg.role === 'apiMessage' ? 'assistant' : 'user'
+ const content: string = msg.content ?? ''
+ messages.push({
+ role,
+ content
+ })
+ }
+ }
+
for (const msg of agentMessages) {
const role = msg.role
const content = msg.content
if (role && content) {
- messages.push({ role, content })
+ if (role === 'system') {
+ messages.unshift({ role, content })
+ } else {
+ messages.push({ role, content })
+ }
}
}
@@ -755,7 +1041,7 @@ class Agent_Agentflow implements INode {
/*
* If this is the first node:
* - Add images to messages if exist
- * - Add user message
+ * - Add user message if it does not exist in the agentMessages array
*/
if (options.uploads) {
const imageContents = await getUniqueImageMessages(options, messages, modelConfig)
@@ -766,7 +1052,7 @@ class Agent_Agentflow implements INode {
}
}
- if (input && typeof input === 'string') {
+ if (input && typeof input === 'string' && !agentMessages.some((msg) => msg.role === 'user')) {
messages.push({
role: 'user',
content: input
@@ -778,7 +1064,7 @@ class Agent_Agentflow implements INode {
// Initialize response and determine if streaming is possible
let response: AIMessageChunk = new AIMessageChunk('')
const isLastNode = options.isLastNode as boolean
- const isStreamable = isLastNode && options.sseStreamer !== undefined && modelConfig?.streaming !== false
+ const isStreamable = isLastNode && options.sseStreamer !== undefined && modelConfig?.streaming !== false && !isStructuredOutput
// Start analytics
if (analyticHandlers && options.parentTraceIds) {
@@ -796,6 +1082,7 @@ class Agent_Agentflow implements INode {
let usedTools: IUsedTool[] = []
let sourceDocuments: Array = []
let artifacts: any[] = []
+ let fileAnnotations: any[] = []
let additionalTokens = 0
let isWaitingForHumanInput = false
@@ -826,7 +1113,8 @@ class Agent_Agentflow implements INode {
llmWithoutToolsBind,
isStreamable,
isLastNode,
- iterationContext
+ iterationContext,
+ isStructuredOutput
})
response = result.response
@@ -855,12 +1143,22 @@ class Agent_Agentflow implements INode {
}
} else {
if (isStreamable) {
- response = await this.handleStreamingResponse(sseStreamer, llmNodeInstance, messages, chatId, abortController)
+ response = await this.handleStreamingResponse(
+ sseStreamer,
+ llmNodeInstance,
+ messages,
+ chatId,
+ abortController,
+ isStructuredOutput
+ )
} else {
response = await llmNodeInstance.invoke(messages, { signal: abortController?.signal })
}
}
+ // Collect built-in tool usage reported in the provider response metadata
+ const builtInUsedTools: IUsedTool[] = await this.extractBuiltInUsedTools(response, [])
+
if (!humanInput && response.tool_calls && response.tool_calls.length > 0) {
const result = await this.handleToolCalls({
response,
@@ -874,7 +1172,8 @@ class Agent_Agentflow implements INode {
llmNodeInstance,
isStreamable,
isLastNode,
- iterationContext
+ iterationContext,
+ isStructuredOutput
})
response = result.response
@@ -901,13 +1200,18 @@ class Agent_Agentflow implements INode {
sseStreamer.streamArtifactsEvent(chatId, flatten(artifacts))
}
}
- } else if (!humanInput && !isStreamable && isLastNode && sseStreamer) {
+ } else if (!humanInput && !isStreamable && isLastNode && sseStreamer && !isStructuredOutput) {
// Stream whole response back to UI if not streaming and no tool calls
- let responseContent = JSON.stringify(response, null, 2)
- if (typeof response.content === 'string') {
- responseContent = response.content
+ // Skip this if structured output is enabled - it will be streamed after conversion
+ let finalResponse = ''
+ if (response.content && Array.isArray(response.content)) {
+ finalResponse = response.content.map((item: any) => item.text).join('\n')
+ } else if (response.content && typeof response.content === 'string') {
+ finalResponse = response.content
+ } else {
+ finalResponse = JSON.stringify(response, null, 2)
}
- sseStreamer.streamTokenEvent(chatId, responseContent)
+ sseStreamer.streamTokenEvent(chatId, finalResponse)
}
// Calculate execution time
@@ -928,7 +1232,71 @@ class Agent_Agentflow implements INode {
}
// Prepare final response and output object
- const finalResponse = (response.content as string) ?? JSON.stringify(response, null, 2)
+ let finalResponse = ''
+ if (response.content && Array.isArray(response.content)) {
+ finalResponse = response.content.map((item: any) => item.text).join('\n')
+ } else if (response.content && typeof response.content === 'string') {
+ finalResponse = response.content
+ } else {
+ finalResponse = JSON.stringify(response, null, 2)
+ }
+
+ // Collect any remaining built-in tool usage from the final response
+ const additionalBuiltInUsedTools: IUsedTool[] = await this.extractBuiltInUsedTools(response, builtInUsedTools)
+ if (additionalBuiltInUsedTools.length > 0) {
+ usedTools = [...new Set([...usedTools, ...additionalBuiltInUsedTools])]
+
+ // Stream used tools if this is the last node
+ if (isLastNode && sseStreamer) {
+ sseStreamer.streamUsedToolsEvent(chatId, flatten(usedTools))
+ }
+ }
+
+ // Extract artifacts from annotations in response metadata
+ if (response.response_metadata) {
+ const { artifacts: extractedArtifacts, fileAnnotations: extractedFileAnnotations } =
+ await this.extractArtifactsFromResponse(response.response_metadata, newNodeData, options)
+ if (extractedArtifacts.length > 0) {
+ artifacts = [...artifacts, ...extractedArtifacts]
+
+ // Stream artifacts if this is the last node
+ if (isLastNode && sseStreamer) {
+ sseStreamer.streamArtifactsEvent(chatId, extractedArtifacts)
+ }
+ }
+
+ if (extractedFileAnnotations.length > 0) {
+ fileAnnotations = [...fileAnnotations, ...extractedFileAnnotations]
+
+ // Stream file annotations if this is the last node
+ if (isLastNode && sseStreamer) {
+ sseStreamer.streamFileAnnotationsEvent(chatId, fileAnnotations)
+ }
+ }
+ }
+
+ // Replace sandbox links with proper download URLs. Example: [Download the script](sandbox:/mnt/data/dummy_bar_graph.py)
+ if (finalResponse.includes('sandbox:/')) {
+ finalResponse = await this.processSandboxLinks(finalResponse, options.baseURL, options.chatflowid, chatId)
+ }
+
+ // If structured output is enabled, invoke the LLM once more at the very end, after all tool calls, to convert the final response
+ if (isStructuredOutput) {
+ llmNodeInstance = configureStructuredOutput(llmNodeInstance, _agentStructuredOutput)
+ const prompt = 'Convert the following response to the structured output format: ' + finalResponse
+ response = await llmNodeInstance.invoke(prompt, { signal: abortController?.signal })
+
+ if (typeof response === 'object') {
+ finalResponse = '```json\n' + JSON.stringify(response, null, 2) + '\n```'
+ } else {
+ finalResponse = response
+ }
+
+ if (isLastNode && sseStreamer) {
+ sseStreamer.streamTokenEvent(chatId, finalResponse)
+ }
+ }
+
const output = this.prepareOutputObject(
response,
availableTools,
@@ -940,7 +1308,9 @@ class Agent_Agentflow implements INode {
sourceDocuments,
artifacts,
additionalTokens,
- isWaitingForHumanInput
+ isWaitingForHumanInput,
+ fileAnnotations,
+ isStructuredOutput
)
// End analytics tracking
@@ -953,15 +1323,14 @@ class Agent_Agentflow implements INode {
this.sendStreamingEvents(options, chatId, response)
}
- // Process template variables in state
- if (newState && Object.keys(newState).length > 0) {
- for (const key in newState) {
- if (newState[key].toString().includes('{{ output }}')) {
- newState[key] = finalResponse
- }
- }
+ // Stream file annotations if any were extracted
+ if (fileAnnotations.length > 0 && isLastNode && sseStreamer) {
+ sseStreamer.streamFileAnnotationsEvent(chatId, fileAnnotations)
}
+ // Process template variables in state
+ newState = processTemplateVariables(newState, finalResponse)
+
// Replace the actual messages array with one that includes the file references for images instead of base64 data
const messagesWithFileReferences = replaceBase64ImagesWithFileReferences(
messages,
@@ -976,7 +1345,19 @@ class Agent_Agentflow implements INode {
inputMessages.push(...runtimeImageMessagesWithFileRef)
}
if (input && typeof input === 'string') {
- inputMessages.push({ role: 'user', content: input })
+ if (!enableMemory) {
+ if (!agentMessages.some((msg) => msg.role === 'user')) {
+ inputMessages.push({ role: 'user', content: input })
+ } else {
+ agentMessages.forEach((msg) => {
+ if (msg.role === 'user') {
+ inputMessages.push({ role: 'user', content: msg.content })
+ }
+ })
+ }
+ } else {
+ inputMessages.push({ role: 'user', content: input })
+ }
}
}
@@ -1006,7 +1387,16 @@ class Agent_Agentflow implements INode {
{
role: returnRole,
content: finalResponse,
- name: nodeData?.label ? nodeData?.label.toLowerCase().replace(/\s/g, '_').trim() : nodeData?.id
+ name: nodeData?.label ? nodeData?.label.toLowerCase().replace(/\s/g, '_').trim() : nodeData?.id,
+ ...(((artifacts && artifacts.length > 0) ||
+ (fileAnnotations && fileAnnotations.length > 0) ||
+ (usedTools && usedTools.length > 0)) && {
+ additional_kwargs: {
+ ...(artifacts && artifacts.length > 0 && { artifacts }),
+ ...(fileAnnotations && fileAnnotations.length > 0 && { fileAnnotations }),
+ ...(usedTools && usedTools.length > 0 && { usedTools })
+ }
+ })
}
]
}
@@ -1022,6 +1412,132 @@ class Agent_Agentflow implements INode {
}
}
+ /**
+ * Extracts built-in used tools from response metadata and processes image generation results
+ */
+ private async extractBuiltInUsedTools(response: AIMessageChunk, builtInUsedTools: IUsedTool[] = []): Promise<IUsedTool[]> {
+ if (!response.response_metadata) {
+ return builtInUsedTools
+ }
+
+ const { output, tools, groundingMetadata, urlContextMetadata } = response.response_metadata
+
+ // Handle OpenAI built-in tools
+ if (output && Array.isArray(output) && output.length > 0 && tools && Array.isArray(tools) && tools.length > 0) {
+ for (const outputItem of output) {
+ if (outputItem.type && outputItem.type.endsWith('_call')) {
+ let toolInput = outputItem.action ?? outputItem.code
+ let toolOutput = outputItem.status === 'completed' ? 'Success' : outputItem.status
+
+ // Handle image generation calls specially
+ if (outputItem.type === 'image_generation_call') {
+ // Create input summary for image generation
+ toolInput = {
+ prompt: outputItem.revised_prompt || 'Image generation request',
+ size: outputItem.size || '1024x1024',
+ quality: outputItem.quality || 'standard',
+ output_format: outputItem.output_format || 'png'
+ }
+
+ // Check if image has been processed (base64 replaced with file path)
+ if (outputItem.result && !outputItem.result.startsWith('data:') && !outputItem.result.includes('base64')) {
+ toolOutput = `Image generated and saved`
+ } else {
+ toolOutput = `Image generated (base64)`
+ }
+ }
+
+ // Remove "_call" suffix to get the base tool name
+ const baseToolName = outputItem.type.replace('_call', '')
+
+ // Find matching tool that includes the base name in its type
+ const matchingTool = tools.find((tool) => tool.type && tool.type.includes(baseToolName))
+
+ if (matchingTool) {
+ // Check for duplicates
+ if (builtInUsedTools.find((tool) => tool.tool === matchingTool.type)) {
+ continue
+ }
+
+ builtInUsedTools.push({
+ tool: matchingTool.type,
+ toolInput,
+ toolOutput
+ })
+ }
+ }
+ }
+ }
+
+ // Handle Gemini googleSearch tool
+ if (groundingMetadata && groundingMetadata.webSearchQueries && Array.isArray(groundingMetadata.webSearchQueries)) {
+ // Check for duplicates
+ if (!builtInUsedTools.find((tool) => tool.tool === 'googleSearch')) {
+ builtInUsedTools.push({
+ tool: 'googleSearch',
+ toolInput: {
+ queries: groundingMetadata.webSearchQueries
+ },
+ toolOutput: `Searched for: ${groundingMetadata.webSearchQueries.join(', ')}`
+ })
+ }
+ }
+
+ // Handle Gemini urlContext tool
+ if (urlContextMetadata && urlContextMetadata.urlMetadata && Array.isArray(urlContextMetadata.urlMetadata)) {
+ // Check for duplicates
+ if (!builtInUsedTools.find((tool) => tool.tool === 'urlContext')) {
+ builtInUsedTools.push({
+ tool: 'urlContext',
+ toolInput: {
+ urlMetadata: urlContextMetadata.urlMetadata
+ },
+ toolOutput: `Processed ${urlContextMetadata.urlMetadata.length} URL(s)`
+ })
+ }
+ }
+
+ return builtInUsedTools
+ }
+
+ /**
+ * Saves base64 image data to storage and returns file information
+ */
+ private async saveBase64Image(
+ outputItem: any,
+ options: ICommonObject
+ ): Promise<{ filePath: string; fileName: string; totalSize: number } | null> {
+ try {
+ if (!outputItem.result) {
+ return null
+ }
+
+ // Extract base64 data and create buffer
+ const base64Data = outputItem.result
+ const imageBuffer = Buffer.from(base64Data, 'base64')
+
+ // Determine file extension and MIME type
+ const outputFormat = outputItem.output_format || 'png'
+ const fileName = `generated_image_${outputItem.id || Date.now()}.${outputFormat}`
+ const mimeType = outputFormat === 'png' ? 'image/png' : 'image/jpeg'
+
+ // Save the image using the existing storage utility
+ const { path, totalSize } = await addSingleFileToStorage(
+ mimeType,
+ imageBuffer,
+ fileName,
+ options.orgId,
+ options.chatflowid,
+ options.chatId
+ )
+
+ return { filePath: path, fileName, totalSize }
+ } catch (error) {
+ console.error('Error saving base64 image:', error)
+ return null
+ }
+ }
+
/**
* Handles memory management based on the specified memory type
*/
@@ -1184,24 +1700,29 @@ class Agent_Agentflow implements INode {
llmNodeInstance: BaseChatModel,
messages: BaseMessageLike[],
chatId: string,
- abortController: AbortController
+ abortController: AbortController,
+ isStructuredOutput: boolean = false
): Promise<AIMessageChunk> {
let response = new AIMessageChunk('')
try {
for await (const chunk of await llmNodeInstance.stream(messages, { signal: abortController?.signal })) {
- if (sseStreamer) {
+ if (sseStreamer && !isStructuredOutput) {
let content = ''
- if (Array.isArray(chunk.content) && chunk.content.length > 0) {
+
+ if (typeof chunk === 'string') {
+ content = chunk
+ } else if (Array.isArray(chunk.content) && chunk.content.length > 0) {
const contents = chunk.content as MessageContentText[]
content = contents.map((item) => item.text).join('')
- } else {
+ } else if (chunk.content) {
content = chunk.content.toString()
}
sseStreamer.streamTokenEvent(chatId, content)
}
- response = response.concat(chunk)
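+ // Normalize chunks that arrive as plain strings into AIMessageChunk before concatenating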
+ const messageChunk = typeof chunk === 'string' ? new AIMessageChunk(chunk) : chunk
+ response = response.concat(messageChunk)
}
} catch (error) {
console.error('Error during streaming:', error)
@@ -1228,7 +1749,9 @@ class Agent_Agentflow implements INode {
sourceDocuments: Array<any>,
artifacts: any[],
additionalTokens: number = 0,
- isWaitingForHumanInput: boolean = false
+ isWaitingForHumanInput: boolean = false,
+ fileAnnotations: any[] = [],
+ isStructuredOutput: boolean = false
): any {
const output: any = {
content: finalResponse,
@@ -1259,6 +1782,19 @@ class Agent_Agentflow implements INode {
}
}
+ if (response.response_metadata) {
+ output.responseMetadata = response.response_metadata
+ }
+
+ if (isStructuredOutput && typeof response === 'object') {
+ const structuredOutput = response as Record<string, any>
+ for (const key in structuredOutput) {
+ if (structuredOutput[key] !== undefined && structuredOutput[key] !== null) {
+ output[key] = structuredOutput[key]
+ }
+ }
+ }
+
// Add used tools, source documents and artifacts to output
if (usedTools && usedTools.length > 0) {
output.usedTools = flatten(usedTools)
@@ -1280,6 +1816,10 @@ class Agent_Agentflow implements INode {
output.isWaitingForHumanInput = isWaitingForHumanInput
}
+ if (fileAnnotations && fileAnnotations.length > 0) {
+ output.fileAnnotations = fileAnnotations
+ }
+
return output
}
@@ -1290,7 +1830,12 @@ class Agent_Agentflow implements INode {
const sseStreamer: IServerSideEventStreamer = options.sseStreamer as IServerSideEventStreamer
if (response.tool_calls) {
- sseStreamer.streamCalledToolsEvent(chatId, response.tool_calls)
+ const formattedToolCalls = response.tool_calls.map((toolCall: any) => ({
+ tool: toolCall.name || 'tool',
+ toolInput: toolCall.args,
+ toolOutput: ''
+ }))
+ sseStreamer.streamCalledToolsEvent(chatId, flatten(formattedToolCalls))
}
if (response.usage_metadata) {
@@ -1315,7 +1860,8 @@ class Agent_Agentflow implements INode {
llmNodeInstance,
isStreamable,
isLastNode,
- iterationContext
+ iterationContext,
+ isStructuredOutput = false
}: {
response: AIMessageChunk
messages: BaseMessageLike[]
@@ -1329,6 +1875,7 @@ class Agent_Agentflow implements INode {
isStreamable: boolean
isLastNode: boolean
iterationContext: ICommonObject
+ isStructuredOutput?: boolean
}): Promise<{
response: AIMessageChunk
usedTools: IUsedTool[]
@@ -1339,6 +1886,10 @@ class Agent_Agentflow implements INode {
}> {
// Track total tokens used throughout this process
let totalTokens = response.usage_metadata?.total_tokens || 0
+ const usedTools: IUsedTool[] = []
+ let sourceDocuments: Array<any> = []
+ let artifacts: any[] = []
+ let isWaitingForHumanInput: boolean | undefined
if (!response.tool_calls || response.tool_calls.length === 0) {
return { response, usedTools: [], sourceDocuments: [], artifacts: [], totalTokens }
@@ -1346,7 +1897,30 @@ class Agent_Agentflow implements INode {
// Stream tool calls if available
if (sseStreamer) {
- sseStreamer.streamCalledToolsEvent(chatId, JSON.stringify(response.tool_calls))
+ const formattedToolCalls = response.tool_calls.map((toolCall: any) => ({
+ tool: toolCall.name || 'tool',
+ toolInput: toolCall.args,
+ toolOutput: ''
+ }))
+ sseStreamer.streamCalledToolsEvent(chatId, flatten(formattedToolCalls))
+ }
+
+ // Remove tool calls with no id
+ const toBeRemovedToolCalls = []
+ for (let i = 0; i < response.tool_calls.length; i++) {
+ const toolCall = response.tool_calls[i]
+ if (!toolCall.id) {
+ toBeRemovedToolCalls.push(toolCall)
+ usedTools.push({
+ tool: toolCall.name || 'tool',
+ toolInput: toolCall.args,
+ toolOutput: response.content
+ })
+ }
+ }
+
+ for (const toolCall of toBeRemovedToolCalls) {
+ response.tool_calls.splice(response.tool_calls.indexOf(toolCall), 1)
}
// Add LLM response with tool calls to messages
@@ -1358,10 +1932,6 @@ class Agent_Agentflow implements INode {
usage_metadata: response.usage_metadata
})
- const usedTools: IUsedTool[] = []
- let sourceDocuments: Array = []
- let artifacts: any[] = []
-
// Process each tool call
for (let i = 0; i < response.tool_calls.length; i++) {
const toolCall = response.tool_calls[i]
@@ -1374,6 +1944,7 @@ class Agent_Agentflow implements INode {
(selectedTool as any).requiresHumanInput && (!iterationContext || Object.keys(iterationContext).length === 0)
const flowConfig = {
+ chatflowId: options.chatflowid,
sessionId: options.sessionId,
chatId: options.chatId,
input: input,
@@ -1384,14 +1955,25 @@ class Agent_Agentflow implements INode {
const toolCallDetails = '```json\n' + JSON.stringify(toolCall, null, 2) + '\n```'
const responseContent = response.content + `\nAttempting to use tool:\n${toolCallDetails}`
response.content = responseContent
- sseStreamer?.streamTokenEvent(chatId, responseContent)
+ if (!isStructuredOutput) {
+ sseStreamer?.streamTokenEvent(chatId, responseContent)
+ }
return { response, usedTools, sourceDocuments, artifacts, totalTokens, isWaitingForHumanInput: true }
}
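+ // Open an analytics span for this tool call; it is closed on both the success and error paths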
+ let toolIds: ICommonObject | undefined
+ if (options.analyticHandlers) {
+ toolIds = await options.analyticHandlers.onToolStart(toolCall.name, toolCall.args, options.parentTraceIds)
+ }
+
try {
//@ts-ignore
let toolOutput = await selectedTool.call(toolCall.args, { signal: abortController?.signal }, undefined, flowConfig)
+ if (options.analyticHandlers && toolIds) {
+ await options.analyticHandlers.onToolEnd(toolIds, toolOutput)
+ }
+
// Extract source documents if present
if (typeof toolOutput === 'string' && toolOutput.includes(SOURCE_DOCUMENTS_PREFIX)) {
const [output, docs] = toolOutput.split(SOURCE_DOCUMENTS_PREFIX)
@@ -1416,6 +1998,17 @@ class Agent_Agentflow implements INode {
}
}
+ let toolInput
+ if (typeof toolOutput === 'string' && toolOutput.includes(TOOL_ARGS_PREFIX)) {
+ const [output, args] = toolOutput.split(TOOL_ARGS_PREFIX)
+ toolOutput = output
+ try {
+ toolInput = JSON.parse(args)
+ } catch (e) {
+ console.error('Error parsing tool input from tool:', e)
+ }
+ }
+
// Add tool message to conversation
messages.push({
role: 'tool',
@@ -1431,14 +2024,29 @@ class Agent_Agentflow implements INode {
// Track used tools
usedTools.push({
tool: toolCall.name,
- toolInput: toolCall.args,
+ toolInput: toolInput ?? toolCall.args,
toolOutput
})
} catch (e) {
+ if (options.analyticHandlers && toolIds) {
+ await options.analyticHandlers.onToolEnd(toolIds, e)
+ }
+
console.error('Error invoking tool:', e)
+ const errMsg = getErrorMessage(e)
+ let toolInput = toolCall.args
+ if (typeof errMsg === 'string' && errMsg.includes(TOOL_ARGS_PREFIX)) {
+ const [_, args] = errMsg.split(TOOL_ARGS_PREFIX)
+ try {
+ toolInput = JSON.parse(args)
+ } catch (e) {
+ console.error('Error parsing tool input from tool:', e)
+ }
+ }
+
usedTools.push({
tool: selectedTool.name,
- toolInput: toolCall.args,
+ toolInput,
toolOutput: '',
error: getErrorMessage(e)
})
@@ -1455,7 +2063,7 @@ class Agent_Agentflow implements INode {
const lastToolOutput = usedTools[0]?.toolOutput || ''
const lastToolOutputString = typeof lastToolOutput === 'string' ? lastToolOutput : JSON.stringify(lastToolOutput, null, 2)
- if (sseStreamer) {
+ if (sseStreamer && !isStructuredOutput) {
sseStreamer.streamTokenEvent(chatId, lastToolOutputString)
}
@@ -1469,264 +2077,34 @@ class Agent_Agentflow implements INode {
}
}
+ if (response.tool_calls.length === 0) {
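+ // every remaining tool call was stripped above (no id), so return the assistant content as the final response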
+ const responseContent = typeof response.content === 'string' ? response.content : JSON.stringify(response.content, null, 2)
+ return {
+ response: new AIMessageChunk(responseContent),
+ usedTools,
+ sourceDocuments,
+ artifacts,
+ totalTokens
+ }
+ }
+
// Get LLM response after tool calls
let newResponse: AIMessageChunk
if (isStreamable) {
- newResponse = await this.handleStreamingResponse(sseStreamer, llmNodeInstance, messages, chatId, abortController)
- } else {
- newResponse = await llmNodeInstance.invoke(messages, { signal: abortController?.signal })
-
- // Stream non-streaming response if this is the last node
- if (isLastNode && sseStreamer) {
- let responseContent = JSON.stringify(newResponse, null, 2)
- if (typeof newResponse.content === 'string') {
- responseContent = newResponse.content
- }
- sseStreamer.streamTokenEvent(chatId, responseContent)
- }
- }
-
- // Add tokens from this response
- if (newResponse.usage_metadata?.total_tokens) {
- totalTokens += newResponse.usage_metadata.total_tokens
- }
-
- // Check for recursive tool calls and handle them
- if (newResponse.tool_calls && newResponse.tool_calls.length > 0) {
- const {
- response: recursiveResponse,
- usedTools: recursiveUsedTools,
- sourceDocuments: recursiveSourceDocuments,
- artifacts: recursiveArtifacts,
- totalTokens: recursiveTokens
- } = await this.handleToolCalls({
- response: newResponse,
- messages,
- toolsInstance,
+ newResponse = await this.handleStreamingResponse(
sseStreamer,
- chatId,
- input,
- options,
- abortController,
llmNodeInstance,
- isStreamable,
- isLastNode,
- iterationContext
- })
-
- // Merge results from recursive tool calls
- newResponse = recursiveResponse
- usedTools.push(...recursiveUsedTools)
- sourceDocuments = [...sourceDocuments, ...recursiveSourceDocuments]
- artifacts = [...artifacts, ...recursiveArtifacts]
- totalTokens += recursiveTokens
- }
-
- return { response: newResponse, usedTools, sourceDocuments, artifacts, totalTokens }
- }
-
- /**
- * Handles tool calls and their responses, with support for recursive tool calling
- */
- private async handleResumedToolCalls({
- humanInput,
- humanInputAction,
- messages,
- toolsInstance,
- sseStreamer,
- chatId,
- input,
- options,
- abortController,
- llmWithoutToolsBind,
- isStreamable,
- isLastNode,
- iterationContext
- }: {
- humanInput: IHumanInput
- humanInputAction: Record<string, any> | undefined
- messages: BaseMessageLike[]
- toolsInstance: Tool[]
- sseStreamer: IServerSideEventStreamer | undefined
- chatId: string
- input: string | Record<string, any>
- options: ICommonObject
- abortController: AbortController
- llmWithoutToolsBind: BaseChatModel
- isStreamable: boolean
- isLastNode: boolean
- iterationContext: ICommonObject
- }): Promise<{
- response: AIMessageChunk
- usedTools: IUsedTool[]
- sourceDocuments: Array<any>
- artifacts: any[]
- totalTokens: number
- isWaitingForHumanInput?: boolean
- }> {
- let llmNodeInstance = llmWithoutToolsBind
-
- const lastCheckpointMessages = humanInputAction?.data?.input?.messages ?? []
- if (!lastCheckpointMessages.length) {
- return { response: new AIMessageChunk(''), usedTools: [], sourceDocuments: [], artifacts: [], totalTokens: 0 }
- }
-
- // Use the last message as the response
- const response = lastCheckpointMessages[lastCheckpointMessages.length - 1] as AIMessageChunk
-
- // Replace messages array
- messages.length = 0
- messages.push(...lastCheckpointMessages.slice(0, lastCheckpointMessages.length - 1))
-
- // Track total tokens used throughout this process
- let totalTokens = response.usage_metadata?.total_tokens || 0
-
- if (!response.tool_calls || response.tool_calls.length === 0) {
- return { response, usedTools: [], sourceDocuments: [], artifacts: [], totalTokens }
- }
-
- // Stream tool calls if available
- if (sseStreamer) {
- sseStreamer.streamCalledToolsEvent(chatId, JSON.stringify(response.tool_calls))
- }
-
- // Add LLM response with tool calls to messages
- messages.push({
- id: response.id,
- role: 'assistant',
- content: response.content,
- tool_calls: response.tool_calls,
- usage_metadata: response.usage_metadata
- })
-
- const usedTools: IUsedTool[] = []
- let sourceDocuments: Array<any> = []
- let artifacts: any[] = []
- let isWaitingForHumanInput: boolean | undefined
-
- // Process each tool call
- for (let i = 0; i < response.tool_calls.length; i++) {
- const toolCall = response.tool_calls[i]
-
- const selectedTool = toolsInstance.find((tool) => tool.name === toolCall.name)
- if (selectedTool) {
- let parsedDocs
- let parsedArtifacts
-
- const flowConfig = {
- sessionId: options.sessionId,
- chatId: options.chatId,
- input: input,
- state: options.agentflowRuntime?.state
- }
-
- if (humanInput.type === 'reject') {
- messages.pop()
- toolsInstance = toolsInstance.filter((tool) => tool.name !== toolCall.name)
- }
- if (humanInput.type === 'proceed') {
- try {
- //@ts-ignore
- let toolOutput = await selectedTool.call(toolCall.args, { signal: abortController?.signal }, undefined, flowConfig)
-
- // Extract source documents if present
- if (typeof toolOutput === 'string' && toolOutput.includes(SOURCE_DOCUMENTS_PREFIX)) {
- const [output, docs] = toolOutput.split(SOURCE_DOCUMENTS_PREFIX)
- toolOutput = output
- try {
- parsedDocs = JSON.parse(docs)
- sourceDocuments.push(parsedDocs)
- } catch (e) {
- console.error('Error parsing source documents from tool:', e)
- }
- }
-
- // Extract artifacts if present
- if (typeof toolOutput === 'string' && toolOutput.includes(ARTIFACTS_PREFIX)) {
- const [output, artifact] = toolOutput.split(ARTIFACTS_PREFIX)
- toolOutput = output
- try {
- parsedArtifacts = JSON.parse(artifact)
- artifacts.push(parsedArtifacts)
- } catch (e) {
- console.error('Error parsing artifacts from tool:', e)
- }
- }
-
- // Add tool message to conversation
- messages.push({
- role: 'tool',
- content: toolOutput,
- tool_call_id: toolCall.id,
- name: toolCall.name,
- additional_kwargs: {
- artifacts: parsedArtifacts,
- sourceDocuments: parsedDocs
- }
- })
-
- // Track used tools
- usedTools.push({
- tool: toolCall.name,
- toolInput: toolCall.args,
- toolOutput
- })
- } catch (e) {
- console.error('Error invoking tool:', e)
- usedTools.push({
- tool: selectedTool.name,
- toolInput: toolCall.args,
- toolOutput: '',
- error: getErrorMessage(e)
- })
- sseStreamer?.streamUsedToolsEvent(chatId, flatten(usedTools))
- throw new Error(getErrorMessage(e))
- }
- }
- }
- }
-
- // Return direct tool output if there's exactly one tool with returnDirect
- if (response.tool_calls.length === 1) {
- const selectedTool = toolsInstance.find((tool) => tool.name === response.tool_calls?.[0]?.name)
- if (selectedTool && selectedTool.returnDirect) {
- const lastToolOutput = usedTools[0]?.toolOutput || ''
- const lastToolOutputString = typeof lastToolOutput === 'string' ? lastToolOutput : JSON.stringify(lastToolOutput, null, 2)
-
- if (sseStreamer) {
- sseStreamer.streamTokenEvent(chatId, lastToolOutputString)
- }
-
- return {
- response: new AIMessageChunk(lastToolOutputString),
- usedTools,
- sourceDocuments,
- artifacts,
- totalTokens
- }
- }
- }
-
- // Get LLM response after tool calls
- let newResponse: AIMessageChunk
-
- if (llmNodeInstance && toolsInstance.length > 0) {
- if (llmNodeInstance.bindTools === undefined) {
- throw new Error(`Agent needs to have a function calling capable models.`)
- }
-
- // @ts-ignore
- llmNodeInstance = llmNodeInstance.bindTools(toolsInstance)
- }
-
- if (isStreamable) {
- newResponse = await this.handleStreamingResponse(sseStreamer, llmNodeInstance, messages, chatId, abortController)
+ messages,
+ chatId,
+ abortController,
+ isStructuredOutput
+ )
} else {
newResponse = await llmNodeInstance.invoke(messages, { signal: abortController?.signal })
// Stream non-streaming response if this is the last node
- if (isLastNode && sseStreamer) {
+ if (isLastNode && sseStreamer && !isStructuredOutput) {
let responseContent = JSON.stringify(newResponse, null, 2)
if (typeof newResponse.content === 'string') {
responseContent = newResponse.content
@@ -1761,7 +2139,8 @@ class Agent_Agentflow implements INode {
llmNodeInstance,
isStreamable,
isLastNode,
- iterationContext
+ iterationContext,
+ isStructuredOutput
})
// Merge results from recursive tool calls
@@ -1775,6 +2154,553 @@ class Agent_Agentflow implements INode {
return { response: newResponse, usedTools, sourceDocuments, artifacts, totalTokens, isWaitingForHumanInput }
}
+
+ /**
+ * Handles tool calls and their responses, with support for recursive tool calling
+ */
+ private async handleResumedToolCalls({
+ humanInput,
+ humanInputAction,
+ messages,
+ toolsInstance,
+ sseStreamer,
+ chatId,
+ input,
+ options,
+ abortController,
+ llmWithoutToolsBind,
+ isStreamable,
+ isLastNode,
+ iterationContext,
+ isStructuredOutput = false
+ }: {
+ humanInput: IHumanInput
+ humanInputAction: Record<string, any> | undefined
+ messages: BaseMessageLike[]
+ toolsInstance: Tool[]
+ sseStreamer: IServerSideEventStreamer | undefined
+ chatId: string
+ input: string | Record<string, any>
+ options: ICommonObject
+ abortController: AbortController
+ llmWithoutToolsBind: BaseChatModel
+ isStreamable: boolean
+ isLastNode: boolean
+ iterationContext: ICommonObject
+ isStructuredOutput?: boolean
+ }): Promise<{
+ response: AIMessageChunk
+ usedTools: IUsedTool[]
+ sourceDocuments: Array<any>
+ artifacts: any[]
+ totalTokens: number
+ isWaitingForHumanInput?: boolean
+ }> {
+ let llmNodeInstance = llmWithoutToolsBind
+ const usedTools: IUsedTool[] = []
+ let sourceDocuments: Array<any> = []
+ let artifacts: any[] = []
+ let isWaitingForHumanInput: boolean | undefined
+
+ const lastCheckpointMessages = humanInputAction?.data?.input?.messages ?? []
+ if (!lastCheckpointMessages.length) {
+ return { response: new AIMessageChunk(''), usedTools: [], sourceDocuments: [], artifacts: [], totalTokens: 0 }
+ }
+
+ // Use the last message as the response
+ const response = lastCheckpointMessages[lastCheckpointMessages.length - 1] as AIMessageChunk
+
+ // Replace messages array
+ messages.length = 0
+ messages.push(...lastCheckpointMessages.slice(0, lastCheckpointMessages.length - 1))
+
+ // Track total tokens used throughout this process
+ let totalTokens = response.usage_metadata?.total_tokens || 0
+
+ if (!response.tool_calls || response.tool_calls.length === 0) {
+ return { response, usedTools: [], sourceDocuments: [], artifacts: [], totalTokens }
+ }
+
+ // Stream tool calls if available
+ if (sseStreamer) {
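+ // Reshape each tool call into the { tool, toolInput, toolOutput } shape used by streamed tool events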
+ const formattedToolCalls = response.tool_calls.map((toolCall: any) => ({
+ tool: toolCall.name || 'tool',
+ toolInput: toolCall.args,
+ toolOutput: ''
+ }))
+ sseStreamer.streamCalledToolsEvent(chatId, flatten(formattedToolCalls))
+ }
+
+ // Remove tool calls with no id
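+ // (a call without an id cannot be paired with a tool message via tool_call_id, so it is logged as a used tool and dropped)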
+ const toBeRemovedToolCalls = []
+ for (let i = 0; i < response.tool_calls.length; i++) {
+ const toolCall = response.tool_calls[i]
+ if (!toolCall.id) {
+ toBeRemovedToolCalls.push(toolCall)
+ usedTools.push({
+ tool: toolCall.name || 'tool',
+ toolInput: toolCall.args,
+ toolOutput: response.content
+ })
+ }
+ }
+
+ for (const toolCall of toBeRemovedToolCalls) {
+ response.tool_calls.splice(response.tool_calls.indexOf(toolCall), 1)
+ }
+
+ // Add LLM response with tool calls to messages
+ messages.push({
+ id: response.id,
+ role: 'assistant',
+ content: response.content,
+ tool_calls: response.tool_calls,
+ usage_metadata: response.usage_metadata
+ })
+
+ // Process each tool call
+ for (let i = 0; i < response.tool_calls.length; i++) {
+ const toolCall = response.tool_calls[i]
+
+ const selectedTool = toolsInstance.find((tool) => tool.name === toolCall.name)
+ if (selectedTool) {
+ let parsedDocs
+ let parsedArtifacts
+
+ const flowConfig = {
+ chatflowId: options.chatflowid,
+ sessionId: options.sessionId,
+ chatId: options.chatId,
+ input: input,
+ state: options.agentflowRuntime?.state
+ }
+
+ if (humanInput.type === 'reject') {
+ messages.pop()
+ const toBeRemovedTool = toolsInstance.find((tool) => tool.name === toolCall.name)
+ if (toBeRemovedTool) {
+ toolsInstance = toolsInstance.filter((tool) => tool.name !== toolCall.name)
+ // Remove other tools with the same agentSelectedTool such as MCP tools
+ toolsInstance = toolsInstance.filter(
+ (tool) => (tool as any).agentSelectedTool !== (toBeRemovedTool as any).agentSelectedTool
+ )
+ }
+ }
+ if (humanInput.type === 'proceed') {
+ let toolIds: ICommonObject | undefined
+ if (options.analyticHandlers) {
+ toolIds = await options.analyticHandlers.onToolStart(toolCall.name, toolCall.args, options.parentTraceIds)
+ }
+
+ try {
+ //@ts-ignore
+ let toolOutput = await selectedTool.call(toolCall.args, { signal: abortController?.signal }, undefined, flowConfig)
+
+ if (options.analyticHandlers && toolIds) {
+ await options.analyticHandlers.onToolEnd(toolIds, toolOutput)
+ }
+
+ // Extract source documents if present
+ if (typeof toolOutput === 'string' && toolOutput.includes(SOURCE_DOCUMENTS_PREFIX)) {
+ const [output, docs] = toolOutput.split(SOURCE_DOCUMENTS_PREFIX)
+ toolOutput = output
+ try {
+ parsedDocs = JSON.parse(docs)
+ sourceDocuments.push(parsedDocs)
+ } catch (e) {
+ console.error('Error parsing source documents from tool:', e)
+ }
+ }
+
+ // Extract artifacts if present
+ if (typeof toolOutput === 'string' && toolOutput.includes(ARTIFACTS_PREFIX)) {
+ const [output, artifact] = toolOutput.split(ARTIFACTS_PREFIX)
+ toolOutput = output
+ try {
+ parsedArtifacts = JSON.parse(artifact)
+ artifacts.push(parsedArtifacts)
+ } catch (e) {
+ console.error('Error parsing artifacts from tool:', e)
+ }
+ }
+
+ let toolInput
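+ // Some tools append their resolved arguments after TOOL_ARGS_PREFIX; split them out and report them as toolInput instead of the raw call args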
+ if (typeof toolOutput === 'string' && toolOutput.includes(TOOL_ARGS_PREFIX)) {
+ const [output, args] = toolOutput.split(TOOL_ARGS_PREFIX)
+ toolOutput = output
+ try {
+ toolInput = JSON.parse(args)
+ } catch (e) {
+ console.error('Error parsing tool input from tool:', e)
+ }
+ }
+
+ // Add tool message to conversation
+ messages.push({
+ role: 'tool',
+ content: toolOutput,
+ tool_call_id: toolCall.id,
+ name: toolCall.name,
+ additional_kwargs: {
+ artifacts: parsedArtifacts,
+ sourceDocuments: parsedDocs
+ }
+ })
+
+ // Track used tools
+ usedTools.push({
+ tool: toolCall.name,
+ toolInput: toolInput ?? toolCall.args,
+ toolOutput
+ })
+ } catch (e) {
+ if (options.analyticHandlers && toolIds) {
+ await options.analyticHandlers.onToolEnd(toolIds, e)
+ }
+
+ console.error('Error invoking tool:', e)
+ const errMsg = getErrorMessage(e)
+ let toolInput = toolCall.args
+ if (typeof errMsg === 'string' && errMsg.includes(TOOL_ARGS_PREFIX)) {
+ const [_, args] = errMsg.split(TOOL_ARGS_PREFIX)
+ try {
+ toolInput = JSON.parse(args)
+ } catch (e) {
+ console.error('Error parsing tool input from tool:', e)
+ }
+ }
+
+ usedTools.push({
+ tool: selectedTool.name,
+ toolInput,
+ toolOutput: '',
+ error: getErrorMessage(e)
+ })
+ sseStreamer?.streamUsedToolsEvent(chatId, flatten(usedTools))
+ throw new Error(getErrorMessage(e))
+ }
+ }
+ }
+ }
+
+ // Return direct tool output if there's exactly one tool with returnDirect
+ if (response.tool_calls.length === 1) {
+ const selectedTool = toolsInstance.find((tool) => tool.name === response.tool_calls?.[0]?.name)
+ if (selectedTool && selectedTool.returnDirect) {
+ const lastToolOutput = usedTools[0]?.toolOutput || ''
+ const lastToolOutputString = typeof lastToolOutput === 'string' ? lastToolOutput : JSON.stringify(lastToolOutput, null, 2)
+
+ if (sseStreamer && !isStructuredOutput) {
+ sseStreamer.streamTokenEvent(chatId, lastToolOutputString)
+ }
+
+ return {
+ response: new AIMessageChunk(lastToolOutputString),
+ usedTools,
+ sourceDocuments,
+ artifacts,
+ totalTokens
+ }
+ }
+ }
+
+ // Get LLM response after tool calls
+ let newResponse: AIMessageChunk
+
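+ // Re-add any built-in tools attached to the model instance (assumed provider-native tools) so the rebind below includes them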
+ if (llmNodeInstance && (llmNodeInstance as any).builtInTools && (llmNodeInstance as any).builtInTools.length > 0) {
+ toolsInstance.push(...(llmNodeInstance as any).builtInTools)
+ }
+
+ if (llmNodeInstance && toolsInstance.length > 0) {
+ if (llmNodeInstance.bindTools === undefined) {
+ throw new Error(`Agent needs to have a function-calling capable model.`)
+ }
+
+ // @ts-ignore
+ llmNodeInstance = llmNodeInstance.bindTools(toolsInstance)
+ }
+
+ if (isStreamable) {
+ newResponse = await this.handleStreamingResponse(
+ sseStreamer,
+ llmNodeInstance,
+ messages,
+ chatId,
+ abortController,
+ isStructuredOutput
+ )
+ } else {
+ newResponse = await llmNodeInstance.invoke(messages, { signal: abortController?.signal })
+
+ // Stream non-streaming response if this is the last node
+ if (isLastNode && sseStreamer && !isStructuredOutput) {
+ let responseContent = JSON.stringify(newResponse, null, 2)
+ if (typeof newResponse.content === 'string') {
+ responseContent = newResponse.content
+ }
+ sseStreamer.streamTokenEvent(chatId, responseContent)
+ }
+ }
+
+ // Add tokens from this response
+ if (newResponse.usage_metadata?.total_tokens) {
+ totalTokens += newResponse.usage_metadata.total_tokens
+ }
+
+ // Check for recursive tool calls and handle them
+ if (newResponse.tool_calls && newResponse.tool_calls.length > 0) {
+ const {
+ response: recursiveResponse,
+ usedTools: recursiveUsedTools,
+ sourceDocuments: recursiveSourceDocuments,
+ artifacts: recursiveArtifacts,
+ totalTokens: recursiveTokens,
+ isWaitingForHumanInput: recursiveIsWaitingForHumanInput
+ } = await this.handleToolCalls({
+ response: newResponse,
+ messages,
+ toolsInstance,
+ sseStreamer,
+ chatId,
+ input,
+ options,
+ abortController,
+ llmNodeInstance,
+ isStreamable,
+ isLastNode,
+ iterationContext,
+ isStructuredOutput
+ })
+
+ // Merge results from recursive tool calls
+ newResponse = recursiveResponse
+ usedTools.push(...recursiveUsedTools)
+ sourceDocuments = [...sourceDocuments, ...recursiveSourceDocuments]
+ artifacts = [...artifacts, ...recursiveArtifacts]
+ totalTokens += recursiveTokens
+ isWaitingForHumanInput = recursiveIsWaitingForHumanInput
+ }
+
+ return { response: newResponse, usedTools, sourceDocuments, artifacts, totalTokens, isWaitingForHumanInput }
+ }
+
+ /**
+ * Extracts artifacts from response metadata (both annotations and built-in tools)
+ */
+ private async extractArtifactsFromResponse(
+ responseMetadata: any,
+ modelNodeData: INodeData,
+ options: ICommonObject
+ ): Promise<{ artifacts: any[]; fileAnnotations: any[] }> {
+ const artifacts: any[] = []
+ const fileAnnotations: any[] = []
+
+ if (!responseMetadata?.output || !Array.isArray(responseMetadata.output)) {
+ return { artifacts, fileAnnotations }
+ }
+
+ for (const outputItem of responseMetadata.output) {
+ // Handle container file citations from annotations
+ if (outputItem.type === 'message' && outputItem.content && Array.isArray(outputItem.content)) {
+ for (const contentItem of outputItem.content) {
+ if (contentItem.annotations && Array.isArray(contentItem.annotations)) {
+ for (const annotation of contentItem.annotations) {
+ if (annotation.type === 'container_file_citation' && annotation.file_id && annotation.filename) {
+ try {
+ // Download and store the file content
+ const downloadResult = await this.downloadContainerFile(
+ annotation.container_id,
+ annotation.file_id,
+ annotation.filename,
+ modelNodeData,
+ options
+ )
+
+ if (downloadResult) {
+ const fileType = this.getArtifactTypeFromFilename(annotation.filename)
+
+ if (fileType === 'png' || fileType === 'jpeg' || fileType === 'jpg') {
+ const artifact = {
+ type: fileType,
+ data: downloadResult.filePath
+ }
+
+ artifacts.push(artifact)
+ } else {
+ fileAnnotations.push({
+ filePath: downloadResult.filePath,
+ fileName: annotation.filename
+ })
+ }
+ }
+ } catch (error) {
+ console.error('Error processing annotation:', error)
+ }
+ }
+ }
+ }
+ }
+ }
+
+ // Handle built-in tool artifacts (like image generation)
+ if (outputItem.type === 'image_generation_call' && outputItem.result) {
+ try {
+ const savedImageResult = await this.saveBase64Image(outputItem, options)
+ if (savedImageResult) {
+ // Replace the base64 result with the file path in the response metadata
+ outputItem.result = savedImageResult.filePath
+
+ // Create artifact in the same format as other image artifacts
+ const fileType = this.getArtifactTypeFromFilename(savedImageResult.fileName)
+ artifacts.push({
+ type: fileType,
+ data: savedImageResult.filePath
+ })
+ }
+ } catch (error) {
+ console.error('Error processing image generation artifact:', error)
+ }
+ }
+ }
+
+ return { artifacts, fileAnnotations }
+ }
+
+ /**
+ * Downloads file content from container file citation
+ */
+ private async downloadContainerFile(
+ containerId: string,
+ fileId: string,
+ filename: string,
+ modelNodeData: INodeData,
+ options: ICommonObject
+ ): Promise<{ filePath: string; totalSize: number } | null> {
+ try {
+ const credentialData = await getCredentialData(modelNodeData.credential ?? '', options)
+ const openAIApiKey = getCredentialParam('openAIApiKey', credentialData, modelNodeData)
+
+ if (!openAIApiKey) {
+ console.warn('No OpenAI API key available for downloading container file')
+ return null
+ }
+
+ // Download the file using OpenAI Container API
+ const response = await fetch(`https://api.openai.com/v1/containers/${containerId}/files/${fileId}/content`, {
+ method: 'GET',
+ headers: {
+ Accept: '*/*',
+ Authorization: `Bearer ${openAIApiKey}`
+ }
+ })
+
+ if (!response.ok) {
+ console.warn(
+ `Failed to download container file ${fileId} from container ${containerId}: ${response.status} ${response.statusText}`
+ )
+ return null
+ }
+
+ // Extract the binary data from the Response object
+ const data = await response.arrayBuffer()
+ const dataBuffer = Buffer.from(data)
+ const mimeType = this.getMimeTypeFromFilename(filename)
+
+ // Store the file using the same storage utility as OpenAIAssistant
+ const { path, totalSize } = await addSingleFileToStorage(
+ mimeType,
+ dataBuffer,
+ filename,
+ options.orgId,
+ options.chatflowid,
+ options.chatId
+ )
+
+ return { filePath: path, totalSize }
+ } catch (error) {
+ console.error('Error downloading container file:', error)
+ return null
+ }
+ }
+
+ /**
+ * Gets MIME type from filename extension
+ */
+ private getMimeTypeFromFilename(filename: string): string {
+ const extension = filename.toLowerCase().split('.').pop()
+ const mimeTypes: { [key: string]: string } = {
+ png: 'image/png',
+ jpg: 'image/jpeg',
+ jpeg: 'image/jpeg',
+ gif: 'image/gif',
+ pdf: 'application/pdf',
+ txt: 'text/plain',
+ csv: 'text/csv',
+ json: 'application/json',
+ html: 'text/html',
+ xml: 'application/xml'
+ }
+ return mimeTypes[extension || ''] || 'application/octet-stream'
+ }
+
+ /**
+ * Gets artifact type from filename extension for UI rendering
+ */
+ private getArtifactTypeFromFilename(filename: string): string {
+ const extension = filename.toLowerCase().split('.').pop()
+ const artifactTypes: { [key: string]: string } = {
+ png: 'png',
+ jpg: 'jpeg',
+ jpeg: 'jpeg',
+ html: 'html',
+ htm: 'html',
+ md: 'markdown',
+ markdown: 'markdown',
+ json: 'json',
+ js: 'javascript',
+ javascript: 'javascript',
+ tex: 'latex',
+ latex: 'latex',
+ txt: 'text',
+ csv: 'text',
+ pdf: 'text'
+ }
+ return artifactTypes[extension || ''] || 'text'
+ }
+
+ /**
+ * Processes sandbox links in the response text and converts them to file annotations
+ */
+ private async processSandboxLinks(text: string, baseURL: string, chatflowId: string, chatId: string): Promise<string> {
+ let processedResponse = text
+
+ // Regex to match sandbox links: [text](sandbox:/path/to/file)
+ const sandboxLinkRegex = /\[([^\]]+)\]\(sandbox:\/([^)]+)\)/g
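+ // e.g. "[Download](sandbox:/mnt/data/report.csv)" captures linkText "Download" and filePath "mnt/data/report.csv"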
+ const matches = Array.from(text.matchAll(sandboxLinkRegex))
+
+ for (const match of matches) {
+ const fullMatch = match[0]
+ const linkText = match[1]
+ const filePath = match[2]
+
+ try {
+ // Extract filename from the file path
+ const fileName = filePath.split('/').pop() || filePath
+
+ // Replace sandbox link with proper download URL
+ const downloadUrl = `${baseURL}/api/v1/get-upload-file?chatflowId=${chatflowId}&chatId=${chatId}&fileName=${encodeURIComponent(fileName)}&download=true`
+ const newLink = `[${linkText}](${downloadUrl})`
+
+ processedResponse = processedResponse.replace(fullMatch, newLink)
+ } catch (error) {
+ console.error('Error processing sandbox link:', error)
+ // If there's an error, remove the sandbox link as fallback
+ processedResponse = processedResponse.replace(fullMatch, linkText)
+ }
+ }
+
+ return processedResponse
+ }
}
module.exports = { nodeClass: Agent_Agentflow }
diff --git a/packages/components/nodes/agentflow/Condition/Condition.ts b/packages/components/nodes/agentflow/Condition/Condition.ts
index af2fa0411..7ae1be062 100644
--- a/packages/components/nodes/agentflow/Condition/Condition.ts
+++ b/packages/components/nodes/agentflow/Condition/Condition.ts
@@ -1,4 +1,5 @@
import { CommonType, ICommonObject, ICondition, INode, INodeData, INodeOutputsValue, INodeParams } from '../../../src/Interface'
+import removeMarkdown from 'remove-markdown'
class Condition_Agentflow implements INode {
label: string
@@ -300,8 +301,8 @@ class Condition_Agentflow implements INode {
value2 = parseFloat(_value2 as string) || 0
break
default: // string
- value1 = _value1 as string
- value2 = _value2 as string
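+ // Strip markdown so string comparisons run on plain text (e.g. "**Yes**" compares equal to "Yes")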
+ value1 = removeMarkdown((_value1 as string) || '')
+ value2 = removeMarkdown((_value2 as string) || '')
}
const compareOperationResult = compareOperationFunctions[operation](value1, value2)
@@ -316,7 +317,7 @@ class Condition_Agentflow implements INode {
}
}
- // If no condition is fullfilled, add isFulfilled to the ELSE condition
+ // If no condition is fulfilled, add isFulfilled to the ELSE condition
const dummyElseConditionData = {
type: 'string',
value1: '',
diff --git a/packages/components/nodes/agentflow/ConditionAgent/ConditionAgent.ts b/packages/components/nodes/agentflow/ConditionAgent/ConditionAgent.ts
index 6ec809f96..b23dd198f 100644
--- a/packages/components/nodes/agentflow/ConditionAgent/ConditionAgent.ts
+++ b/packages/components/nodes/agentflow/ConditionAgent/ConditionAgent.ts
@@ -27,7 +27,7 @@ class ConditionAgent_Agentflow implements INode {
constructor() {
this.label = 'Condition Agent'
this.name = 'conditionAgentAgentflow'
- this.version = 1.0
+ this.version = 1.1
this.type = 'ConditionAgent'
this.category = 'Agent Flows'
this.description = `Utilize an agent to split flows based on dynamic conditions`
@@ -80,6 +80,26 @@ class ConditionAgent_Agentflow implements INode {
scenario: ''
}
]
+ },
+ {
+ label: 'Override System Prompt',
+ name: 'conditionAgentOverrideSystemPrompt',
+ type: 'boolean',
+ description: 'Override initial system prompt for Condition Agent',
+ optional: true
+ },
+ {
+ label: 'Node System Prompt',
+ name: 'conditionAgentSystemPrompt',
+ type: 'string',
+ rows: 4,
+ optional: true,
+ acceptVariable: true,
+ default: CONDITION_AGENT_SYSTEM_PROMPT,
+ description: 'Expert use only. Modifying this can significantly alter agent behavior. Leave the default if unsure',
+ show: {
+ conditionAgentOverrideSystemPrompt: true
+ }
}
/*{
label: 'Enable Memory',
@@ -242,6 +262,12 @@ class ConditionAgent_Agentflow implements INode {
const conditionAgentInput = nodeData.inputs?.conditionAgentInput as string
let input = conditionAgentInput || question
const conditionAgentInstructions = nodeData.inputs?.conditionAgentInstructions as string
+ const conditionAgentSystemPrompt = nodeData.inputs?.conditionAgentSystemPrompt as string
+ const conditionAgentOverrideSystemPrompt = nodeData.inputs?.conditionAgentOverrideSystemPrompt as boolean
+ let systemPrompt = CONDITION_AGENT_SYSTEM_PROMPT
+ if (conditionAgentSystemPrompt && conditionAgentOverrideSystemPrompt) {
+ systemPrompt = conditionAgentSystemPrompt
+ }
// Extract memory and configuration options
const enableMemory = nodeData.inputs?.conditionAgentEnableMemory as boolean
@@ -277,31 +303,15 @@ class ConditionAgent_Agentflow implements INode {
const messages: BaseMessageLike[] = [
{
role: 'system',
- content: CONDITION_AGENT_SYSTEM_PROMPT
+ content: systemPrompt
},
{
role: 'user',
- content: `{"input": "Hello", "scenarios": ["user is asking about AI", "default"], "instruction": "Your task is to check and see if user is asking topic about AI"}`
+ content: `{"input": "Hello", "scenarios": ["user is asking about AI", "user is not asking about AI"], "instruction": "Your task is to check if the user is asking about AI."}`
},
{
role: 'assistant',
- content: `\`\`\`json\n{"output": "default"}\n\`\`\``
- },
- {
- role: 'user',
- content: `{"input": "What is AIGC?", "scenarios": ["user is asking about AI", "default"], "instruction": "Your task is to check and see if user is asking topic about AI"}`
- },
- {
- role: 'assistant',
- content: `\`\`\`json\n{"output": "user is asking about AI"}\n\`\`\``
- },
- {
- role: 'user',
- content: `{"input": "Can you explain deep learning?", "scenarios": ["user is interested in AI topics", "default"], "instruction": "Determine if the user is interested in learning about AI"}`
- },
- {
- role: 'assistant',
- content: `\`\`\`json\n{"output": "user is interested in AI topics"}\n\`\`\``
+ content: `\`\`\`json\n{"output": "user is not asking about AI"}\n\`\`\``
}
]
// Use to store messages with image file references as we do not want to store the base64 data into database
@@ -374,15 +384,19 @@ class ConditionAgent_Agentflow implements INode {
)
}
- let calledOutputName = 'default'
+ let calledOutputName: string
try {
const parsedResponse = this.parseJsonMarkdown(response.content as string)
- if (!parsedResponse.output) {
- throw new Error('Missing "output" key in response')
+ if (!parsedResponse.output || typeof parsedResponse.output !== 'string') {
+ throw new Error('LLM response is missing the "output" key or it is not a string.')
}
calledOutputName = parsedResponse.output
} catch (error) {
- console.warn(`Failed to parse LLM response: ${error}. Using default output.`)
+ throw new Error(
+ `Failed to parse a valid scenario from the LLM's response. Please check if the model is capable of following JSON output instructions. Raw LLM Response: "${
+ response.content as string
+ }"`
+ )
}
// Clean up empty inputs
diff --git a/packages/components/nodes/agentflow/CustomFunction/CustomFunction.ts b/packages/components/nodes/agentflow/CustomFunction/CustomFunction.ts
index 6922c651b..e768c7809 100644
--- a/packages/components/nodes/agentflow/CustomFunction/CustomFunction.ts
+++ b/packages/components/nodes/agentflow/CustomFunction/CustomFunction.ts
@@ -8,8 +8,7 @@ import {
INodeParams,
IServerSideEventStreamer
} from '../../../src/Interface'
-import { availableDependencies, defaultAllowBuiltInDep, getVars, prepareSandboxVars } from '../../../src/utils'
-import { NodeVM } from '@flowiseai/nodevm'
+import { getVars, executeJavaScriptCode, createCodeExecutionSandbox, processTemplateVariables } from '../../../src/utils'
import { updateFlowState } from '../utils'
interface ICustomFunctionInputVariables {
@@ -19,9 +18,9 @@ interface ICustomFunctionInputVariables {
const exampleFunc = `/*
* You can use any libraries imported in Flowise
-* You can use properties specified in Input Schema as variables. Ex: Property = userid, Variable = $userid
+* You can use properties specified in Input Variables with the prefix $. For example: $foo
* You can get default flow config: $flow.sessionId, $flow.chatId, $flow.chatflowId, $flow.input, $flow.state
-* You can get custom variables: $vars.<variable-name>
+* You can get global variables: $vars.<variable-name>
* Must return a string value at the end of function
*/
@@ -146,77 +145,51 @@ class CustomFunction_Agentflow implements INode {
const appDataSource = options.appDataSource as DataSource
const databaseEntities = options.databaseEntities as IDatabaseEntity
- // Update flow state if needed
- let newState = { ...state }
- if (_customFunctionUpdateState && Array.isArray(_customFunctionUpdateState) && _customFunctionUpdateState.length > 0) {
- newState = updateFlowState(state, _customFunctionUpdateState)
- }
-
- const variables = await getVars(appDataSource, databaseEntities, nodeData)
+ const variables = await getVars(appDataSource, databaseEntities, nodeData, options)
const flow = {
chatflowId: options.chatflowid,
sessionId: options.sessionId,
chatId: options.chatId,
- input
+ input,
+ state
}
- let sandbox: any = {
- $input: input,
- util: undefined,
- Symbol: undefined,
- child_process: undefined,
- fs: undefined,
- process: undefined
- }
- sandbox['$vars'] = prepareSandboxVars(variables)
- sandbox['$flow'] = flow
-
+ // Create additional sandbox variables for custom function inputs
+ const additionalSandbox: ICommonObject = {}
for (const item of functionInputVariables) {
const variableName = item.variableName
const variableValue = item.variableValue
- sandbox[`$${variableName}`] = variableValue
+ additionalSandbox[`$${variableName}`] = variableValue
}
- const builtinDeps = process.env.TOOL_FUNCTION_BUILTIN_DEP
- ? defaultAllowBuiltInDep.concat(process.env.TOOL_FUNCTION_BUILTIN_DEP.split(','))
- : defaultAllowBuiltInDep
- const externalDeps = process.env.TOOL_FUNCTION_EXTERNAL_DEP ? process.env.TOOL_FUNCTION_EXTERNAL_DEP.split(',') : []
- const deps = availableDependencies.concat(externalDeps)
+ const sandbox = createCodeExecutionSandbox(input, variables, flow, additionalSandbox)
- const nodeVMOptions = {
- console: 'inherit',
- sandbox,
- require: {
- external: { modules: deps },
- builtin: builtinDeps
- },
- eval: false,
- wasm: false,
- timeout: 10000
- } as any
+ // Setup streaming function if needed
+ const streamOutput = isStreamable
+ ? (output: string) => {
+ const sseStreamer: IServerSideEventStreamer = options.sseStreamer
+ sseStreamer.streamTokenEvent(chatId, output)
+ }
+ : undefined
- const vm = new NodeVM(nodeVMOptions)
try {
- const response = await vm.run(`module.exports = async function() {${javascriptFunction}}()`, __dirname)
+ const response = await executeJavaScriptCode(javascriptFunction, sandbox, {
+ libraries: ['axios'],
+ streamOutput
+ })
let finalOutput = response
if (typeof response === 'object') {
finalOutput = JSON.stringify(response, null, 2)
}
- if (isStreamable) {
- const sseStreamer: IServerSideEventStreamer = options.sseStreamer
- sseStreamer.streamTokenEvent(chatId, finalOutput)
+ // Update flow state if needed
+ let newState = { ...state }
+ if (_customFunctionUpdateState && Array.isArray(_customFunctionUpdateState) && _customFunctionUpdateState.length > 0) {
+ newState = updateFlowState(state, _customFunctionUpdateState)
}
- // Process template variables in state
- if (newState && Object.keys(newState).length > 0) {
- for (const key in newState) {
- if (newState[key].toString().includes('{{ output }}')) {
- newState[key] = finalOutput
- }
- }
- }
+ newState = processTemplateVariables(newState, finalOutput)
const returnOutput = {
id: nodeData.id,
diff --git a/packages/components/nodes/agentflow/ExecuteFlow/ExecuteFlow.ts b/packages/components/nodes/agentflow/ExecuteFlow/ExecuteFlow.ts
index 26e5df7b6..e2f0765ad 100644
--- a/packages/components/nodes/agentflow/ExecuteFlow/ExecuteFlow.ts
+++ b/packages/components/nodes/agentflow/ExecuteFlow/ExecuteFlow.ts
@@ -8,7 +8,7 @@ import {
IServerSideEventStreamer
} from '../../../src/Interface'
import axios, { AxiosRequestConfig } from 'axios'
-import { getCredentialData, getCredentialParam } from '../../../src/utils'
+import { getCredentialData, getCredentialParam, processTemplateVariables, parseJsonBody } from '../../../src/utils'
import { DataSource } from 'typeorm'
import { BaseMessageLike } from '@langchain/core/messages'
import { updateFlowState } from '../utils'
@@ -30,7 +30,7 @@ class ExecuteFlow_Agentflow implements INode {
constructor() {
this.label = 'Execute Flow'
this.name = 'executeFlowAgentflow'
- this.version = 1.0
+ this.version = 1.1
this.type = 'ExecuteFlow'
this.category = 'Agent Flows'
this.description = 'Execute another flow'
@@ -62,7 +62,8 @@ class ExecuteFlow_Agentflow implements INode {
name: 'executeFlowOverrideConfig',
description: 'Override the config passed to the flow',
type: 'json',
- optional: true
+ optional: true,
+ acceptVariable: true
},
{
label: 'Base URL',
@@ -127,7 +128,8 @@ class ExecuteFlow_Agentflow implements INode {
return returnData
}
- const chatflows = await appDataSource.getRepository(databaseEntities['ChatFlow']).find()
+ const searchOptions = options.searchOptions || {}
+ const chatflows = await appDataSource.getRepository(databaseEntities['ChatFlow']).findBy(searchOptions)
for (let i = 0; i < chatflows.length; i += 1) {
let cfType = 'Chatflow'
@@ -161,12 +163,15 @@ class ExecuteFlow_Agentflow implements INode {
const flowInput = nodeData.inputs?.executeFlowInput as string
const returnResponseAs = nodeData.inputs?.executeFlowReturnResponseAs as string
const _executeFlowUpdateState = nodeData.inputs?.executeFlowUpdateState
- const overrideConfig =
- typeof nodeData.inputs?.executeFlowOverrideConfig === 'string' &&
- nodeData.inputs.executeFlowOverrideConfig.startsWith('{') &&
- nodeData.inputs.executeFlowOverrideConfig.endsWith('}')
- ? JSON.parse(nodeData.inputs.executeFlowOverrideConfig)
- : nodeData.inputs?.executeFlowOverrideConfig
+
+ let overrideConfig = nodeData.inputs?.executeFlowOverrideConfig
+ if (typeof overrideConfig === 'string' && overrideConfig.startsWith('{') && overrideConfig.endsWith('}')) {
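+ // Only strings that look like JSON objects are parsed; anything else passes through unchanged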
+ try {
+ overrideConfig = parseJsonBody(overrideConfig)
+ } catch (parseError) {
+ throw new Error(`Invalid JSON in executeFlowOverrideConfig: ${parseError.message}`)
+ }
+ }
const state = options.agentflowRuntime?.state as ICommonObject
const runtimeChatHistory = (options.agentflowRuntime?.chatHistory as BaseMessageLike[]) ?? []
@@ -180,7 +185,8 @@ class ExecuteFlow_Agentflow implements INode {
if (selectedFlowId === options.chatflowid) throw new Error('Cannot call the same agentflow!')
let headers: Record<string, string> = {
- 'Content-Type': 'application/json'
+ 'Content-Type': 'application/json',
+ 'flowise-tool': 'true'
}
if (chatflowApiKey) headers = { ...headers, Authorization: `Bearer ${chatflowApiKey}` }
@@ -214,13 +220,7 @@ class ExecuteFlow_Agentflow implements INode {
}
// Process template variables in state
- if (newState && Object.keys(newState).length > 0) {
- for (const key in newState) {
- if (newState[key].toString().includes('{{ output }}')) {
- newState[key] = resultText
- }
- }
- }
+ newState = processTemplateVariables(newState, resultText)
// Only add to runtime chat history if this is the first node
const inputMessages = []
diff --git a/packages/components/nodes/agentflow/HTTP/HTTP.ts b/packages/components/nodes/agentflow/HTTP/HTTP.ts
index 752d6dd0b..697fb0e31 100644
--- a/packages/components/nodes/agentflow/HTTP/HTTP.ts
+++ b/packages/components/nodes/agentflow/HTTP/HTTP.ts
@@ -1,8 +1,9 @@
import { ICommonObject, INode, INodeData, INodeParams } from '../../../src/Interface'
-import axios, { AxiosRequestConfig, Method, ResponseType } from 'axios'
+import { AxiosRequestConfig, Method, ResponseType } from 'axios'
import FormData from 'form-data'
import * as querystring from 'querystring'
-import { getCredentialData, getCredentialParam } from '../../../src/utils'
+import { getCredentialData, getCredentialParam, parseJsonBody } from '../../../src/utils'
+import { secureAxiosRequest } from '../../../src/httpSecurity'
class HTTP_Agentflow implements INode {
label: string
@@ -21,7 +22,7 @@ class HTTP_Agentflow implements INode {
constructor() {
this.label = 'HTTP'
this.name = 'httpAgentflow'
- this.version = 1.0
+ this.version = 1.1
this.type = 'HTTP'
this.category = 'Agent Flows'
this.description = 'Send a HTTP request'
@@ -66,12 +67,14 @@ class HTTP_Agentflow implements INode {
{
label: 'URL',
name: 'url',
- type: 'string'
+ type: 'string',
+ acceptVariable: true
},
{
label: 'Headers',
name: 'headers',
type: 'array',
+ acceptVariable: true,
array: [
{
label: 'Key',
@@ -83,7 +86,8 @@ class HTTP_Agentflow implements INode {
label: 'Value',
name: 'value',
type: 'string',
- default: ''
+ default: '',
+ acceptVariable: true
}
],
optional: true
@@ -92,6 +96,7 @@ class HTTP_Agentflow implements INode {
label: 'Query Params',
name: 'queryParams',
type: 'array',
+ acceptVariable: true,
array: [
{
label: 'Key',
@@ -103,7 +108,8 @@ class HTTP_Agentflow implements INode {
label: 'Value',
name: 'value',
type: 'string',
- default: ''
+ default: '',
+ acceptVariable: true
}
],
optional: true
@@ -147,6 +153,7 @@ class HTTP_Agentflow implements INode {
label: 'Body',
name: 'body',
type: 'array',
+ acceptVariable: true,
show: {
bodyType: ['xWwwFormUrlencoded', 'formData']
},
@@ -161,7 +168,8 @@ class HTTP_Agentflow implements INode {
label: 'Value',
name: 'value',
type: 'string',
- default: ''
+ default: '',
+ acceptVariable: true
}
],
optional: true
@@ -220,14 +228,14 @@ class HTTP_Agentflow implements INode {
// Add credentials if provided
const credentialData = await getCredentialData(nodeData.credential ?? '', options)
if (credentialData && Object.keys(credentialData).length !== 0) {
- const basicAuthUsername = getCredentialParam('username', credentialData, nodeData)
- const basicAuthPassword = getCredentialParam('password', credentialData, nodeData)
+ const basicAuthUsername = getCredentialParam('basicAuthUsername', credentialData, nodeData)
+ const basicAuthPassword = getCredentialParam('basicAuthPassword', credentialData, nodeData)
const bearerToken = getCredentialParam('token', credentialData, nodeData)
const apiKeyName = getCredentialParam('key', credentialData, nodeData)
const apiKeyValue = getCredentialParam('value', credentialData, nodeData)
// Determine which type of auth to use based on available credentials
- if (basicAuthUsername && basicAuthPassword) {
+ if (basicAuthUsername || basicAuthPassword) {
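+ // Either field alone is enough; some services use an empty username or password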
// Basic Auth
const auth = Buffer.from(`${basicAuthUsername}:${basicAuthPassword}`).toString('base64')
requestHeaders['Authorization'] = `Basic ${auth}`
@@ -266,10 +274,11 @@ class HTTP_Agentflow implements INode {
// Handle request body based on body type
if (method !== 'GET' && body) {
switch (bodyType) {
- case 'json':
- requestConfig.data = typeof body === 'string' ? JSON.parse(body) : body
+ case 'json': {
+ requestConfig.data = typeof body === 'string' ? parseJsonBody(body) : body
requestHeaders['Content-Type'] = 'application/json'
break
+ }
case 'raw':
requestConfig.data = body
break
@@ -284,14 +293,14 @@ class HTTP_Agentflow implements INode {
break
}
case 'xWwwFormUrlencoded':
- requestConfig.data = querystring.stringify(typeof body === 'string' ? JSON.parse(body) : body)
+ requestConfig.data = querystring.stringify(typeof body === 'string' ? parseJsonBody(body) : body)
requestHeaders['Content-Type'] = 'application/x-www-form-urlencoded'
break
}
}
- // Make the HTTP request
- const response = await axios(requestConfig)
+ // Make the secure HTTP request that validates all URLs in redirect chains
+ const response = await secureAxiosRequest(requestConfig)
// Process response based on response type
let responseData
@@ -330,6 +339,9 @@ class HTTP_Agentflow implements INode {
} catch (error) {
console.error('HTTP Request Error:', error)
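+ // Prefer the server-provided error message when available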
+ const errorMessage =
+ error.response?.data?.message || error.response?.data?.error || error.message || 'An error occurred during the HTTP request'
+
// Format error response
const errorResponse: any = {
id: nodeData.id,
@@ -347,7 +359,7 @@ class HTTP_Agentflow implements INode {
},
error: {
name: error.name || 'Error',
- message: error.message || 'An error occurred during the HTTP request'
+ message: errorMessage
},
state
}
@@ -360,7 +372,7 @@ class HTTP_Agentflow implements INode {
errorResponse.error.headers = error.response.headers
}
- throw new Error(error)
+ throw new Error(errorMessage)
}
}
}
diff --git a/packages/components/nodes/agentflow/HumanInput/HumanInput.ts b/packages/components/nodes/agentflow/HumanInput/HumanInput.ts
index 6fa388e26..b4811d1c8 100644
--- a/packages/components/nodes/agentflow/HumanInput/HumanInput.ts
+++ b/packages/components/nodes/agentflow/HumanInput/HumanInput.ts
@@ -208,7 +208,7 @@ class HumanInput_Agentflow implements INode {
humanInputDescription = (nodeData.inputs?.humanInputDescription as string) || 'Do you want to proceed?'
const messages = [...pastChatHistory, ...runtimeChatHistory]
// Find the last message in the messages array
- const lastMessage = (messages[messages.length - 1] as any).content || ''
+ const lastMessage = messages.length > 0 ? (messages[messages.length - 1] as any).content || '' : ''
humanInputDescription = `${lastMessage}\n\n${humanInputDescription}`
if (isStreamable) {
const sseStreamer: IServerSideEventStreamer = options.sseStreamer as IServerSideEventStreamer
@@ -241,8 +241,11 @@ class HumanInput_Agentflow implements INode {
if (isStreamable) {
const sseStreamer: IServerSideEventStreamer = options.sseStreamer as IServerSideEventStreamer
for await (const chunk of await llmNodeInstance.stream(messages)) {
- sseStreamer.streamTokenEvent(chatId, chunk.content.toString())
- response = response.concat(chunk)
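+ // Some models stream plain string chunks; normalize them before streaming and concatenating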
+ const content = typeof chunk === 'string' ? chunk : chunk.content.toString()
+ sseStreamer.streamTokenEvent(chatId, content)
+
+ const messageChunk = typeof chunk === 'string' ? new AIMessageChunk(chunk) : chunk
+ response = response.concat(messageChunk)
}
humanInputDescription = response.content as string
} else {
diff --git a/packages/components/nodes/agentflow/Iteration/Iteration.ts b/packages/components/nodes/agentflow/Iteration/Iteration.ts
index 048035fb2..145602b93 100644
--- a/packages/components/nodes/agentflow/Iteration/Iteration.ts
+++ b/packages/components/nodes/agentflow/Iteration/Iteration.ts
@@ -1,4 +1,5 @@
import { ICommonObject, INode, INodeData, INodeParams } from '../../../src/Interface'
+import { parseJsonBody } from '../../../src/utils'
class Iteration_Agentflow implements INode {
label: string
@@ -39,12 +40,17 @@ class Iteration_Agentflow implements INode {
const iterationInput = nodeData.inputs?.iterationInput
// Helper function to clean JSON strings with redundant backslashes
- const cleanJsonString = (str: string): string => {
- return str.replace(/\\(["'[\]{}])/g, '$1')
+ const safeParseJson = (str: string): any => {
+ try {
+ return parseJsonBody(str)
+ } catch {
+ // Try parsing after cleaning
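+ // e.g. '[\"a\",\"b\"]' with redundant backslashes becomes '["a","b"]'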
+ return parseJsonBody(str.replace(/\\(["'[\]{}])/g, '$1'))
+ }
}
const iterationInputArray =
- typeof iterationInput === 'string' && iterationInput !== '' ? JSON.parse(cleanJsonString(iterationInput)) : iterationInput
+ typeof iterationInput === 'string' && iterationInput !== '' ? safeParseJson(iterationInput) : iterationInput
if (!iterationInputArray || !Array.isArray(iterationInputArray)) {
throw new Error('Invalid input array')
diff --git a/packages/components/nodes/agentflow/LLM/LLM.ts b/packages/components/nodes/agentflow/LLM/LLM.ts
index 18f8d187d..a5bf4deb7 100644
--- a/packages/components/nodes/agentflow/LLM/LLM.ts
+++ b/packages/components/nodes/agentflow/LLM/LLM.ts
@@ -1,10 +1,9 @@
import { BaseChatModel } from '@langchain/core/language_models/chat_models'
-import { ICommonObject, INode, INodeData, INodeOptionsValue, INodeParams, IServerSideEventStreamer } from '../../../src/Interface'
+import { ICommonObject, IMessage, INode, INodeData, INodeOptionsValue, INodeParams, IServerSideEventStreamer } from '../../../src/Interface'
import { AIMessageChunk, BaseMessageLike, MessageContentText } from '@langchain/core/messages'
import { DEFAULT_SUMMARIZER_TEMPLATE } from '../prompt'
-import { z } from 'zod'
import { AnalyticHandler } from '../../../src/handler'
-import { ILLMMessage, IStructuredOutput } from '../Interface.Agentflow'
+import { ILLMMessage } from '../Interface.Agentflow'
import {
getPastChatHistoryImageMessages,
getUniqueImageMessages,
@@ -12,7 +11,8 @@ import {
replaceBase64ImagesWithFileReferences,
updateFlowState
} from '../utils'
-import { get } from 'lodash'
+import { processTemplateVariables, configureStructuredOutput } from '../../../src/utils'
+import { flatten } from 'lodash'
class LLM_Agentflow implements INode {
label: string
@@ -262,6 +262,7 @@ class LLM_Agentflow implements INode {
}`,
description: 'JSON schema for the structured output',
optional: true,
+ hideCodeExecute: true,
show: {
'llmStructuredOutput[$index].type': 'jsonArray'
}
@@ -358,6 +359,7 @@ class LLM_Agentflow implements INode {
const state = options.agentflowRuntime?.state as ICommonObject
const pastChatHistory = (options.pastChatHistory as BaseMessageLike[]) ?? []
const runtimeChatHistory = (options.agentflowRuntime?.chatHistory as BaseMessageLike[]) ?? []
+ const prependedChatHistory = (options.prependedChatHistory as IMessage[]) ?? []
const chatId = options.chatId as string
// Initialize the LLM model instance
@@ -381,11 +383,27 @@ class LLM_Agentflow implements INode {
// Use to keep track of past messages with image file references
let pastImageMessagesWithFileRef: BaseMessageLike[] = []
+ // Prepend history ONLY if it is the first node
+ if (prependedChatHistory.length > 0 && !runtimeChatHistory.length) {
+ for (const msg of prependedChatHistory) {
+ const role: string = msg.role === 'apiMessage' ? 'assistant' : 'user'
+ const content: string = msg.content ?? ''
+ messages.push({
+ role,
+ content
+ })
+ }
+ }
+
for (const msg of llmMessages) {
const role = msg.role
const content = msg.content
if (role && content) {
- messages.push({ role, content })
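+ // Keep system prompts at the front of the conversation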
+ if (role === 'system') {
+ messages.unshift({ role, content })
+ } else {
+ messages.push({ role, content })
+ }
}
}
@@ -410,7 +428,7 @@ class LLM_Agentflow implements INode {
/*
* If this is the first node:
* - Add images to messages if exist
- * - Add user message
+ * - Add user message if it does not exist in the llmMessages array
*/
if (options.uploads) {
const imageContents = await getUniqueImageMessages(options, messages, modelConfig)
@@ -421,7 +439,7 @@ class LLM_Agentflow implements INode {
}
}
- if (input && typeof input === 'string') {
+ if (input && typeof input === 'string' && !llmMessages.some((msg) => msg.role === 'user')) {
messages.push({
role: 'user',
content: input
@@ -433,7 +451,7 @@ class LLM_Agentflow implements INode {
// Configure structured output if specified
const isStructuredOutput = _llmStructuredOutput && Array.isArray(_llmStructuredOutput) && _llmStructuredOutput.length > 0
if (isStructuredOutput) {
- llmNodeInstance = this.configureStructuredOutput(llmNodeInstance, _llmStructuredOutput)
+ llmNodeInstance = configureStructuredOutput(llmNodeInstance, _llmStructuredOutput)
}
// Initialize response and determine if streaming is possible
@@ -460,11 +478,15 @@ class LLM_Agentflow implements INode {
// Stream whole response back to UI if this is the last node
if (isLastNode && options.sseStreamer) {
const sseStreamer: IServerSideEventStreamer = options.sseStreamer as IServerSideEventStreamer
- let responseContent = JSON.stringify(response, null, 2)
- if (typeof response.content === 'string') {
- responseContent = response.content
+ let finalResponse = ''
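+ // Content may arrive as an array of text parts; join them with newlines into a single string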
+ if (response.content && Array.isArray(response.content)) {
+ finalResponse = response.content.map((item: any) => item.text).join('\n')
+ } else if (response.content && typeof response.content === 'string') {
+ finalResponse = response.content
+ } else {
+ finalResponse = JSON.stringify(response, null, 2)
}
- sseStreamer.streamTokenEvent(chatId, responseContent)
+ sseStreamer.streamTokenEvent(chatId, finalResponse)
}
}
@@ -486,8 +508,15 @@ class LLM_Agentflow implements INode {
}
// Prepare final response and output object
- const finalResponse = (response.content as string) ?? JSON.stringify(response, null, 2)
- const output = this.prepareOutputObject(response, finalResponse, startTime, endTime, timeDelta)
+ let finalResponse = ''
+ if (response.content && Array.isArray(response.content)) {
+ finalResponse = response.content.map((item: any) => item.text).join('\n')
+ } else if (response.content && typeof response.content === 'string') {
+ finalResponse = response.content
+ } else {
+ finalResponse = JSON.stringify(response, null, 2)
+ }
+ const output = this.prepareOutputObject(response, finalResponse, startTime, endTime, timeDelta, isStructuredOutput)
// End analytics tracking
if (analyticHandlers && llmIds) {
@@ -500,36 +529,7 @@ class LLM_Agentflow implements INode {
}
// Process template variables in state
- if (newState && Object.keys(newState).length > 0) {
- for (const key in newState) {
- const stateValue = newState[key].toString()
- if (stateValue.includes('{{ output')) {
- // Handle simple output replacement
- if (stateValue === '{{ output }}') {
- newState[key] = finalResponse
- continue
- }
-
- // Handle JSON path expressions like {{ output.item1 }}
- // eslint-disable-next-line
- const match = stateValue.match(/{{[\s]*output\.([\w\.]+)[\s]*}}/)
- if (match) {
- try {
- // Parse the response if it's JSON
- const jsonResponse = typeof finalResponse === 'string' ? JSON.parse(finalResponse) : finalResponse
- // Get the value using lodash get
- const path = match[1]
- const value = get(jsonResponse, path)
- newState[key] = value ?? stateValue // Fall back to original if path not found
- } catch (e) {
- // If JSON parsing fails, keep original template
- console.warn(`Failed to parse JSON or find path in output: ${e}`)
- newState[key] = stateValue
- }
- }
- }
- }
- }
+ newState = processTemplateVariables(newState, finalResponse)
// Replace the actual messages array with one that includes the file references for images instead of base64 data
const messagesWithFileReferences = replaceBase64ImagesWithFileReferences(
@@ -545,7 +545,19 @@ class LLM_Agentflow implements INode {
inputMessages.push(...runtimeImageMessagesWithFileRef)
}
if (input && typeof input === 'string') {
- inputMessages.push({ role: 'user', content: input })
+ if (!enableMemory) {
+ if (!llmMessages.some((msg) => msg.role === 'user')) {
+ inputMessages.push({ role: 'user', content: input })
+ } else {
+ llmMessages.forEach((msg) => {
+ if (msg.role === 'user') {
+ inputMessages.push({ role: 'user', content: msg.content })
+ }
+ })
+ }
+ } else {
+ inputMessages.push({ role: 'user', content: input })
+ }
}
}
@@ -742,59 +754,6 @@ class LLM_Agentflow implements INode {
}
}
- /**
- * Configures structured output for the LLM
- */
- private configureStructuredOutput(llmNodeInstance: BaseChatModel, llmStructuredOutput: IStructuredOutput[]): BaseChatModel {
- try {
- const zodObj: ICommonObject = {}
- for (const sch of llmStructuredOutput) {
- if (sch.type === 'string') {
- zodObj[sch.key] = z.string().describe(sch.description || '')
- } else if (sch.type === 'stringArray') {
- zodObj[sch.key] = z.array(z.string()).describe(sch.description || '')
- } else if (sch.type === 'number') {
- zodObj[sch.key] = z.number().describe(sch.description || '')
- } else if (sch.type === 'boolean') {
- zodObj[sch.key] = z.boolean().describe(sch.description || '')
- } else if (sch.type === 'enum') {
- const enumValues = sch.enumValues?.split(',').map((item: string) => item.trim()) || []
- zodObj[sch.key] = z
- .enum(enumValues.length ? (enumValues as [string, ...string[]]) : ['default'])
- .describe(sch.description || '')
- } else if (sch.type === 'jsonArray') {
- const jsonSchema = sch.jsonSchema
- if (jsonSchema) {
- try {
- // Parse the JSON schema
- const schemaObj = JSON.parse(jsonSchema)
-
- // Create a Zod schema from the JSON schema
- const itemSchema = this.createZodSchemaFromJSON(schemaObj)
-
- // Create an array schema of the item schema
- zodObj[sch.key] = z.array(itemSchema).describe(sch.description || '')
- } catch (err) {
- console.error(`Error parsing JSON schema for ${sch.key}:`, err)
- // Fallback to generic array of records
- zodObj[sch.key] = z.array(z.record(z.any())).describe(sch.description || '')
- }
- } else {
- // If no schema provided, use generic array of records
- zodObj[sch.key] = z.array(z.record(z.any())).describe(sch.description || '')
- }
- }
- }
- const structuredOutput = z.object(zodObj)
-
- // @ts-ignore
- return llmNodeInstance.withStructuredOutput(structuredOutput)
- } catch (exception) {
- console.error(exception)
- return llmNodeInstance
- }
- }
-
/**
* Handles streaming response from the LLM
*/
@@ -811,16 +770,20 @@ class LLM_Agentflow implements INode {
for await (const chunk of await llmNodeInstance.stream(messages, { signal: abortController?.signal })) {
if (sseStreamer) {
let content = ''
- if (Array.isArray(chunk.content) && chunk.content.length > 0) {
+
+ if (typeof chunk === 'string') {
+ content = chunk
+ } else if (Array.isArray(chunk.content) && chunk.content.length > 0) {
const contents = chunk.content as MessageContentText[]
content = contents.map((item) => item.text).join('')
- } else {
+ } else if (chunk.content) {
content = chunk.content.toString()
}
sseStreamer.streamTokenEvent(chatId, content)
}
- response = response.concat(chunk)
+ const messageChunk = typeof chunk === 'string' ? new AIMessageChunk(chunk) : chunk
+ response = response.concat(messageChunk)
}
} catch (error) {
console.error('Error during streaming:', error)
@@ -841,7 +804,8 @@ class LLM_Agentflow implements INode {
finalResponse: string,
startTime: number,
endTime: number,
- timeDelta: number
+ timeDelta: number,
+ isStructuredOutput: boolean
): any {
const output: any = {
content: finalResponse,
@@ -860,6 +824,15 @@ class LLM_Agentflow implements INode {
output.usageMetadata = response.usage_metadata
}
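+ // Copy non-null structured output fields onto the output object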
+ if (isStructuredOutput && typeof response === 'object') {
+ const structuredOutput = response as Record<string, any>
+ for (const key in structuredOutput) {
+ if (structuredOutput[key] !== undefined && structuredOutput[key] !== null) {
+ output[key] = structuredOutput[key]
+ }
+ }
+ }
+
return output
}
@@ -870,7 +843,12 @@ class LLM_Agentflow implements INode {
const sseStreamer: IServerSideEventStreamer = options.sseStreamer as IServerSideEventStreamer
if (response.tool_calls) {
- sseStreamer.streamCalledToolsEvent(chatId, response.tool_calls)
+ const formattedToolCalls = response.tool_calls.map((toolCall: any) => ({
+ tool: toolCall.name || 'tool',
+ toolInput: toolCall.args,
+ toolOutput: ''
+ }))
+ sseStreamer.streamCalledToolsEvent(chatId, flatten(formattedToolCalls))
}
if (response.usage_metadata) {
@@ -879,107 +857,6 @@ class LLM_Agentflow implements INode {
sseStreamer.streamEndEvent(chatId)
}
-
- /**
- * Creates a Zod schema from a JSON schema object
- * @param jsonSchema The JSON schema object
- * @returns A Zod schema
- */
- private createZodSchemaFromJSON(jsonSchema: any): z.ZodTypeAny {
- // If the schema is an object with properties, create an object schema
- if (typeof jsonSchema === 'object' && jsonSchema !== null) {
- const schemaObj: Record<string, z.ZodTypeAny> = {}
-
- // Process each property in the schema
- for (const [key, value] of Object.entries(jsonSchema)) {
- if (value === null) {
- // Handle null values
- schemaObj[key] = z.null()
- } else if (typeof value === 'object' && !Array.isArray(value)) {
- // Check if the property has a type definition
- if ('type' in value) {
- const type = value.type as string
- const description = ('description' in value ? (value.description as string) : '') || ''
-
- // Create the appropriate Zod type based on the type property
- if (type === 'string') {
- schemaObj[key] = z.string().describe(description)
- } else if (type === 'number') {
- schemaObj[key] = z.number().describe(description)
- } else if (type === 'boolean') {
- schemaObj[key] = z.boolean().describe(description)
- } else if (type === 'array') {
- // If it's an array type, check if items is defined
- if ('items' in value && value.items) {
- const itemSchema = this.createZodSchemaFromJSON(value.items)
- schemaObj[key] = z.array(itemSchema).describe(description)
- } else {
- // Default to array of any if items not specified
- schemaObj[key] = z.array(z.any()).describe(description)
- }
- } else if (type === 'object') {
- // If it's an object type, check if properties is defined
- if ('properties' in value && value.properties) {
- const nestedSchema = this.createZodSchemaFromJSON(value.properties)
- schemaObj[key] = nestedSchema.describe(description)
- } else {
- // Default to record of any if properties not specified
- schemaObj[key] = z.record(z.any()).describe(description)
- }
- } else {
- // Default to any for unknown types
- schemaObj[key] = z.any().describe(description)
- }
-
- // Check if the property is optional
- if ('optional' in value && value.optional === true) {
- schemaObj[key] = schemaObj[key].optional()
- }
- } else if (Array.isArray(value)) {
- // Array values without a type property
- if (value.length > 0) {
- // If the array has items, recursively create a schema for the first item
- const itemSchema = this.createZodSchemaFromJSON(value[0])
- schemaObj[key] = z.array(itemSchema)
- } else {
- // Empty array, allow any array
- schemaObj[key] = z.array(z.any())
- }
- } else {
- // It's a nested object without a type property, recursively create schema
- schemaObj[key] = this.createZodSchemaFromJSON(value)
- }
- } else if (Array.isArray(value)) {
- // Array values
- if (value.length > 0) {
- // If the array has items, recursively create a schema for the first item
- const itemSchema = this.createZodSchemaFromJSON(value[0])
- schemaObj[key] = z.array(itemSchema)
- } else {
- // Empty array, allow any array
- schemaObj[key] = z.array(z.any())
- }
- } else {
- // For primitive values (which shouldn't be in the schema directly)
- // Use the corresponding Zod type
- if (typeof value === 'string') {
- schemaObj[key] = z.string()
- } else if (typeof value === 'number') {
- schemaObj[key] = z.number()
- } else if (typeof value === 'boolean') {
- schemaObj[key] = z.boolean()
- } else {
- schemaObj[key] = z.any()
- }
- }
- }
-
- return z.object(schemaObj)
- }
-
- // Fallback to any for unknown types
- return z.any()
- }
}
module.exports = { nodeClass: LLM_Agentflow }
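Two behaviours added in this LLM node diff are worth calling out: structured-output keys are spread onto the node output, and LangChain tool calls are reshaped before streaming. A minimal sketch of both, assuming `response` follows LangChain's `AIMessage` shape (`content`, `tool_calls`, `usage_metadata`); `mergeStructuredOutput` and `formatToolCalls` are illustrative names, not Flowise APIs:

```typescript
type ToolCall = { name?: string; args: Record<string, any> }

function mergeStructuredOutput(output: Record<string, any>, response: Record<string, any>): Record<string, any> {
    // Copy every defined key of the structured response onto the node output,
    // so downstream nodes can reference the keys directly.
    for (const key in response) {
        if (response[key] !== undefined && response[key] !== null) {
            output[key] = response[key]
        }
    }
    return output
}

function formatToolCalls(toolCalls: ToolCall[]) {
    // Reshape LangChain tool calls into the { tool, toolInput, toolOutput } shape
    // the SSE streamer expects; toolOutput is empty because the tool has not run yet.
    return toolCalls.map((tc) => ({ tool: tc.name || 'tool', toolInput: tc.args, toolOutput: '' }))
}
```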
diff --git a/packages/components/nodes/agentflow/Loop/Loop.ts b/packages/components/nodes/agentflow/Loop/Loop.ts
index bc9d7b08d..edf7f5e1d 100644
--- a/packages/components/nodes/agentflow/Loop/Loop.ts
+++ b/packages/components/nodes/agentflow/Loop/Loop.ts
@@ -1,4 +1,5 @@
import { ICommonObject, INode, INodeData, INodeOptionsValue, INodeParams } from '../../../src/Interface'
+import { updateFlowState } from '../utils'
class Loop_Agentflow implements INode {
label: string
@@ -19,7 +20,7 @@ class Loop_Agentflow implements INode {
constructor() {
this.label = 'Loop'
this.name = 'loopAgentflow'
- this.version = 1.0
+ this.version = 1.1
this.type = 'Loop'
this.category = 'Agent Flows'
this.description = 'Loop back to a previous node'
@@ -40,6 +41,40 @@ class Loop_Agentflow implements INode {
name: 'maxLoopCount',
type: 'number',
default: 5
+ },
+ {
+ label: 'Fallback Message',
+ name: 'fallbackMessage',
+ type: 'string',
+ description: 'Message to display if the loop count is exceeded',
+ placeholder: 'Enter your fallback message here',
+ rows: 4,
+ acceptVariable: true,
+ optional: true
+ },
+ {
+ label: 'Update Flow State',
+ name: 'loopUpdateState',
+ description: 'Update runtime state during the execution of the workflow',
+ type: 'array',
+ optional: true,
+ acceptVariable: true,
+ array: [
+ {
+ label: 'Key',
+ name: 'key',
+ type: 'asyncOptions',
+ loadMethod: 'listRuntimeStateKeys',
+ freeSolo: true
+ },
+ {
+ label: 'Value',
+ name: 'value',
+ type: 'string',
+ acceptVariable: true,
+ acceptNodeOutputAsVariable: true
+ }
+ ]
}
]
}
@@ -58,12 +93,20 @@ class Loop_Agentflow implements INode {
})
}
return returnOptions
+ },
+ async listRuntimeStateKeys(_: INodeData, options: ICommonObject): Promise<INodeOptionsValue[]> {
+ const previousNodes = options.previousNodes as ICommonObject[]
+ const startAgentflowNode = previousNodes.find((node) => node.name === 'startAgentflow')
+ const state = startAgentflowNode?.inputs?.startState as ICommonObject[]
+ return state.map((item) => ({ label: item.key, name: item.key }))
}
}
async run(nodeData: INodeData, _: string, options: ICommonObject): Promise<any> {
const loopBackToNode = nodeData.inputs?.loopBackToNode as string
const _maxLoopCount = nodeData.inputs?.maxLoopCount as string
+ const fallbackMessage = nodeData.inputs?.fallbackMessage as string
+ const _loopUpdateState = nodeData.inputs?.loopUpdateState
const state = options.agentflowRuntime?.state as ICommonObject
@@ -75,16 +118,34 @@ class Loop_Agentflow implements INode {
maxLoopCount: _maxLoopCount ? parseInt(_maxLoopCount) : 5
}
+ const finalOutput = 'Loop back to ' + `${loopBackToNodeLabel} (${loopBackToNodeId})`
+
+ // Update flow state if needed
+ let newState = { ...state }
+ if (_loopUpdateState && Array.isArray(_loopUpdateState) && _loopUpdateState.length > 0) {
+ newState = updateFlowState(state, _loopUpdateState)
+ }
+
+ // Process template variables in state
+ if (newState && Object.keys(newState).length > 0) {
+ for (const key in newState) {
+ if (newState[key].toString().includes('{{ output }}')) {
+ newState[key] = finalOutput
+ }
+ }
+ }
+
const returnOutput = {
id: nodeData.id,
name: this.name,
input: data,
output: {
- content: 'Loop back to ' + `${loopBackToNodeLabel} (${loopBackToNodeId})`,
+ content: finalOutput,
nodeID: loopBackToNodeId,
- maxLoopCount: _maxLoopCount ? parseInt(_maxLoopCount) : 5
+ maxLoopCount: _maxLoopCount ? parseInt(_maxLoopCount) : 5,
+ fallbackMessage
},
- state
+ state: newState
}
return returnOutput
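The `{{ output }}` substitution above is easiest to see with concrete values; a small worked example:

```typescript
// Worked example of the `{{ output }}` substitution performed by the Loop node above.
const finalOutput = 'Loop back to Agent 1 (agentAgentflow_0)'
const newState: Record<string, any> = { lastAction: '{{ output }}', retries: '2' }

for (const key in newState) {
    // Any value containing the template is replaced with the node's final output
    if (newState[key].toString().includes('{{ output }}')) {
        newState[key] = finalOutput
    }
}
// newState = { lastAction: 'Loop back to Agent 1 (agentAgentflow_0)', retries: '2' }
```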
diff --git a/packages/components/nodes/agentflow/Retriever/Retriever.ts b/packages/components/nodes/agentflow/Retriever/Retriever.ts
index 68420484e..e7ce426c2 100644
--- a/packages/components/nodes/agentflow/Retriever/Retriever.ts
+++ b/packages/components/nodes/agentflow/Retriever/Retriever.ts
@@ -8,6 +8,7 @@ import {
IServerSideEventStreamer
} from '../../../src/Interface'
import { updateFlowState } from '../utils'
+import { processTemplateVariables } from '../../../src/utils'
import { DataSource } from 'typeorm'
import { BaseRetriever } from '@langchain/core/retrievers'
import { Document } from '@langchain/core/documents'
@@ -119,7 +120,8 @@ class Retriever_Agentflow implements INode {
return returnData
}
- const stores = await appDataSource.getRepository(databaseEntities['DocumentStore']).find()
+ const searchOptions = options.searchOptions || {}
+ const stores = await appDataSource.getRepository(databaseEntities['DocumentStore']).findBy(searchOptions)
for (const store of stores) {
if (store.status === 'UPSERTED') {
const obj = {
@@ -196,14 +198,7 @@ class Retriever_Agentflow implements INode {
sseStreamer.streamTokenEvent(chatId, finalOutput)
}
- // Process template variables in state
- if (newState && Object.keys(newState).length > 0) {
- for (const key in newState) {
- if (newState[key].toString().includes('{{ output }}')) {
- newState[key] = finalOutput
- }
- }
- }
+ newState = processTemplateVariables(newState, finalOutput)
const returnOutput = {
id: nodeData.id,
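The Retriever node (and the Tool node below) now delegates that substitution to a shared `processTemplateVariables` helper. Its body is not shown in this diff; a sketch consistent with the inline loop it replaces might look like:

```typescript
// Sketch only: a shape for the shared helper, reconstructed from the inline
// loops it replaces in this diff. The real implementation in src/utils may differ.
type ICommonObject = Record<string, any>

export const processTemplateVariables = (state: ICommonObject, finalOutput: string): ICommonObject => {
    if (!state || Object.keys(state).length === 0) return state
    const newState = { ...state }
    for (const key in newState) {
        // Replace any value holding the `{{ output }}` template with the node's final output
        if (newState[key].toString().includes('{{ output }}')) {
            newState[key] = finalOutput
        }
    }
    return newState
}
```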
diff --git a/packages/components/nodes/agentflow/Start/Start.ts b/packages/components/nodes/agentflow/Start/Start.ts
index 5f6bf8449..833e3b7c2 100644
--- a/packages/components/nodes/agentflow/Start/Start.ts
+++ b/packages/components/nodes/agentflow/Start/Start.ts
@@ -18,7 +18,7 @@ class Start_Agentflow implements INode {
constructor() {
this.label = 'Start'
this.name = 'startAgentflow'
- this.version = 1.0
+ this.version = 1.1
this.type = 'Start'
this.category = 'Agent Flows'
this.description = 'Starting point of the agentflow'
@@ -153,6 +153,13 @@ class Start_Agentflow implements INode {
optional: true
}
]
+ },
+ {
+ label: 'Persist State',
+ name: 'startPersistState',
+ type: 'boolean',
+ description: 'Persist the state in the same session',
+ optional: true
}
]
}
@@ -161,6 +168,7 @@ class Start_Agentflow implements INode {
const _flowState = nodeData.inputs?.startState as string
const startInputType = nodeData.inputs?.startInputType as string
const startEphemeralMemory = nodeData.inputs?.startEphemeralMemory as boolean
+ const startPersistState = nodeData.inputs?.startPersistState as boolean
let flowStateArray = []
if (_flowState) {
@@ -176,6 +184,13 @@ class Start_Agentflow implements INode {
flowState[state.key] = state.value
}
+ const runtimeState = options.agentflowRuntime?.state as ICommonObject
+ if (startPersistState === true && runtimeState && Object.keys(runtimeState).length) {
+ for (const state in runtimeState) {
+ flowState[state] = runtimeState[state]
+ }
+ }
+
const inputData: ICommonObject = {}
const outputData: ICommonObject = {}
@@ -202,6 +217,10 @@ class Start_Agentflow implements INode {
outputData.ephemeralMemory = true
}
+ if (startPersistState) {
+ outputData.persistState = true
+ }
+
const returnOutput = {
id: nodeData.id,
name: this.name,
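A minimal sketch of the persist-state merge above: when `Persist State` is enabled, values carried over from the session's runtime state override the node's configured start state.

```typescript
// Minimal sketch of the merge performed in the Start node above.
const flowState: Record<string, any> = { topic: '' } // from the Flow State rows
const runtimeState: Record<string, any> = { topic: 'AI agents' } // persisted in the session
const startPersistState = true

const merged =
    startPersistState && Object.keys(runtimeState).length ? { ...flowState, ...runtimeState } : flowState
// merged.topic === 'AI agents'
```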
diff --git a/packages/components/nodes/agentflow/Tool/Tool.ts b/packages/components/nodes/agentflow/Tool/Tool.ts
index c3945ff3e..300aaafa1 100644
--- a/packages/components/nodes/agentflow/Tool/Tool.ts
+++ b/packages/components/nodes/agentflow/Tool/Tool.ts
@@ -1,7 +1,8 @@
import { ICommonObject, INode, INodeData, INodeOptionsValue, INodeParams, IServerSideEventStreamer } from '../../../src/Interface'
import { updateFlowState } from '../utils'
+import { processTemplateVariables } from '../../../src/utils'
import { Tool } from '@langchain/core/tools'
-import { ARTIFACTS_PREFIX } from '../../../src/agents'
+import { ARTIFACTS_PREFIX, TOOL_ARGS_PREFIX } from '../../../src/agents'
import zodToJsonSchema from 'zod-to-json-schema'
interface IToolInputArgs {
@@ -28,7 +29,7 @@ class Tool_Agentflow implements INode {
constructor() {
this.label = 'Tool'
this.name = 'toolAgentflow'
- this.version = 1.0
+ this.version = 1.1
this.type = 'Tool'
this.category = 'Agent Flows'
this.description = 'Tools allow LLM to interact with external systems'
@@ -37,7 +38,7 @@ class Tool_Agentflow implements INode {
this.inputs = [
{
label: 'Tool',
- name: 'selectedTool',
+ name: 'toolAgentflowSelectedTool',
type: 'asyncOptions',
loadMethod: 'listTools',
loadConfig: true
@@ -64,7 +65,7 @@ class Tool_Agentflow implements INode {
}
],
show: {
- selectedTool: '.+'
+ toolAgentflowSelectedTool: '.+'
}
},
{
@@ -124,8 +125,11 @@ class Tool_Agentflow implements INode {
},
async listToolInputArgs(nodeData: INodeData, options: ICommonObject): Promise<INodeOptionsValue[]> {
const currentNode = options.currentNode as ICommonObject
- const selectedTool = currentNode?.inputs?.selectedTool as string
- const selectedToolConfig = currentNode?.inputs?.selectedToolConfig as ICommonObject
+ const selectedTool = (currentNode?.inputs?.selectedTool as string) || (currentNode?.inputs?.toolAgentflowSelectedTool as string)
+ const selectedToolConfig =
+ (currentNode?.inputs?.selectedToolConfig as ICommonObject) ||
+ (currentNode?.inputs?.toolAgentflowSelectedToolConfig as ICommonObject) ||
+ {}
const nodeInstanceFilePath = options.componentNodes[selectedTool].filePath as string
@@ -158,7 +162,7 @@ class Tool_Agentflow implements INode {
toolInputArgs = { properties: allProperties }
} else {
// Handle single tool instance
- toolInputArgs = toolInstance.schema ? zodToJsonSchema(toolInstance.schema) : {}
+ toolInputArgs = toolInstance.schema ? zodToJsonSchema(toolInstance.schema as any) : {}
}
if (toolInputArgs && Object.keys(toolInputArgs).length > 0) {
@@ -183,8 +187,11 @@ class Tool_Agentflow implements INode {
}
async run(nodeData: INodeData, input: string, options: ICommonObject): Promise<any> {
- const selectedTool = nodeData.inputs?.selectedTool as string
- const selectedToolConfig = nodeData.inputs?.selectedToolConfig as ICommonObject
+ const selectedTool = (nodeData.inputs?.selectedTool as string) || (nodeData.inputs?.toolAgentflowSelectedTool as string)
+ const selectedToolConfig =
+ (nodeData?.inputs?.selectedToolConfig as ICommonObject) ||
+ (nodeData?.inputs?.toolAgentflowSelectedToolConfig as ICommonObject) ||
+ {}
const toolInputArgs = nodeData.inputs?.toolInputArgs as IToolInputArgs[]
const _toolUpdateState = nodeData.inputs?.toolUpdateState
@@ -220,13 +227,55 @@ class Tool_Agentflow implements INode {
const toolInstance = (await newToolNodeInstance.init(newNodeData, '', options)) as Tool | Tool[]
let toolCallArgs: Record<string, any> = {}
+
+ const parseInputValue = (value: string): any => {
+ if (typeof value !== 'string') {
+ return value
+ }
+
+ // Remove escape characters (backslashes before special characters)
+ // ex: \["a", "b", "c", "d", "e"\]
+ let cleanedValue = value
+ .replace(/\\"/g, '"') // \" -> "
+ .replace(/\\\\/g, '\\') // \\ -> \
+ .replace(/\\\[/g, '[') // \[ -> [
+ .replace(/\\\]/g, ']') // \] -> ]
+ .replace(/\\\{/g, '{') // \{ -> {
+ .replace(/\\\}/g, '}') // \} -> }
+
+ // Try to parse as JSON if it looks like JSON/array
+ if (
+ (cleanedValue.startsWith('[') && cleanedValue.endsWith(']')) ||
+ (cleanedValue.startsWith('{') && cleanedValue.endsWith('}'))
+ ) {
+ try {
+ return JSON.parse(cleanedValue)
+ } catch (e) {
+ // If parsing fails, return the cleaned value
+ return cleanedValue
+ }
+ }
+
+ return cleanedValue
+ }
+
+ if (newToolNodeInstance.transformNodeInputsToToolArgs) {
+ const defaultParams = newToolNodeInstance.transformNodeInputsToToolArgs(newNodeData)
+
+ toolCallArgs = {
+ ...defaultParams,
+ ...toolCallArgs
+ }
+ }
+
for (const item of toolInputArgs) {
const variableName = item.inputArgName
const variableValue = item.inputArgValue
- toolCallArgs[variableName] = variableValue
+ toolCallArgs[variableName] = parseInputValue(variableValue)
}
const flowConfig = {
+ chatflowId: options.chatflowid,
sessionId: options.sessionId,
chatId: options.chatId,
input: input,
@@ -262,6 +311,17 @@ class Tool_Agentflow implements INode {
}
}
+ let toolInput
+ if (typeof toolOutput === 'string' && toolOutput.includes(TOOL_ARGS_PREFIX)) {
+ const [output, args] = toolOutput.split(TOOL_ARGS_PREFIX)
+ toolOutput = output
+ try {
+ toolInput = JSON.parse(args)
+ } catch (e) {
+ console.error('Error parsing tool input from tool:', e)
+ }
+ }
+
if (typeof toolOutput === 'object') {
toolOutput = JSON.stringify(toolOutput, null, 2)
}
@@ -271,20 +331,13 @@ class Tool_Agentflow implements INode {
sseStreamer.streamTokenEvent(chatId, toolOutput)
}
- // Process template variables in state
- if (newState && Object.keys(newState).length > 0) {
- for (const key in newState) {
- if (newState[key].toString().includes('{{ output }}')) {
- newState[key] = toolOutput
- }
- }
- }
+ newState = processTemplateVariables(newState, toolOutput)
const returnOutput = {
id: nodeData.id,
name: this.name,
input: {
- toolInputArgs: toolInputArgs,
+ toolInputArgs: toolInput ?? toolInputArgs,
selectedTool: selectedTool
},
output: {
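For reference, this is how `parseInputValue` (defined inside `run()` above) treats typical inputs; the escape characters are assumed to come from variable interpolation in the UI:

```typescript
// Expected behaviour of parseInputValue for common inputs (sketch; the function
// itself lives inside run() above and would need to be in scope).
parseInputValue('\\["a", "b"\\]') // -> ['a', 'b']   (escapes stripped, then JSON.parse)
parseInputValue('{"x": 1}')       // -> { x: 1 }     (parsed as JSON)
parseInputValue('plain text')     // -> 'plain text' (no JSON delimiters, returned as-is)
parseInputValue('[1, 2,]')        // -> '[1, 2,]'    (looks like JSON but fails to parse; cleaned string returned)
```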
diff --git a/packages/components/nodes/agentflow/prompt.ts b/packages/components/nodes/agentflow/prompt.ts
index a5d9cd893..ee941ae22 100644
--- a/packages/components/nodes/agentflow/prompt.ts
+++ b/packages/components/nodes/agentflow/prompt.ts
@@ -39,37 +39,38 @@ export const DEFAULT_HUMAN_INPUT_DESCRIPTION_HTML = `
Summarize the conversation
`
-export const CONDITION_AGENT_SYSTEM_PROMPT = `You are part of a multi-agent system designed to make agent coordination and execution easy. Your task is to analyze the given input and select one matching scenario from a provided set of scenarios. If none of the scenarios match the input, you should return "default."
-
-- **Input**: A string representing the user's query or message.
-- **Scenarios**: A list of predefined scenarios that relate to the input.
-- **Instruction**: Determine if the input fits any of the scenarios.
-
-## Steps
-
-1. **Read the input string** and the list of scenarios.
-2. **Analyze the content of the input** to identify its main topic or intention.
-3. **Compare the input with each scenario**:
- - If a scenario matches the main topic of the input, select that scenario.
- - If no scenarios match, prepare to output "\`\`\`json\n{"output": "default"}\`\`\`"
-4. **Output the result**: If a match is found, return the corresponding scenario in JSON; otherwise, return "\`\`\`json\n{"output": "default"}\`\`\`"
-
-## Output Format
-
-Output should be a JSON object that either names the matching scenario or returns "\`\`\`json\n{"output": "default"}\`\`\`" if no scenarios match. No explanation is needed.
-
-## Examples
-
-1. **Input**: {"input": "Hello", "scenarios": ["user is asking about AI", "default"], "instruction": "Your task is to check and see if user is asking topic about AI"}
- **Output**: "\`\`\`json\n{"output": "default"}\`\`\`"
-
-2. **Input**: {"input": "What is AIGC?", "scenarios": ["user is asking about AI", "default"], "instruction": "Your task is to check and see if user is asking topic about AI"}
- **Output**: "\`\`\`json\n{"output": "user is asking about AI"}\`\`\`"
-
-3. **Input**: {"input": "Can you explain deep learning?", "scenarios": ["user is interested in AI topics", "default"], "instruction": "Determine if the user is interested in learning about AI"}
- **Output**: "\`\`\`json\n{"output": "user is interested in AI topics"}\`\`\`"
-
-## Note
-- Ensure that the input scenarios align well with potential user queries for accurate matching
-- DO NOT include anything other than the JSON in your response.
-`
+export const CONDITION_AGENT_SYSTEM_PROMPT = `You are part of a multi-agent system designed to make agent coordination and execution easy. Your task is to analyze the given input and select one matching scenario from a provided set of scenarios.
+
+- **Input**: A string representing the user's query, message or data.
+- **Scenarios**: A list of predefined scenarios that relate to the input.
+- **Instruction**: Determine which of the provided scenarios is the best fit for the input.
+
+## Steps
+
+1. **Read the input string** and the list of scenarios.
+2. **Analyze the content of the input** to identify its main topic or intention.
+3. **Compare the input with each scenario**: Evaluate how well the input's topic or intention aligns with each of the provided scenarios and select the one that is the best fit.
+4. **Output the result**: Return the selected scenario in the specified JSON format.
+
+## Output Format
+
+Output should be a JSON object that names the selected scenario, like this: {"output": "<selected scenario>"}. No explanation is needed.
+
+## Examples
+
+1. **Input**: {"input": "Hello", "scenarios": ["user is asking about AI", "user is not asking about AI"], "instruction": "Your task is to check if the user is asking about AI."}
+   **Output**: {"output": "user is not asking about AI"}
+
+2. **Input**: {"input": "What is AIGC?", "scenarios": ["user is asking about AI", "user is asking about the weather"], "instruction": "Your task is to check and see if the user is asking a topic about AI."}
+   **Output**: {"output": "user is asking about AI"}
+
+3. **Input**: {"input": "Can you explain deep learning?", "scenarios": ["user is interested in AI topics", "user wants to order food"], "instruction": "Determine if the user is interested in learning about AI."}
+   **Output**: {"output": "user is interested in AI topics"}
+
+## Note
+
+- Ensure that the input scenarios align well with potential user queries for accurate matching.
+- DO NOT include anything other than the JSON in your response.
+`
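A hypothetical consumer of the condition agent's reply, assuming the model honours the JSON-only instruction:

```typescript
// Hypothetical consumer of the condition agent's reply; the prompt above asks
// for a bare JSON object with a single `output` key.
const reply = '{"output": "user is asking about AI"}'
const { output } = JSON.parse(reply) as { output: string }
// `output` is then matched against the node's configured scenarios to pick a branch.
```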
diff --git a/packages/components/nodes/agentflow/utils.ts b/packages/components/nodes/agentflow/utils.ts
index 8891e74eb..14d832c8a 100644
--- a/packages/components/nodes/agentflow/utils.ts
+++ b/packages/components/nodes/agentflow/utils.ts
@@ -4,7 +4,7 @@ import { getFileFromStorage } from '../../src/storageUtils'
import { ICommonObject, IFileUpload } from '../../src/Interface'
import { BaseMessageLike } from '@langchain/core/messages'
import { IFlowState } from './Interface.Agentflow'
-import { mapMimeTypeToInputField } from '../../src/utils'
+import { handleEscapeCharacters, mapMimeTypeToInputField } from '../../src/utils'
export const addImagesToMessages = async (
options: ICommonObject,
@@ -18,7 +18,7 @@ export const addImagesToMessages = async (
for (const upload of imageUploads) {
let bf = upload.data
if (upload.type == 'stored-file') {
- const contents = await getFileFromStorage(upload.name, options.chatflowid, options.chatId)
+ const contents = await getFileFromStorage(upload.name, options.orgId, options.chatflowid, options.chatId)
// as the image is stored in the server, read the file and convert it to base64
bf = 'data:' + upload.mime + ';base64,' + contents.toString('base64')
@@ -90,7 +90,7 @@ export const processMessagesWithImages = async (
hasImageReferences = true
try {
// Get file contents from storage
- const contents = await getFileFromStorage(item.name, options.chatflowid, options.chatId)
+ const contents = await getFileFromStorage(item.name, options.orgId, options.chatflowid, options.chatId)
// Create base64 data URL
const base64Data = 'data:' + item.mime + ';base64,' + contents.toString('base64')
@@ -313,13 +313,16 @@ export const getPastChatHistoryImageMessages = async (
if (message.additional_kwargs && message.additional_kwargs.fileUploads) {
// example: [{"type":"stored-file","name":"0_DiXc4ZklSTo3M8J4.jpg","mime":"image/jpeg"}]
const fileUploads = message.additional_kwargs.fileUploads
+ const artifacts = message.additional_kwargs.artifacts
+ const fileAnnotations = message.additional_kwargs.fileAnnotations
+ const usedTools = message.additional_kwargs.usedTools
try {
let messageWithFileUploads = ''
const uploads: IFileUpload[] = typeof fileUploads === 'string' ? JSON.parse(fileUploads) : fileUploads
const imageContents: MessageContentImageUrl[] = []
for (const upload of uploads) {
if (upload.type === 'stored-file' && upload.mime.startsWith('image/')) {
- const fileData = await getFileFromStorage(upload.name, options.chatflowid, options.chatId)
+ const fileData = await getFileFromStorage(upload.name, options.orgId, options.chatflowid, options.chatId)
// as the image is stored in the server, read the file and convert it to base64
const bf = 'data:' + upload.mime + ';base64,' + fileData.toString('base64')
@@ -343,7 +346,8 @@ export const getPastChatHistoryImageMessages = async (
const nodeOptions = {
retrieveAttachmentChatId: true,
chatflowid: options.chatflowid,
- chatId: options.chatId
+ chatId: options.chatId,
+ orgId: options.orgId
}
let fileInputFieldFromMimeType = 'txtFile'
fileInputFieldFromMimeType = mapMimeTypeToInputField(upload.mime)
@@ -353,26 +357,87 @@ export const getPastChatHistoryImageMessages = async (
}
}
const documents: string = await fileLoaderNodeInstance.init(nodeData, '', nodeOptions)
- messageWithFileUploads += `${documents}\n\n`
+ messageWithFileUploads += `${handleEscapeCharacters(documents, true)}\n\n`
}
}
const messageContent = messageWithFileUploads ? `${messageWithFileUploads}\n\n${message.content}` : message.content
+ const hasArtifacts = artifacts && Array.isArray(artifacts) && artifacts.length > 0
+ const hasFileAnnotations = fileAnnotations && Array.isArray(fileAnnotations) && fileAnnotations.length > 0
+ const hasUsedTools = usedTools && Array.isArray(usedTools) && usedTools.length > 0
+
if (imageContents.length > 0) {
- chatHistory.push({
+ const imageMessage: any = {
role: messageRole,
content: imageContents
- })
+ }
+ if (hasArtifacts || hasFileAnnotations || hasUsedTools) {
+ imageMessage.additional_kwargs = {}
+ if (hasArtifacts) imageMessage.additional_kwargs.artifacts = artifacts
+ if (hasFileAnnotations) imageMessage.additional_kwargs.fileAnnotations = fileAnnotations
+ if (hasUsedTools) imageMessage.additional_kwargs.usedTools = usedTools
+ }
+ chatHistory.push(imageMessage)
transformedPastMessages.push({
role: messageRole,
content: [...JSON.parse((pastChatHistory[i] as any).additional_kwargs.fileUploads)]
})
}
- chatHistory.push({
+
+ const contentMessage: any = {
role: messageRole,
content: messageContent
- })
+ }
+ if (hasArtifacts || hasFileAnnotations || hasUsedTools) {
+ contentMessage.additional_kwargs = {}
+ if (hasArtifacts) contentMessage.additional_kwargs.artifacts = artifacts
+ if (hasFileAnnotations) contentMessage.additional_kwargs.fileAnnotations = fileAnnotations
+ if (hasUsedTools) contentMessage.additional_kwargs.usedTools = usedTools
+ }
+ chatHistory.push(contentMessage)
} catch (e) {
// failed to parse fileUploads, continue with text only
+ const hasArtifacts = artifacts && Array.isArray(artifacts) && artifacts.length > 0
+ const hasFileAnnotations = fileAnnotations && Array.isArray(fileAnnotations) && fileAnnotations.length > 0
+ const hasUsedTools = usedTools && Array.isArray(usedTools) && usedTools.length > 0
+
+ const errorMessage: any = {
+ role: messageRole,
+ content: message.content
+ }
+ if (hasArtifacts || hasFileAnnotations || hasUsedTools) {
+ errorMessage.additional_kwargs = {}
+ if (hasArtifacts) errorMessage.additional_kwargs.artifacts = artifacts
+ if (hasFileAnnotations) errorMessage.additional_kwargs.fileAnnotations = fileAnnotations
+ if (hasUsedTools) errorMessage.additional_kwargs.usedTools = usedTools
+ }
+ chatHistory.push(errorMessage)
+ }
+ } else if (message.additional_kwargs) {
+ const hasArtifacts =
+ message.additional_kwargs.artifacts &&
+ Array.isArray(message.additional_kwargs.artifacts) &&
+ message.additional_kwargs.artifacts.length > 0
+ const hasFileAnnotations =
+ message.additional_kwargs.fileAnnotations &&
+ Array.isArray(message.additional_kwargs.fileAnnotations) &&
+ message.additional_kwargs.fileAnnotations.length > 0
+ const hasUsedTools =
+ message.additional_kwargs.usedTools &&
+ Array.isArray(message.additional_kwargs.usedTools) &&
+ message.additional_kwargs.usedTools.length > 0
+
+ if (hasArtifacts || hasFileAnnotations || hasUsedTools) {
+ const messageAdditionalKwargs: any = {}
+ if (hasArtifacts) messageAdditionalKwargs.artifacts = message.additional_kwargs.artifacts
+ if (hasFileAnnotations) messageAdditionalKwargs.fileAnnotations = message.additional_kwargs.fileAnnotations
+ if (hasUsedTools) messageAdditionalKwargs.usedTools = message.additional_kwargs.usedTools
+
+ chatHistory.push({
+ role: messageRole,
+ content: message.content,
+ additional_kwargs: messageAdditionalKwargs
+ })
+ } else {
chatHistory.push({
role: messageRole,
content: message.content
@@ -394,9 +459,9 @@ export const getPastChatHistoryImageMessages = async (
/**
* Updates the flow state with new values
*/
-export const updateFlowState = (state: ICommonObject, llmUpdateState: IFlowState[]): ICommonObject => {
+export const updateFlowState = (state: ICommonObject, updateState: IFlowState[]): ICommonObject => {
let newFlowState: Record<string, any> = {}
- for (const state of llmUpdateState) {
+ for (const state of updateState) {
newFlowState[state.key] = state.value
}
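Illustrative usage of the renamed helper, assuming it merges the updates over the existing state as its call sites above suggest (`IFlowState` is `{ key: string; value: string }`):

```typescript
// Illustrative usage only; assumes updateFlowState merges updates over the prior state.
const state = { topic: 'none', count: '1' }
const next = updateFlowState(state, [{ key: 'topic', value: '{{ output }}' }])
// next = { topic: '{{ output }}', count: '1' }; the template is later resolved
// by the nodes' substitution step (see processTemplateVariables earlier).
```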
diff --git a/packages/components/nodes/agents/AirtableAgent/AirtableAgent.ts b/packages/components/nodes/agents/AirtableAgent/AirtableAgent.ts
index d61ffd4be..88c1c5bb8 100644
--- a/packages/components/nodes/agents/AirtableAgent/AirtableAgent.ts
+++ b/packages/components/nodes/agents/AirtableAgent/AirtableAgent.ts
@@ -128,7 +128,7 @@ class Airtable_Agents implements INode {
let base64String = Buffer.from(JSON.stringify(airtableData)).toString('base64')
- const loggerHandler = new ConsoleCallbackHandler(options.logger)
+ const loggerHandler = new ConsoleCallbackHandler(options.logger, options?.orgId)
const callbacks = await additionalCallbacks(nodeData, options)
const pyodide = await LoadPyodide()
@@ -163,7 +163,7 @@ json.dumps(my_dict)`
const chain = new LLMChain({
llm: model,
prompt: PromptTemplate.fromTemplate(systemPrompt),
- verbose: process.env.DEBUG === 'true'
+ verbose: process.env.DEBUG === 'true' ? true : false
})
const inputs = {
dict: dataframeColDict,
@@ -183,7 +183,7 @@ json.dumps(my_dict)`
// TODO: get print console output
finalResult = await pyodide.runPythonAsync(code)
} catch (error) {
- throw new Error(`Sorry, I'm unable to find answer for question: "${input}" using follwoing code: "${pythonCode}"`)
+ throw new Error(`Sorry, I'm unable to find answer for question: "${input}" using following code: "${pythonCode}"`)
}
}
@@ -192,7 +192,7 @@ json.dumps(my_dict)`
const chain = new LLMChain({
llm: model,
prompt: PromptTemplate.fromTemplate(finalSystemPrompt),
- verbose: process.env.DEBUG === 'true'
+ verbose: process.env.DEBUG === 'true' ? true : false
})
const inputs = {
question: input,
diff --git a/packages/components/nodes/agents/AutoGPT/AutoGPT.ts b/packages/components/nodes/agents/AutoGPT/AutoGPT.ts
index c41a52965..04fd8e926 100644
--- a/packages/components/nodes/agents/AutoGPT/AutoGPT.ts
+++ b/packages/components/nodes/agents/AutoGPT/AutoGPT.ts
@@ -23,6 +23,7 @@ class AutoGPT_Agents implements INode {
category: string
baseClasses: string[]
inputs: INodeParams[]
+ badge: string
constructor() {
this.label = 'AutoGPT'
@@ -30,6 +31,7 @@ class AutoGPT_Agents implements INode {
this.version = 2.0
this.type = 'AutoGPT'
this.category = 'Agents'
+ this.badge = 'DEPRECATING'
this.icon = 'autogpt.svg'
this.description = 'Autonomous agent with chain of thoughts for self-guided task completion'
this.baseClasses = ['AutoGPT']
diff --git a/packages/components/nodes/agents/BabyAGI/BabyAGI.ts b/packages/components/nodes/agents/BabyAGI/BabyAGI.ts
index 87d5cd289..d3bad9039 100644
--- a/packages/components/nodes/agents/BabyAGI/BabyAGI.ts
+++ b/packages/components/nodes/agents/BabyAGI/BabyAGI.ts
@@ -15,6 +15,7 @@ class BabyAGI_Agents implements INode {
category: string
baseClasses: string[]
inputs: INodeParams[]
+ badge: string
constructor() {
this.label = 'BabyAGI'
@@ -23,6 +24,7 @@ class BabyAGI_Agents implements INode {
this.type = 'BabyAGI'
this.category = 'Agents'
this.icon = 'babyagi.svg'
+ this.badge = 'DEPRECATING'
this.description = 'Task Driven Autonomous Agent which creates new task and reprioritizes task list based on objective'
this.baseClasses = ['BabyAGI']
this.inputs = [
diff --git a/packages/components/nodes/agents/CSVAgent/CSVAgent.ts b/packages/components/nodes/agents/CSVAgent/CSVAgent.ts
index fbe85afc7..b94d91ad1 100644
--- a/packages/components/nodes/agents/CSVAgent/CSVAgent.ts
+++ b/packages/components/nodes/agents/CSVAgent/CSVAgent.ts
@@ -97,7 +97,7 @@ class CSV_Agents implements INode {
}
}
- const loggerHandler = new ConsoleCallbackHandler(options.logger)
+ const loggerHandler = new ConsoleCallbackHandler(options.logger, options?.orgId)
const shouldStreamResponse = options.shouldStreamResponse
const sseStreamer: IServerSideEventStreamer = options.sseStreamer as IServerSideEventStreamer
const chatId = options.chatId
@@ -114,11 +114,12 @@ class CSV_Agents implements INode {
} else {
files = [fileName]
}
+ const orgId = options.orgId
const chatflowid = options.chatflowid
for (const file of files) {
if (!file) continue
- const fileData = await getFileFromStorage(file, chatflowid)
+ const fileData = await getFileFromStorage(file, orgId, chatflowid)
base64String += fileData.toString('base64')
}
} else {
@@ -170,7 +171,7 @@ json.dumps(my_dict)`
const chain = new LLMChain({
llm: model,
prompt: PromptTemplate.fromTemplate(systemPrompt),
- verbose: process.env.DEBUG === 'true'
+ verbose: process.env.DEBUG === 'true' ? true : false
})
const inputs = {
dict: dataframeColDict,
@@ -201,7 +202,7 @@ json.dumps(my_dict)`
prompt: PromptTemplate.fromTemplate(
systemMessagePrompt ? `${systemMessagePrompt}\n${finalSystemPrompt}` : finalSystemPrompt
),
- verbose: process.env.DEBUG === 'true'
+ verbose: process.env.DEBUG === 'true' ? true : false
})
const inputs = {
question: input,
diff --git a/packages/components/nodes/agents/ConversationalAgent/ConversationalAgent.ts b/packages/components/nodes/agents/ConversationalAgent/ConversationalAgent.ts
index 4a5d91087..8583826da 100644
--- a/packages/components/nodes/agents/ConversationalAgent/ConversationalAgent.ts
+++ b/packages/components/nodes/agents/ConversationalAgent/ConversationalAgent.ts
@@ -132,7 +132,7 @@ class ConversationalAgent_Agents implements INode {
}
const executor = await prepareAgent(nodeData, options, { sessionId: this.sessionId, chatId: options.chatId, input })
- const loggerHandler = new ConsoleCallbackHandler(options.logger)
+ const loggerHandler = new ConsoleCallbackHandler(options.logger, options?.orgId)
const callbacks = await additionalCallbacks(nodeData, options)
let res: ChainValues = {}
diff --git a/packages/components/nodes/agents/ConversationalRetrievalToolAgent/ConversationalRetrievalToolAgent.ts b/packages/components/nodes/agents/ConversationalRetrievalToolAgent/ConversationalRetrievalToolAgent.ts
index 54698ca13..7a8966e14 100644
--- a/packages/components/nodes/agents/ConversationalRetrievalToolAgent/ConversationalRetrievalToolAgent.ts
+++ b/packages/components/nodes/agents/ConversationalRetrievalToolAgent/ConversationalRetrievalToolAgent.ts
@@ -5,7 +5,7 @@ import { RunnableSequence } from '@langchain/core/runnables'
import { BaseChatModel } from '@langchain/core/language_models/chat_models'
import { ChatPromptTemplate, MessagesPlaceholder, HumanMessagePromptTemplate, PromptTemplate } from '@langchain/core/prompts'
import { formatToOpenAIToolMessages } from 'langchain/agents/format_scratchpad/openai_tools'
-import { getBaseClasses, transformBracesWithColon } from '../../../src/utils'
+import { getBaseClasses, transformBracesWithColon, convertChatHistoryToText, convertBaseMessagetoIMessage } from '../../../src/utils'
import { type ToolsAgentStep } from 'langchain/agents/openai/output_parser'
import {
FlowiseMemory,
@@ -23,8 +23,10 @@ import { Moderation, checkInputs, streamResponse } from '../../moderation/Modera
import { formatResponse } from '../../outputparsers/OutputParserHelpers'
import type { Document } from '@langchain/core/documents'
import { BaseRetriever } from '@langchain/core/retrievers'
-import { RESPONSE_TEMPLATE } from '../../chains/ConversationalRetrievalQAChain/prompts'
+import { RESPONSE_TEMPLATE, REPHRASE_TEMPLATE } from '../../chains/ConversationalRetrievalQAChain/prompts'
import { addImagesToMessages, llmSupportsVision } from '../../../src/multiModalUtils'
+import { StringOutputParser } from '@langchain/core/output_parsers'
+import { Tool } from '@langchain/core/tools'
class ConversationalRetrievalToolAgent_Agents implements INode {
label: string
@@ -42,7 +44,7 @@ class ConversationalRetrievalToolAgent_Agents implements INode {
constructor(fields?: { sessionId?: string }) {
this.label = 'Conversational Retrieval Tool Agent'
this.name = 'conversationalRetrievalToolAgent'
- this.author = 'niztal(falkor)'
+ this.author = 'niztal(falkor) and nikitas-novatix'
this.version = 1.0
this.type = 'AgentExecutor'
this.category = 'Agents'
@@ -79,6 +81,26 @@ class ConversationalRetrievalToolAgent_Agents implements INode {
optional: true,
default: RESPONSE_TEMPLATE
},
+ {
+ label: 'Rephrase Prompt',
+ name: 'rephrasePrompt',
+ type: 'string',
+ description: 'Using previous chat history, rephrase question into a standalone question',
+ warning: 'Prompt must include input variables: {chat_history} and {question}',
+ rows: 4,
+ additionalParams: true,
+ optional: true,
+ default: REPHRASE_TEMPLATE
+ },
+ {
+ label: 'Rephrase Model',
+ name: 'rephraseModel',
+ type: 'BaseChatModel',
+ description:
+ 'Optional: Use a different (faster/cheaper) model for rephrasing. If not specified, uses the main Tool Calling Chat Model.',
+ optional: true,
+ additionalParams: true
+ },
{
label: 'Input Moderation',
description: 'Detect text that could generate harmful output and prevent it from being sent to the language model',
@@ -103,8 +125,9 @@ class ConversationalRetrievalToolAgent_Agents implements INode {
this.sessionId = fields?.sessionId
}
- async init(nodeData: INodeData, input: string, options: ICommonObject): Promise {
- return prepareAgent(nodeData, options, { sessionId: this.sessionId, chatId: options.chatId, input })
+ // The agent will be prepared in run() with the correct user message - it needs the actual runtime input for rephrasing
+ async init(_nodeData: INodeData, _input: string, _options: ICommonObject): Promise {
+ return null
}
async run(nodeData: INodeData, input: string, options: ICommonObject): Promise {
@@ -130,7 +153,7 @@ class ConversationalRetrievalToolAgent_Agents implements INode {
const executor = await prepareAgent(nodeData, options, { sessionId: this.sessionId, chatId: options.chatId, input })
- const loggerHandler = new ConsoleCallbackHandler(options.logger)
+ const loggerHandler = new ConsoleCallbackHandler(options.logger, options?.orgId)
const callbacks = await additionalCallbacks(nodeData, options)
let res: ChainValues = {}
@@ -148,6 +171,23 @@ class ConversationalRetrievalToolAgent_Agents implements INode {
sseStreamer.streamUsedToolsEvent(chatId, res.usedTools)
usedTools = res.usedTools
}
+
+ // If the tool is set to returnDirect, stream the output to the client
+ if (res.usedTools && res.usedTools.length) {
+ let inputTools = nodeData.inputs?.tools
+ inputTools = flatten(inputTools)
+ for (const tool of res.usedTools) {
+ const inputTool = inputTools.find((inputTool: Tool) => inputTool.name === tool.tool)
+ if (inputTool && (inputTool as any).returnDirect && shouldStreamResponse) {
+ sseStreamer.streamTokenEvent(chatId, tool.toolOutput)
+ // Prevent CustomChainHandler from streaming the same output again
+ if (res.output === tool.toolOutput) {
+ res.output = ''
+ }
+ }
+ }
+ }
+ // The CustomChainHandler will send the stream end event
} else {
res = await executor.invoke({ input }, { callbacks: [loggerHandler, ...callbacks] })
if (res.sourceDocuments) {
@@ -210,9 +250,11 @@ const prepareAgent = async (
flowObj: { sessionId?: string; chatId?: string; input?: string }
) => {
const model = nodeData.inputs?.model as BaseChatModel
+ const rephraseModel = (nodeData.inputs?.rephraseModel as BaseChatModel) || model // Use main model if not specified
const maxIterations = nodeData.inputs?.maxIterations as string
const memory = nodeData.inputs?.memory as FlowiseMemory
let systemMessage = nodeData.inputs?.systemMessage as string
+ let rephrasePrompt = nodeData.inputs?.rephrasePrompt as string
let tools = nodeData.inputs?.tools
tools = flatten(tools)
const memoryKey = memory.memoryKey ? memory.memoryKey : 'chat_history'
@@ -220,6 +262,9 @@ const prepareAgent = async (
const vectorStoreRetriever = nodeData.inputs?.vectorStoreRetriever as BaseRetriever
systemMessage = transformBracesWithColon(systemMessage)
+ if (rephrasePrompt) {
+ rephrasePrompt = transformBracesWithColon(rephrasePrompt)
+ }
const prompt = ChatPromptTemplate.fromMessages([
['system', systemMessage ? systemMessage : `You are a helpful AI assistant.`],
@@ -263,6 +308,37 @@ const prepareAgent = async (
const modelWithTools = model.bindTools(tools)
+ // Function to get standalone question (either rephrased or original)
+ const getStandaloneQuestion = async (input: string): Promise<string> => {
+ // If no rephrase prompt, return the original input
+ if (!rephrasePrompt) {
+ return input
+ }
+
+ // Get chat history (use empty string if none)
+ const messages = (await memory.getChatMessages(flowObj?.sessionId, true)) as BaseMessage[]
+ const iMessages = convertBaseMessagetoIMessage(messages)
+ const chatHistoryString = convertChatHistoryToText(iMessages)
+
+ // Always rephrase to normalize/expand user queries for better retrieval
+ try {
+ const CONDENSE_QUESTION_PROMPT = PromptTemplate.fromTemplate(rephrasePrompt)
+ const condenseQuestionChain = RunnableSequence.from([CONDENSE_QUESTION_PROMPT, rephraseModel, new StringOutputParser()])
+ const res = await condenseQuestionChain.invoke({
+ question: input,
+ chat_history: chatHistoryString
+ })
+ return res
+ } catch (error) {
+ console.error('Error rephrasing question:', error)
+ // On error, fall back to original input
+ return input
+ }
+ }
+
+ // Get standalone question before creating runnable
+ const standaloneQuestion = await getStandaloneQuestion(flowObj?.input || '')
+
const runnableAgent = RunnableSequence.from([
{
[inputKey]: (i: { input: string; steps: ToolsAgentStep[] }) => i.input,
@@ -272,7 +348,9 @@ const prepareAgent = async (
return messages ?? []
},
context: async (i: { input: string; chatHistory?: string }) => {
- const relevantDocs = await vectorStoreRetriever.invoke(i.input)
+ // Use the standalone question (rephrased or original) for retrieval
+ const retrievalQuery = standaloneQuestion || i.input
+ const relevantDocs = await vectorStoreRetriever.invoke(retrievalQuery)
const formattedDocs = formatDocs(relevantDocs)
return formattedDocs
}
@@ -288,11 +366,13 @@ const prepareAgent = async (
sessionId: flowObj?.sessionId,
chatId: flowObj?.chatId,
input: flowObj?.input,
- verbose: process.env.DEBUG === 'true',
+ verbose: process.env.DEBUG === 'true' ? true : false,
maxIterations: maxIterations ? parseFloat(maxIterations) : undefined
})
return executor
}
-module.exports = { nodeClass: ConversationalRetrievalToolAgent_Agents }
+module.exports = {
+ nodeClass: ConversationalRetrievalToolAgent_Agents
+}
diff --git a/packages/components/nodes/agents/LlamaIndexAgents/AnthropicAgent/AnthropicAgent_LlamaIndex.ts b/packages/components/nodes/agents/LlamaIndexAgents/AnthropicAgent/AnthropicAgent_LlamaIndex.ts
index c218ff654..257250020 100644
--- a/packages/components/nodes/agents/LlamaIndexAgents/AnthropicAgent/AnthropicAgent_LlamaIndex.ts
+++ b/packages/components/nodes/agents/LlamaIndexAgents/AnthropicAgent/AnthropicAgent_LlamaIndex.ts
@@ -2,6 +2,7 @@ import { flatten } from 'lodash'
import { MessageContentTextDetail, ChatMessage, AnthropicAgent, Anthropic } from 'llamaindex'
import { getBaseClasses } from '../../../../src/utils'
import { FlowiseMemory, ICommonObject, IMessage, INode, INodeData, INodeParams, IUsedTool } from '../../../../src/Interface'
+import { EvaluationRunTracerLlama } from '../../../../evaluation/EvaluationRunTracerLlama'
class AnthropicAgent_LlamaIndex_Agents implements INode {
label: string
@@ -96,13 +97,16 @@ class AnthropicAgent_LlamaIndex_Agents implements INode {
tools,
llm: model,
chatHistory: chatHistory,
- verbose: process.env.DEBUG === 'true'
+ verbose: process.env.DEBUG === 'true' ? true : false
})
+ // these are needed for evaluation runs
+ await EvaluationRunTracerLlama.injectEvaluationMetadata(nodeData, options, agent)
+
let text = ''
const usedTools: IUsedTool[] = []
- const response = await agent.chat({ message: input, chatHistory, verbose: process.env.DEBUG === 'true' })
+ const response = await agent.chat({ message: input, chatHistory, verbose: process.env.DEBUG === 'true' ? true : false })
if (response.sources.length) {
for (const sourceTool of response.sources) {
diff --git a/packages/components/nodes/agents/LlamaIndexAgents/OpenAIToolAgent/OpenAIToolAgent_LlamaIndex.ts b/packages/components/nodes/agents/LlamaIndexAgents/OpenAIToolAgent/OpenAIToolAgent_LlamaIndex.ts
index 07b2578bd..657fed6bf 100644
--- a/packages/components/nodes/agents/LlamaIndexAgents/OpenAIToolAgent/OpenAIToolAgent_LlamaIndex.ts
+++ b/packages/components/nodes/agents/LlamaIndexAgents/OpenAIToolAgent/OpenAIToolAgent_LlamaIndex.ts
@@ -1,6 +1,7 @@
import { flatten } from 'lodash'
import { ChatMessage, OpenAI, OpenAIAgent } from 'llamaindex'
import { getBaseClasses } from '../../../../src/utils'
+import { EvaluationRunTracerLlama } from '../../../../evaluation/EvaluationRunTracerLlama'
import {
FlowiseMemory,
ICommonObject,
@@ -107,9 +108,12 @@ class OpenAIFunctionAgent_LlamaIndex_Agents implements INode {
tools,
llm: model,
chatHistory: chatHistory,
- verbose: process.env.DEBUG === 'true'
+ verbose: process.env.DEBUG === 'true' ? true : false
})
+ // these are needed for evaluation runs
+ await EvaluationRunTracerLlama.injectEvaluationMetadata(nodeData, options, agent)
+
let text = ''
let isStreamingStarted = false
const usedTools: IUsedTool[] = []
@@ -119,10 +123,9 @@ class OpenAIFunctionAgent_LlamaIndex_Agents implements INode {
message: input,
chatHistory,
stream: true,
- verbose: process.env.DEBUG === 'true'
+ verbose: process.env.DEBUG === 'true' ? true : false
})
for await (const chunk of stream) {
- //console.log('chunk', chunk)
text += chunk.response.delta
if (!isStreamingStarted) {
isStreamingStarted = true
@@ -147,7 +150,7 @@ class OpenAIFunctionAgent_LlamaIndex_Agents implements INode {
}
}
} else {
- const response = await agent.chat({ message: input, chatHistory, verbose: process.env.DEBUG === 'true' })
+ const response = await agent.chat({ message: input, chatHistory, verbose: process.env.DEBUG === 'true' ? true : false })
if (response.sources.length) {
for (const sourceTool of response.sources) {
usedTools.push({
diff --git a/packages/components/nodes/agents/OpenAIAssistant/OpenAIAssistant.ts b/packages/components/nodes/agents/OpenAIAssistant/OpenAIAssistant.ts
index f8886983d..e87745492 100644
--- a/packages/components/nodes/agents/OpenAIAssistant/OpenAIAssistant.ts
+++ b/packages/components/nodes/agents/OpenAIAssistant/OpenAIAssistant.ts
@@ -107,7 +107,11 @@ class OpenAIAssistant_Agents implements INode {
return returnData
}
- const assistants = await appDataSource.getRepository(databaseEntities['Assistant']).find()
+ const searchOptions = options.searchOptions || {}
+ const assistants = await appDataSource.getRepository(databaseEntities['Assistant']).findBy({
+ ...searchOptions,
+ type: 'OPENAI'
+ })
for (let i = 0; i < assistants.length; i += 1) {
const assistantDetails = JSON.parse(assistants[i].details)
@@ -130,13 +134,14 @@ class OpenAIAssistant_Agents implements INode {
const selectedAssistantId = nodeData.inputs?.selectedAssistant as string
const appDataSource = options.appDataSource as DataSource
const databaseEntities = options.databaseEntities as IDatabaseEntity
+ const orgId = options.orgId
const assistant = await appDataSource.getRepository(databaseEntities['Assistant']).findOneBy({
id: selectedAssistantId
})
if (!assistant) {
- options.logger.error(`Assistant ${selectedAssistantId} not found`)
+ options.logger.error(`[${orgId}]: Assistant ${selectedAssistantId} not found`)
return
}
@@ -149,7 +154,7 @@ class OpenAIAssistant_Agents implements INode {
chatId
})
if (!chatmsg) {
- options.logger.error(`Chat Message with Chat Id: ${chatId} not found`)
+ options.logger.error(`[${orgId}]: Chat Message with Chat Id: ${chatId} not found`)
return
}
sessionId = chatmsg.sessionId
@@ -160,21 +165,21 @@ class OpenAIAssistant_Agents implements INode {
const credentialData = await getCredentialData(assistant.credential ?? '', options)
const openAIApiKey = getCredentialParam('openAIApiKey', credentialData, nodeData)
if (!openAIApiKey) {
- options.logger.error(`OpenAI ApiKey not found`)
+ options.logger.error(`[${orgId}]: OpenAI ApiKey not found`)
return
}
const openai = new OpenAI({ apiKey: openAIApiKey })
- options.logger.info(`Clearing OpenAI Thread ${sessionId}`)
+ options.logger.info(`[${orgId}]: Clearing OpenAI Thread ${sessionId}`)
try {
if (sessionId && sessionId.startsWith('thread_')) {
await openai.beta.threads.del(sessionId)
- options.logger.info(`Successfully cleared OpenAI Thread ${sessionId}`)
+ options.logger.info(`[${orgId}]: Successfully cleared OpenAI Thread ${sessionId}`)
} else {
- options.logger.error(`Error clearing OpenAI Thread ${sessionId}`)
+ options.logger.error(`[${orgId}]: Error clearing OpenAI Thread ${sessionId}`)
}
} catch (e) {
- options.logger.error(`Error clearing OpenAI Thread ${sessionId}`)
+ options.logger.error(`[${orgId}]: Error clearing OpenAI Thread ${sessionId}`)
}
}
@@ -190,6 +195,17 @@ class OpenAIAssistant_Agents implements INode {
const shouldStreamResponse = options.shouldStreamResponse
const sseStreamer: IServerSideEventStreamer = options.sseStreamer as IServerSideEventStreamer
const chatId = options.chatId
+ const checkStorage = options.checkStorage
? (options.checkStorage as (orgId: string, subscriptionId: string, usageCacheManager: any) => Promise<void>)
+ : undefined
+ const updateStorageUsage = options.updateStorageUsage
+ ? (options.updateStorageUsage as (
+ orgId: string,
+ workspaceId: string,
+ totalSize: number,
+ usageCacheManager: any
) => Promise<void>)
+ : undefined
if (moderations && moderations.length > 0) {
try {
@@ -380,17 +396,30 @@ class OpenAIAssistant_Agents implements INode {
// eslint-disable-next-line no-useless-escape
const fileName = cited_file.filename.split(/[\/\\]/).pop() ?? cited_file.filename
if (!disableFileDownload) {
- filePath = await downloadFile(
+ if (checkStorage)
+ await checkStorage(options.orgId, options.subscriptionId, options.usageCacheManager)
+
+ const { path, totalSize } = await downloadFile(
openAIApiKey,
cited_file,
fileName,
+ options.orgId,
options.chatflowid,
options.chatId
)
+ filePath = path
fileAnnotations.push({
filePath,
fileName
})
+
+ if (updateStorageUsage)
+ await updateStorageUsage(
+ options.orgId,
+ options.workspaceId,
+ totalSize,
+ options.usageCacheManager
+ )
}
} else {
const file_path = (annotation as OpenAI.Beta.Threads.Messages.FilePathAnnotation).file_path
@@ -399,17 +428,30 @@ class OpenAIAssistant_Agents implements INode {
// eslint-disable-next-line no-useless-escape
const fileName = cited_file.filename.split(/[\/\\]/).pop() ?? cited_file.filename
if (!disableFileDownload) {
- filePath = await downloadFile(
+ if (checkStorage)
+ await checkStorage(options.orgId, options.subscriptionId, options.usageCacheManager)
+
+ const { path, totalSize } = await downloadFile(
openAIApiKey,
cited_file,
fileName,
+ options.orgId,
options.chatflowid,
options.chatId
)
+ filePath = path
fileAnnotations.push({
filePath,
fileName
})
+
+ if (updateStorageUsage)
+ await updateStorageUsage(
+ options.orgId,
+ options.workspaceId,
+ totalSize,
+ options.usageCacheManager
+ )
}
}
}
@@ -467,15 +509,21 @@ class OpenAIAssistant_Agents implements INode {
const fileId = chunk.image_file.file_id
const fileObj = await openai.files.retrieve(fileId)
- const filePath = await downloadImg(
+ if (checkStorage) await checkStorage(options.orgId, options.subscriptionId, options.usageCacheManager)
+
+ const { filePath, totalSize } = await downloadImg(
openai,
fileId,
`${fileObj.filename}.png`,
+ options.orgId,
options.chatflowid,
options.chatId
)
artifacts.push({ type: 'png', data: filePath })
+ if (updateStorageUsage)
+ await updateStorageUsage(options.orgId, options.workspaceId, totalSize, options.usageCacheManager)
+
if (!isStreamingStarted) {
isStreamingStarted = true
if (sseStreamer) {
@@ -530,7 +578,7 @@ class OpenAIAssistant_Agents implements INode {
toolOutput
})
} catch (e) {
- await analyticHandlers.onToolEnd(toolIds, e)
+ await analyticHandlers.onToolError(toolIds, e)
console.error('Error executing tool', e)
throw new Error(
`Error executing tool. Tool: ${tool.name}. Thread ID: ${threadId}. Run ID: ${runThreadId}`
@@ -655,7 +703,7 @@ class OpenAIAssistant_Agents implements INode {
toolOutput
})
} catch (e) {
- await analyticHandlers.onToolEnd(toolIds, e)
+ await analyticHandlers.onToolError(toolIds, e)
console.error('Error executing tool', e)
clearInterval(timeout)
reject(
@@ -776,7 +824,21 @@ class OpenAIAssistant_Agents implements INode {
// eslint-disable-next-line no-useless-escape
const fileName = cited_file.filename.split(/[\/\\]/).pop() ?? cited_file.filename
if (!disableFileDownload) {
- filePath = await downloadFile(openAIApiKey, cited_file, fileName, options.chatflowid, options.chatId)
+ if (checkStorage) await checkStorage(options.orgId, options.subscriptionId, options.usageCacheManager)
+
+ const { path, totalSize } = await downloadFile(
+ openAIApiKey,
+ cited_file,
+ fileName,
+ options.orgId,
+ options.chatflowid,
+ options.chatId
+ )
+ filePath = path
+
+ if (updateStorageUsage)
+ await updateStorageUsage(options.orgId, options.workspaceId, totalSize, options.usageCacheManager)
+
fileAnnotations.push({
filePath,
fileName
@@ -789,13 +851,27 @@ class OpenAIAssistant_Agents implements INode {
// eslint-disable-next-line no-useless-escape
const fileName = cited_file.filename.split(/[\/\\]/).pop() ?? cited_file.filename
if (!disableFileDownload) {
- filePath = await downloadFile(
+ if (checkStorage)
+ await checkStorage(options.orgId, options.subscriptionId, options.usageCacheManager)
+
+ const { path, totalSize } = await downloadFile(
openAIApiKey,
cited_file,
fileName,
+ options.orgId,
options.chatflowid,
options.chatId
)
+ filePath = path
+
+ if (updateStorageUsage)
+ await updateStorageUsage(
+ options.orgId,
+ options.workspaceId,
+ totalSize,
+ options.usageCacheManager
+ )
+
fileAnnotations.push({
filePath,
fileName
@@ -822,7 +898,20 @@ class OpenAIAssistant_Agents implements INode {
const fileId = content.image_file.file_id
const fileObj = await openai.files.retrieve(fileId)
- const filePath = await downloadImg(openai, fileId, `${fileObj.filename}.png`, options.chatflowid, options.chatId)
+ if (checkStorage) await checkStorage(options.orgId, options.subscriptionId, options.usageCacheManager)
+
+ const { filePath, totalSize } = await downloadImg(
+ openai,
+ fileId,
+ `${fileObj.filename}.png`,
+ options.orgId,
+ options.chatflowid,
+ options.chatId
+ )
+
+ if (updateStorageUsage)
+ await updateStorageUsage(options.orgId, options.workspaceId, totalSize, options.usageCacheManager)
+
artifacts.push({ type: 'png', data: filePath })
}
}
@@ -847,7 +936,13 @@ class OpenAIAssistant_Agents implements INode {
}
}
-const downloadImg = async (openai: OpenAI, fileId: string, fileName: string, ...paths: string[]) => {
+const downloadImg = async (
+ openai: OpenAI,
+ fileId: string,
+ fileName: string,
+ orgId: string,
+ ...paths: string[]
+): Promise<{ filePath: string; totalSize: number }> => {
const response = await openai.files.content(fileId)
// Extract the binary data from the Response object
@@ -857,12 +952,18 @@ const downloadImg = async (openai: OpenAI, fileId: string, fileName: string, ...
const image_data_buffer = Buffer.from(image_data)
const mime = 'image/png'
- const res = await addSingleFileToStorage(mime, image_data_buffer, fileName, ...paths)
+ const { path, totalSize } = await addSingleFileToStorage(mime, image_data_buffer, fileName, orgId, ...paths)
- return res
+ return { filePath: path, totalSize }
}
-const downloadFile = async (openAIApiKey: string, fileObj: any, fileName: string, ...paths: string[]) => {
+const downloadFile = async (
+ openAIApiKey: string,
+ fileObj: any,
+ fileName: string,
+ orgId: string,
+ ...paths: string[]
+): Promise<{ path: string; totalSize: number }> => {
try {
const response = await fetch(`https://api.openai.com/v1/files/${fileObj.id}/content`, {
method: 'GET',
@@ -880,10 +981,12 @@ const downloadFile = async (openAIApiKey: string, fileObj: any, fileName: string
const data_buffer = Buffer.from(data)
const mime = 'application/octet-stream'
- return await addSingleFileToStorage(mime, data_buffer, fileName, ...paths)
+ const { path, totalSize } = await addSingleFileToStorage(mime, data_buffer, fileName, orgId, ...paths)
+
+ return { path, totalSize }
} catch (error) {
console.error('Error downloading or writing the file:', error)
- return ''
+ return { path: '', totalSize: 0 }
}
}
@@ -993,7 +1096,7 @@ async function handleToolSubmission(params: ToolSubmissionParams): Promise {
const chain = await initChain(nodeData, options)
- const loggerHandler = new ConsoleCallbackHandler(options.logger)
+ const loggerHandler = new ConsoleCallbackHandler(options.logger, options?.orgId)
const callbacks = await additionalCallbacks(nodeData, options)
const moderations = nodeData.inputs?.inputModeration as Moderation[]
const shouldStreamResponse = options.shouldStreamResponse
@@ -114,8 +116,9 @@ const initChain = async (nodeData: INodeData, options: ICommonObject) => {
} else {
if (yamlFileBase64.startsWith('FILE-STORAGE::')) {
const file = yamlFileBase64.replace('FILE-STORAGE::', '')
+ const orgId = options.orgId
const chatflowid = options.chatflowid
- const fileData = await getFileFromStorage(file, chatflowid)
+ const fileData = await getFileFromStorage(file, orgId, chatflowid)
yamlString = fileData.toString()
} else {
const splitDataURI = yamlFileBase64.split(',')
@@ -128,7 +131,7 @@ const initChain = async (nodeData: INodeData, options: ICommonObject) => {
return await createOpenAPIChain(yamlString, {
llm: model,
headers: typeof headers === 'object' ? headers : headers ? JSON.parse(headers) : {},
- verbose: process.env.DEBUG === 'true'
+ verbose: process.env.DEBUG === 'true' ? true : false
})
}
diff --git a/packages/components/nodes/chains/ApiChain/POSTApiChain.ts b/packages/components/nodes/chains/ApiChain/POSTApiChain.ts
index da033d2d2..b9a489c4d 100644
--- a/packages/components/nodes/chains/ApiChain/POSTApiChain.ts
+++ b/packages/components/nodes/chains/ApiChain/POSTApiChain.ts
@@ -15,6 +15,7 @@ class POSTApiChain_Chains implements INode {
baseClasses: string[]
description: string
inputs: INodeParams[]
+ badge: string
constructor() {
this.label = 'POST API Chain'
@@ -23,6 +24,7 @@ class POSTApiChain_Chains implements INode {
this.type = 'POSTApiChain'
this.icon = 'post.svg'
this.category = 'Chains'
+ this.badge = 'DEPRECATING'
this.description = 'Chain to run queries against POST API'
this.baseClasses = [this.type, ...getBaseClasses(APIChain)]
this.inputs = [
@@ -87,7 +89,7 @@ class POSTApiChain_Chains implements INode {
const ansPrompt = nodeData.inputs?.ansPrompt as string
const chain = await getAPIChain(apiDocs, model, headers, urlPrompt, ansPrompt)
- const loggerHandler = new ConsoleCallbackHandler(options.logger)
+ const loggerHandler = new ConsoleCallbackHandler(options.logger, options?.orgId)
const callbacks = await additionalCallbacks(nodeData, options)
const shouldStreamResponse = options.shouldStreamResponse
@@ -119,7 +121,7 @@ const getAPIChain = async (documents: string, llm: BaseLanguageModel, headers: s
const chain = APIChain.fromLLMAndAPIDocs(llm, documents, {
apiUrlPrompt,
apiResponsePrompt,
- verbose: process.env.DEBUG === 'true',
+ verbose: process.env.DEBUG === 'true' ? true : false,
headers: typeof headers === 'object' ? headers : headers ? JSON.parse(headers) : {}
})
return chain
diff --git a/packages/components/nodes/chains/ConversationChain/ConversationChain.ts b/packages/components/nodes/chains/ConversationChain/ConversationChain.ts
index f0d3de7aa..04e36daf3 100644
--- a/packages/components/nodes/chains/ConversationChain/ConversationChain.ts
+++ b/packages/components/nodes/chains/ConversationChain/ConversationChain.ts
@@ -132,7 +132,7 @@ class ConversationChain_Chains implements INode {
}
}
- const loggerHandler = new ConsoleCallbackHandler(options.logger)
+ const loggerHandler = new ConsoleCallbackHandler(options.logger, options?.orgId)
const additionalCallback = await additionalCallbacks(nodeData, options)
let res = ''
diff --git a/packages/components/nodes/chains/ConversationalRetrievalQAChain/ConversationalRetrievalQAChain.ts b/packages/components/nodes/chains/ConversationalRetrievalQAChain/ConversationalRetrievalQAChain.ts
index 29528ae5c..31dfa8b1a 100644
--- a/packages/components/nodes/chains/ConversationalRetrievalQAChain/ConversationalRetrievalQAChain.ts
+++ b/packages/components/nodes/chains/ConversationalRetrievalQAChain/ConversationalRetrievalQAChain.ts
@@ -185,6 +185,7 @@ class ConversationalRetrievalQAChain_Chains implements INode {
const shouldStreamResponse = options.shouldStreamResponse
const sseStreamer: IServerSideEventStreamer = options.sseStreamer as IServerSideEventStreamer
const chatId = options.chatId
+ const orgId = options.orgId
let customResponsePrompt = responsePrompt
// If the deprecated systemMessagePrompt still exists
@@ -200,7 +201,8 @@ class ConversationalRetrievalQAChain_Chains implements INode {
memoryKey: 'chat_history',
appDataSource,
databaseEntities,
- chatflowid
+ chatflowid,
+ orgId
})
}
@@ -220,7 +222,7 @@ class ConversationalRetrievalQAChain_Chains implements INode {
const history = ((await memory.getChatMessages(this.sessionId, false, prependMessages)) as IMessage[]) ?? []
- const loggerHandler = new ConsoleCallbackHandler(options.logger)
+ const loggerHandler = new ConsoleCallbackHandler(options.logger, options?.orgId)
const additionalCallback = await additionalCallbacks(nodeData, options)
let callbacks = [loggerHandler, ...additionalCallback]
@@ -407,18 +409,21 @@ interface BufferMemoryExtendedInput {
appDataSource: DataSource
databaseEntities: IDatabaseEntity
chatflowid: string
+ orgId: string
}
class BufferMemory extends FlowiseMemory implements MemoryMethods {
appDataSource: DataSource
databaseEntities: IDatabaseEntity
chatflowid: string
+ orgId: string
constructor(fields: BufferMemoryInput & BufferMemoryExtendedInput) {
super(fields)
this.appDataSource = fields.appDataSource
this.databaseEntities = fields.databaseEntities
this.chatflowid = fields.chatflowid
+ this.orgId = fields.orgId
}
async getChatMessages(
@@ -443,7 +448,7 @@ class BufferMemory extends FlowiseMemory implements MemoryMethods {
}
if (returnBaseMessages) {
- return await mapChatMessageToBaseMessage(chatMessage)
+ return await mapChatMessageToBaseMessage(chatMessage, this.orgId)
}
let returnIMessages: IMessage[] = []
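
For illustration, constructing the extended memory with the new field (a sketch only; the `BufferMemory` subclass here is the module-private class defined in this hunk, and `DataSource` comes from typeorm):

```typescript
import { DataSource } from 'typeorm'

// Wires the new orgId through to the memory so getChatMessages can pass it to
// mapChatMessageToBaseMessage for org-scoped file resolution.
function buildScopedMemory(appDataSource: DataSource, databaseEntities: Record<string, any>, chatflowid: string, orgId: string) {
    return new BufferMemory({
        returnMessages: true,
        memoryKey: 'chat_history',
        appDataSource,
        databaseEntities,
        chatflowid,
        orgId
    })
}
```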
diff --git a/packages/components/nodes/chains/GraphCypherQAChain/GraphCypherQAChain.ts b/packages/components/nodes/chains/GraphCypherQAChain/GraphCypherQAChain.ts
index fb7dc4a7d..5a2f16c09 100644
--- a/packages/components/nodes/chains/GraphCypherQAChain/GraphCypherQAChain.ts
+++ b/packages/components/nodes/chains/GraphCypherQAChain/GraphCypherQAChain.ts
@@ -215,7 +215,7 @@ class GraphCypherQA_Chain implements INode {
query: input
}
- const loggerHandler = new ConsoleCallbackHandler(options.logger)
+ const loggerHandler = new ConsoleCallbackHandler(options.logger, options?.orgId)
const callbackHandlers = await additionalCallbacks(nodeData, options)
let callbacks = [loggerHandler, ...callbackHandlers]
diff --git a/packages/components/nodes/chains/LLMChain/LLMChain.ts b/packages/components/nodes/chains/LLMChain/LLMChain.ts
index f72603635..801358126 100644
--- a/packages/components/nodes/chains/LLMChain/LLMChain.ts
+++ b/packages/components/nodes/chains/LLMChain/LLMChain.ts
@@ -167,7 +167,7 @@ const runPrediction = async (
nodeData: INodeData,
disableStreaming?: boolean
) => {
- const loggerHandler = new ConsoleCallbackHandler(options.logger)
+ const loggerHandler = new ConsoleCallbackHandler(options.logger, options?.orgId)
const callbacks = await additionalCallbacks(nodeData, options)
const moderations = nodeData.inputs?.inputModeration as Moderation[]
diff --git a/packages/components/nodes/chains/MultiPromptChain/MultiPromptChain.ts b/packages/components/nodes/chains/MultiPromptChain/MultiPromptChain.ts
index 7863981c2..da6834005 100644
--- a/packages/components/nodes/chains/MultiPromptChain/MultiPromptChain.ts
+++ b/packages/components/nodes/chains/MultiPromptChain/MultiPromptChain.ts
@@ -16,11 +16,13 @@ class MultiPromptChain_Chains implements INode {
baseClasses: string[]
description: string
inputs: INodeParams[]
+ badge: string
constructor() {
this.label = 'Multi Prompt Chain'
this.name = 'multiPromptChain'
this.version = 2.0
+ this.badge = 'DEPRECATING'
this.type = 'MultiPromptChain'
this.icon = 'prompt.svg'
this.category = 'Chains'
@@ -66,7 +68,7 @@ class MultiPromptChain_Chains implements INode {
promptNames,
promptDescriptions,
promptTemplates,
- llmChainOpts: { verbose: process.env.DEBUG === 'true' }
+ llmChainOpts: { verbose: process.env.DEBUG === 'true' ? true : false }
})
return chain
@@ -95,7 +97,7 @@ class MultiPromptChain_Chains implements INode {
}
const obj = { input }
- const loggerHandler = new ConsoleCallbackHandler(options.logger)
+ const loggerHandler = new ConsoleCallbackHandler(options.logger, options?.orgId)
const callbacks = await additionalCallbacks(nodeData, options)
if (shouldStreamResponse) {
diff --git a/packages/components/nodes/chains/MultiRetrievalQAChain/MultiRetrievalQAChain.ts b/packages/components/nodes/chains/MultiRetrievalQAChain/MultiRetrievalQAChain.ts
index eed73f4cc..bdcd37621 100644
--- a/packages/components/nodes/chains/MultiRetrievalQAChain/MultiRetrievalQAChain.ts
+++ b/packages/components/nodes/chains/MultiRetrievalQAChain/MultiRetrievalQAChain.ts
@@ -15,12 +15,14 @@ class MultiRetrievalQAChain_Chains implements INode {
category: string
baseClasses: string[]
description: string
+ badge: string
inputs: INodeParams[]
constructor() {
this.label = 'Multi Retrieval QA Chain'
this.name = 'multiRetrievalQAChain'
this.version = 2.0
+ this.badge = 'DEPRECATING'
this.type = 'MultiRetrievalQAChain'
this.icon = 'qa.svg'
this.category = 'Chains'
@@ -74,7 +76,7 @@ class MultiRetrievalQAChain_Chains implements INode {
retrieverNames,
retrieverDescriptions,
retrievers,
- retrievalQAChainOpts: { verbose: process.env.DEBUG === 'true', returnSourceDocuments }
+ retrievalQAChainOpts: { verbose: process.env.DEBUG === 'true' ? true : false, returnSourceDocuments }
})
return chain
}
@@ -101,7 +103,7 @@ class MultiRetrievalQAChain_Chains implements INode {
}
}
const obj = { input }
- const loggerHandler = new ConsoleCallbackHandler(options.logger)
+ const loggerHandler = new ConsoleCallbackHandler(options.logger, options?.orgId)
const callbacks = await additionalCallbacks(nodeData, options)
if (shouldStreamResponse) {
diff --git a/packages/components/nodes/chains/RetrievalQAChain/RetrievalQAChain.ts b/packages/components/nodes/chains/RetrievalQAChain/RetrievalQAChain.ts
index 8e7453d75..f82d92e06 100644
--- a/packages/components/nodes/chains/RetrievalQAChain/RetrievalQAChain.ts
+++ b/packages/components/nodes/chains/RetrievalQAChain/RetrievalQAChain.ts
@@ -17,6 +17,7 @@ class RetrievalQAChain_Chains implements INode {
baseClasses: string[]
description: string
inputs: INodeParams[]
+ badge: string
constructor() {
this.label = 'Retrieval QA Chain'
@@ -24,6 +25,7 @@ class RetrievalQAChain_Chains implements INode {
this.version = 2.0
this.type = 'RetrievalQAChain'
this.icon = 'qa.svg'
+ this.badge = 'DEPRECATING'
this.category = 'Chains'
this.description = 'QA chain to answer a question based on the retrieved documents'
this.baseClasses = [this.type, ...getBaseClasses(RetrievalQAChain)]
@@ -53,7 +55,7 @@ class RetrievalQAChain_Chains implements INode {
const model = nodeData.inputs?.model as BaseLanguageModel
const vectorStoreRetriever = nodeData.inputs?.vectorStoreRetriever as BaseRetriever
- const chain = RetrievalQAChain.fromLLM(model, vectorStoreRetriever, { verbose: process.env.DEBUG === 'true' })
+ const chain = RetrievalQAChain.fromLLM(model, vectorStoreRetriever, { verbose: process.env.DEBUG === 'true' ? true : false })
return chain
}
@@ -80,7 +82,7 @@ class RetrievalQAChain_Chains implements INode {
const obj = {
query: input
}
- const loggerHandler = new ConsoleCallbackHandler(options.logger)
+ const loggerHandler = new ConsoleCallbackHandler(options.logger, options?.orgId)
const callbacks = await additionalCallbacks(nodeData, options)
if (shouldStreamResponse) {
diff --git a/packages/components/nodes/chains/SqlDatabaseChain/SqlDatabaseChain.ts b/packages/components/nodes/chains/SqlDatabaseChain/SqlDatabaseChain.ts
index cc062fb76..539e2031d 100644
--- a/packages/components/nodes/chains/SqlDatabaseChain/SqlDatabaseChain.ts
+++ b/packages/components/nodes/chains/SqlDatabaseChain/SqlDatabaseChain.ts
@@ -194,7 +194,7 @@ class SqlDatabaseChain_Chains implements INode {
topK,
customPrompt
)
- const loggerHandler = new ConsoleCallbackHandler(options.logger)
+ const loggerHandler = new ConsoleCallbackHandler(options.logger, options?.orgId)
const callbacks = await additionalCallbacks(nodeData, options)
if (shouldStreamResponse) {
@@ -241,7 +241,7 @@ const getSQLDBChain = async (
const obj: SqlDatabaseChainInput = {
llm,
database: db,
- verbose: process.env.DEBUG === 'true',
+ verbose: process.env.DEBUG === 'true' ? true : false,
topK: topK
}
diff --git a/packages/components/nodes/chains/VectorDBQAChain/VectorDBQAChain.ts b/packages/components/nodes/chains/VectorDBQAChain/VectorDBQAChain.ts
index ec1b2cf8b..f111f6529 100644
--- a/packages/components/nodes/chains/VectorDBQAChain/VectorDBQAChain.ts
+++ b/packages/components/nodes/chains/VectorDBQAChain/VectorDBQAChain.ts
@@ -17,6 +17,7 @@ class VectorDBQAChain_Chains implements INode {
baseClasses: string[]
description: string
inputs: INodeParams[]
+ badge: string
constructor() {
this.label = 'VectorDB QA Chain'
@@ -25,6 +26,7 @@ class VectorDBQAChain_Chains implements INode {
this.type = 'VectorDBQAChain'
this.icon = 'vectordb.svg'
this.category = 'Chains'
+ this.badge = 'DEPRECATING'
this.description = 'QA chain for vector databases'
this.baseClasses = [this.type, ...getBaseClasses(VectorDBQAChain)]
this.inputs = [
@@ -55,7 +57,7 @@ class VectorDBQAChain_Chains implements INode {
const chain = VectorDBQAChain.fromLLM(model, vectorStore, {
k: (vectorStore as any)?.k ?? 4,
- verbose: process.env.DEBUG === 'true'
+ verbose: process.env.DEBUG === 'true' ? true : false
})
return chain
}
@@ -84,7 +86,7 @@ class VectorDBQAChain_Chains implements INode {
query: input
}
- const loggerHandler = new ConsoleCallbackHandler(options.logger)
+ const loggerHandler = new ConsoleCallbackHandler(options.logger, options?.orgId)
const callbacks = await additionalCallbacks(nodeData, options)
if (shouldStreamResponse) {
diff --git a/packages/components/nodes/chatmodels/AWSBedrock/AWSChatBedrock.ts b/packages/components/nodes/chatmodels/AWSBedrock/AWSChatBedrock.ts
index b48bc7f0d..915b2412b 100644
--- a/packages/components/nodes/chatmodels/AWSBedrock/AWSChatBedrock.ts
+++ b/packages/components/nodes/chatmodels/AWSBedrock/AWSChatBedrock.ts
@@ -23,7 +23,7 @@ class AWSChatBedrock_ChatModels implements INode {
constructor() {
this.label = 'AWS ChatBedrock'
this.name = 'awsChatBedrock'
- this.version = 6.0
+ this.version = 6.1
this.type = 'AWSChatBedrock'
this.icon = 'aws.svg'
this.category = 'Chat Models'
@@ -100,6 +100,16 @@ class AWSChatBedrock_ChatModels implements INode {
'Allow image input. Refer to the docs for more details.',
default: false,
optional: true
+ },
+ {
+ label: 'Latency Optimized',
+ name: 'latencyOptimized',
+ type: 'boolean',
+ description:
'Enable latency optimized configuration for supported models. Refer to the supported latency optimized models for more details.',
+ default: false,
+ optional: true,
+ additionalParams: true
}
]
}
@@ -122,6 +132,7 @@ class AWSChatBedrock_ChatModels implements INode {
const iMax_tokens_to_sample = nodeData.inputs?.max_tokens_to_sample as string
const cache = nodeData.inputs?.cache as BaseCache
const streaming = nodeData.inputs?.streaming as boolean
+ const latencyOptimized = nodeData.inputs?.latencyOptimized as boolean
const obj: ChatBedrockConverseInput = {
region: iRegion,
@@ -131,6 +142,10 @@ class AWSChatBedrock_ChatModels implements INode {
streaming: streaming ?? true
}
+ if (latencyOptimized) {
+ obj.performanceConfig = { latency: 'optimized' }
+ }
+
/**
* Long-term credentials specified in LLM configuration are optional.
* Bedrock's credential provider falls back to the AWS SDK to fetch
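
Taken together, the new toggle amounts to the following model construction (a sketch; the model id is an example, and latency-optimized inference is only honored by specific models and regions):

```typescript
import { ChatBedrockConverse } from '@langchain/aws'

const model = new ChatBedrockConverse({
    model: 'us.anthropic.claude-3-5-haiku-20241022-v1:0', // example id; support is model-specific
    region: 'us-east-1',
    streaming: true,
    performanceConfig: { latency: 'optimized' } // added only when 'Latency Optimized' is enabled
})
```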
diff --git a/packages/components/nodes/chatmodels/AzureChatOpenAI/AzureChatOpenAI.ts b/packages/components/nodes/chatmodels/AzureChatOpenAI/AzureChatOpenAI.ts
index 02834a105..786a17d49 100644
--- a/packages/components/nodes/chatmodels/AzureChatOpenAI/AzureChatOpenAI.ts
+++ b/packages/components/nodes/chatmodels/AzureChatOpenAI/AzureChatOpenAI.ts
@@ -1,9 +1,10 @@
-import { AzureOpenAIInput, AzureChatOpenAI as LangchainAzureChatOpenAI, ChatOpenAIFields, OpenAIClient } from '@langchain/openai'
+import { AzureOpenAIInput, AzureChatOpenAI as LangchainAzureChatOpenAI, ChatOpenAIFields } from '@langchain/openai'
import { BaseCache } from '@langchain/core/caches'
import { ICommonObject, IMultiModalOption, INode, INodeData, INodeOptionsValue, INodeParams } from '../../../src/Interface'
import { getBaseClasses, getCredentialData, getCredentialParam } from '../../../src/utils'
import { getModels, MODEL_TYPE } from '../../../src/modelLoader'
import { AzureChatOpenAI } from './FlowiseAzureChatOpenAI'
+import { OpenAI as OpenAIClient } from 'openai'
const serverCredentialsExists =
!!process.env.AZURE_OPENAI_API_KEY &&
@@ -26,7 +27,7 @@ class AzureChatOpenAI_ChatModels implements INode {
constructor() {
this.label = 'Azure ChatOpenAI'
this.name = 'azureChatOpenAI'
- this.version = 7.0
+ this.version = 7.1
this.type = 'AzureChatOpenAI'
this.icon = 'Azure.svg'
this.category = 'Chat Models'
@@ -154,6 +155,15 @@ class AzureChatOpenAI_ChatModels implements INode {
optional: false,
additionalParams: true
},
+ {
+ label: 'Reasoning',
+ description: 'Whether the model supports reasoning. Only applicable for reasoning models.',
+ name: 'reasoning',
+ type: 'boolean',
+ default: false,
+ optional: true,
+ additionalParams: true
+ },
{
label: 'Reasoning Effort',
description: 'Constrains effort on reasoning for reasoning models. Only applicable for o1 and o3 models.',
@@ -173,9 +183,34 @@ class AzureChatOpenAI_ChatModels implements INode {
name: 'high'
}
],
- default: 'medium',
- optional: false,
- additionalParams: true
+ additionalParams: true,
+ show: {
+ reasoning: true
+ }
+ },
+ {
+ label: 'Reasoning Summary',
+ description: `A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process`,
+ name: 'reasoningSummary',
+ type: 'options',
+ options: [
+ {
+ label: 'Auto',
+ name: 'auto'
+ },
+ {
+ label: 'Concise',
+ name: 'concise'
+ },
+ {
+ label: 'Detailed',
+ name: 'detailed'
+ }
+ ],
+ additionalParams: true,
+ show: {
+ reasoning: true
+ }
}
]
}
@@ -199,7 +234,8 @@ class AzureChatOpenAI_ChatModels implements INode {
const topP = nodeData.inputs?.topP as string
const basePath = nodeData.inputs?.basepath as string
const baseOptions = nodeData.inputs?.baseOptions
- const reasoningEffort = nodeData.inputs?.reasoningEffort as OpenAIClient.Chat.ChatCompletionReasoningEffort
+ const reasoningEffort = nodeData.inputs?.reasoningEffort as OpenAIClient.Chat.ChatCompletionReasoningEffort | null
+ const reasoningSummary = nodeData.inputs?.reasoningSummary as 'auto' | 'concise' | 'detailed' | null
const credentialData = await getCredentialData(nodeData.credential ?? '', options)
const azureOpenAIApiKey = getCredentialParam('azureOpenAIApiKey', credentialData, nodeData)
@@ -237,11 +273,22 @@ class AzureChatOpenAI_ChatModels implements INode {
console.error('Error parsing base options', exception)
}
}
- if (modelName === 'o3-mini' || modelName.includes('o1')) {
+ if (modelName.includes('o1') || modelName.includes('o3') || modelName.includes('gpt-5')) {
delete obj.temperature
- }
- if ((modelName.includes('o1') || modelName.includes('o3')) && reasoningEffort) {
- obj.reasoningEffort = reasoningEffort
+ delete obj.stop
+ const reasoning: OpenAIClient.Reasoning = {}
+ if (reasoningEffort) {
+ reasoning.effort = reasoningEffort
+ }
+ if (reasoningSummary) {
+ reasoning.summary = reasoningSummary
+ }
+ obj.reasoning = reasoning
+
+ if (maxTokens) {
+ delete obj.maxTokens
+ obj.maxCompletionTokens = parseInt(maxTokens, 10)
+ }
}
const multiModalOption: IMultiModalOption = {
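
The reasoning-model branch above, extracted as a standalone helper for readability (a sketch, not an actual export): o1/o3/gpt-5 deployments reject `temperature` and `stop`, take a `reasoning` config, and expect `maxCompletionTokens` instead of `maxTokens`.

```typescript
import { OpenAI as OpenAIClient } from 'openai'

function applyReasoningConfig(
    fields: Record<string, any>,
    modelName: string,
    effort?: OpenAIClient.Chat.ChatCompletionReasoningEffort | null,
    summary?: 'auto' | 'concise' | 'detailed' | null
): Record<string, any> {
    if (!(modelName.includes('o1') || modelName.includes('o3') || modelName.includes('gpt-5'))) return fields
    // Strip parameters that reasoning models reject
    const { temperature, stop, maxTokens, ...rest } = fields
    const reasoning: OpenAIClient.Reasoning = {}
    if (effort) reasoning.effort = effort
    if (summary) reasoning.summary = summary
    return {
        ...rest,
        reasoning,
        ...(maxTokens ? { maxCompletionTokens: parseInt(maxTokens, 10) } : {})
    }
}
```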
diff --git a/packages/components/nodes/chatmodels/AzureChatOpenAI/FlowiseAzureChatOpenAI.ts b/packages/components/nodes/chatmodels/AzureChatOpenAI/FlowiseAzureChatOpenAI.ts
index 7a86a3a37..b28f34f19 100644
--- a/packages/components/nodes/chatmodels/AzureChatOpenAI/FlowiseAzureChatOpenAI.ts
+++ b/packages/components/nodes/chatmodels/AzureChatOpenAI/FlowiseAzureChatOpenAI.ts
@@ -6,6 +6,7 @@ export class AzureChatOpenAI extends LangchainAzureChatOpenAI implements IVision
configuredModel: string
configuredMaxToken?: number
multiModalOption: IMultiModalOption
+    builtInTools: Record<string, any>[] = []
id: string
constructor(
@@ -27,7 +28,7 @@ export class AzureChatOpenAI extends LangchainAzureChatOpenAI implements IVision
}
revertToOriginalModel(): void {
- this.modelName = this.configuredModel
+ this.model = this.configuredModel
this.maxTokens = this.configuredMaxToken
}
@@ -38,4 +39,8 @@ export class AzureChatOpenAI extends LangchainAzureChatOpenAI implements IVision
setVisionModel(): void {
// pass
}
+
+    addBuiltInTools(builtInTool: Record<string, any>): void {
+ this.builtInTools.push(builtInTool)
+ }
}
diff --git a/packages/components/nodes/chatmodels/AzureChatOpenAI/README.md b/packages/components/nodes/chatmodels/AzureChatOpenAI/README.md
index f12f42dc1..3bfd33964 100644
--- a/packages/components/nodes/chatmodels/AzureChatOpenAI/README.md
+++ b/packages/components/nodes/chatmodels/AzureChatOpenAI/README.md
@@ -4,13 +4,13 @@ Azure OpenAI Chat Model integration for Flowise
## ๐ฑ Env Variables
-| Variable | Description | Type | Default |
-| ---------------------------- | ----------------------------------------------------------------------------------------------- | ------------------------------------------------ | ----------------------------------- |
-| AZURE_OPENAI_API_KEY | Default `credential.azureOpenAIApiKey` for Azure OpenAI Model | String | |
-| AZURE_OPENAI_API_INSTANCE_NAME | Default `credential.azureOpenAIApiInstanceName` for Azure OpenAI Model | String | |
-| AZURE_OPENAI_API_DEPLOYMENT_NAME | Default `credential.azureOpenAIApiDeploymentName` for Azure OpenAI Model | String | |
-| AZURE_OPENAI_API_VERSION | Default `credential.azureOpenAIApiVersion` for Azure OpenAI Model | String | |
+| Variable | Description | Type | Default |
+| -------------------------------- | ------------------------------------------------------------------------ | ------ | ------- |
+| AZURE_OPENAI_API_KEY | Default `credential.azureOpenAIApiKey` for Azure OpenAI Model | String | |
+| AZURE_OPENAI_API_INSTANCE_NAME | Default `credential.azureOpenAIApiInstanceName` for Azure OpenAI Model | String | |
+| AZURE_OPENAI_API_DEPLOYMENT_NAME | Default `credential.azureOpenAIApiDeploymentName` for Azure OpenAI Model | String | |
+| AZURE_OPENAI_API_VERSION | Default `credential.azureOpenAIApiVersion` for Azure OpenAI Model | String | |
## License
-Source code in this repository is made available under the [Apache License Version 2.0](https://github.com/FlowiseAI/Flowise/blob/master/LICENSE.md).
\ No newline at end of file
+Source code in this repository is made available under the [Apache License Version 2.0](https://github.com/FlowiseAI/Flowise/blob/master/LICENSE.md).
diff --git a/packages/components/nodes/chatmodels/ChatAnthropic/ChatAnthropic.ts b/packages/components/nodes/chatmodels/ChatAnthropic/ChatAnthropic.ts
index 7204801f9..27f2c7eb4 100644
--- a/packages/components/nodes/chatmodels/ChatAnthropic/ChatAnthropic.ts
+++ b/packages/components/nodes/chatmodels/ChatAnthropic/ChatAnthropic.ts
@@ -91,7 +91,7 @@ class ChatAnthropic_ChatModels implements INode {
label: 'Extended Thinking',
name: 'extendedThinking',
type: 'boolean',
- description: 'Enable extended thinking for reasoning model such as Claude Sonnet 3.7',
+ description: 'Enable extended thinking for reasoning model such as Claude Sonnet 3.7 and Claude 4',
optional: true,
additionalParams: true
},
diff --git a/packages/components/nodes/chatmodels/ChatCerebras/ChatCerebras.ts b/packages/components/nodes/chatmodels/ChatCerebras/ChatCerebras.ts
index 2d65ebeb7..8da49a2cc 100644
--- a/packages/components/nodes/chatmodels/ChatCerebras/ChatCerebras.ts
+++ b/packages/components/nodes/chatmodels/ChatCerebras/ChatCerebras.ts
@@ -136,7 +136,8 @@ class ChatCerebras_ChatModels implements INode {
const obj: ChatOpenAIFields = {
temperature: parseFloat(temperature),
- modelName,
+ model: modelName,
+ apiKey: cerebrasAIApiKey,
openAIApiKey: cerebrasAIApiKey,
streaming: streaming ?? true
}
diff --git a/packages/components/nodes/chatmodels/ChatCometAPI/ChatCometAPI.ts b/packages/components/nodes/chatmodels/ChatCometAPI/ChatCometAPI.ts
new file mode 100644
index 000000000..295c5e7ce
--- /dev/null
+++ b/packages/components/nodes/chatmodels/ChatCometAPI/ChatCometAPI.ts
@@ -0,0 +1,176 @@
+import { BaseCache } from '@langchain/core/caches'
+import { ChatOpenAI, ChatOpenAIFields } from '@langchain/openai'
+import { ICommonObject, INode, INodeData, INodeParams } from '../../../src/Interface'
+import { getBaseClasses, getCredentialData, getCredentialParam } from '../../../src/utils'
+
+class ChatCometAPI_ChatModels implements INode {
+ readonly baseURL: string = 'https://api.cometapi.com/v1'
+ label: string
+ name: string
+ version: number
+ type: string
+ icon: string
+ category: string
+ description: string
+ baseClasses: string[]
+ credential: INodeParams
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'ChatCometAPI'
+ this.name = 'chatCometAPI'
+ this.version = 1.0
+ this.type = 'ChatCometAPI'
+ this.icon = 'cometapi.svg'
+ this.category = 'Chat Models'
+ this.description = 'Wrapper around CometAPI large language models that use the Chat endpoint'
+ this.baseClasses = [this.type, ...getBaseClasses(ChatOpenAI)]
+ this.credential = {
+ label: 'Connect Credential',
+ name: 'credential',
+ type: 'credential',
+ credentialNames: ['cometApi']
+ }
+ this.inputs = [
+ {
+ label: 'Cache',
+ name: 'cache',
+ type: 'BaseCache',
+ optional: true
+ },
+ {
+ label: 'Model Name',
+ name: 'modelName',
+ type: 'string',
+ default: 'gpt-5-mini',
+ description: 'Enter the model name (e.g., gpt-5-mini, claude-sonnet-4-20250514, gemini-2.0-flash)'
+ },
+ {
+ label: 'Temperature',
+ name: 'temperature',
+ type: 'number',
+ step: 0.1,
+ default: 0.7,
+ optional: true
+ },
+ {
+ label: 'Streaming',
+ name: 'streaming',
+ type: 'boolean',
+ default: true,
+ optional: true,
+ additionalParams: true
+ },
+ {
+ label: 'Max Tokens',
+ name: 'maxTokens',
+ type: 'number',
+ step: 1,
+ optional: true,
+ additionalParams: true
+ },
+ {
+ label: 'Top Probability',
+ name: 'topP',
+ type: 'number',
+ step: 0.1,
+ optional: true,
+ additionalParams: true
+ },
+ {
+ label: 'Frequency Penalty',
+ name: 'frequencyPenalty',
+ type: 'number',
+ step: 0.1,
+ optional: true,
+ additionalParams: true
+ },
+ {
+ label: 'Presence Penalty',
+ name: 'presencePenalty',
+ type: 'number',
+ step: 0.1,
+ optional: true,
+ additionalParams: true
+ },
+ {
+ label: 'Base Options',
+ name: 'baseOptions',
+ type: 'json',
+ optional: true,
+ additionalParams: true,
+ description: 'Additional options to pass to the CometAPI client. This should be a JSON object.'
+ }
+ ]
+ }
+
+    async init(nodeData: INodeData, _: string, options: ICommonObject): Promise<any> {
+ const temperature = nodeData.inputs?.temperature as string
+ const modelName = nodeData.inputs?.modelName as string
+ const maxTokens = nodeData.inputs?.maxTokens as string
+ const topP = nodeData.inputs?.topP as string
+ const frequencyPenalty = nodeData.inputs?.frequencyPenalty as string
+ const presencePenalty = nodeData.inputs?.presencePenalty as string
+ const streaming = nodeData.inputs?.streaming as boolean
+ const baseOptions = nodeData.inputs?.baseOptions
+
+ if (nodeData.inputs?.credentialId) {
+ nodeData.credential = nodeData.inputs?.credentialId
+ }
+ const credentialData = await getCredentialData(nodeData.credential ?? '', options)
+ const openAIApiKey = getCredentialParam('cometApiKey', credentialData, nodeData)
+
+ // Custom error handling for missing API key
+ if (!openAIApiKey || openAIApiKey.trim() === '') {
+ throw new Error(
+ 'CometAPI API Key is missing or empty. Please provide a valid CometAPI API key in the credential configuration.'
+ )
+ }
+
+ // Custom error handling for missing model name
+ if (!modelName || modelName.trim() === '') {
+ throw new Error('Model Name is required. Please enter a valid model name (e.g., gpt-5-mini, claude-sonnet-4-20250514).')
+ }
+
+ const cache = nodeData.inputs?.cache as BaseCache
+
+ const obj: ChatOpenAIFields = {
+ temperature: parseFloat(temperature),
+ modelName,
+ openAIApiKey,
+ apiKey: openAIApiKey,
+ streaming: streaming ?? true
+ }
+
+ if (maxTokens) obj.maxTokens = parseInt(maxTokens, 10)
+ if (topP) obj.topP = parseFloat(topP)
+ if (frequencyPenalty) obj.frequencyPenalty = parseFloat(frequencyPenalty)
+ if (presencePenalty) obj.presencePenalty = parseFloat(presencePenalty)
+ if (cache) obj.cache = cache
+
+ let parsedBaseOptions: any | undefined = undefined
+
+ if (baseOptions) {
+ try {
+ parsedBaseOptions = typeof baseOptions === 'object' ? baseOptions : JSON.parse(baseOptions)
+ if (parsedBaseOptions.baseURL) {
+ console.warn("The 'baseURL' parameter is not allowed when using the ChatCometAPI node.")
+ parsedBaseOptions.baseURL = undefined
+ }
+ } catch (exception) {
+ throw new Error('Invalid JSON in the BaseOptions: ' + exception)
+ }
+ }
+
+ const model = new ChatOpenAI({
+ ...obj,
+ configuration: {
+ baseURL: this.baseURL,
+ ...parsedBaseOptions
+ }
+ })
+ return model
+ }
+}
+
+module.exports = { nodeClass: ChatCometAPI_ChatModels }
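
For reference, the node builds a standard `ChatOpenAI` client pinned to the CometAPI endpoint. A hypothetical standalone equivalent (assumes a valid key in a `COMETAPI_KEY` environment variable):

```typescript
import { ChatOpenAI } from '@langchain/openai'

const cometModel = new ChatOpenAI({
    modelName: 'gpt-5-mini',
    apiKey: process.env.COMETAPI_KEY, // supplied via the cometApi credential in Flowise
    temperature: 0.7,
    streaming: true,
    configuration: {
        baseURL: 'https://api.cometapi.com/v1' // fixed by the node; a user-supplied baseURL is discarded
    }
})
```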
diff --git a/packages/components/nodes/chatmodels/ChatCometAPI/cometapi.svg b/packages/components/nodes/chatmodels/ChatCometAPI/cometapi.svg
new file mode 100644
index 000000000..9f1d803d4
--- /dev/null
+++ b/packages/components/nodes/chatmodels/ChatCometAPI/cometapi.svg
@@ -0,0 +1,7 @@
+ (SVG icon markup omitted)
diff --git a/packages/components/nodes/chatmodels/ChatFireworks/ChatFireworks.ts b/packages/components/nodes/chatmodels/ChatFireworks/ChatFireworks.ts
index 2f8b6abee..b89d1de8c 100644
--- a/packages/components/nodes/chatmodels/ChatFireworks/ChatFireworks.ts
+++ b/packages/components/nodes/chatmodels/ChatFireworks/ChatFireworks.ts
@@ -1,7 +1,7 @@
import { BaseCache } from '@langchain/core/caches'
-import { ChatFireworks } from '@langchain/community/chat_models/fireworks'
import { ICommonObject, INode, INodeData, INodeParams } from '../../../src/Interface'
import { getBaseClasses, getCredentialData, getCredentialParam } from '../../../src/utils'
+import { ChatFireworks, ChatFireworksParams } from './core'
class ChatFireworks_ChatModels implements INode {
label: string
@@ -41,8 +41,8 @@ class ChatFireworks_ChatModels implements INode {
label: 'Model',
name: 'modelName',
type: 'string',
- default: 'accounts/fireworks/models/llama-v2-13b-chat',
- placeholder: 'accounts/fireworks/models/llama-v2-13b-chat'
+ default: 'accounts/fireworks/models/llama-v3p1-8b-instruct',
+ placeholder: 'accounts/fireworks/models/llama-v3p1-8b-instruct'
},
{
label: 'Temperature',
@@ -71,9 +71,8 @@ class ChatFireworks_ChatModels implements INode {
const credentialData = await getCredentialData(nodeData.credential ?? '', options)
const fireworksApiKey = getCredentialParam('fireworksApiKey', credentialData, nodeData)
-        const obj: Partial<ChatFireworks> = {
+ const obj: ChatFireworksParams = {
fireworksApiKey,
- model: modelName,
modelName,
temperature: temperature ? parseFloat(temperature) : undefined,
streaming: streaming ?? true
diff --git a/packages/components/nodes/chatmodels/ChatFireworks/core.ts b/packages/components/nodes/chatmodels/ChatFireworks/core.ts
new file mode 100644
index 000000000..a3b3cd111
--- /dev/null
+++ b/packages/components/nodes/chatmodels/ChatFireworks/core.ts
@@ -0,0 +1,126 @@
+import type { BaseChatModelParams, LangSmithParams } from '@langchain/core/language_models/chat_models'
+import {
+ type OpenAIClient,
+ type ChatOpenAICallOptions,
+ type OpenAIChatInput,
+ type OpenAICoreRequestOptions,
+ ChatOpenAICompletions
+} from '@langchain/openai'
+
+import { getEnvironmentVariable } from '@langchain/core/utils/env'
+
+type FireworksUnsupportedArgs = 'frequencyPenalty' | 'presencePenalty' | 'logitBias' | 'functions'
+
+type FireworksUnsupportedCallOptions = 'functions' | 'function_call'
+
+export type ChatFireworksCallOptions = Partial<Omit<ChatOpenAICallOptions, FireworksUnsupportedCallOptions>>
+
+export type ChatFireworksParams = Partial<Omit<OpenAIChatInput, 'openAIApiKey' | FireworksUnsupportedArgs>> &
+ BaseChatModelParams & {
+ /**
+ * Prefer `apiKey`
+ */
+ fireworksApiKey?: string
+ /**
+ * The Fireworks API key to use.
+ */
+ apiKey?: string
+ }
+
+export class ChatFireworks extends ChatOpenAICompletions<ChatFireworksCallOptions> {
+ static lc_name() {
+ return 'ChatFireworks'
+ }
+
+ _llmType() {
+ return 'fireworks'
+ }
+
+ get lc_secrets(): { [key: string]: string } | undefined {
+ return {
+ fireworksApiKey: 'FIREWORKS_API_KEY',
+ apiKey: 'FIREWORKS_API_KEY'
+ }
+ }
+
+ lc_serializable = true
+
+ fireworksApiKey?: string
+
+ apiKey?: string
+
+ constructor(fields?: ChatFireworksParams) {
+ const fireworksApiKey = fields?.apiKey || fields?.fireworksApiKey || getEnvironmentVariable('FIREWORKS_API_KEY')
+
+ if (!fireworksApiKey) {
+ throw new Error(
+ `Fireworks API key not found. Please set the FIREWORKS_API_KEY environment variable or provide the key into "fireworksApiKey"`
+ )
+ }
+
+ super({
+ ...fields,
+ model: fields?.model || fields?.modelName || 'accounts/fireworks/models/llama-v3p1-8b-instruct',
+ apiKey: fireworksApiKey,
+ configuration: {
+ baseURL: 'https://api.fireworks.ai/inference/v1'
+ },
+ streamUsage: false
+ })
+
+ this.fireworksApiKey = fireworksApiKey
+ this.apiKey = fireworksApiKey
+ }
+
+ getLsParams(options: any): LangSmithParams {
+ const params = super.getLsParams(options)
+ params.ls_provider = 'fireworks'
+ return params
+ }
+
+ toJSON() {
+ const result = super.toJSON()
+
+ if ('kwargs' in result && typeof result.kwargs === 'object' && result.kwargs != null) {
+ delete result.kwargs.openai_api_key
+ delete result.kwargs.configuration
+ }
+
+ return result
+ }
+
+ // eslint-disable-next-line
+ async completionWithRetry(
+ request: OpenAIClient.Chat.ChatCompletionCreateParamsStreaming,
+ options?: OpenAICoreRequestOptions
+    ): Promise<AsyncIterable<OpenAIClient.Chat.Completions.ChatCompletionChunk>>
+
+ // eslint-disable-next-line
+ async completionWithRetry(
+ request: OpenAIClient.Chat.ChatCompletionCreateParamsNonStreaming,
+ options?: OpenAICoreRequestOptions
+    ): Promise<OpenAIClient.Chat.Completions.ChatCompletion>
+
+ /**
+ * Calls the Fireworks API with retry logic in case of failures.
+ * @param request The request to send to the Fireworks API.
+ * @param options Optional configuration for the API call.
+ * @returns The response from the Fireworks API.
+ */
+ // eslint-disable-next-line
+ async completionWithRetry(
+ request: OpenAIClient.Chat.ChatCompletionCreateParamsStreaming | OpenAIClient.Chat.ChatCompletionCreateParamsNonStreaming,
+ options?: OpenAICoreRequestOptions
+    ): Promise<AsyncIterable<OpenAIClient.Chat.Completions.ChatCompletionChunk> | OpenAIClient.Chat.Completions.ChatCompletion> {
+ delete request.frequency_penalty
+ delete request.presence_penalty
+ delete request.logit_bias
+ delete request.functions
+
+ if (request.stream === true) {
+ return super.completionWithRetry(request, options)
+ }
+
+ return super.completionWithRetry(request, options)
+ }
+}
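
A brief usage sketch of the vendored class defined above (assumes `FIREWORKS_API_KEY` is set or a key is passed via `apiKey`/`fireworksApiKey`):

```typescript
const fireworks = new ChatFireworks({
    modelName: 'accounts/fireworks/models/llama-v3p1-8b-instruct',
    temperature: 0.7,
    streaming: true
})
// frequency_penalty, presence_penalty, logit_bias, and functions are stripped in
// completionWithRetry before the request reaches api.fireworks.ai.
```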
diff --git a/packages/components/nodes/chatmodels/ChatGoogleGenerativeAI/ChatGoogleGenerativeAI.ts b/packages/components/nodes/chatmodels/ChatGoogleGenerativeAI/ChatGoogleGenerativeAI.ts
index 9d15abba6..d618254c0 100644
--- a/packages/components/nodes/chatmodels/ChatGoogleGenerativeAI/ChatGoogleGenerativeAI.ts
+++ b/packages/components/nodes/chatmodels/ChatGoogleGenerativeAI/ChatGoogleGenerativeAI.ts
@@ -2,10 +2,9 @@ import { HarmBlockThreshold, HarmCategory } from '@google/generative-ai'
import type { SafetySetting } from '@google/generative-ai'
import { BaseCache } from '@langchain/core/caches'
import { ICommonObject, IMultiModalOption, INode, INodeData, INodeOptionsValue, INodeParams } from '../../../src/Interface'
-import { convertMultiOptionsToStringArray, getBaseClasses, getCredentialData, getCredentialParam } from '../../../src/utils'
+import { getBaseClasses, getCredentialData, getCredentialParam } from '../../../src/utils'
import { getModels, MODEL_TYPE } from '../../../src/modelLoader'
import { ChatGoogleGenerativeAI, GoogleGenerativeAIChatInput } from './FlowiseChatGoogleGenerativeAI'
-import type FlowiseGoogleAICacheManager from '../../cache/GoogleGenerativeAIContextCache/FlowiseGoogleAICacheManager'
class GoogleGenerativeAI_ChatModels implements INode {
label: string
@@ -22,7 +21,7 @@ class GoogleGenerativeAI_ChatModels implements INode {
constructor() {
this.label = 'ChatGoogleGenerativeAI'
this.name = 'chatGoogleGenerativeAI'
- this.version = 3.0
+ this.version = 3.1
this.type = 'ChatGoogleGenerativeAI'
this.icon = 'GoogleGemini.svg'
this.category = 'Chat Models'
@@ -43,12 +42,6 @@ class GoogleGenerativeAI_ChatModels implements INode {
type: 'BaseCache',
optional: true
},
- {
- label: 'Context Cache',
- name: 'contextCache',
- type: 'GoogleAICacheManager',
- optional: true
- },
{
label: 'Model Name',
name: 'modelName',
@@ -107,62 +100,91 @@ class GoogleGenerativeAI_ChatModels implements INode {
additionalParams: true
},
{
- label: 'Harm Category',
- name: 'harmCategory',
- type: 'multiOptions',
+ label: 'Safety Settings',
+ name: 'safetySettings',
+ type: 'array',
description:
- 'Refer to official guide on how to use Harm Category',
- options: [
+ 'Safety settings for the model. Refer to the official guide on how to use Safety Settings',
+ array: [
{
- label: 'Dangerous',
- name: HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT
+ label: 'Harm Category',
+ name: 'harmCategory',
+ type: 'options',
+ options: [
+ {
+ label: 'Dangerous',
+ name: HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT,
+ description: 'Promotes, facilitates, or encourages harmful acts.'
+ },
+ {
+ label: 'Harassment',
+ name: HarmCategory.HARM_CATEGORY_HARASSMENT,
+ description: 'Negative or harmful comments targeting identity and/or protected attributes.'
+ },
+ {
+ label: 'Hate Speech',
+ name: HarmCategory.HARM_CATEGORY_HATE_SPEECH,
+ description: 'Content that is rude, disrespectful, or profane.'
+ },
+ {
+ label: 'Sexually Explicit',
+ name: HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT,
+ description: 'Contains references to sexual acts or other lewd content.'
+ },
+ {
+ label: 'Civic Integrity',
+ name: HarmCategory.HARM_CATEGORY_CIVIC_INTEGRITY,
+ description: 'Election-related queries.'
+ }
+ ]
},
{
- label: 'Harassment',
- name: HarmCategory.HARM_CATEGORY_HARASSMENT
- },
- {
- label: 'Hate Speech',
- name: HarmCategory.HARM_CATEGORY_HATE_SPEECH
- },
- {
- label: 'Sexually Explicit',
- name: HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT
+ label: 'Harm Block Threshold',
+ name: 'harmBlockThreshold',
+ type: 'options',
+ options: [
+ {
+ label: 'None',
+ name: HarmBlockThreshold.BLOCK_NONE,
+ description: 'Always show regardless of probability of unsafe content'
+ },
+ {
+ label: 'Only High',
+ name: HarmBlockThreshold.BLOCK_ONLY_HIGH,
+ description: 'Block when high probability of unsafe content'
+ },
+ {
+ label: 'Medium and Above',
+ name: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
+ description: 'Block when medium or high probability of unsafe content'
+ },
+ {
+ label: 'Low and Above',
+ name: HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
+ description: 'Block when low, medium or high probability of unsafe content'
+ },
+ {
+ label: 'Threshold Unspecified (Default Threshold)',
+ name: HarmBlockThreshold.HARM_BLOCK_THRESHOLD_UNSPECIFIED,
+ description: 'Threshold is unspecified, block using default threshold'
+ }
+ ]
}
],
optional: true,
additionalParams: true
},
{
- label: 'Harm Block Threshold',
- name: 'harmBlockThreshold',
- type: 'multiOptions',
- description:
- 'Refer to official guide on how to use Harm Block Threshold',
- options: [
- {
- label: 'Low and Above',
- name: HarmBlockThreshold.BLOCK_LOW_AND_ABOVE
- },
- {
- label: 'Medium and Above',
- name: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE
- },
- {
- label: 'None',
- name: HarmBlockThreshold.BLOCK_NONE
- },
- {
- label: 'Only High',
- name: HarmBlockThreshold.BLOCK_ONLY_HIGH
- },
- {
- label: 'Threshold Unspecified',
- name: HarmBlockThreshold.HARM_BLOCK_THRESHOLD_UNSPECIFIED
- }
- ],
+ label: 'Thinking Budget',
+ name: 'thinkingBudget',
+ type: 'number',
+ description: 'Guides the number of thinking tokens. -1 for dynamic, 0 to disable, or positive integer (Gemini 2.5 models).',
+ step: 1,
optional: true,
- additionalParams: true
+ additionalParams: true,
+ show: {
+ modelName: ['gemini-2.5-pro', 'gemini-2.5-flash', 'gemini-2.5-flash-lite']
+ }
},
{
label: 'Base URL',
@@ -201,39 +223,59 @@ class GoogleGenerativeAI_ChatModels implements INode {
const maxOutputTokens = nodeData.inputs?.maxOutputTokens as string
const topP = nodeData.inputs?.topP as string
const topK = nodeData.inputs?.topK as string
- const harmCategory = nodeData.inputs?.harmCategory as string
- const harmBlockThreshold = nodeData.inputs?.harmBlockThreshold as string
+ const _safetySettings = nodeData.inputs?.safetySettings as string
+
const cache = nodeData.inputs?.cache as BaseCache
- const contextCache = nodeData.inputs?.contextCache as FlowiseGoogleAICacheManager
const streaming = nodeData.inputs?.streaming as boolean
const baseUrl = nodeData.inputs?.baseUrl as string | undefined
+ const thinkingBudget = nodeData.inputs?.thinkingBudget as string
const allowImageUploads = nodeData.inputs?.allowImageUploads as boolean
-        const obj: Partial<GoogleGenerativeAIChatInput> = {
+ const obj: GoogleGenerativeAIChatInput = {
apiKey: apiKey,
- modelName: customModelName || modelName,
+ model: customModelName || modelName,
streaming: streaming ?? true
}
+ // this extra metadata is needed, as langchain does not show the model name in the callbacks.
+ obj.metadata = {
+ fw_model_name: customModelName || modelName
+ }
if (maxOutputTokens) obj.maxOutputTokens = parseInt(maxOutputTokens, 10)
if (topP) obj.topP = parseFloat(topP)
if (topK) obj.topK = parseFloat(topK)
if (cache) obj.cache = cache
if (temperature) obj.temperature = parseFloat(temperature)
if (baseUrl) obj.baseUrl = baseUrl
+ if (thinkingBudget) obj.thinkingBudget = parseInt(thinkingBudget, 10)
- // Safety Settings
- let harmCategories: string[] = convertMultiOptionsToStringArray(harmCategory)
- let harmBlockThresholds: string[] = convertMultiOptionsToStringArray(harmBlockThreshold)
- if (harmCategories.length != harmBlockThresholds.length)
- throw new Error(`Harm Category & Harm Block Threshold are not the same length`)
- const safetySettings: SafetySetting[] = harmCategories.map((harmCategory, index) => {
- return {
- category: harmCategory as HarmCategory,
- threshold: harmBlockThresholds[index] as HarmBlockThreshold
+ let safetySettings: SafetySetting[] = []
+ if (_safetySettings) {
+ try {
+ const parsedSafetySettings = typeof _safetySettings === 'string' ? JSON.parse(_safetySettings) : _safetySettings
+ if (Array.isArray(parsedSafetySettings)) {
+ const validSettings = parsedSafetySettings
+ .filter((setting: any) => setting.harmCategory && setting.harmBlockThreshold)
+ .map((setting: any) => ({
+ category: setting.harmCategory as HarmCategory,
+ threshold: setting.harmBlockThreshold as HarmBlockThreshold
+ }))
+
+ // Remove duplicates by keeping only the first occurrence of each harm category
+ const seenCategories = new Set()
+ safetySettings = validSettings.filter((setting) => {
+ if (seenCategories.has(setting.category)) {
+ return false
+ }
+ seenCategories.add(setting.category)
+ return true
+ })
+ }
+ } catch (error) {
+ console.warn('Failed to parse safety settings:', error)
}
- })
+ }
if (safetySettings.length > 0) obj.safetySettings = safetySettings
const multiModalOption: IMultiModalOption = {
@@ -244,7 +286,6 @@ class GoogleGenerativeAI_ChatModels implements INode {
const model = new ChatGoogleGenerativeAI(nodeData.id, obj)
model.setMultiModalOption(multiModalOption)
- if (contextCache) model.setContextCache(contextCache)
return model
}
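
The UI serializes the new array input as JSON, so the value reaching `init` looks like the sketch below (category/threshold pairs are examples). Because category and threshold now travel as a single row, they can no longer drift out of sync the way the old parallel multi-options could, and duplicate categories are dropped with the first occurrence winning:

```typescript
const safetySettingsInput = JSON.stringify([
    { harmCategory: 'HARM_CATEGORY_HARASSMENT', harmBlockThreshold: 'BLOCK_ONLY_HIGH' },
    { harmCategory: 'HARM_CATEGORY_HATE_SPEECH', harmBlockThreshold: 'BLOCK_MEDIUM_AND_ABOVE' },
    // this duplicate category is filtered out by the de-duplication pass above
    { harmCategory: 'HARM_CATEGORY_HATE_SPEECH', harmBlockThreshold: 'BLOCK_NONE' }
])
```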
diff --git a/packages/components/nodes/chatmodels/ChatGoogleGenerativeAI/FlowiseChatGoogleGenerativeAI.ts b/packages/components/nodes/chatmodels/ChatGoogleGenerativeAI/FlowiseChatGoogleGenerativeAI.ts
index 4824810eb..cdf3ac118 100644
--- a/packages/components/nodes/chatmodels/ChatGoogleGenerativeAI/FlowiseChatGoogleGenerativeAI.ts
+++ b/packages/components/nodes/chatmodels/ChatGoogleGenerativeAI/FlowiseChatGoogleGenerativeAI.ts
@@ -1,34 +1,42 @@
-import { BaseMessage, AIMessage, AIMessageChunk, isBaseMessage, ChatMessage, MessageContentComplex } from '@langchain/core/messages'
-import { CallbackManagerForLLMRun } from '@langchain/core/callbacks/manager'
-import { BaseChatModel, type BaseChatModelParams } from '@langchain/core/language_models/chat_models'
-import { ChatGeneration, ChatGenerationChunk, ChatResult } from '@langchain/core/outputs'
-import { ToolCallChunk } from '@langchain/core/messages/tool'
-import { NewTokenIndices } from '@langchain/core/callbacks/base'
import {
- EnhancedGenerateContentResponse,
- Content,
- Part,
- Tool,
GenerativeModel,
- GoogleGenerativeAI as GenerativeAI
-} from '@google/generative-ai'
-import type {
- FunctionCallPart,
- FunctionResponsePart,
- SafetySetting,
- UsageMetadata,
+ GoogleGenerativeAI as GenerativeAI,
FunctionDeclarationsTool as GoogleGenerativeAIFunctionDeclarationsTool,
- GenerateContentRequest
+ FunctionDeclaration as GenerativeAIFunctionDeclaration,
+ type FunctionDeclarationSchema as GenerativeAIFunctionDeclarationSchema,
+ GenerateContentRequest,
+ SafetySetting,
+ Part as GenerativeAIPart,
+ ModelParams,
+ RequestOptions,
+ type CachedContent,
+ Schema
} from '@google/generative-ai'
-import { ICommonObject, IMultiModalOption, IVisionChatModal } from '../../../src'
-import { StructuredToolInterface } from '@langchain/core/tools'
-import { isStructuredTool } from '@langchain/core/utils/function_calling'
-import { zodToJsonSchema } from 'zod-to-json-schema'
-import { BaseLanguageModelCallOptions } from '@langchain/core/language_models/base'
-import type FlowiseGoogleAICacheManager from '../../cache/GoogleGenerativeAIContextCache/FlowiseGoogleAICacheManager'
-
-const DEFAULT_IMAGE_MAX_TOKEN = 8192
-const DEFAULT_IMAGE_MODEL = 'gemini-1.5-flash-latest'
+import { CallbackManagerForLLMRun } from '@langchain/core/callbacks/manager'
+import { AIMessageChunk, BaseMessage, UsageMetadata } from '@langchain/core/messages'
+import { ChatGenerationChunk, ChatResult } from '@langchain/core/outputs'
+import { getEnvironmentVariable } from '@langchain/core/utils/env'
+import {
+ BaseChatModel,
+ type BaseChatModelCallOptions,
+ type LangSmithParams,
+ type BaseChatModelParams
+} from '@langchain/core/language_models/chat_models'
+import { NewTokenIndices } from '@langchain/core/callbacks/base'
+import { BaseLanguageModelInput, StructuredOutputMethodOptions } from '@langchain/core/language_models/base'
+import { Runnable, RunnablePassthrough, RunnableSequence } from '@langchain/core/runnables'
+import { InferInteropZodOutput, InteropZodType, isInteropZodSchema } from '@langchain/core/utils/types'
+import { BaseLLMOutputParser, JsonOutputParser } from '@langchain/core/output_parsers'
+import { schemaToGenerativeAIParameters, removeAdditionalProperties } from './utils/zod_to_genai_parameters.js'
+import {
+ convertBaseMessagesToContent,
+ convertResponseContentToChatGenerationChunk,
+ mapGenerateContentResultToChatResult
+} from './utils/common.js'
+import { GoogleGenerativeAIToolsOutputParser } from './utils/output_parsers.js'
+import { GoogleGenerativeAIToolType } from './utils/types.js'
+import { convertToolsToGenAI } from './utils/tools.js'
+import { IMultiModalOption, IVisionChatModal } from '../../../src'
interface TokenUsage {
completionTokens?: number
@@ -36,44 +44,549 @@ interface TokenUsage {
totalTokens?: number
}
-interface GoogleGenerativeAIChatCallOptions extends BaseLanguageModelCallOptions {
- tools?: StructuredToolInterface[] | GoogleGenerativeAIFunctionDeclarationsTool[]
+export type BaseMessageExamplePair = {
+ input: BaseMessage
+ output: BaseMessage
+}
+
+export interface GoogleGenerativeAIChatCallOptions extends BaseChatModelCallOptions {
+ tools?: GoogleGenerativeAIToolType[]
+ /**
+ * Allowed functions to call when the mode is "any".
+ * If empty, any one of the provided functions are called.
+ */
+ allowedFunctionNames?: string[]
/**
* Whether or not to include usage data, like token counts
* in the streamed response chunks.
* @default true
*/
streamUsage?: boolean
+
+ /**
+ * JSON schema to be returned by the model.
+ */
+ responseSchema?: Schema
}
+/**
+ * An interface defining the input to the ChatGoogleGenerativeAI class.
+ */
export interface GoogleGenerativeAIChatInput extends BaseChatModelParams, Pick<GoogleGenerativeAIChatCallOptions, 'streamUsage'> {
- modelName?: string
- model?: string
+ /**
+ * Model Name to use
+ *
+ * Note: The format must follow the pattern - `{model}`
+ */
+ model: string
+
+ /**
+ * Controls the randomness of the output.
+ *
+ * Values can range from [0.0,2.0], inclusive. A value closer to 2.0
+ * will produce responses that are more varied and creative, while
+ * a value closer to 0.0 will typically result in less surprising
+ * responses from the model.
+ *
+ * Note: The default value varies by model
+ */
temperature?: number
+
+ /**
+ * Maximum number of tokens to generate in the completion.
+ */
maxOutputTokens?: number
+
+ /**
+ * Top-p changes how the model selects tokens for output.
+ *
+ * Tokens are selected from most probable to least until the sum
+ * of their probabilities equals the top-p value.
+ *
+ * For example, if tokens A, B, and C have a probability of
+ * .3, .2, and .1 and the top-p value is .5, then the model will
+ * select either A or B as the next token (using temperature).
+ *
+ * Note: The default value varies by model
+ */
topP?: number
+
+ /**
+ * Top-k changes how the model selects tokens for output.
+ *
+ * A top-k of 1 means the selected token is the most probable among
+ * all tokens in the model's vocabulary (also called greedy decoding),
+ * while a top-k of 3 means that the next token is selected from
+ * among the 3 most probable tokens (using temperature).
+ *
+ * Note: The default value varies by model
+ */
topK?: number
+
+ /**
+ * The set of character sequences (up to 5) that will stop output generation.
+ * If specified, the API will stop at the first appearance of a stop
+ * sequence.
+ *
+ * Note: The stop sequence will not be included as part of the response.
+ * Note: stopSequences is only supported for Gemini models
+ */
stopSequences?: string[]
+
+ /**
+ * A list of unique `SafetySetting` instances for blocking unsafe content. The API will block
+ * any prompts and responses that fail to meet the thresholds set by these settings. If there
+ * is no `SafetySetting` for a given `SafetyCategory` provided in the list, the API will use
+ * the default safety setting for that category.
+ */
safetySettings?: SafetySetting[]
+
+ /**
+ * Google API key to use
+ */
apiKey?: string
+
+ /**
+ * Google API version to use
+ */
apiVersion?: string
+
+ /**
+ * Google API base URL to use
+ */
baseUrl?: string
+
+ /** Whether to stream the results or not */
streaming?: boolean
+
+ /**
+ * Whether or not to force the model to respond with JSON.
+ * Available for `gemini-1.5` models and later.
+ * @default false
+ */
+ json?: boolean
+
+ /**
+ * Whether or not model supports system instructions.
+ * The following models support system instructions:
+ * - All Gemini 1.5 Pro model versions
+ * - All Gemini 1.5 Flash model versions
+ * - Gemini 1.0 Pro version gemini-1.0-pro-002
+ */
+ convertSystemMessageToHumanContent?: boolean | undefined
+
+ /** Thinking budget for Gemini 2.5 thinking models. Supports -1 (dynamic), 0 (off), or positive integers. */
+ thinkingBudget?: number
}
-class LangchainChatGoogleGenerativeAI
+/**
+ * Google Generative AI chat model integration.
+ *
+ * Setup:
+ * Install `@langchain/google-genai` and set an environment variable named `GOOGLE_API_KEY`.
+ *
+ * ```bash
+ * npm install @langchain/google-genai
+ * export GOOGLE_API_KEY="your-api-key"
+ * ```
+ *
+ * ## [Constructor args](https://api.js.langchain.com/classes/langchain_google_genai.ChatGoogleGenerativeAI.html#constructor)
+ *
+ * ## [Runtime args](https://api.js.langchain.com/interfaces/langchain_google_genai.GoogleGenerativeAIChatCallOptions.html)
+ *
+ * Runtime args can be passed as the second argument to any of the base runnable methods `.invoke`. `.stream`, `.batch`, etc.
+ * They can also be passed via `.withConfig`, or the second arg in `.bindTools`, like shown in the examples below:
+ *
+ * ```typescript
+ * // When calling `.withConfig`, call options should be passed via the first argument
+ * const llmWithArgsBound = llm.withConfig({
+ * stop: ["\n"],
+ * });
+ *
+ * // When calling `.bindTools`, call options should be passed via the second argument
+ * const llmWithTools = llm.bindTools(
+ * [...],
+ * {
+ * stop: ["\n"],
+ * }
+ * );
+ * ```
+ *
+ * ## Examples
+ *
+ *
+ * Instantiate
+ *
+ * ```typescript
+ * import { ChatGoogleGenerativeAI } from '@langchain/google-genai';
+ *
+ * const llm = new ChatGoogleGenerativeAI({
+ * model: "gemini-1.5-flash",
+ * temperature: 0,
+ * maxRetries: 2,
+ * // apiKey: "...",
+ * // other params...
+ * });
+ * ```
+ *
+ *
+ *
+ *
+ *
+ * Invoking
+ *
+ * ```typescript
+ * const input = `Translate "I love programming" into French.`;
+ *
+ * // Models also accept a list of chat messages or a formatted prompt
+ * const result = await llm.invoke(input);
+ * console.log(result);
+ * ```
+ *
+ * ```txt
+ * AIMessage {
+ * "content": "There are a few ways to translate \"I love programming\" into French, depending on the level of formality and nuance you want to convey:\n\n**Formal:**\n\n* **J'aime la programmation.** (This is the most literal and formal translation.)\n\n**Informal:**\n\n* **J'adore programmer.** (This is a more enthusiastic and informal translation.)\n* **J'aime beaucoup programmer.** (This is a slightly less enthusiastic but still informal translation.)\n\n**More specific:**\n\n* **J'aime beaucoup coder.** (This specifically refers to writing code.)\n* **J'aime beaucoup dรฉvelopper des logiciels.** (This specifically refers to developing software.)\n\nThe best translation will depend on the context and your intended audience. \n",
+ * "response_metadata": {
+ * "finishReason": "STOP",
+ * "index": 0,
+ * "safetyRatings": [
+ * {
+ * "category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",
+ * "probability": "NEGLIGIBLE"
+ * },
+ * {
+ * "category": "HARM_CATEGORY_HATE_SPEECH",
+ * "probability": "NEGLIGIBLE"
+ * },
+ * {
+ * "category": "HARM_CATEGORY_HARASSMENT",
+ * "probability": "NEGLIGIBLE"
+ * },
+ * {
+ * "category": "HARM_CATEGORY_DANGEROUS_CONTENT",
+ * "probability": "NEGLIGIBLE"
+ * }
+ * ]
+ * },
+ * "usage_metadata": {
+ * "input_tokens": 10,
+ * "output_tokens": 149,
+ * "total_tokens": 159
+ * }
+ * }
+ * ```
+ *
+ *
+ *
+ *
+ *
+ * Streaming Chunks
+ *
+ * ```typescript
+ * for await (const chunk of await llm.stream(input)) {
+ * console.log(chunk);
+ * }
+ * ```
+ *
+ * ```txt
+ * AIMessageChunk {
+ * "content": "There",
+ * "response_metadata": {
+ * "index": 0
+ * }
+ * "usage_metadata": {
+ * "input_tokens": 10,
+ * "output_tokens": 1,
+ * "total_tokens": 11
+ * }
+ * }
+ * AIMessageChunk {
+ * "content": " are a few ways to translate \"I love programming\" into French, depending on",
+ * }
+ * AIMessageChunk {
+ * "content": " the level of formality and nuance you want to convey:\n\n**Formal:**\n\n",
+ * }
+ * AIMessageChunk {
+ * "content": "* **J'aime la programmation.** (This is the most literal and formal translation.)\n\n**Informal:**\n\n* **J'adore programmer.** (This",
+ * }
+ * AIMessageChunk {
+ * "content": " is a more enthusiastic and informal translation.)\n* **J'aime beaucoup programmer.** (This is a slightly less enthusiastic but still informal translation.)\n\n**More",
+ * }
+ * AIMessageChunk {
+ * "content": " specific:**\n\n* **J'aime beaucoup coder.** (This specifically refers to writing code.)\n* **J'aime beaucoup dรฉvelopper des logiciels.** (This specifically refers to developing software.)\n\nThe best translation will depend on the context and",
+ * }
+ * AIMessageChunk {
+ * "content": " your intended audience. \n",
+ * }
+ * ```
+ *
+ *
+ *
+ *
+ *
+ * Aggregate Streamed Chunks
+ *
+ * ```typescript
+ * import { AIMessageChunk } from '@langchain/core/messages';
+ * import { concat } from '@langchain/core/utils/stream';
+ *
+ * const stream = await llm.stream(input);
+ * let full: AIMessageChunk | undefined;
+ * for await (const chunk of stream) {
+ * full = !full ? chunk : concat(full, chunk);
+ * }
+ * console.log(full);
+ * ```
+ *
+ * ```txt
+ * AIMessageChunk {
+ * "content": "There are a few ways to translate \"I love programming\" into French, depending on the level of formality and nuance you want to convey:\n\n**Formal:**\n\n* **J'aime la programmation.** (This is the most literal and formal translation.)\n\n**Informal:**\n\n* **J'adore programmer.** (This is a more enthusiastic and informal translation.)\n* **J'aime beaucoup programmer.** (This is a slightly less enthusiastic but still informal translation.)\n\n**More specific:**\n\n* **J'aime beaucoup coder.** (This specifically refers to writing code.)\n* **J'aime beaucoup dรฉvelopper des logiciels.** (This specifically refers to developing software.)\n\nThe best translation will depend on the context and your intended audience. \n",
+ * "usage_metadata": {
+ * "input_tokens": 10,
+ * "output_tokens": 277,
+ * "total_tokens": 287
+ * }
+ * }
+ * ```
+ *
+ *
+ *
+ *
+ *
+ * Bind tools
+ *
+ * ```typescript
+ * import { z } from 'zod';
+ *
+ * const GetWeather = {
+ * name: "GetWeather",
+ * description: "Get the current weather in a given location",
+ * schema: z.object({
+ * location: z.string().describe("The city and state, e.g. San Francisco, CA")
+ * }),
+ * }
+ *
+ * const GetPopulation = {
+ * name: "GetPopulation",
+ * description: "Get the current population in a given location",
+ * schema: z.object({
+ * location: z.string().describe("The city and state, e.g. San Francisco, CA")
+ * }),
+ * }
+ *
+ * const llmWithTools = llm.bindTools([GetWeather, GetPopulation]);
+ * const aiMsg = await llmWithTools.invoke(
+ * "Which city is hotter today and which is bigger: LA or NY?"
+ * );
+ * console.log(aiMsg.tool_calls);
+ * ```
+ *
+ * ```txt
+ * [
+ * {
+ * name: 'GetWeather',
+ * args: { location: 'Los Angeles, CA' },
+ * type: 'tool_call'
+ * },
+ * {
+ * name: 'GetWeather',
+ * args: { location: 'New York, NY' },
+ * type: 'tool_call'
+ * },
+ * {
+ * name: 'GetPopulation',
+ * args: { location: 'Los Angeles, CA' },
+ * type: 'tool_call'
+ * },
+ * {
+ * name: 'GetPopulation',
+ * args: { location: 'New York, NY' },
+ * type: 'tool_call'
+ * }
+ * ]
+ * ```
+ *
+ *
+ *
+ *
+ *
+ * Structured Output
+ *
+ * ```typescript
+ * const Joke = z.object({
+ * setup: z.string().describe("The setup of the joke"),
+ * punchline: z.string().describe("The punchline to the joke"),
+ * rating: z.number().optional().describe("How funny the joke is, from 1 to 10")
+ * }).describe('Joke to tell user.');
+ *
+ * const structuredLlm = llm.withStructuredOutput(Joke, { name: "Joke" });
+ * const jokeResult = await structuredLlm.invoke("Tell me a joke about cats");
+ * console.log(jokeResult);
+ * ```
+ *
+ * ```txt
+ * {
+ * setup: "Why don\\'t cats play poker?",
+ * punchline: "Why don\\'t cats play poker? Because they always have an ace up their sleeve!"
+ * }
+ * ```
+ *
+ *
+ *
+ *
+ *
+ * Multimodal
+ *
+ * ```typescript
+ * import { HumanMessage } from '@langchain/core/messages';
+ *
+ * const imageUrl = "https://example.com/image.jpg";
+ * const imageData = await fetch(imageUrl).then(res => res.arrayBuffer());
+ * const base64Image = Buffer.from(imageData).toString('base64');
+ *
+ * const message = new HumanMessage({
+ * content: [
+ * { type: "text", text: "describe the weather in this image" },
+ * {
+ * type: "image_url",
+ * image_url: { url: `data:image/jpeg;base64,${base64Image}` },
+ * },
+ * ]
+ * });
+ *
+ * const imageDescriptionAiMsg = await llm.invoke([message]);
+ * console.log(imageDescriptionAiMsg.content);
+ * ```
+ *
+ * ```txt
+ * The weather in the image appears to be clear and sunny. The sky is mostly blue with a few scattered white clouds, indicating fair weather. The bright sunlight is casting shadows on the green, grassy hill, suggesting it is a pleasant day with good visibility. There are no signs of rain or stormy conditions.
+ * ```
+ *
+ * **Usage Metadata**
+ *
+ * ```typescript
+ * const aiMsgForMetadata = await llm.invoke(input);
+ * console.log(aiMsgForMetadata.usage_metadata);
+ * ```
+ *
+ * ```txt
+ * { input_tokens: 10, output_tokens: 149, total_tokens: 159 }
+ * ```
+ *
+ * **Response Metadata**
+ *
+ * ```typescript
+ * const aiMsgForResponseMetadata = await llm.invoke(input);
+ * console.log(aiMsgForResponseMetadata.response_metadata);
+ * ```
+ *
+ * ```txt
+ * {
+ * finishReason: 'STOP',
+ * index: 0,
+ * safetyRatings: [
+ * {
+ * category: 'HARM_CATEGORY_SEXUALLY_EXPLICIT',
+ * probability: 'NEGLIGIBLE'
+ * },
+ * {
+ * category: 'HARM_CATEGORY_HATE_SPEECH',
+ * probability: 'NEGLIGIBLE'
+ * },
+ * { category: 'HARM_CATEGORY_HARASSMENT', probability: 'NEGLIGIBLE' },
+ * {
+ * category: 'HARM_CATEGORY_DANGEROUS_CONTENT',
+ * probability: 'NEGLIGIBLE'
+ * }
+ * ]
+ * }
+ * ```
+ *
+ * **Document Messages**
+ *
+ * This example will show you how to pass documents such as PDFs to Google
+ * Generative AI through messages.
+ *
+ * ```typescript
+ * import fs from "node:fs/promises";
+ *
+ * const pdfPath = "/Users/my_user/Downloads/invoice.pdf";
+ * const pdfBase64 = await fs.readFile(pdfPath, "base64");
+ *
+ * const response = await llm.invoke([
+ * ["system", "Use the provided documents to answer the question"],
+ * [
+ * "user",
+ * [
+ * {
+ * type: "application/pdf", // If the `type` field includes a single slash (`/`), it will be treated as inline data.
+ * data: pdfBase64,
+ * },
+ * {
+ * type: "text",
+ * text: "Summarize the contents of this PDF",
+ * },
+ * ],
+ * ],
+ * ]);
+ *
+ * console.log(response.content);
+ * ```
+ *
+ * ```txt
+ * This is a billing invoice from Twitter Developers for X API Basic Access. The transaction date is January 7, 2025,
+ * and the amount is $194.34, which has been paid. The subscription period is from January 7, 2025 21:02 to February 7, 2025 00:00 (UTC).
+ * The tax is $0.00, with a tax rate of 0%. The total amount is $194.34. The payment was made using a Visa card ending in 7022,
+ * expiring in 12/2026. The billing address is Brace Sproul, 1234 Main Street, San Francisco, CA, US 94103. The company being billed is
+ * X Corp, located at 865 FM 1209 Building 2, Bastrop, TX, US 78602. Terms and conditions apply.
+ * ```
+ *
+ */
+export class LangchainChatGoogleGenerativeAI
    extends BaseChatModel<GoogleGenerativeAIChatCallOptions, AIMessageChunk>
implements GoogleGenerativeAIChatInput
{
- modelName = 'gemini-pro'
+ static lc_name() {
+ return 'ChatGoogleGenerativeAI'
+ }
- temperature?: number
+ lc_serializable = true
+
+ get lc_secrets(): { [key: string]: string } | undefined {
+ return {
+ apiKey: 'GOOGLE_API_KEY'
+ }
+ }
+
+ lc_namespace = ['langchain', 'chat_models', 'google_genai']
+
+ get lc_aliases() {
+ return {
+ apiKey: 'google_api_key'
+ }
+ }
+
+ model: string
+
+ temperature?: number // default value chosen based on model
maxOutputTokens?: number
- topP?: number
+ topP?: number // default value chosen based on model
- topK?: number
+ topK?: number // default value chosen based on model
stopSequences: string[] = []
@@ -81,37 +594,39 @@ class LangchainChatGoogleGenerativeAI
apiKey?: string
- baseUrl?: string
-
streaming = false
+ json?: boolean
+
streamUsage = true
+ convertSystemMessageToHumanContent: boolean | undefined
+
+ thinkingBudget?: number
+
private client: GenerativeModel
- private contextCache?: FlowiseGoogleAICacheManager
-
get _isMultimodalModel() {
- return this.modelName.includes('vision') || this.modelName.startsWith('gemini-1.5')
+ return this.model.includes('vision') || this.model.startsWith('gemini-1.5') || this.model.startsWith('gemini-2')
}
- constructor(fields?: GoogleGenerativeAIChatInput) {
- super(fields ?? {})
+ constructor(fields: GoogleGenerativeAIChatInput) {
+ super(fields)
- this.modelName = fields?.model?.replace(/^models\//, '') ?? fields?.modelName?.replace(/^models\//, '') ?? 'gemini-pro'
+ this.model = fields.model.replace(/^models\//, '')
- this.maxOutputTokens = fields?.maxOutputTokens ?? this.maxOutputTokens
+ this.maxOutputTokens = fields.maxOutputTokens ?? this.maxOutputTokens
if (this.maxOutputTokens && this.maxOutputTokens < 0) {
throw new Error('`maxOutputTokens` must be a positive integer')
}
- this.temperature = fields?.temperature ?? this.temperature
- if (this.temperature && (this.temperature < 0 || this.temperature > 1)) {
- throw new Error('`temperature` must be in the range of [0.0,1.0]')
+ this.temperature = fields.temperature ?? this.temperature
+ if (this.temperature && (this.temperature < 0 || this.temperature > 2)) {
+ throw new Error('`temperature` must be in the range of [0.0,2.0]')
}
- this.topP = fields?.topP ?? this.topP
+ this.topP = fields.topP ?? this.topP
if (this.topP && this.topP < 0) {
throw new Error('`topP` must be a positive integer')
}
@@ -120,14 +635,14 @@ class LangchainChatGoogleGenerativeAI
throw new Error('`topP` must be below 1.')
}
- this.topK = fields?.topK ?? this.topK
+ this.topK = fields.topK ?? this.topK
if (this.topK && this.topK < 0) {
throw new Error('`topK` must be a positive integer')
}
- this.stopSequences = fields?.stopSequences ?? this.stopSequences
+ this.stopSequences = fields.stopSequences ?? this.stopSequences
- this.apiKey = fields?.apiKey ?? process.env['GOOGLE_API_KEY']
+ this.apiKey = fields.apiKey ?? getEnvironmentVariable('GOOGLE_API_KEY')
if (!this.apiKey) {
throw new Error(
'Please set an API key for Google GenerativeAI ' +
@@ -137,7 +652,7 @@ class LangchainChatGoogleGenerativeAI
)
}
- this.safetySettings = fields?.safetySettings ?? this.safetySettings
+ this.safetySettings = fields.safetySettings ?? this.safetySettings
if (this.safetySettings && this.safetySettings.length > 0) {
const safetySettingsSet = new Set(this.safetySettings.map((s) => s.category))
if (safetySettingsSet.size !== this.safetySettings.length) {
@@ -145,39 +660,77 @@ class LangchainChatGoogleGenerativeAI
}
}
- this.streaming = fields?.streaming ?? this.streaming
+ this.streaming = fields.streaming ?? this.streaming
+ this.json = fields.json
+ this.thinkingBudget = fields.thinkingBudget
- this.streamUsage = fields?.streamUsage ?? this.streamUsage
-
- this.getClient()
- }
-
- async getClient(prompt?: Content[], tools?: Tool[]) {
- this.client = new GenerativeAI(this.apiKey ?? '').getGenerativeModel(
+ this.client = new GenerativeAI(this.apiKey).getGenerativeModel(
{
- model: this.modelName,
- tools,
+ model: this.model,
safetySettings: this.safetySettings as SafetySetting[],
generationConfig: {
- candidateCount: 1,
stopSequences: this.stopSequences,
maxOutputTokens: this.maxOutputTokens,
temperature: this.temperature,
topP: this.topP,
- topK: this.topK
+ topK: this.topK,
+ ...(this.json ? { responseMimeType: 'application/json' } : {})
}
},
{
- baseUrl: this.baseUrl
+ apiVersion: fields.apiVersion,
+ baseUrl: fields.baseUrl
}
)
- if (this.contextCache) {
- const cachedContent = await this.contextCache.lookup({
- contents: prompt ? [{ ...prompt[0], parts: prompt[0].parts.slice(0, 1) }] : [],
- model: this.modelName,
- tools
- })
- this.client.cachedContent = cachedContent as any
+ if (this.thinkingBudget !== undefined) {
+ ;(this.client.generationConfig as any).thinkingConfig = {
+ ...(this.thinkingBudget !== undefined ? { thinkingBudget: this.thinkingBudget } : {})
+ }
+ }
+ this.streamUsage = fields.streamUsage ?? this.streamUsage
+ }
+
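+    /**
+     * Replaces the underlying client with one bound to the given cached content,
+     * re-applying the configured thinking budget to the new client.
+     */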
+ useCachedContent(cachedContent: CachedContent, modelParams?: ModelParams, requestOptions?: RequestOptions): void {
+ if (!this.apiKey) return
+ this.client = new GenerativeAI(this.apiKey).getGenerativeModelFromCachedContent(cachedContent, modelParams, requestOptions)
+ if (this.thinkingBudget !== undefined) {
+ ;(this.client.generationConfig as any).thinkingConfig = {
+ ...(this.thinkingBudget !== undefined ? { thinkingBudget: this.thinkingBudget } : {})
+ }
+ }
+ }
+
+ get useSystemInstruction(): boolean {
+ return typeof this.convertSystemMessageToHumanContent === 'boolean'
+ ? !this.convertSystemMessageToHumanContent
+ : this.computeUseSystemInstruction
+ }
+
+ get computeUseSystemInstruction(): boolean {
+ // This works on models from April 2024 and later
+ // Vertex AI: gemini-1.5-pro and gemini-1.0-002 and later
+ // AI Studio: gemini-1.5-pro-latest
+ if (this.model === 'gemini-1.0-pro-001') {
+ return false
+ } else if (this.model.startsWith('gemini-pro-vision')) {
+ return false
+ } else if (this.model.startsWith('gemini-1.0-pro-vision')) {
+ return false
+ } else if (this.model === 'gemini-pro') {
+ // on AI Studio gemini-pro is still pointing at gemini-1.0-pro-001
+ return false
+ }
+ return true
+ }
+
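+    /** Returns LangSmith tracing parameters derived from the client's generation config. */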
+ getLsParams(options: this['ParsedCallOptions']): LangSmithParams {
+ return {
+ ls_provider: 'google_genai',
+ ls_model_name: this.model,
+ ls_model_type: 'chat',
+ ls_temperature: this.client.generationConfig.temperature,
+ ls_max_tokens: this.client.generationConfig.maxOutputTokens,
+ ls_stop: options.stop
}
}
@@ -189,86 +742,36 @@ class LangchainChatGoogleGenerativeAI
return 'googlegenerativeai'
}
-    override bindTools(tools: (StructuredToolInterface | Record<string, any>)[], kwargs?: Partial<this['ParsedCallOptions']>) {
-        //@ts-ignore
-        return this.bind({ tools: convertToGeminiTools(tools), ...kwargs })
+    override bindTools(
+        tools: GoogleGenerativeAIToolType[],
+        kwargs?: Partial<GoogleGenerativeAIChatCallOptions>
+    ): Runnable<BaseLanguageModelInput, AIMessageChunk, GoogleGenerativeAIChatCallOptions> {
+ return this.withConfig({
+ tools: convertToolsToGenAI(tools)?.tools,
+ ...kwargs
+ })
}
    invocationParams(options?: this['ParsedCallOptions']): Omit<GenerateContentRequest, 'contents'> {
- const tools = options?.tools as GoogleGenerativeAIFunctionDeclarationsTool[] | StructuredToolInterface[] | undefined
- if (Array.isArray(tools) && !tools.some((t: any) => !('lc_namespace' in t))) {
- return {
- tools: convertToGeminiTools(options?.tools as StructuredToolInterface[]) as any
- }
- }
- return {
- tools: options?.tools as GoogleGenerativeAIFunctionDeclarationsTool[] | undefined
- }
- }
+ const toolsAndConfig = options?.tools?.length
+ ? convertToolsToGenAI(options.tools, {
+ toolChoice: options.tool_choice,
+ allowedFunctionNames: options.allowedFunctionNames
+ })
+ : undefined
- convertFunctionResponse(prompts: Content[]) {
- for (let i = 0; i < prompts.length; i += 1) {
- if (prompts[i].role === 'function') {
- if (prompts[i - 1].role === 'model') {
- const toolName = prompts[i - 1].parts[0].functionCall?.name ?? ''
- prompts[i].parts = [
- {
- functionResponse: {
- name: toolName,
- response: {
- name: toolName,
- content: prompts[i].parts[0].text
- }
- }
- }
- ]
- }
- }
- }
- }
-
- setContextCache(contextCache: FlowiseGoogleAICacheManager): void {
- this.contextCache = contextCache
- }
-
- async getNumTokens(prompt: BaseMessage[]) {
- const contents = convertBaseMessagesToContent(prompt, this._isMultimodalModel)
- const { totalTokens } = await this.client.countTokens({ contents })
- return totalTokens
- }
-
- async _generateNonStreaming(
- prompt: Content[],
- options: this['ParsedCallOptions'],
- _runManager?: CallbackManagerForLLMRun
-    ): Promise<ChatResult> {
- //@ts-ignore
- const tools = options.tools ?? []
-
- this.convertFunctionResponse(prompt)
-
- if (tools.length > 0) {
- await this.getClient(prompt, tools as Tool[])
+ if (options?.responseSchema) {
+ this.client.generationConfig.responseSchema = options.responseSchema
+ this.client.generationConfig.responseMimeType = 'application/json'
} else {
- await this.getClient(prompt)
+ this.client.generationConfig.responseSchema = undefined
+ this.client.generationConfig.responseMimeType = this.json ? 'application/json' : undefined
+ }
+
+ return {
+ ...(toolsAndConfig?.tools ? { tools: toolsAndConfig.tools } : {}),
+ ...(toolsAndConfig?.toolConfig ? { toolConfig: toolsAndConfig.toolConfig } : {})
}
- const res = await this.caller.callWithOptions({ signal: options?.signal }, async () => {
- let output
- try {
- output = await this.client.generateContent({
- contents: prompt
- })
- } catch (e: any) {
- if (e.message?.includes('400 Bad Request')) {
- e.status = 400
- }
- throw e
- }
- return output
- })
- const generationResult = mapGenerateContentResultToChatResult(res.response)
- await _runManager?.handleLLMNewToken(generationResult.generations?.length ? generationResult.generations[0].text : '')
- return generationResult
}
async _generate(
@@ -276,8 +779,20 @@ class LangchainChatGoogleGenerativeAI
options: this['ParsedCallOptions'],
runManager?: CallbackManagerForLLMRun
    ): Promise<ChatResult> {
- let prompt = convertBaseMessagesToContent(messages, this._isMultimodalModel)
- prompt = checkIfEmptyContentAndSameRole(prompt)
+ const prompt = convertBaseMessagesToContent(messages, this._isMultimodalModel, this.useSystemInstruction)
+ let actualPrompt = prompt
+ if (prompt[0].role === 'system') {
+ const [systemInstruction] = prompt
+ this.client.systemInstruction = systemInstruction
+ actualPrompt = prompt.slice(1)
+ }
+
+ // Ensure actualPrompt is never empty
+ if (actualPrompt.length === 0) {
+ actualPrompt = [{ role: 'user', parts: [{ text: '...' }] }]
+ }
+
+ const parameters = this.invocationParams(options)
// Handle streaming
if (this.streaming) {
@@ -299,7 +814,34 @@ class LangchainChatGoogleGenerativeAI
return { generations, llmOutput: { estimatedTokenUsage: tokenUsage } }
}
- return this._generateNonStreaming(prompt, options, runManager)
+
+ const res = await this.completionWithRetry({
+ ...parameters,
+ contents: actualPrompt
+ })
+
+ let usageMetadata: UsageMetadata | undefined
+ if ('usageMetadata' in res.response) {
+ const genAIUsageMetadata = res.response.usageMetadata as {
+ promptTokenCount: number | undefined
+ candidatesTokenCount: number | undefined
+ totalTokenCount: number | undefined
+ }
+ usageMetadata = {
+ input_tokens: genAIUsageMetadata.promptTokenCount ?? 0,
+ output_tokens: genAIUsageMetadata.candidatesTokenCount ?? 0,
+ total_tokens: genAIUsageMetadata.totalTokenCount ?? 0
+ }
+ }
+
+ const generationResult = mapGenerateContentResultToChatResult(res.response, {
+ usageMetadata
+ })
+ // may not have generations in output if there was a refusal for safety reasons, malformed function call, etc.
+ if (generationResult.generations?.length > 0) {
+ await runManager?.handleLLMNewToken(generationResult.generations[0]?.text ?? '')
+ }
+ return generationResult
}
async *_streamResponseChunks(
@@ -307,46 +849,48 @@ class LangchainChatGoogleGenerativeAI
options: this['ParsedCallOptions'],
runManager?: CallbackManagerForLLMRun
    ): AsyncGenerator<ChatGenerationChunk> {
- let prompt = convertBaseMessagesToContent(messages, this._isMultimodalModel)
- prompt = checkIfEmptyContentAndSameRole(prompt)
+ const prompt = convertBaseMessagesToContent(messages, this._isMultimodalModel, this.useSystemInstruction)
+ let actualPrompt = prompt
+ if (prompt[0].role === 'system') {
+ const [systemInstruction] = prompt
+ this.client.systemInstruction = systemInstruction
+ actualPrompt = prompt.slice(1)
+ }
+
+ // Ensure actualPrompt is never empty
+ if (actualPrompt.length === 0) {
+ actualPrompt = [{ role: 'user', parts: [{ text: '...' }] }]
+ }
const parameters = this.invocationParams(options)
const request = {
...parameters,
- contents: prompt
+ contents: actualPrompt
}
-
- const tools = options.tools ?? []
- if (tools.length > 0) {
- await this.getClient(prompt, tools as Tool[])
- } else {
- await this.getClient(prompt)
- }
-
const stream = await this.caller.callWithOptions({ signal: options?.signal }, async () => {
const { stream } = await this.client.generateContentStream(request)
return stream
})
- let usageMetadata: UsageMetadata | ICommonObject | undefined
+ let usageMetadata: UsageMetadata | undefined
let index = 0
for await (const response of stream) {
if ('usageMetadata' in response && this.streamUsage !== false && options.streamUsage !== false) {
const genAIUsageMetadata = response.usageMetadata as {
- promptTokenCount: number
- candidatesTokenCount: number
- totalTokenCount: number
+ promptTokenCount: number | undefined
+ candidatesTokenCount: number | undefined
+ totalTokenCount: number | undefined
}
if (!usageMetadata) {
usageMetadata = {
- input_tokens: genAIUsageMetadata.promptTokenCount,
- output_tokens: genAIUsageMetadata.candidatesTokenCount,
- total_tokens: genAIUsageMetadata.totalTokenCount
+ input_tokens: genAIUsageMetadata.promptTokenCount ?? 0,
+ output_tokens: genAIUsageMetadata.candidatesTokenCount ?? 0,
+ total_tokens: genAIUsageMetadata.totalTokenCount ?? 0
}
} else {
// Under the hood, LangChain combines the prompt tokens. Google returns the updated
// total each time, so we need to find the difference between the tokens.
- const outputTokenDiff = genAIUsageMetadata.candidatesTokenCount - (usageMetadata as ICommonObject).output_tokens
+ const outputTokenDiff = (genAIUsageMetadata.candidatesTokenCount ?? 0) - usageMetadata.output_tokens
usageMetadata = {
input_tokens: 0,
output_tokens: outputTokenDiff,
@@ -356,7 +900,7 @@ class LangchainChatGoogleGenerativeAI
}
const chunk = convertResponseContentToChatGenerationChunk(response, {
- usageMetadata: usageMetadata as UsageMetadata,
+ usageMetadata,
index
})
index += 1
@@ -368,6 +912,132 @@ class LangchainChatGoogleGenerativeAI
await runManager?.handleLLMNewToken(chunk.text ?? '')
}
}
+
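+    /** Sends a `generateContent` request through the caller so it inherits retry and abort-signal handling. */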
+ async completionWithRetry(
+ request: string | GenerateContentRequest | (string | GenerativeAIPart)[],
+ options?: this['ParsedCallOptions']
+ ) {
+ return this.caller.callWithOptions({ signal: options?.signal }, async () => {
+ try {
+ return await this.client.generateContent(request)
+ } catch (e: any) {
+ // TODO: Improve error handling
+ if (e.message?.includes('400 Bad Request')) {
+ e.status = 400
+ }
+ throw e
+ }
+ })
+ }
+
+    // eslint-disable-next-line
+    withStructuredOutput<RunOutput extends Record<string, any> = Record<string, any>>(
+        outputSchema: InteropZodType<RunOutput> | Record<string, any>,
+        config?: StructuredOutputMethodOptions<false>
+    ): Runnable<BaseLanguageModelInput, RunOutput>
+
+    // eslint-disable-next-line
+    withStructuredOutput<RunOutput extends Record<string, any> = Record<string, any>>(
+        outputSchema: InteropZodType<RunOutput> | Record<string, any>,
+        config?: StructuredOutputMethodOptions<true>
+    ): Runnable<BaseLanguageModelInput, { raw: BaseMessage; parsed: RunOutput }>
+
+    // eslint-disable-next-line
+    withStructuredOutput<RunOutput extends Record<string, any> = Record<string, any>>(
+        outputSchema: InteropZodType<RunOutput> | Record<string, any>,
+        config?: StructuredOutputMethodOptions<boolean>
+    ): Runnable<BaseLanguageModelInput, RunOutput> | Runnable<BaseLanguageModelInput, { raw: BaseMessage; parsed: RunOutput }> {
+        const schema: InteropZodType<RunOutput> | Record<string, any> = outputSchema
+ const name = config?.name
+ const method = config?.method
+ const includeRaw = config?.includeRaw
+ if (method === 'jsonMode') {
+ throw new Error(`ChatGoogleGenerativeAI only supports "jsonSchema" or "functionCalling" as a method.`)
+ }
+
+ let llm
+        let outputParser: BaseLLMOutputParser<RunOutput>
+ if (method === 'functionCalling') {
+ let functionName = name ?? 'extract'
+ let tools: GoogleGenerativeAIFunctionDeclarationsTool[]
+ if (isInteropZodSchema(schema)) {
+ const jsonSchema = schemaToGenerativeAIParameters(schema)
+ tools = [
+ {
+ functionDeclarations: [
+ {
+ name: functionName,
+ description: jsonSchema.description ?? 'A function available to call.',
+ parameters: jsonSchema as GenerativeAIFunctionDeclarationSchema
+ }
+ ]
+ }
+ ]
+            outputParser = new GoogleGenerativeAIToolsOutputParser<z.infer<typeof schema>>({
+ returnSingle: true,
+ keyName: functionName,
+ zodSchema: schema
+ })
+ } else {
+ let geminiFunctionDefinition: GenerativeAIFunctionDeclaration
+ if (typeof schema.name === 'string' && typeof schema.parameters === 'object' && schema.parameters != null) {
+ geminiFunctionDefinition = schema as GenerativeAIFunctionDeclaration
+ geminiFunctionDefinition.parameters = removeAdditionalProperties(
+ schema.parameters
+ ) as GenerativeAIFunctionDeclarationSchema
+ functionName = schema.name
+ } else {
+ geminiFunctionDefinition = {
+ name: functionName,
+ description: schema.description ?? '',
+ parameters: removeAdditionalProperties(schema) as GenerativeAIFunctionDeclarationSchema
+ }
+ }
+ tools = [
+ {
+ functionDeclarations: [geminiFunctionDefinition]
+ }
+ ]
+ outputParser = new GoogleGenerativeAIToolsOutputParser({
+ returnSingle: true,
+ keyName: functionName
+ })
+ }
+ llm = this.bindTools(tools).withConfig({
+ allowedFunctionNames: [functionName]
+ })
+ } else {
+ const jsonSchema = schemaToGenerativeAIParameters(schema)
+ llm = this.withConfig({
+ responseSchema: jsonSchema as Schema
+ })
+ outputParser = new JsonOutputParser()
+ }
+
+ if (!includeRaw) {
+ return llm.pipe(outputParser).withConfig({
+ runName: 'ChatGoogleGenerativeAIStructuredOutput'
+            }) as Runnable<BaseLanguageModelInput, RunOutput>
+ }
+
+ const parserAssign = RunnablePassthrough.assign({
+ parsed: (input: any, config) => outputParser.invoke(input.raw, config)
+ })
+ const parserNone = RunnablePassthrough.assign({
+ parsed: () => null
+ })
+ const parsedWithFallback = parserAssign.withFallbacks({
+ fallbacks: [parserNone]
+ })
+ return RunnableSequence.from([
+ {
+ raw: llm
+ },
+ parsedWithFallback
+ ]).withConfig({
+ runName: 'StructuredOutputRunnable'
+ })
+ }
}
export class ChatGoogleGenerativeAI extends LangchainChatGoogleGenerativeAI implements IVisionChatModal {
@@ -376,15 +1046,15 @@ export class ChatGoogleGenerativeAI extends LangchainChatGoogleGenerativeAI impl
multiModalOption: IMultiModalOption
id: string
- constructor(id: string, fields?: GoogleGenerativeAIChatInput) {
+ constructor(id: string, fields: GoogleGenerativeAIChatInput) {
super(fields)
this.id = id
- this.configuredModel = fields?.modelName ?? ''
+ this.configuredModel = fields?.model ?? ''
this.configuredMaxToken = fields?.maxOutputTokens
}
revertToOriginalModel(): void {
- this.modelName = this.configuredModel
+ this.model = this.configuredModel
this.maxOutputTokens = this.configuredMaxToken
}
@@ -393,346 +1063,6 @@ export class ChatGoogleGenerativeAI extends LangchainChatGoogleGenerativeAI impl
}
setVisionModel(): void {
- if (this.modelName === 'gemini-1.0-pro-latest') {
- this.modelName = DEFAULT_IMAGE_MODEL
- this.maxOutputTokens = this.configuredMaxToken ? this.configuredMaxToken : DEFAULT_IMAGE_MAX_TOKEN
- }
+ // pass
}
}
-
-function messageContentMedia(content: MessageContentComplex): Part {
- if ('mimeType' in content && 'data' in content) {
- return {
- inlineData: {
- mimeType: content.mimeType,
- data: content.data
- }
- }
- }
-
- throw new Error('Invalid media content')
-}
-
-function getMessageAuthor(message: BaseMessage) {
- const type = message._getType()
- if (ChatMessage.isInstance(message)) {
- return message.role
- }
- return message.name ?? type
-}
-
-function convertAuthorToRole(author: string) {
- switch (author.toLowerCase()) {
- case 'ai':
- case 'assistant':
- case 'model':
- return 'model'
- case 'function':
- case 'tool':
- return 'function'
- case 'system':
- case 'human':
- default:
- return 'user'
- }
-}
-
-function convertMessageContentToParts(message: BaseMessage, isMultimodalModel: boolean): Part[] {
- if (typeof message.content === 'string' && message.content !== '') {
- return [{ text: message.content }]
- }
-
- let functionCalls: FunctionCallPart[] = []
- let functionResponses: FunctionResponsePart[] = []
- let messageParts: Part[] = []
-
- if ('tool_calls' in message && Array.isArray(message.tool_calls) && message.tool_calls.length > 0) {
- functionCalls = message.tool_calls.map((tc) => ({
- functionCall: {
- name: tc.name,
- args: tc.args
- }
- }))
- } else if (message._getType() === 'tool' && message.name && message.content) {
- functionResponses = [
- {
- functionResponse: {
- name: message.name,
- response: message.content
- }
- }
- ]
- } else if (Array.isArray(message.content)) {
- messageParts = message.content.map((c) => {
- if (c.type === 'text') {
- return {
- text: c.text
- }
- }
-
- if (c.type === 'image_url') {
- if (!isMultimodalModel) {
- throw new Error(`This model does not support images`)
- }
- let source
- if (typeof c.image_url === 'string') {
- source = c.image_url
- } else if (typeof c.image_url === 'object' && 'url' in c.image_url) {
- source = c.image_url.url
- } else {
- throw new Error('Please provide image as base64 encoded data URL')
- }
- const [dm, data] = source.split(',')
- if (!dm.startsWith('data:')) {
- throw new Error('Please provide image as base64 encoded data URL')
- }
-
- const [mimeType, encoding] = dm.replace(/^data:/, '').split(';')
- if (encoding !== 'base64') {
- throw new Error('Please provide image as base64 encoded data URL')
- }
-
- return {
- inlineData: {
- data,
- mimeType
- }
- }
- } else if (c.type === 'media') {
- return messageContentMedia(c)
- } else if (c.type === 'tool_use') {
- return {
- functionCall: {
- name: c.name,
- args: c.input
- }
- }
- }
- throw new Error(`Unknown content type ${(c as { type: string }).type}`)
- })
- }
-
- return [...messageParts, ...functionCalls, ...functionResponses]
-}
-
-/*
- * This is a dedicated logic for Multi Agent Supervisor to handle the case where the content is empty, and the role is the same
- */
-
-function checkIfEmptyContentAndSameRole(contents: Content[]) {
- let prevRole = ''
- const validContents: Content[] = []
-
- for (const content of contents) {
- // Skip only if completely empty
- if (!content.parts || !content.parts.length) {
- continue
- }
-
- // Ensure role is always either 'user' or 'model'
- content.role = content.role === 'model' ? 'model' : 'user'
-
- // Handle consecutive messages
- if (content.role === prevRole && validContents.length > 0) {
- // Merge with previous content if same role
- validContents[validContents.length - 1].parts.push(...content.parts)
- continue
- }
-
- validContents.push(content)
- prevRole = content.role
- }
-
- return validContents
-}
-
-function convertBaseMessagesToContent(messages: BaseMessage[], isMultimodalModel: boolean) {
- return messages.reduce<{
- content: Content[]
- mergeWithPreviousContent: boolean
- }>(
- (acc, message, index) => {
- if (!isBaseMessage(message)) {
- throw new Error('Unsupported message input')
- }
- const author = getMessageAuthor(message)
- if (author === 'system' && index !== 0) {
- throw new Error('System message should be the first one')
- }
- const role = convertAuthorToRole(author)
-
- const prevContent = acc.content[acc.content.length]
- if (!acc.mergeWithPreviousContent && prevContent && prevContent.role === role) {
- throw new Error('Google Generative AI requires alternate messages between authors')
- }
-
- const parts = convertMessageContentToParts(message, isMultimodalModel)
-
- if (acc.mergeWithPreviousContent) {
- const prevContent = acc.content[acc.content.length - 1]
- if (!prevContent) {
- throw new Error('There was a problem parsing your system message. Please try a prompt without one.')
- }
- prevContent.parts.push(...parts)
-
- return {
- mergeWithPreviousContent: false,
- content: acc.content
- }
- }
- let actualRole = role
- if (actualRole === 'function' || actualRole === 'tool') {
- // GenerativeAI API will throw an error if the role is not "user" or "model."
- actualRole = 'user'
- }
- const content: Content = {
- role: actualRole,
- parts
- }
- return {
- mergeWithPreviousContent: author === 'system',
- content: [...acc.content, content]
- }
- },
- { content: [], mergeWithPreviousContent: false }
- ).content
-}
-
-function mapGenerateContentResultToChatResult(
- response: EnhancedGenerateContentResponse,
- extra?: {
- usageMetadata: UsageMetadata | undefined
- }
-): ChatResult {
- // if rejected or error, return empty generations with reason in filters
- if (!response.candidates || response.candidates.length === 0 || !response.candidates[0]) {
- return {
- generations: [],
- llmOutput: {
- filters: response.promptFeedback
- }
- }
- }
-
- const functionCalls = response.functionCalls()
- const [candidate] = response.candidates
- const { content, ...generationInfo } = candidate
- const text = content?.parts[0]?.text ?? ''
-
- const generation: ChatGeneration = {
- text,
- message: new AIMessage({
- content: text,
- tool_calls: functionCalls,
- additional_kwargs: {
- ...generationInfo
- },
- usage_metadata: extra?.usageMetadata as any
- }),
- generationInfo
- }
-
- return {
- generations: [generation]
- }
-}
-
-function convertResponseContentToChatGenerationChunk(
- response: EnhancedGenerateContentResponse,
- extra: {
- usageMetadata?: UsageMetadata | undefined
- index: number
- }
-): ChatGenerationChunk | null {
- if (!response || !response.candidates || response.candidates.length === 0) {
- return null
- }
- const functionCalls = response.functionCalls()
- const [candidate] = response.candidates
- const { content, ...generationInfo } = candidate
- const text = content?.parts?.[0]?.text ?? ''
-
- const toolCallChunks: ToolCallChunk[] = []
- if (functionCalls) {
- toolCallChunks.push(
- ...functionCalls.map((fc) => ({
- ...fc,
- args: JSON.stringify(fc.args),
- index: extra.index
- }))
- )
- }
- return new ChatGenerationChunk({
- text,
- message: new AIMessageChunk({
- content: text,
- name: !content ? undefined : content.role,
- tool_call_chunks: toolCallChunks,
- // Each chunk can have unique "generationInfo", and merging strategy is unclear,
- // so leave blank for now.
- additional_kwargs: {},
- usage_metadata: extra.usageMetadata as any
- }),
- generationInfo
- })
-}
-
-function zodToGeminiParameters(zodObj: any) {
- // Gemini doesn't accept either the $schema or additionalProperties
- // attributes, so we need to explicitly remove them.
- const jsonSchema: any = zodToJsonSchema(zodObj)
- // eslint-disable-next-line unused-imports/no-unused-vars
- const { $schema, additionalProperties, ...rest } = jsonSchema
-
- // Ensure all properties have type specified
- if (rest.properties) {
- Object.keys(rest.properties).forEach((key) => {
- const prop = rest.properties[key]
-
- // Handle enum types
- if (prop.enum?.length) {
- rest.properties[key] = {
- type: 'string',
- format: 'enum',
- enum: prop.enum
- }
- }
- // Handle missing type
- else if (!prop.type && !prop.oneOf && !prop.anyOf && !prop.allOf) {
- // Infer type from other properties
- if (prop.minimum !== undefined || prop.maximum !== undefined) {
- prop.type = 'number'
- } else if (prop.format === 'date-time') {
- prop.type = 'string'
- } else if (prop.items) {
- prop.type = 'array'
- } else if (prop.properties) {
- prop.type = 'object'
- } else {
- // Default to string if type can't be inferred
- prop.type = 'string'
- }
- }
- })
- }
-
- return rest
-}
-
-function convertToGeminiTools(structuredTools: (StructuredToolInterface | Record<string, any>)[]) {
- return [
- {
- functionDeclarations: structuredTools.map((structuredTool) => {
- if (isStructuredTool(structuredTool)) {
- const jsonSchema = zodToGeminiParameters(structuredTool.schema)
- return {
- name: structuredTool.name,
- description: structuredTool.description,
- parameters: jsonSchema
- }
- }
- return structuredTool
- })
- }
- ]
-}
diff --git a/packages/components/nodes/chatmodels/ChatGoogleGenerativeAI/utils/common.ts b/packages/components/nodes/chatmodels/ChatGoogleGenerativeAI/utils/common.ts
new file mode 100644
index 000000000..92c5f0b5a
--- /dev/null
+++ b/packages/components/nodes/chatmodels/ChatGoogleGenerativeAI/utils/common.ts
@@ -0,0 +1,632 @@
+import {
+ EnhancedGenerateContentResponse,
+ Content,
+ Part,
+ type FunctionDeclarationsTool as GoogleGenerativeAIFunctionDeclarationsTool,
+ type FunctionDeclaration as GenerativeAIFunctionDeclaration,
+ POSSIBLE_ROLES,
+ FunctionCallPart,
+ TextPart,
+ FileDataPart,
+ InlineDataPart
+} from '@google/generative-ai'
+import {
+ AIMessage,
+ AIMessageChunk,
+ BaseMessage,
+ ChatMessage,
+ ToolMessage,
+ ToolMessageChunk,
+ MessageContent,
+ MessageContentComplex,
+ UsageMetadata,
+ isAIMessage,
+ isBaseMessage,
+ isToolMessage,
+ StandardContentBlockConverter,
+ parseBase64DataUrl,
+ convertToProviderContentBlock,
+ isDataContentBlock
+} from '@langchain/core/messages'
+import { ChatGeneration, ChatGenerationChunk, ChatResult } from '@langchain/core/outputs'
+import { isLangChainTool } from '@langchain/core/utils/function_calling'
+import { isOpenAITool } from '@langchain/core/language_models/base'
+import { ToolCallChunk } from '@langchain/core/messages/tool'
+import { v4 as uuidv4 } from 'uuid'
+import { jsonSchemaToGeminiParameters, schemaToGenerativeAIParameters } from './zod_to_genai_parameters.js'
+import { GoogleGenerativeAIToolType } from './types.js'
+
+export function getMessageAuthor(message: BaseMessage) {
+ const type = message._getType()
+ if (ChatMessage.isInstance(message)) {
+ return message.role
+ }
+ if (type === 'tool') {
+ return type
+ }
+ return message.name ?? type
+}
+
+/**
+ * !!! IMPORTANT: Must return 'user' as default instead of throwing error
+ * https://github.com/FlowiseAI/Flowise/issues/4743
+ * Maps a message type to a Google Generative AI chat author.
+ * @param message The message to map.
+ * @param model The model to use for mapping.
+ * @returns The message type mapped to a Google Generative AI chat author.
+ */
+export function convertAuthorToRole(author: string): (typeof POSSIBLE_ROLES)[number] {
+ switch (author) {
+ /**
+ * Note: Gemini currently is not supporting system messages
+ * we will convert them to human messages and merge with following
+ * */
+ case 'supervisor':
+ case 'ai':
+ case 'model': // getMessageAuthor returns message.name. code ex.: return message.name ?? type;
+ return 'model'
+ case 'system':
+ return 'system'
+ case 'human':
+ return 'user'
+ case 'tool':
+ case 'function':
+ return 'function'
+ default:
+ return 'user' // return user as default instead of throwing error
+ }
+}
+
+function messageContentMedia(content: MessageContentComplex): Part {
+ if ('mimeType' in content && 'data' in content) {
+ return {
+ inlineData: {
+ mimeType: content.mimeType,
+ data: content.data
+ }
+ }
+ }
+ if ('mimeType' in content && 'fileUri' in content) {
+ return {
+ fileData: {
+ mimeType: content.mimeType,
+ fileUri: content.fileUri
+ }
+ }
+ }
+
+ throw new Error('Invalid media content')
+}
+
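+/**
+ * Recovers the tool name for a ToolMessage by matching its `tool_call_id` against
+ * the tool calls recorded on earlier AI messages.
+ */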
+function inferToolNameFromPreviousMessages(message: ToolMessage | ToolMessageChunk, previousMessages: BaseMessage[]): string | undefined {
+ return previousMessages
+ .map((msg) => {
+ if (isAIMessage(msg)) {
+ return msg.tool_calls ?? []
+ }
+ return []
+ })
+ .flat()
+ .find((toolCall) => {
+ return toolCall.id === message.tool_call_id
+ })?.name
+}
+
+function _getStandardContentBlockConverter(isMultimodalModel: boolean) {
+ const standardContentBlockConverter: StandardContentBlockConverter<{
+ text: TextPart
+ image: FileDataPart | InlineDataPart
+ audio: FileDataPart | InlineDataPart
+ file: FileDataPart | InlineDataPart | TextPart
+ }> = {
+ providerName: 'Google Gemini',
+
+ fromStandardTextBlock(block) {
+ return {
+ text: block.text
+ }
+ },
+
+ fromStandardImageBlock(block): FileDataPart | InlineDataPart {
+ if (!isMultimodalModel) {
+ throw new Error('This model does not support images')
+ }
+ if (block.source_type === 'url') {
+ const data = parseBase64DataUrl({ dataUrl: block.url })
+ if (data) {
+ return {
+ inlineData: {
+ mimeType: data.mime_type,
+ data: data.data
+ }
+ }
+ } else {
+ return {
+ fileData: {
+ mimeType: block.mime_type ?? '',
+ fileUri: block.url
+ }
+ }
+ }
+ }
+
+ if (block.source_type === 'base64') {
+ return {
+ inlineData: {
+ mimeType: block.mime_type ?? '',
+ data: block.data
+ }
+ }
+ }
+
+ throw new Error(`Unsupported source type: ${block.source_type}`)
+ },
+
+ fromStandardAudioBlock(block): FileDataPart | InlineDataPart {
+ if (!isMultimodalModel) {
+ throw new Error('This model does not support audio')
+ }
+ if (block.source_type === 'url') {
+ const data = parseBase64DataUrl({ dataUrl: block.url })
+ if (data) {
+ return {
+ inlineData: {
+ mimeType: data.mime_type,
+ data: data.data
+ }
+ }
+ } else {
+ return {
+ fileData: {
+ mimeType: block.mime_type ?? '',
+ fileUri: block.url
+ }
+ }
+ }
+ }
+
+ if (block.source_type === 'base64') {
+ return {
+ inlineData: {
+ mimeType: block.mime_type ?? '',
+ data: block.data
+ }
+ }
+ }
+
+ throw new Error(`Unsupported source type: ${block.source_type}`)
+ },
+
+ fromStandardFileBlock(block): FileDataPart | InlineDataPart | TextPart {
+ if (!isMultimodalModel) {
+ throw new Error('This model does not support files')
+ }
+ if (block.source_type === 'text') {
+ return {
+ text: block.text
+ }
+ }
+ if (block.source_type === 'url') {
+ const data = parseBase64DataUrl({ dataUrl: block.url })
+ if (data) {
+ return {
+ inlineData: {
+ mimeType: data.mime_type,
+ data: data.data
+ }
+ }
+ } else {
+ return {
+ fileData: {
+ mimeType: block.mime_type ?? '',
+ fileUri: block.url
+ }
+ }
+ }
+ }
+
+ if (block.source_type === 'base64') {
+ return {
+ inlineData: {
+ mimeType: block.mime_type ?? '',
+ data: block.data
+ }
+ }
+ }
+ throw new Error(`Unsupported source type: ${block.source_type}`)
+ }
+ }
+ return standardContentBlockConverter
+}
+
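+/**
+ * Converts a single LangChain content block into a Gemini `Part`. Returns
+ * `undefined` for `functionCall` blocks, which are added separately from
+ * `message.tool_calls`.
+ */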
+function _convertLangChainContentToPart(content: MessageContentComplex, isMultimodalModel: boolean): Part | undefined {
+ if (isDataContentBlock(content)) {
+ return convertToProviderContentBlock(content, _getStandardContentBlockConverter(isMultimodalModel))
+ }
+
+ if (content.type === 'text') {
+ return { text: content.text }
+ } else if (content.type === 'executableCode') {
+ return { executableCode: content.executableCode }
+ } else if (content.type === 'codeExecutionResult') {
+ return { codeExecutionResult: content.codeExecutionResult }
+ } else if (content.type === 'image_url') {
+ if (!isMultimodalModel) {
+ throw new Error(`This model does not support images`)
+ }
+ let source
+ if (typeof content.image_url === 'string') {
+ source = content.image_url
+ } else if (typeof content.image_url === 'object' && 'url' in content.image_url) {
+ source = content.image_url.url
+ } else {
+ throw new Error('Please provide image as base64 encoded data URL')
+ }
+ const [dm, data] = source.split(',')
+ if (!dm.startsWith('data:')) {
+ throw new Error('Please provide image as base64 encoded data URL')
+ }
+
+ const [mimeType, encoding] = dm.replace(/^data:/, '').split(';')
+ if (encoding !== 'base64') {
+ throw new Error('Please provide image as base64 encoded data URL')
+ }
+
+ return {
+ inlineData: {
+ data,
+ mimeType
+ }
+ }
+ } else if (content.type === 'media') {
+ return messageContentMedia(content)
+ } else if (content.type === 'tool_use') {
+ return {
+ functionCall: {
+ name: content.name,
+ args: content.input
+ }
+ }
+ } else if (
+ content.type?.includes('/') &&
+ // Ensure it's a single slash.
+ content.type.split('/').length === 2 &&
+ 'data' in content &&
+ typeof content.data === 'string'
+ ) {
+ return {
+ inlineData: {
+ mimeType: content.type,
+ data: content.data
+ }
+ }
+ } else if ('functionCall' in content) {
+ // No action needed here โ function calls will be added later from message.tool_calls
+ return undefined
+ } else {
+ if ('type' in content) {
+ throw new Error(`Unknown content type ${content.type}`)
+ } else {
+ throw new Error(`Unknown content ${JSON.stringify(content)}`)
+ }
+ }
+}
+
+export function convertMessageContentToParts(message: BaseMessage, isMultimodalModel: boolean, previousMessages: BaseMessage[]): Part[] {
+ if (isToolMessage(message)) {
+ const messageName = message.name ?? inferToolNameFromPreviousMessages(message, previousMessages)
+ if (messageName === undefined) {
+ throw new Error(
+ `Google requires a tool name for each tool call response, and we could not infer a called tool name for ToolMessage "${message.id}" from your passed messages. Please populate a "name" field on that ToolMessage explicitly.`
+ )
+ }
+
+ const result = Array.isArray(message.content)
+ ? (message.content.map((c) => _convertLangChainContentToPart(c, isMultimodalModel)).filter((p) => p !== undefined) as Part[])
+ : message.content
+
+ if (message.status === 'error') {
+ return [
+ {
+ functionResponse: {
+ name: messageName,
+ // The API expects an object with an `error` field if the function call fails.
+ // `error` must be a valid object (not a string or array), so we wrap `message.content` here
+ response: { error: { details: result } }
+ }
+ }
+ ]
+ }
+
+ return [
+ {
+ functionResponse: {
+ name: messageName,
+ // again, can't have a string or array value for `response`, so we wrap it as an object here
+ response: { result }
+ }
+ }
+ ]
+ }
+
+ let functionCalls: FunctionCallPart[] = []
+ const messageParts: Part[] = []
+
+ if (typeof message.content === 'string' && message.content) {
+ messageParts.push({ text: message.content })
+ }
+
+ if (Array.isArray(message.content)) {
+ messageParts.push(
+ ...(message.content.map((c) => _convertLangChainContentToPart(c, isMultimodalModel)).filter((p) => p !== undefined) as Part[])
+ )
+ }
+
+ if (isAIMessage(message) && message.tool_calls?.length) {
+ functionCalls = message.tool_calls.map((tc) => {
+ return {
+ functionCall: {
+ name: tc.name,
+ args: tc.args
+ }
+ }
+ })
+ }
+
+ return [...messageParts, ...functionCalls]
+}
+
+export function convertBaseMessagesToContent(
+ messages: BaseMessage[],
+ isMultimodalModel: boolean,
+ convertSystemMessageToHumanContent: boolean = false
+) {
+ return messages.reduce<{
+ content: Content[]
+ mergeWithPreviousContent: boolean
+ }>(
+ (acc, message, index) => {
+ if (!isBaseMessage(message)) {
+ throw new Error('Unsupported message input')
+ }
+ const author = getMessageAuthor(message)
+ if (author === 'system' && index !== 0) {
+ throw new Error('System message should be the first one')
+ }
+ const role = convertAuthorToRole(author)
+
+ const prevContent = acc.content[acc.content.length]
+ if (!acc.mergeWithPreviousContent && prevContent && prevContent.role === role) {
+ throw new Error('Google Generative AI requires alternate messages between authors')
+ }
+
+ const parts = convertMessageContentToParts(message, isMultimodalModel, messages.slice(0, index))
+
+ if (acc.mergeWithPreviousContent) {
+ const prevContent = acc.content[acc.content.length - 1]
+ if (!prevContent) {
+ throw new Error('There was a problem parsing your system message. Please try a prompt without one.')
+ }
+ prevContent.parts.push(...parts)
+
+ return {
+ mergeWithPreviousContent: false,
+ content: acc.content
+ }
+ }
+ let actualRole = role
+ if (actualRole === 'function' || (actualRole === 'system' && !convertSystemMessageToHumanContent)) {
+ // GenerativeAI API will throw an error if the role is not "user" or "model."
+ actualRole = 'user'
+ }
+ const content: Content = {
+ role: actualRole,
+ parts
+ }
+ return {
+ mergeWithPreviousContent: author === 'system' && !convertSystemMessageToHumanContent,
+ content: [...acc.content, content]
+ }
+ },
+ { content: [], mergeWithPreviousContent: false }
+ ).content
+}
+
+export function mapGenerateContentResultToChatResult(
+ response: EnhancedGenerateContentResponse,
+ extra?: {
+ usageMetadata: UsageMetadata | undefined
+ }
+): ChatResult {
+ // if rejected or error, return empty generations with reason in filters
+ if (!response.candidates || response.candidates.length === 0 || !response.candidates[0]) {
+ return {
+ generations: [],
+ llmOutput: {
+ filters: response.promptFeedback
+ }
+ }
+ }
+
+ const functionCalls = response.functionCalls()
+ const [candidate] = response.candidates
+ const { content: candidateContent, ...generationInfo } = candidate
+ let content: MessageContent | undefined
+
+ if (Array.isArray(candidateContent?.parts) && candidateContent.parts.length === 1 && candidateContent.parts[0].text) {
+ content = candidateContent.parts[0].text
+ } else if (Array.isArray(candidateContent?.parts) && candidateContent.parts.length > 0) {
+ content = candidateContent.parts.map((p) => {
+ if ('text' in p) {
+ return {
+ type: 'text',
+ text: p.text
+ }
+ } else if ('executableCode' in p) {
+ return {
+ type: 'executableCode',
+ executableCode: p.executableCode
+ }
+ } else if ('codeExecutionResult' in p) {
+ return {
+ type: 'codeExecutionResult',
+ codeExecutionResult: p.codeExecutionResult
+ }
+ }
+ return p
+ })
+ } else {
+ // no content returned - likely due to abnormal stop reason, e.g. malformed function call
+ content = []
+ }
+
+ let text = ''
+ if (typeof content === 'string') {
+ text = content
+ } else if (Array.isArray(content) && content.length > 0) {
+ const block = content.find((b) => 'text' in b) as { text: string } | undefined
+ text = block?.text ?? text
+ }
+
+ const generation: ChatGeneration = {
+ text,
+ message: new AIMessage({
+ content: content ?? '',
+ tool_calls: functionCalls?.map((fc) => {
+ return {
+ ...fc,
+ type: 'tool_call',
+ id: 'id' in fc && typeof fc.id === 'string' ? fc.id : uuidv4()
+ }
+ }),
+ additional_kwargs: {
+ ...generationInfo
+ },
+ usage_metadata: extra?.usageMetadata
+ }),
+ generationInfo
+ }
+
+ return {
+ generations: [generation],
+ llmOutput: {
+ tokenUsage: {
+ promptTokens: extra?.usageMetadata?.input_tokens,
+ completionTokens: extra?.usageMetadata?.output_tokens,
+ totalTokens: extra?.usageMetadata?.total_tokens
+ }
+ }
+ }
+}
+
+export function convertResponseContentToChatGenerationChunk(
+ response: EnhancedGenerateContentResponse,
+ extra: {
+ usageMetadata?: UsageMetadata | undefined
+ index: number
+ }
+): ChatGenerationChunk | null {
+ if (!response.candidates || response.candidates.length === 0) {
+ return null
+ }
+ const functionCalls = response.functionCalls()
+ const [candidate] = response.candidates
+ const { content: candidateContent, ...generationInfo } = candidate
+ let content: MessageContent | undefined
+    // If every part is a text part, flatten the content to a single string; otherwise keep the structured parts.
+ if (Array.isArray(candidateContent?.parts) && candidateContent.parts.every((p) => 'text' in p)) {
+ content = candidateContent.parts.map((p) => p.text).join('')
+ } else if (Array.isArray(candidateContent?.parts)) {
+ content = candidateContent.parts.map((p) => {
+ if ('text' in p) {
+ return {
+ type: 'text',
+ text: p.text
+ }
+ } else if ('executableCode' in p) {
+ return {
+ type: 'executableCode',
+ executableCode: p.executableCode
+ }
+ } else if ('codeExecutionResult' in p) {
+ return {
+ type: 'codeExecutionResult',
+ codeExecutionResult: p.codeExecutionResult
+ }
+ }
+ return p
+ })
+ } else {
+ // no content returned - likely due to abnormal stop reason, e.g. malformed function call
+ content = []
+ }
+
+ let text = ''
+ if (content && typeof content === 'string') {
+ text = content
+ } else if (Array.isArray(content)) {
+ const block = content.find((b) => 'text' in b) as { text: string } | undefined
+ text = block?.text ?? ''
+ }
+
+ const toolCallChunks: ToolCallChunk[] = []
+ if (functionCalls) {
+ toolCallChunks.push(
+ ...functionCalls.map((fc) => ({
+ ...fc,
+ args: JSON.stringify(fc.args),
+ index: extra.index,
+ type: 'tool_call_chunk' as const,
+ id: 'id' in fc && typeof fc.id === 'string' ? fc.id : uuidv4()
+ }))
+ )
+ }
+
+ return new ChatGenerationChunk({
+ text,
+ message: new AIMessageChunk({
+ content: content || '',
+ name: !candidateContent ? undefined : candidateContent.role,
+ tool_call_chunks: toolCallChunks,
+ // Each chunk can have unique "generationInfo", and merging strategy is unclear,
+ // so leave blank for now.
+ additional_kwargs: {},
+ usage_metadata: extra.usageMetadata
+ }),
+ generationInfo
+ })
+}
+
+export function convertToGenerativeAITools(tools: GoogleGenerativeAIToolType[]): GoogleGenerativeAIFunctionDeclarationsTool[] {
+ if (tools.every((tool) => 'functionDeclarations' in tool && Array.isArray(tool.functionDeclarations))) {
+ return tools as GoogleGenerativeAIFunctionDeclarationsTool[]
+ }
+ return [
+ {
+ functionDeclarations: tools.map((tool): GenerativeAIFunctionDeclaration => {
+ if (isLangChainTool(tool)) {
+ const jsonSchema = schemaToGenerativeAIParameters(tool.schema)
+ if (jsonSchema.type === 'object' && 'properties' in jsonSchema && Object.keys(jsonSchema.properties).length === 0) {
+ return {
+ name: tool.name,
+ description: tool.description
+ }
+ }
+ return {
+ name: tool.name,
+ description: tool.description,
+ parameters: jsonSchema
+ }
+ }
+ if (isOpenAITool(tool)) {
+ return {
+ name: tool.function.name,
+ description: tool.function.description ?? `A function available to call.`,
+ parameters: jsonSchemaToGeminiParameters(tool.function.parameters)
+ }
+ }
+ return tool as unknown as GenerativeAIFunctionDeclaration
+ })
+ }
+ ]
+}
diff --git a/packages/components/nodes/chatmodels/ChatGoogleGenerativeAI/utils/output_parsers.ts b/packages/components/nodes/chatmodels/ChatGoogleGenerativeAI/utils/output_parsers.ts
new file mode 100644
index 000000000..102596aa7
--- /dev/null
+++ b/packages/components/nodes/chatmodels/ChatGoogleGenerativeAI/utils/output_parsers.ts
@@ -0,0 +1,63 @@
+import { BaseLLMOutputParser, OutputParserException } from '@langchain/core/output_parsers'
+import { ChatGeneration } from '@langchain/core/outputs'
+import { ToolCall } from '@langchain/core/messages/tool'
+import { InteropZodType, interopSafeParseAsync } from '@langchain/core/utils/types'
+import { JsonOutputKeyToolsParserParamsInterop } from '@langchain/core/output_parsers/openai_tools'
+
+interface GoogleGenerativeAIToolsOutputParserParams<T extends Record<string, any>> extends JsonOutputKeyToolsParserParamsInterop<T> {}
+
+export class GoogleGenerativeAIToolsOutputParser<T extends Record<string, any> = Record<string, any>> extends BaseLLMOutputParser<T> {
+ static lc_name() {
+ return 'GoogleGenerativeAIToolsOutputParser'
+ }
+
+ lc_namespace = ['langchain', 'google_genai', 'output_parsers']
+
+ returnId = false
+
+ /** The type of tool calls to return. */
+ keyName: string
+
+ /** Whether to return only the first tool call. */
+ returnSingle = false
+
+    zodSchema?: InteropZodType<T>
+
+    constructor(params: GoogleGenerativeAIToolsOutputParserParams<T>) {
+ super(params)
+ this.keyName = params.keyName
+ this.returnSingle = params.returnSingle ?? this.returnSingle
+ this.zodSchema = params.zodSchema
+ }
+
+    protected async _validateResult(result: unknown): Promise<T> {
+ if (this.zodSchema === undefined) {
+ return result as T
+ }
+ const zodParsedResult = await interopSafeParseAsync(this.zodSchema, result)
+ if (zodParsedResult.success) {
+ return zodParsedResult.data
+ } else {
+ throw new OutputParserException(
+ `Failed to parse. Text: "${JSON.stringify(result, null, 2)}". Error: ${JSON.stringify(zodParsedResult.error.issues)}`,
+ JSON.stringify(result, null, 2)
+ )
+ }
+ }
+
+    async parseResult(generations: ChatGeneration[]): Promise<T> {
+ const tools = generations.flatMap((generation) => {
+ const { message } = generation
+ if (!('tool_calls' in message) || !Array.isArray(message.tool_calls)) {
+ return []
+ }
+ return message.tool_calls as ToolCall[]
+ })
+ if (tools[0] === undefined) {
+ throw new Error('No parseable tool calls provided to GoogleGenerativeAIToolsOutputParser.')
+ }
+ const [tool] = tools
+ const validatedResult = await this._validateResult(tool.args)
+ return validatedResult
+ }
+}
diff --git a/packages/components/nodes/chatmodels/ChatGoogleGenerativeAI/utils/tools.ts b/packages/components/nodes/chatmodels/ChatGoogleGenerativeAI/utils/tools.ts
new file mode 100644
index 000000000..a356252e1
--- /dev/null
+++ b/packages/components/nodes/chatmodels/ChatGoogleGenerativeAI/utils/tools.ts
@@ -0,0 +1,136 @@
+import {
+ Tool as GenerativeAITool,
+ ToolConfig,
+ FunctionCallingMode,
+ FunctionDeclaration,
+ FunctionDeclarationsTool,
+ FunctionDeclarationSchema
+} from '@google/generative-ai'
+import { ToolChoice } from '@langchain/core/language_models/chat_models'
+import { StructuredToolInterface } from '@langchain/core/tools'
+import { isLangChainTool } from '@langchain/core/utils/function_calling'
+import { isOpenAITool, ToolDefinition } from '@langchain/core/language_models/base'
+import { convertToGenerativeAITools } from './common.js'
+import { GoogleGenerativeAIToolType } from './types.js'
+import { removeAdditionalProperties } from './zod_to_genai_parameters.js'
+
+export function convertToolsToGenAI(
+ tools: GoogleGenerativeAIToolType[],
+ extra?: {
+ toolChoice?: ToolChoice
+ allowedFunctionNames?: string[]
+ }
+): {
+ tools: GenerativeAITool[]
+ toolConfig?: ToolConfig
+} {
+ // Extract function declaration processing to a separate function
+ const genAITools = processTools(tools)
+
+ // Simplify tool config creation
+ const toolConfig = createToolConfig(genAITools, extra)
+
+ return { tools: genAITools, toolConfig }
+}
+
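+/**
+ * Normalizes LangChain, OpenAI, and native GenAI tools into GenerativeAITool
+ * entries, merging all function declarations into a single tool.
+ */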
+function processTools(tools: GoogleGenerativeAIToolType[]): GenerativeAITool[] {
+ let functionDeclarationTools: FunctionDeclaration[] = []
+ const genAITools: GenerativeAITool[] = []
+
+ tools.forEach((tool) => {
+ if (isLangChainTool(tool)) {
+ const [convertedTool] = convertToGenerativeAITools([tool as StructuredToolInterface])
+ if (convertedTool.functionDeclarations) {
+ functionDeclarationTools.push(...convertedTool.functionDeclarations)
+ }
+ } else if (isOpenAITool(tool)) {
+ const { functionDeclarations } = convertOpenAIToolToGenAI(tool)
+ if (functionDeclarations) {
+ functionDeclarationTools.push(...functionDeclarations)
+ } else {
+ throw new Error('Failed to convert OpenAI structured tool to GenerativeAI tool')
+ }
+ } else {
+ genAITools.push(tool as GenerativeAITool)
+ }
+ })
+
+ const genAIFunctionDeclaration = genAITools.find((t) => 'functionDeclarations' in t)
+ if (genAIFunctionDeclaration) {
+ return genAITools.map((tool) => {
+ if (functionDeclarationTools?.length > 0 && 'functionDeclarations' in tool) {
+ const newTool = {
+ functionDeclarations: [...(tool.functionDeclarations || []), ...functionDeclarationTools]
+ }
+ // Clear the functionDeclarationTools array so it is not passed again
+ functionDeclarationTools = []
+ return newTool
+ }
+ return tool
+ })
+ }
+
+ return [
+ ...genAITools,
+ ...(functionDeclarationTools.length > 0
+ ? [
+ {
+ functionDeclarations: functionDeclarationTools
+ }
+ ]
+ : [])
+ ]
+}
+
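+/** Converts an OpenAI-style tool definition into a GenAI function declaration, stripping schema attributes Gemini rejects. */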
+function convertOpenAIToolToGenAI(tool: ToolDefinition): FunctionDeclarationsTool {
+ return {
+ functionDeclarations: [
+ {
+ name: tool.function.name,
+ description: tool.function.description,
+ parameters: removeAdditionalProperties(tool.function.parameters) as FunctionDeclarationSchema
+ }
+ ]
+ }
+}
+
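+/**
+ * Derives a `ToolConfig` from the requested tool choice: 'any', 'auto', and 'none'
+ * map to their function-calling modes, while a specific tool name forces ANY mode
+ * restricted to the allowed function names.
+ */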
+function createToolConfig(
+ genAITools: GenerativeAITool[],
+ extra?: {
+ toolChoice?: ToolChoice
+ allowedFunctionNames?: string[]
+ }
+): ToolConfig | undefined {
+ if (!genAITools.length || !extra) return undefined
+
+ const { toolChoice, allowedFunctionNames } = extra
+
+    const modeMap: Record<string, FunctionCallingMode> = {
+ any: FunctionCallingMode.ANY,
+ auto: FunctionCallingMode.AUTO,
+ none: FunctionCallingMode.NONE
+ }
+
+ if (toolChoice && ['any', 'auto', 'none'].includes(toolChoice as string)) {
+ return {
+ functionCallingConfig: {
+ mode: modeMap[toolChoice as keyof typeof modeMap] ?? 'MODE_UNSPECIFIED',
+ allowedFunctionNames
+ }
+ }
+ }
+
+ if (typeof toolChoice === 'string' || allowedFunctionNames) {
+ return {
+ functionCallingConfig: {
+ mode: FunctionCallingMode.ANY,
+ allowedFunctionNames: [
+ ...(allowedFunctionNames ?? []),
+ ...(toolChoice && typeof toolChoice === 'string' ? [toolChoice] : [])
+ ]
+ }
+ }
+ }
+
+ return undefined
+}
diff --git a/packages/components/nodes/chatmodels/ChatGoogleGenerativeAI/utils/types.ts b/packages/components/nodes/chatmodels/ChatGoogleGenerativeAI/utils/types.ts
new file mode 100644
index 000000000..f784f635f
--- /dev/null
+++ b/packages/components/nodes/chatmodels/ChatGoogleGenerativeAI/utils/types.ts
@@ -0,0 +1,12 @@
+import {
+ CodeExecutionTool,
+ FunctionDeclarationsTool as GoogleGenerativeAIFunctionDeclarationsTool,
+ GoogleSearchRetrievalTool
+} from '@google/generative-ai'
+import { BindToolsInput } from '@langchain/core/language_models/chat_models'
+
+export type GoogleGenerativeAIToolType =
+ | BindToolsInput
+ | GoogleGenerativeAIFunctionDeclarationsTool
+ | CodeExecutionTool
+ | GoogleSearchRetrievalTool
diff --git a/packages/components/nodes/chatmodels/ChatGoogleGenerativeAI/utils/zod_to_genai_parameters.ts b/packages/components/nodes/chatmodels/ChatGoogleGenerativeAI/utils/zod_to_genai_parameters.ts
new file mode 100644
index 000000000..020c6359f
--- /dev/null
+++ b/packages/components/nodes/chatmodels/ChatGoogleGenerativeAI/utils/zod_to_genai_parameters.ts
@@ -0,0 +1,67 @@
+import {
+ type FunctionDeclarationSchema as GenerativeAIFunctionDeclarationSchema,
+ type SchemaType as FunctionDeclarationSchemaType
+} from '@google/generative-ai'
+import { InteropZodType, isInteropZodSchema } from '@langchain/core/utils/types'
+import { type JsonSchema7Type, toJsonSchema } from '@langchain/core/utils/json_schema'
+
+export interface GenerativeAIJsonSchema extends Record<string, unknown> {
+    properties?: Record<string, GenerativeAIJsonSchema>
+ type: FunctionDeclarationSchemaType
+}
+
+export interface GenerativeAIJsonSchemaDirty extends GenerativeAIJsonSchema {
+    properties?: Record<string, GenerativeAIJsonSchemaDirty>
+ additionalProperties?: boolean
+}
+
+export function removeAdditionalProperties(obj: Record<string, any>): GenerativeAIJsonSchema {
+ if (typeof obj === 'object' && obj !== null) {
+ const newObj = { ...obj }
+
+ if ('additionalProperties' in newObj) {
+ delete newObj.additionalProperties
+ }
+ if ('$schema' in newObj) {
+ delete newObj.$schema
+ }
+ if ('strict' in newObj) {
+ delete newObj.strict
+ }
+
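+        // Recurse into nested objects and arrays so disallowed keys are stripped at every level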
+ for (const key in newObj) {
+ if (key in newObj) {
+ if (Array.isArray(newObj[key])) {
+ newObj[key] = newObj[key].map(removeAdditionalProperties)
+ } else if (typeof newObj[key] === 'object' && newObj[key] !== null) {
+ newObj[key] = removeAdditionalProperties(newObj[key])
+ }
+ }
+ }
+
+ return newObj as GenerativeAIJsonSchema
+ }
+
+ return obj as GenerativeAIJsonSchema
+}
+
+export function schemaToGenerativeAIParameters<RunOutput extends Record<string, any> = Record<string, any>>(
+    schema: InteropZodType<RunOutput> | JsonSchema7Type
+): GenerativeAIFunctionDeclarationSchema {
+ // GenerativeAI doesn't accept either the $schema or additionalProperties
+ // attributes, so we need to explicitly remove them.
+ const jsonSchema = removeAdditionalProperties(isInteropZodSchema(schema) ? toJsonSchema(schema) : schema)
+    const { $schema, ...rest } = jsonSchema
+
+ return rest as GenerativeAIFunctionDeclarationSchema
+}
+
+export function jsonSchemaToGeminiParameters(schema: Record<string, any>): GenerativeAIFunctionDeclarationSchema {
+ // Gemini doesn't accept either the $schema or additionalProperties
+ // attributes, so we need to explicitly remove them.
+
+ const jsonSchema = removeAdditionalProperties(schema as GenerativeAIJsonSchemaDirty)
+    const { $schema, ...rest } = jsonSchema
+
+ return rest as GenerativeAIFunctionDeclarationSchema
+}
diff --git a/packages/components/nodes/chatmodels/ChatGoogleVertexAI/ChatGoogleVertexAI.ts b/packages/components/nodes/chatmodels/ChatGoogleVertexAI/ChatGoogleVertexAI.ts
index 44fed0b6a..00641da1d 100644
--- a/packages/components/nodes/chatmodels/ChatGoogleVertexAI/ChatGoogleVertexAI.ts
+++ b/packages/components/nodes/chatmodels/ChatGoogleVertexAI/ChatGoogleVertexAI.ts
@@ -1,5 +1,6 @@
import { BaseCache } from '@langchain/core/caches'
-import { ChatVertexAI as LcChatVertexAI, ChatVertexAIInput } from '@langchain/google-vertexai'
+import { ChatVertexAIInput, ChatVertexAI as LcChatVertexAI } from '@langchain/google-vertexai'
+import { buildGoogleCredentials } from '../../../src/google-utils'
import {
ICommonObject,
IMultiModalOption,
@@ -9,8 +10,8 @@ import {
INodeParams,
IVisionChatModal
} from '../../../src/Interface'
-import { getBaseClasses, getCredentialData, getCredentialParam } from '../../../src/utils'
-import { getModels, MODEL_TYPE } from '../../../src/modelLoader'
+import { getModels, getRegions, MODEL_TYPE } from '../../../src/modelLoader'
+import { getBaseClasses } from '../../../src/utils'
const DEFAULT_IMAGE_MAX_TOKEN = 8192
const DEFAULT_IMAGE_MODEL = 'gemini-1.5-flash-latest'
@@ -65,7 +66,7 @@ class GoogleVertexAI_ChatModels implements INode {
constructor() {
this.label = 'ChatGoogleVertexAI'
this.name = 'chatGoogleVertexAI'
- this.version = 5.1
+ this.version = 5.3
this.type = 'ChatGoogleVertexAI'
this.icon = 'GoogleVertex.svg'
this.category = 'Chat Models'
@@ -87,6 +88,14 @@ class GoogleVertexAI_ChatModels implements INode {
type: 'BaseCache',
optional: true
},
+ {
+ label: 'Region',
+ description: 'Region to use for the model.',
+ name: 'region',
+ type: 'asyncOptions',
+ loadMethod: 'listRegions',
+ optional: true
+ },
{
label: 'Model Name',
name: 'modelName',
@@ -151,6 +160,16 @@ class GoogleVertexAI_ChatModels implements INode {
step: 1,
optional: true,
additionalParams: true
+ },
+ {
+ label: 'Thinking Budget',
+ name: 'thinkingBudget',
+ type: 'number',
+                description: 'Number of tokens to use for the thinking process (0 to disable)',
+ step: 1,
+ placeholder: '1024',
+ optional: true,
+ additionalParams: true
}
]
}
@@ -159,31 +178,13 @@ class GoogleVertexAI_ChatModels implements INode {
loadMethods = {
        async listModels(): Promise<INodeOptionsValue[]> {
return await getModels(MODEL_TYPE.CHAT, 'chatGoogleVertexAI')
+ },
+        async listRegions(): Promise<INodeOptionsValue[]> {
+ return await getRegions(MODEL_TYPE.CHAT, 'chatGoogleVertexAI')
}
}
    async init(nodeData: INodeData, _: string, options: ICommonObject): Promise<any> {
- const credentialData = await getCredentialData(nodeData.credential ?? '', options)
- const googleApplicationCredentialFilePath = getCredentialParam('googleApplicationCredentialFilePath', credentialData, nodeData)
- const googleApplicationCredential = getCredentialParam('googleApplicationCredential', credentialData, nodeData)
- const projectID = getCredentialParam('projectID', credentialData, nodeData)
-
- const authOptions: ICommonObject = {}
- if (Object.keys(credentialData).length !== 0) {
- if (!googleApplicationCredentialFilePath && !googleApplicationCredential)
- throw new Error('Please specify your Google Application Credential')
- if (!googleApplicationCredentialFilePath && !googleApplicationCredential)
- throw new Error(
- 'Error: More than one component has been inputted. Please use only one of the following: Google Application Credential File Path or Google Credential JSON Object'
- )
- if (googleApplicationCredentialFilePath && !googleApplicationCredential)
- authOptions.keyFile = googleApplicationCredentialFilePath
- else if (!googleApplicationCredentialFilePath && googleApplicationCredential)
- authOptions.credentials = JSON.parse(googleApplicationCredential)
-
- if (projectID) authOptions.projectId = projectID
- }
-
const temperature = nodeData.inputs?.temperature as string
const modelName = nodeData.inputs?.modelName as string
const customModelName = nodeData.inputs?.customModelName as string
@@ -192,6 +193,8 @@ class GoogleVertexAI_ChatModels implements INode {
const cache = nodeData.inputs?.cache as BaseCache
const topK = nodeData.inputs?.topK as string
const streaming = nodeData.inputs?.streaming as boolean
+ const thinkingBudget = nodeData.inputs?.thinkingBudget as string
+ const region = nodeData.inputs?.region as string
const allowImageUploads = nodeData.inputs?.allowImageUploads as boolean
@@ -206,11 +209,16 @@ class GoogleVertexAI_ChatModels implements INode {
modelName: customModelName || modelName,
streaming: streaming ?? true
}
- if (Object.keys(authOptions).length !== 0) obj.authOptions = authOptions
+
+ const authOptions = await buildGoogleCredentials(nodeData, options)
+ if (authOptions && Object.keys(authOptions).length !== 0) obj.authOptions = authOptions
+
if (maxOutputTokens) obj.maxOutputTokens = parseInt(maxOutputTokens, 10)
if (topP) obj.topP = parseFloat(topP)
if (cache) obj.cache = cache
if (topK) obj.topK = parseFloat(topK)
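+        // Optional thinking budget (0 disables thinking) and region override for the Vertex endpoint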
+ if (thinkingBudget) obj.thinkingBudget = parseInt(thinkingBudget, 10)
+ if (region) obj.location = region
const model = new ChatVertexAI(nodeData.id, obj)
model.setMultiModalOption(multiModalOption)
diff --git a/packages/components/nodes/chatmodels/ChatHuggingFace/ChatHuggingFace.ts b/packages/components/nodes/chatmodels/ChatHuggingFace/ChatHuggingFace.ts
index 29d1b74e5..4cda05716 100644
--- a/packages/components/nodes/chatmodels/ChatHuggingFace/ChatHuggingFace.ts
+++ b/packages/components/nodes/chatmodels/ChatHuggingFace/ChatHuggingFace.ts
@@ -41,15 +41,17 @@ class ChatHuggingFace_ChatModels implements INode {
label: 'Model',
name: 'model',
type: 'string',
- description: 'If using own inference endpoint, leave this blank',
- placeholder: 'gpt2'
+            description:
+                'Model name (e.g., deepseek-ai/DeepSeek-V3.2-Exp:novita). If the model name includes a provider (:) or you use the router endpoint, leave Endpoint blank.',
+ placeholder: 'deepseek-ai/DeepSeek-V3.2-Exp:novita'
},
{
label: 'Endpoint',
name: 'endpoint',
type: 'string',
placeholder: 'https://xyz.eu-west-1.aws.endpoints.huggingface.cloud/gpt2',
- description: 'Using your own inference endpoint',
+ description:
+ 'Custom inference endpoint (optional). Not needed for models with providers (:) or router endpoints. Leave blank to use Inference Providers.',
optional: true
},
{
@@ -103,7 +105,7 @@ class ChatHuggingFace_ChatModels implements INode {
type: 'string',
rows: 4,
placeholder: 'AI assistant:',
- description: 'Sets the stop sequences to use. Use comma to seperate different sequences.',
+ description: 'Sets the stop sequences to use. Use comma to separate different sequences.',
optional: true,
additionalParams: true
}
@@ -124,6 +126,15 @@ class ChatHuggingFace_ChatModels implements INode {
const credentialData = await getCredentialData(nodeData.credential ?? '', options)
const huggingFaceApiKey = getCredentialParam('huggingFaceApiKey', credentialData, nodeData)
+ if (!huggingFaceApiKey) {
+ console.error('[ChatHuggingFace] API key validation failed: No API key found')
+ throw new Error('HuggingFace API key is required. Please configure it in the credential settings.')
+ }
+
+ if (!huggingFaceApiKey.startsWith('hf_')) {
+ console.warn('[ChatHuggingFace] API key format warning: Key does not start with "hf_"')
+ }
+
        const obj: Partial<HFInput> = {
model,
apiKey: huggingFaceApiKey
diff --git a/packages/components/nodes/chatmodels/ChatHuggingFace/core.ts b/packages/components/nodes/chatmodels/ChatHuggingFace/core.ts
index 2cf2de25d..522734cda 100644
--- a/packages/components/nodes/chatmodels/ChatHuggingFace/core.ts
+++ b/packages/components/nodes/chatmodels/ChatHuggingFace/core.ts
@@ -56,9 +56,9 @@ export class HuggingFaceInference extends LLM implements HFInput {
this.apiKey = fields?.apiKey ?? getEnvironmentVariable('HUGGINGFACEHUB_API_KEY')
this.endpointUrl = fields?.endpointUrl
this.includeCredentials = fields?.includeCredentials
- if (!this.apiKey) {
+ if (!this.apiKey || this.apiKey.trim() === '') {
throw new Error(
- 'Please set an API key for HuggingFace Hub in the environment variable HUGGINGFACEHUB_API_KEY or in the apiKey field of the HuggingFaceInference constructor.'
+ 'Please set an API key for HuggingFace Hub. Either configure it in the credential settings in the UI, or set the environment variable HUGGINGFACEHUB_API_KEY.'
)
}
}
@@ -68,19 +68,21 @@ export class HuggingFaceInference extends LLM implements HFInput {
}
invocationParams(options?: this['ParsedCallOptions']) {
- return {
- model: this.model,
- parameters: {
- // make it behave similar to openai, returning only the generated text
- return_full_text: false,
- temperature: this.temperature,
- max_new_tokens: this.maxTokens,
- stop: options?.stop ?? this.stopSequences,
- top_p: this.topP,
- top_k: this.topK,
- repetition_penalty: this.frequencyPenalty
- }
+ // Return parameters compatible with chatCompletion API (OpenAI-compatible format)
+ const params: any = {
+ temperature: this.temperature,
+ max_tokens: this.maxTokens,
+ stop: options?.stop ?? this.stopSequences,
+ top_p: this.topP
}
+ // Include optional parameters if they are defined
+ if (this.topK !== undefined) {
+ params.top_k = this.topK
+ }
+ if (this.frequencyPenalty !== undefined) {
+ params.frequency_penalty = this.frequencyPenalty
+ }
+ return params
}
async *_streamResponseChunks(
@@ -88,51 +90,109 @@ export class HuggingFaceInference extends LLM implements HFInput {
options: this['ParsedCallOptions'],
runManager?: CallbackManagerForLLMRun
    ): AsyncGenerator<GenerationChunk> {
- const hfi = await this._prepareHFInference()
- const stream = await this.caller.call(async () =>
- hfi.textGenerationStream({
- ...this.invocationParams(options),
- inputs: prompt
- })
- )
- for await (const chunk of stream) {
- const token = chunk.token.text
- yield new GenerationChunk({ text: token, generationInfo: chunk })
- await runManager?.handleLLMNewToken(token ?? '')
-
- // stream is done
- if (chunk.generated_text)
- yield new GenerationChunk({
- text: '',
- generationInfo: { finished: true }
+ try {
+ const client = await this._prepareHFInference()
+ const stream = await this.caller.call(async () =>
+ client.chatCompletionStream({
+ model: this.model,
+ messages: [{ role: 'user', content: prompt }],
+ ...this.invocationParams(options)
})
+ )
+ for await (const chunk of stream) {
+ const token = chunk.choices[0]?.delta?.content || ''
+ if (token) {
+ yield new GenerationChunk({ text: token, generationInfo: chunk })
+ await runManager?.handleLLMNewToken(token)
+ }
+ // stream is done when finish_reason is set
+ if (chunk.choices[0]?.finish_reason) {
+ yield new GenerationChunk({
+ text: '',
+ generationInfo: { finished: true }
+ })
+ break
+ }
+ }
+ } catch (error: any) {
+ console.error('[ChatHuggingFace] Error in _streamResponseChunks:', error)
+ // Provide more helpful error messages
+ if (error?.message?.includes('endpointUrl') || error?.message?.includes('third-party provider')) {
+ throw new Error(
+ `Cannot use custom endpoint with model "${this.model}" that includes a provider. Please leave the Endpoint field blank in the UI. Original error: ${error.message}`
+ )
+ }
+ throw error
}
}
/** @ignore */
    async _call(prompt: string, options: this['ParsedCallOptions']): Promise<string> {
- const hfi = await this._prepareHFInference()
- const args = { ...this.invocationParams(options), inputs: prompt }
- const res = await this.caller.callWithOptions({ signal: options.signal }, hfi.textGeneration.bind(hfi), args)
- return res.generated_text
+ try {
+ const client = await this._prepareHFInference()
+ // Use chatCompletion for chat models (v4 supports conversational models via Inference Providers)
+ const args = {
+ model: this.model,
+ messages: [{ role: 'user', content: prompt }],
+ ...this.invocationParams(options)
+ }
+ const res = await this.caller.callWithOptions({ signal: options.signal }, client.chatCompletion.bind(client), args)
+ const content = res.choices[0]?.message?.content || ''
+ if (!content) {
+ console.error('[ChatHuggingFace] No content in response:', JSON.stringify(res))
+ throw new Error(`No content received from HuggingFace API. Response: ${JSON.stringify(res)}`)
+ }
+ return content
+ } catch (error: any) {
+ console.error('[ChatHuggingFace] Error in _call:', error.message)
+ // Provide more helpful error messages
+ if (error?.message?.includes('endpointUrl') || error?.message?.includes('third-party provider')) {
+ throw new Error(
+ `Cannot use custom endpoint with model "${this.model}" that includes a provider. Please leave the Endpoint field blank in the UI. Original error: ${error.message}`
+ )
+ }
+ if (error?.message?.includes('Invalid username or password') || error?.message?.includes('authentication')) {
+ throw new Error(
+ `HuggingFace API authentication failed. Please verify your API key is correct and starts with "hf_". Original error: ${error.message}`
+ )
+ }
+ throw error
+ }
}
/** @ignore */
private async _prepareHFInference() {
- const { HfInference } = await HuggingFaceInference.imports()
- const hfi = new HfInference(this.apiKey, {
- includeCredentials: this.includeCredentials
- })
- return this.endpointUrl ? hfi.endpoint(this.endpointUrl) : hfi
+ if (!this.apiKey || this.apiKey.trim() === '') {
+ console.error('[ChatHuggingFace] API key validation failed: Empty or undefined')
+ throw new Error('HuggingFace API key is required. Please configure it in the credential settings.')
+ }
+
+ const { InferenceClient } = await HuggingFaceInference.imports()
+ // Use InferenceClient for chat models (works better with Inference Providers)
+ const client = new InferenceClient(this.apiKey)
+
+ // Don't override endpoint if model uses a provider (contains ':') or if endpoint is router-based
+ // When using Inference Providers, endpoint should be left blank - InferenceClient handles routing automatically
+ if (
+ this.endpointUrl &&
+ !this.model.includes(':') &&
+ !this.endpointUrl.includes('/v1/chat/completions') &&
+ !this.endpointUrl.includes('router.huggingface.co')
+ ) {
+ return client.endpoint(this.endpointUrl)
+ }
+
+ // Return client without endpoint override - InferenceClient will use Inference Providers automatically
+ return client
}
/** @ignore */
static async imports(): Promise<{
- HfInference: typeof import('@huggingface/inference').HfInference
+ InferenceClient: typeof import('@huggingface/inference').InferenceClient
}> {
try {
- const { HfInference } = await import('@huggingface/inference')
- return { HfInference }
+ const { InferenceClient } = await import('@huggingface/inference')
+ return { InferenceClient }
} catch (e) {
throw new Error('Please install huggingface as a dependency with, e.g. `pnpm install @huggingface/inference`')
}
diff --git a/packages/components/nodes/chatmodels/ChatIBMWatsonx/ChatIBMWatsonx.ts b/packages/components/nodes/chatmodels/ChatIBMWatsonx/ChatIBMWatsonx.ts
index f4655ace6..00adc75fb 100644
--- a/packages/components/nodes/chatmodels/ChatIBMWatsonx/ChatIBMWatsonx.ts
+++ b/packages/components/nodes/chatmodels/ChatIBMWatsonx/ChatIBMWatsonx.ts
@@ -161,12 +161,13 @@ class ChatIBMWatsonx_ChatModels implements INode {
watsonxAIBearerToken
}
- const obj: ChatWatsonxInput & WatsonxAuth = {
+ const obj = {
...auth,
streaming: streaming ?? true,
model: modelName,
temperature: temperature ? parseFloat(temperature) : undefined
- }
+ } as ChatWatsonxInput & WatsonxAuth
+
if (cache) obj.cache = cache
if (maxTokens) obj.maxTokens = parseInt(maxTokens, 10)
if (frequencyPenalty) obj.frequencyPenalty = parseInt(frequencyPenalty, 10)
diff --git a/packages/components/nodes/chatmodels/ChatLitellm/ChatLitellm.ts b/packages/components/nodes/chatmodels/ChatLitellm/ChatLitellm.ts
index 352f883c6..78fc40ec2 100644
--- a/packages/components/nodes/chatmodels/ChatLitellm/ChatLitellm.ts
+++ b/packages/components/nodes/chatmodels/ChatLitellm/ChatLitellm.ts
@@ -124,7 +124,10 @@ class ChatLitellm_ChatModels implements INode {
if (topP) obj.topP = parseFloat(topP)
if (timeout) obj.timeout = parseInt(timeout, 10)
if (cache) obj.cache = cache
- if (apiKey) obj.openAIApiKey = apiKey
+ if (apiKey) {
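+        // Newer @langchain/openai versions read apiKey; openAIApiKey is kept for backward compatibility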
+ obj.openAIApiKey = apiKey
+ obj.apiKey = apiKey
+ }
const model = new ChatOpenAI(obj)
diff --git a/packages/components/nodes/chatmodels/ChatLocalAI/ChatLocalAI.ts b/packages/components/nodes/chatmodels/ChatLocalAI/ChatLocalAI.ts
index 3ce0efdfa..4040adc8f 100644
--- a/packages/components/nodes/chatmodels/ChatLocalAI/ChatLocalAI.ts
+++ b/packages/components/nodes/chatmodels/ChatLocalAI/ChatLocalAI.ts
@@ -111,6 +111,7 @@ class ChatLocalAI_ChatModels implements INode {
temperature: parseFloat(temperature),
modelName,
openAIApiKey: 'sk-',
+ apiKey: 'sk-',
streaming: streaming ?? true
}
@@ -118,7 +119,10 @@ class ChatLocalAI_ChatModels implements INode {
if (topP) obj.topP = parseFloat(topP)
if (timeout) obj.timeout = parseInt(timeout, 10)
if (cache) obj.cache = cache
- if (localAIApiKey) obj.openAIApiKey = localAIApiKey
+ if (localAIApiKey) {
+ obj.openAIApiKey = localAIApiKey
+ obj.apiKey = localAIApiKey
+ }
if (basePath) obj.configuration = { baseURL: basePath }
const model = new ChatOpenAI(obj)
diff --git a/packages/components/nodes/chatmodels/ChatNvdiaNIM/ChatNvdiaNIM.ts b/packages/components/nodes/chatmodels/ChatNvdiaNIM/ChatNvdiaNIM.ts
index b4636ad3d..59f57c485 100644
--- a/packages/components/nodes/chatmodels/ChatNvdiaNIM/ChatNvdiaNIM.ts
+++ b/packages/components/nodes/chatmodels/ChatNvdiaNIM/ChatNvdiaNIM.ts
@@ -17,9 +17,9 @@ class ChatNvdiaNIM_ChatModels implements INode {
constructor() {
this.label = 'Chat NVIDIA NIM'
- this.name = 'Chat NVIDIA NIM'
+ this.name = 'chatNvidiaNIM'
this.version = 1.1
- this.type = 'Chat NVIDIA NIM'
+ this.type = 'ChatNvidiaNIM'
this.icon = 'nvdia.svg'
this.category = 'Chat Models'
this.description = 'Wrapper around NVIDIA NIM Inference API'
@@ -137,6 +137,7 @@ class ChatNvdiaNIM_ChatModels implements INode {
temperature: parseFloat(temperature),
modelName,
openAIApiKey: nvidiaNIMApiKey ?? 'sk-',
+ apiKey: nvidiaNIMApiKey ?? 'sk-',
streaming: streaming ?? true
}
diff --git a/packages/components/nodes/chatmodels/ChatOpenAI/ChatOpenAI.ts b/packages/components/nodes/chatmodels/ChatOpenAI/ChatOpenAI.ts
index 62c06d900..a7421dfda 100644
--- a/packages/components/nodes/chatmodels/ChatOpenAI/ChatOpenAI.ts
+++ b/packages/components/nodes/chatmodels/ChatOpenAI/ChatOpenAI.ts
@@ -1,10 +1,11 @@
-import { ChatOpenAI as LangchainChatOpenAI, ChatOpenAIFields, OpenAIClient } from '@langchain/openai'
+import { ChatOpenAI as LangchainChatOpenAI, ChatOpenAIFields } from '@langchain/openai'
import { BaseCache } from '@langchain/core/caches'
import { ICommonObject, IMultiModalOption, INode, INodeData, INodeOptionsValue, INodeParams } from '../../../src/Interface'
import { getBaseClasses, getCredentialData, getCredentialParam } from '../../../src/utils'
import { ChatOpenAI } from './FlowiseChatOpenAI'
import { getModels, MODEL_TYPE } from '../../../src/modelLoader'
import { HttpsProxyAgent } from 'https-proxy-agent'
+import { OpenAI as OpenAIClient } from 'openai'
class ChatOpenAI_ChatModels implements INode {
label: string
@@ -21,7 +22,7 @@ class ChatOpenAI_ChatModels implements INode {
constructor() {
this.label = 'ChatOpenAI'
this.name = 'chatOpenAI'
- this.version = 8.2
+ this.version = 8.3
this.type = 'ChatOpenAI'
this.icon = 'openai.svg'
this.category = 'Chat Models'
@@ -176,9 +177,18 @@ class ChatOpenAI_ChatModels implements INode {
allowImageUploads: true
}
},
+ {
+ label: 'Reasoning',
+            description: 'Enable reasoning options. Only applicable to reasoning models.',
+ name: 'reasoning',
+ type: 'boolean',
+ default: false,
+ optional: true,
+ additionalParams: true
+ },
{
label: 'Reasoning Effort',
- description: 'Constrains effort on reasoning for reasoning models. Only applicable for o1 and o3 models.',
+ description: 'Constrains effort on reasoning for reasoning models',
name: 'reasoningEffort',
type: 'options',
options: [
@@ -195,9 +205,34 @@ class ChatOpenAI_ChatModels implements INode {
name: 'high'
}
],
- default: 'medium',
- optional: false,
- additionalParams: true
+ additionalParams: true,
+ show: {
+ reasoning: true
+ }
+ },
+ {
+ label: 'Reasoning Summary',
+ description: `A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process`,
+ name: 'reasoningSummary',
+ type: 'options',
+ options: [
+ {
+ label: 'Auto',
+ name: 'auto'
+ },
+ {
+ label: 'Concise',
+ name: 'concise'
+ },
+ {
+ label: 'Detailed',
+ name: 'detailed'
+ }
+ ],
+ additionalParams: true,
+ show: {
+ reasoning: true
+ }
}
]
}
@@ -223,7 +258,8 @@ class ChatOpenAI_ChatModels implements INode {
const basePath = nodeData.inputs?.basepath as string
const proxyUrl = nodeData.inputs?.proxyUrl as string
const baseOptions = nodeData.inputs?.baseOptions
- const reasoningEffort = nodeData.inputs?.reasoningEffort as OpenAIClient.Chat.ChatCompletionReasoningEffort
+ const reasoningEffort = nodeData.inputs?.reasoningEffort as OpenAIClient.ReasoningEffort | null
+ const reasoningSummary = nodeData.inputs?.reasoningSummary as 'auto' | 'concise' | 'detailed' | null
const allowImageUploads = nodeData.inputs?.allowImageUploads as boolean
const imageResolution = nodeData.inputs?.imageResolution as string
@@ -240,15 +276,10 @@ class ChatOpenAI_ChatModels implements INode {
temperature: parseFloat(temperature),
modelName,
openAIApiKey,
+ apiKey: openAIApiKey,
streaming: streaming ?? true
}
- if (modelName.includes('o3') || modelName.includes('o1')) {
- delete obj.temperature
- }
- if ((modelName.includes('o1') || modelName.includes('o3')) && reasoningEffort) {
- obj.reasoningEffort = reasoningEffort
- }
if (maxTokens) obj.maxTokens = parseInt(maxTokens, 10)
if (topP) obj.topP = parseFloat(topP)
if (frequencyPenalty) obj.frequencyPenalty = parseFloat(frequencyPenalty)
@@ -261,6 +292,19 @@ class ChatOpenAI_ChatModels implements INode {
}
if (strictToolCalling) obj.supportsStrictToolCalling = strictToolCalling
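+        // Reasoning models (o1/o3/gpt-5) reject temperature and stop sequences, so drop them and build the reasoning config instead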
+ if (modelName.includes('o1') || modelName.includes('o3') || modelName.includes('gpt-5')) {
+ delete obj.temperature
+ delete obj.stop
+ const reasoning: OpenAIClient.Reasoning = {}
+ if (reasoningEffort) {
+ reasoning.effort = reasoningEffort
+ }
+ if (reasoningSummary) {
+ reasoning.summary = reasoningSummary
+ }
+ obj.reasoning = reasoning
+ }
+
let parsedBaseOptions: any | undefined = undefined
if (baseOptions) {
diff --git a/packages/components/nodes/chatmodels/ChatOpenAI/FlowiseChatOpenAI.ts b/packages/components/nodes/chatmodels/ChatOpenAI/FlowiseChatOpenAI.ts
index adb57f312..cce58c2ca 100644
--- a/packages/components/nodes/chatmodels/ChatOpenAI/FlowiseChatOpenAI.ts
+++ b/packages/components/nodes/chatmodels/ChatOpenAI/FlowiseChatOpenAI.ts
@@ -5,6 +5,7 @@ export class ChatOpenAI extends LangchainChatOpenAI implements IVisionChatModal
configuredModel: string
configuredMaxToken?: number
multiModalOption: IMultiModalOption
+    builtInTools: Record<string, any>[] = []
id: string
constructor(id: string, fields?: ChatOpenAIFields) {
@@ -15,7 +16,7 @@ export class ChatOpenAI extends LangchainChatOpenAI implements IVisionChatModal
}
revertToOriginalModel(): void {
- this.modelName = this.configuredModel
+ this.model = this.configuredModel
this.maxTokens = this.configuredMaxToken
}
@@ -26,4 +27,8 @@ export class ChatOpenAI extends LangchainChatOpenAI implements IVisionChatModal
setVisionModel(): void {
// pass
}
+
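+    // Lets other nodes register OpenAI built-in tool definitions on this model instance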
+    addBuiltInTools(builtInTool: Record<string, any>): void {
+ this.builtInTools.push(builtInTool)
+ }
}
diff --git a/packages/components/nodes/chatmodels/ChatOpenAICustom/ChatOpenAICustom.ts b/packages/components/nodes/chatmodels/ChatOpenAICustom/ChatOpenAICustom.ts
index b076b461f..9c450ba01 100644
--- a/packages/components/nodes/chatmodels/ChatOpenAICustom/ChatOpenAICustom.ts
+++ b/packages/components/nodes/chatmodels/ChatOpenAICustom/ChatOpenAICustom.ts
@@ -137,6 +137,7 @@ class ChatOpenAICustom_ChatModels implements INode {
temperature: parseFloat(temperature),
modelName,
openAIApiKey,
+ apiKey: openAIApiKey,
streaming: streaming ?? true
}
diff --git a/packages/components/nodes/chatmodels/ChatOpenRouter/ChatOpenRouter.ts b/packages/components/nodes/chatmodels/ChatOpenRouter/ChatOpenRouter.ts
index 4ac3c4f4c..4defd19ed 100644
--- a/packages/components/nodes/chatmodels/ChatOpenRouter/ChatOpenRouter.ts
+++ b/packages/components/nodes/chatmodels/ChatOpenRouter/ChatOpenRouter.ts
@@ -1,7 +1,8 @@
-import { ChatOpenAI, ChatOpenAIFields } from '@langchain/openai'
+import { ChatOpenAI as LangchainChatOpenAI, ChatOpenAIFields } from '@langchain/openai'
import { BaseCache } from '@langchain/core/caches'
-import { ICommonObject, INode, INodeData, INodeParams } from '../../../src/Interface'
+import { ICommonObject, IMultiModalOption, INode, INodeData, INodeParams } from '../../../src/Interface'
import { getBaseClasses, getCredentialData, getCredentialParam } from '../../../src/utils'
+import { ChatOpenRouter } from './FlowiseChatOpenRouter'
class ChatOpenRouter_ChatModels implements INode {
label: string
@@ -23,7 +24,7 @@ class ChatOpenRouter_ChatModels implements INode {
this.icon = 'openRouter.svg'
this.category = 'Chat Models'
this.description = 'Wrapper around Open Router Inference API'
- this.baseClasses = [this.type, ...getBaseClasses(ChatOpenAI)]
+ this.baseClasses = [this.type, ...getBaseClasses(LangchainChatOpenAI)]
this.credential = {
label: 'Connect Credential',
name: 'credential',
@@ -114,6 +115,40 @@ class ChatOpenRouter_ChatModels implements INode {
type: 'json',
optional: true,
additionalParams: true
+ },
+ {
+ label: 'Allow Image Uploads',
+ name: 'allowImageUploads',
+ type: 'boolean',
+ description:
+ 'Allow image input. Refer to the docs for more details.',
+ default: false,
+ optional: true
+ },
+ {
+ label: 'Image Resolution',
+ description: 'This parameter controls the resolution in which the model views the image.',
+ name: 'imageResolution',
+ type: 'options',
+ options: [
+ {
+ label: 'Low',
+ name: 'low'
+ },
+ {
+ label: 'High',
+ name: 'high'
+ },
+ {
+ label: 'Auto',
+ name: 'auto'
+ }
+ ],
+ default: 'low',
+ optional: false,
+ show: {
+ allowImageUploads: true
+ }
}
]
}
@@ -130,6 +165,8 @@ class ChatOpenRouter_ChatModels implements INode {
const basePath = (nodeData.inputs?.basepath as string) || 'https://openrouter.ai/api/v1'
const baseOptions = nodeData.inputs?.baseOptions
const cache = nodeData.inputs?.cache as BaseCache
+ const allowImageUploads = nodeData.inputs?.allowImageUploads as boolean
+ const imageResolution = nodeData.inputs?.imageResolution as string
const credentialData = await getCredentialData(nodeData.credential ?? '', options)
const openRouterApiKey = getCredentialParam('openRouterApiKey', credentialData, nodeData)
@@ -138,6 +175,7 @@ class ChatOpenRouter_ChatModels implements INode {
temperature: parseFloat(temperature),
modelName,
openAIApiKey: openRouterApiKey,
+ apiKey: openRouterApiKey,
streaming: streaming ?? true
}
@@ -154,7 +192,7 @@ class ChatOpenRouter_ChatModels implements INode {
try {
parsedBaseOptions = typeof baseOptions === 'object' ? baseOptions : JSON.parse(baseOptions)
} catch (exception) {
- throw new Error("Invalid JSON in the ChatCerebras's BaseOptions: " + exception)
+ throw new Error("Invalid JSON in the ChatOpenRouter's BaseOptions: " + exception)
}
}
@@ -165,7 +203,15 @@ class ChatOpenRouter_ChatModels implements INode {
}
}
- const model = new ChatOpenAI(obj)
+ const multiModalOption: IMultiModalOption = {
+ image: {
+ allowImageUploads: allowImageUploads ?? false,
+ imageResolution
+ }
+ }
+
+ const model = new ChatOpenRouter(nodeData.id, obj)
+ model.setMultiModalOption(multiModalOption)
return model
}
}
diff --git a/packages/components/nodes/chatmodels/ChatOpenRouter/FlowiseChatOpenRouter.ts b/packages/components/nodes/chatmodels/ChatOpenRouter/FlowiseChatOpenRouter.ts
new file mode 100644
index 000000000..bca0c5d16
--- /dev/null
+++ b/packages/components/nodes/chatmodels/ChatOpenRouter/FlowiseChatOpenRouter.ts
@@ -0,0 +1,29 @@
+import { ChatOpenAI as LangchainChatOpenAI, ChatOpenAIFields } from '@langchain/openai'
+import { IMultiModalOption, IVisionChatModal } from '../../../src'
+
+export class ChatOpenRouter extends LangchainChatOpenAI implements IVisionChatModal {
+ configuredModel: string
+ configuredMaxToken?: number
+ multiModalOption: IMultiModalOption
+ id: string
+
+ constructor(id: string, fields?: ChatOpenAIFields) {
+ super(fields)
+ this.id = id
+ this.configuredModel = fields?.modelName ?? ''
+ this.configuredMaxToken = fields?.maxTokens
+ }
+
+ revertToOriginalModel(): void {
+ this.model = this.configuredModel
+ this.maxTokens = this.configuredMaxToken
+ }
+
+ setMultiModalOption(multiModalOption: IMultiModalOption): void {
+ this.multiModalOption = multiModalOption
+ }
+
+ setVisionModel(): void {
+ // pass - OpenRouter models don't need model switching
+ }
+}
diff --git a/packages/components/nodes/chatmodels/ChatSambanova/ChatSambanova.ts b/packages/components/nodes/chatmodels/ChatSambanova/ChatSambanova.ts
new file mode 100644
index 000000000..a62ebfb30
--- /dev/null
+++ b/packages/components/nodes/chatmodels/ChatSambanova/ChatSambanova.ts
@@ -0,0 +1,123 @@
+import { BaseCache } from '@langchain/core/caches'
+import { ChatOpenAI, ChatOpenAIFields } from '@langchain/openai'
+import { ICommonObject, INode, INodeData, INodeParams } from '../../../src/Interface'
+import { getBaseClasses, getCredentialData, getCredentialParam } from '../../../src/utils'
+
+class ChatSambanova_ChatModels implements INode {
+ label: string
+ name: string
+ version: number
+ type: string
+ icon: string
+ category: string
+ description: string
+ baseClasses: string[]
+ credential: INodeParams
+ inputs: INodeParams[]
+
+ constructor() {
+ this.label = 'ChatSambanova'
+ this.name = 'chatSambanova'
+ this.version = 1.0
+ this.type = 'ChatSambanova'
+ this.icon = 'sambanova.png'
+ this.category = 'Chat Models'
+ this.description = 'Wrapper around Sambanova Chat Endpoints'
+ this.baseClasses = [this.type, ...getBaseClasses(ChatOpenAI)]
+ this.credential = {
+ label: 'Connect Credential',
+ name: 'credential',
+ type: 'credential',
+ credentialNames: ['sambanovaApi']
+ }
+ this.inputs = [
+ {
+ label: 'Cache',
+ name: 'cache',
+ type: 'BaseCache',
+ optional: true
+ },
+ {
+ label: 'Model',
+ name: 'modelName',
+ type: 'string',
+ default: 'Meta-Llama-3.3-70B-Instruct',
+ placeholder: 'Meta-Llama-3.3-70B-Instruct'
+ },
+ {
+ label: 'Temperature',
+ name: 'temperature',
+ type: 'number',
+ step: 0.1,
+ default: 0.9,
+ optional: true
+ },
+ {
+ label: 'Streaming',
+ name: 'streaming',
+ type: 'boolean',
+ default: true,
+ optional: true
+ },
+ {
+ label: 'BasePath',
+ name: 'basepath',
+ type: 'string',
+ optional: true,
+            default: 'https://api.sambanova.ai/v1',
+ additionalParams: true
+ },
+ {
+ label: 'BaseOptions',
+ name: 'baseOptions',
+ type: 'json',
+ optional: true,
+ additionalParams: true
+ }
+ ]
+ }
+
+    async init(nodeData: INodeData, _: string, options: ICommonObject): Promise<any> {
+ const cache = nodeData.inputs?.cache as BaseCache
+ const temperature = nodeData.inputs?.temperature as string
+ const modelName = nodeData.inputs?.modelName as string
+ const streaming = nodeData.inputs?.streaming as boolean
+ const basePath = nodeData.inputs?.basepath as string
+ const baseOptions = nodeData.inputs?.baseOptions
+
+ const credentialData = await getCredentialData(nodeData.credential ?? '', options)
+ const sambanovaApiKey = getCredentialParam('sambanovaApiKey', credentialData, nodeData)
+
+ const obj: ChatOpenAIFields = {
+ temperature: temperature ? parseFloat(temperature) : undefined,
+ model: modelName,
+ apiKey: sambanovaApiKey,
+ openAIApiKey: sambanovaApiKey,
+ streaming: streaming ?? true
+ }
+
+ if (cache) obj.cache = cache
+
+ let parsedBaseOptions: any | undefined = undefined
+
+ if (baseOptions) {
+ try {
+ parsedBaseOptions = typeof baseOptions === 'object' ? baseOptions : JSON.parse(baseOptions)
+ } catch (exception) {
+ throw new Error("Invalid JSON in the ChatSambanova's BaseOptions: " + exception)
+ }
+ }
+
+ if (basePath || parsedBaseOptions) {
+ obj.configuration = {
+ baseURL: basePath,
+ defaultHeaders: parsedBaseOptions
+ }
+ }
+
+ const model = new ChatOpenAI(obj)
+ return model
+ }
+}
+
+module.exports = { nodeClass: ChatSambanova_ChatModels }
diff --git a/packages/components/nodes/chatmodels/ChatSambanova/sambanova.png b/packages/components/nodes/chatmodels/ChatSambanova/sambanova.png
new file mode 100644
index 000000000..8bc16c5d5
Binary files /dev/null and b/packages/components/nodes/chatmodels/ChatSambanova/sambanova.png differ
diff --git a/packages/components/nodes/chatmodels/ChatXAI/ChatXAI.ts b/packages/components/nodes/chatmodels/ChatXAI/ChatXAI.ts
index a6f41e884..2495522ce 100644
--- a/packages/components/nodes/chatmodels/ChatXAI/ChatXAI.ts
+++ b/packages/components/nodes/chatmodels/ChatXAI/ChatXAI.ts
@@ -1,7 +1,8 @@
import { BaseCache } from '@langchain/core/caches'
-import { ChatXAI, ChatXAIInput } from '@langchain/xai'
-import { ICommonObject, INode, INodeData, INodeParams } from '../../../src/Interface'
+import { ChatXAIInput } from '@langchain/xai'
+import { ICommonObject, IMultiModalOption, INode, INodeData, INodeParams } from '../../../src/Interface'
import { getBaseClasses, getCredentialData, getCredentialParam } from '../../../src/utils'
+import { ChatXAI } from './FlowiseChatXAI'
class ChatXAI_ChatModels implements INode {
label: string
@@ -18,7 +19,7 @@ class ChatXAI_ChatModels implements INode {
constructor() {
this.label = 'ChatXAI'
this.name = 'chatXAI'
- this.version = 1.0
+ this.version = 2.0
this.type = 'ChatXAI'
this.icon = 'xai.png'
this.category = 'Chat Models'
@@ -74,6 +75,15 @@ class ChatXAI_ChatModels implements INode {
step: 1,
optional: true,
additionalParams: true
+ },
+ {
+ label: 'Allow Image Uploads',
+ name: 'allowImageUploads',
+ type: 'boolean',
+ description:
+ 'Allow image input. Refer to the docs for more details.',
+ default: false,
+ optional: true
}
]
}
@@ -84,6 +94,7 @@ class ChatXAI_ChatModels implements INode {
const modelName = nodeData.inputs?.modelName as string
const maxTokens = nodeData.inputs?.maxTokens as string
const streaming = nodeData.inputs?.streaming as boolean
+ const allowImageUploads = nodeData.inputs?.allowImageUploads as boolean
const credentialData = await getCredentialData(nodeData.credential ?? '', options)
const xaiApiKey = getCredentialParam('xaiApiKey', credentialData, nodeData)
@@ -97,7 +108,15 @@ class ChatXAI_ChatModels implements INode {
if (cache) obj.cache = cache
if (maxTokens) obj.maxTokens = parseInt(maxTokens, 10)
- const model = new ChatXAI(obj)
+ const multiModalOption: IMultiModalOption = {
+ image: {
+ allowImageUploads: allowImageUploads ?? false
+ }
+ }
+
+ const model = new ChatXAI(nodeData.id, obj)
+ model.setMultiModalOption(multiModalOption)
+
return model
}
}
diff --git a/packages/components/nodes/chatmodels/ChatXAI/FlowiseChatXAI.ts b/packages/components/nodes/chatmodels/ChatXAI/FlowiseChatXAI.ts
new file mode 100644
index 000000000..e315a29a9
--- /dev/null
+++ b/packages/components/nodes/chatmodels/ChatXAI/FlowiseChatXAI.ts
@@ -0,0 +1,29 @@
+import { ChatXAI as LCChatXAI, ChatXAIInput } from '@langchain/xai'
+import { IMultiModalOption, IVisionChatModal } from '../../../src'
+
+export class ChatXAI extends LCChatXAI implements IVisionChatModal {
+ configuredModel: string
+ configuredMaxToken?: number
+ multiModalOption: IMultiModalOption
+ id: string
+
+ constructor(id: string, fields?: ChatXAIInput) {
+ super(fields)
+ this.id = id
+ this.configuredModel = fields?.model ?? ''
+ this.configuredMaxToken = fields?.maxTokens
+ }
+
+ revertToOriginalModel(): void {
+ this.modelName = this.configuredModel
+ this.maxTokens = this.configuredMaxToken
+ }
+
+ setMultiModalOption(multiModalOption: IMultiModalOption): void {
+ this.multiModalOption = multiModalOption
+ }
+
+ setVisionModel(): void {
+ // pass
+ }
+}
diff --git a/packages/components/nodes/chatmodels/Deepseek/Deepseek.ts b/packages/components/nodes/chatmodels/Deepseek/Deepseek.ts
index 5f5e95563..fa92a5633 100644
--- a/packages/components/nodes/chatmodels/Deepseek/Deepseek.ts
+++ b/packages/components/nodes/chatmodels/Deepseek/Deepseek.ts
@@ -153,6 +153,7 @@ class Deepseek_ChatModels implements INode {
temperature: parseFloat(temperature),
modelName,
openAIApiKey,
+ apiKey: openAIApiKey,
streaming: streaming ?? true
}
diff --git a/packages/components/nodes/documentloaders/API/APILoader.ts b/packages/components/nodes/documentloaders/API/APILoader.ts
index 02b77f789..479ad2e94 100644
--- a/packages/components/nodes/documentloaders/API/APILoader.ts
+++ b/packages/components/nodes/documentloaders/API/APILoader.ts
@@ -1,8 +1,10 @@
-import axios, { AxiosRequestConfig } from 'axios'
-import { omit } from 'lodash'
import { Document } from '@langchain/core/documents'
-import { TextSplitter } from 'langchain/text_splitter'
+import axios, { AxiosRequestConfig } from 'axios'
+import * as https from 'https'
import { BaseDocumentLoader } from 'langchain/document_loaders/base'
+import { TextSplitter } from 'langchain/text_splitter'
+import { omit } from 'lodash'
+import { getFileFromStorage } from '../../../src'
import { ICommonObject, IDocument, INode, INodeData, INodeOutputsValue, INodeParams } from '../../../src/Interface'
import { handleEscapeCharacters } from '../../../src/utils'
@@ -21,7 +23,7 @@ class API_DocumentLoaders implements INode {
constructor() {
this.label = 'API Loader'
this.name = 'apiLoader'
- this.version = 2.0
+ this.version = 2.1
this.type = 'Document'
this.icon = 'api.svg'
this.category = 'Document Loaders'
@@ -61,6 +63,15 @@ class API_DocumentLoaders implements INode {
additionalParams: true,
optional: true
},
+ {
+ label: 'SSL Certificate',
+            description: 'Please upload an SSL certificate file in either .pem or .crt format',
+ name: 'caFile',
+ type: 'file',
+ fileType: '.pem, .crt',
+ additionalParams: true,
+ optional: true
+ },
{
label: 'Body',
name: 'body',
@@ -84,7 +95,7 @@ class API_DocumentLoaders implements INode {
type: 'string',
rows: 4,
description:
- 'Each document loader comes with a default set of metadata keys that are extracted from the document. You can use this field to omit some of the default metadata keys. The value should be a list of keys, seperated by comma. Use * to omit all metadata keys execept the ones you specify in the Additional Metadata field',
+ 'Each document loader comes with a default set of metadata keys that are extracted from the document. You can use this field to omit some of the default metadata keys. The value should be a list of keys, separated by comma. Use * to omit all metadata keys except the ones you specify in the Additional Metadata field',
placeholder: 'key1, key2, key3.nestedKey1',
optional: true,
additionalParams: true
@@ -105,8 +116,10 @@ class API_DocumentLoaders implements INode {
}
]
}
-    async init(nodeData: INodeData): Promise<any> {
+
+    async init(nodeData: INodeData, _: string, options: ICommonObject): Promise<any> {
const headers = nodeData.inputs?.headers as string
+ const caFileBase64 = nodeData.inputs?.caFile as string
const url = nodeData.inputs?.url as string
const body = nodeData.inputs?.body as string
const method = nodeData.inputs?.method as string
@@ -120,22 +133,37 @@ class API_DocumentLoaders implements INode {
omitMetadataKeys = _omitMetadataKeys.split(',').map((key) => key.trim())
}
- const options: ApiLoaderParams = {
+ const apiLoaderParam: ApiLoaderParams = {
url,
method
}
if (headers) {
const parsedHeaders = typeof headers === 'object' ? headers : JSON.parse(headers)
- options.headers = parsedHeaders
+ apiLoaderParam.headers = parsedHeaders
+ }
+
+        // caFile is optional, so only attempt to load a CA certificate when one was provided
+        if (caFileBase64) {
+            if (caFileBase64.startsWith('FILE-STORAGE::')) {
+                let file = caFileBase64.replace('FILE-STORAGE::', '')
+                file = file.replace('[', '')
+                file = file.replace(']', '')
+                const orgId = options.orgId
+                const chatflowid = options.chatflowid
+                const fileData = await getFileFromStorage(file, orgId, chatflowid)
+                apiLoaderParam.ca = fileData.toString()
+            } else {
+                // Uploaded value is a data URI ending in ",filename:<name>": drop the filename, then decode the base64 payload
+                const splitDataURI = caFileBase64.split(',')
+                splitDataURI.pop()
+                const bf = Buffer.from(splitDataURI.pop() || '', 'base64')
+                apiLoaderParam.ca = bf.toString('utf-8')
+            }
+        }
if (body) {
const parsedBody = typeof body === 'object' ? body : JSON.parse(body)
- options.body = parsedBody
+ apiLoaderParam.body = parsedBody
}
- const loader = new ApiLoader(options)
+ const loader = new ApiLoader(apiLoaderParam)
let docs: IDocument[] = []
@@ -195,6 +223,7 @@ interface ApiLoaderParams {
method: string
headers?: ICommonObject
body?: ICommonObject
+ ca?: string
}
class ApiLoader extends BaseDocumentLoader {
@@ -206,28 +235,36 @@ class ApiLoader extends BaseDocumentLoader {
public readonly method: string
- constructor({ url, headers, body, method }: ApiLoaderParams) {
+ public readonly ca?: string
+
+ constructor({ url, headers, body, method, ca }: ApiLoaderParams) {
super()
this.url = url
this.headers = headers
this.body = body
this.method = method
+ this.ca = ca
}
    public async load(): Promise<IDocument[]> {
if (this.method === 'POST') {
- return this.executePostRequest(this.url, this.headers, this.body)
+ return this.executePostRequest(this.url, this.headers, this.body, this.ca)
} else {
- return this.executeGetRequest(this.url, this.headers)
+ return this.executeGetRequest(this.url, this.headers, this.ca)
}
}
-    protected async executeGetRequest(url: string, headers?: ICommonObject): Promise<IDocument[]> {
+    protected async executeGetRequest(url: string, headers?: ICommonObject, ca?: string): Promise<IDocument[]> {
try {
const config: AxiosRequestConfig = {}
if (headers) {
config.headers = headers
}
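+            // Use a custom HTTPS agent so the user-supplied CA certificate is trusted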
+ if (ca) {
+ config.httpsAgent = new https.Agent({
+ ca: ca
+ })
+ }
const response = await axios.get(url, config)
const responseJsonString = JSON.stringify(response.data, null, 2)
const doc = new Document({
@@ -242,12 +279,17 @@ class ApiLoader extends BaseDocumentLoader {
}
}
-    protected async executePostRequest(url: string, headers?: ICommonObject, body?: ICommonObject): Promise<IDocument[]> {
+    protected async executePostRequest(url: string, headers?: ICommonObject, body?: ICommonObject, ca?: string): Promise<IDocument[]> {
try {
const config: AxiosRequestConfig = {}
if (headers) {
config.headers = headers
}
+ if (ca) {
+ config.httpsAgent = new https.Agent({
+ ca: ca
+ })
+ }
const response = await axios.post(url, body ?? {}, config)
const responseJsonString = JSON.stringify(response.data, null, 2)
const doc = new Document({
diff --git a/packages/components/nodes/documentloaders/ApifyWebsiteContentCrawler/apify-symbol-transparent.svg b/packages/components/nodes/documentloaders/ApifyWebsiteContentCrawler/apify-symbol-transparent.svg
index 457caaaaa..c8894c844 100644
--- a/packages/components/nodes/documentloaders/ApifyWebsiteContentCrawler/apify-symbol-transparent.svg
+++ b/packages/components/nodes/documentloaders/ApifyWebsiteContentCrawler/apify-symbol-transparent.svg
@@ -1,5 +1,12 @@
-