Merge branch 'main' into chore/Models-Update
# Conflicts:
#   packages/components/models.json
commit 9b9798b451
@@ -1,37 +0,0 @@
---
name: Bug report
about: Create a report to help us improve
title: '[BUG]'
labels: ''
assignees: ''
---

**Describe the bug**
A clear and concise description of what the bug is.

**To Reproduce**
Steps to reproduce the behavior:

1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error

**Expected behavior**
A clear and concise description of what you expected to happen.

**Screenshots**
If applicable, add screenshots to help explain your problem.

**Flow**
If applicable, add exported flow in order to help replicating the problem.

**Setup**

- Installation [e.g. docker, `npx flowise start`, `pnpm start`]
- Flowise Version [e.g. 1.2.11]
- OS: [e.g. macOS, Windows, Linux]
- Browser [e.g. chrome, safari]

**Additional context**
Add any other context about the problem here.
@@ -0,0 +1,101 @@
name: Bug Report
description: File a bug report to help us improve
labels: ['bug']
assignees: []
body:
    - type: markdown
      attributes:
          value: |
              Make sure to have a proper title and description.

    - type: textarea
      id: bug-description
      attributes:
          label: Describe the bug
          description: A clear and concise description of what the bug is.
          placeholder: Tell us what you see!
      validations:
          required: true

    - type: textarea
      id: reproduce
      attributes:
          label: To Reproduce
          description: Steps to reproduce the behavior
          placeholder: |
              1. Go to '...'
              2. Click on '....'
              3. Scroll down to '....'
              4. See error
      validations:
          required: true

    - type: textarea
      id: expected
      attributes:
          label: Expected behavior
          description: A clear and concise description of what you expected to happen.
      validations:
          required: true

    - type: textarea
      id: screenshots
      attributes:
          label: Screenshots
          description: If applicable, add screenshots to help explain your problem.
          placeholder: Drag and drop or paste screenshots here

    - type: textarea
      id: flow
      attributes:
          label: Flow
          description: If applicable, add an exported flow to help replicate the problem.
          placeholder: Paste your exported flow here

    - type: dropdown
      id: method
      attributes:
          label: Use Method
          description: How did you use Flowise?
          options:
              - Flowise Cloud
              - Docker
              - npx flowise start
              - pnpm start

    - type: input
      id: version
      attributes:
          label: Flowise Version
          description: What version of Flowise are you running?
          placeholder: e.g., 1.2.11

    - type: dropdown
      id: os
      attributes:
          label: Operating System
          description: What operating system are you using?
          options:
              - Windows
              - macOS
              - Linux
              - Other

    - type: dropdown
      id: browser
      attributes:
          label: Browser
          description: What browser are you using?
          options:
              - Chrome
              - Firefox
              - Safari
              - Edge
              - Other

    - type: textarea
      id: context
      attributes:
          label: Additional context
          description: Add any other context about the problem here.
          placeholder: Any additional information that might be helpful
@@ -1,13 +0,0 @@
---
name: Feature request
about: Suggest an idea for this project
title: '[FEATURE]'
labels: ''
assignees: ''
---

**Describe the feature you'd like**
A clear and concise description of what you would like Flowise to have.

**Additional context**
Add any other context or screenshots about the feature request here.
@@ -0,0 +1,67 @@
name: Feature Request
description: Suggest a new feature or enhancement for Flowise
labels: ['enhancement']
assignees: []
body:
    - type: markdown
      attributes:
          value: |
              Thanks for suggesting a new feature! Please provide as much detail as possible to help us understand your request.

    - type: textarea
      id: feature-description
      attributes:
          label: Feature Description
          description: A clear and concise description of the feature you'd like to see in Flowise.
          placeholder: Describe what you want to be added or improved...
      validations:
          required: true

    - type: dropdown
      id: feature-category
      attributes:
          label: Feature Category
          description: What category does this feature belong to?
          options:
              - UI/UX Improvement
              - New Node/Component
              - Integration
              - Performance
              - Security
              - Documentation
              - API Enhancement
              - Workflow/Flow Management
              - Authentication/Authorization
              - Database/Storage
              - Deployment/DevOps
              - Other
      validations:
          required: true

    - type: textarea
      id: problem-statement
      attributes:
          label: Problem Statement
          description: What problem does this feature solve? What's the current pain point?
          placeholder: Describe the problem or limitation you're facing...

    - type: textarea
      id: proposed-solution
      attributes:
          label: Proposed Solution
          description: How would you like this feature to work? Be as specific as possible.
          placeholder: Describe your ideal solution in detail...

    - type: textarea
      id: mockups-references
      attributes:
          label: Mockups or References
          description: Any mockups, screenshots, or references to similar features in other tools?
          placeholder: Upload images or provide links to examples...

    - type: textarea
      id: additional-context
      attributes:
          label: Additional Context
          description: Any other information, context, or examples that would help us understand this request.
          placeholder: Add any other relevant information...
@@ -1,33 +0,0 @@
name: autoSyncMergedPullRequest
on:
    pull_request_target:
        types:
            - closed
        branches: ['main']
jobs:
    autoSyncMergedPullRequest:
        if: github.event.pull_request.merged == true
        runs-on: ubuntu-latest
        permissions:
            contents: write
        steps:
            - uses: actions/checkout@v4
            - name: Show PR info
              env:
                  GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
              run: |
                  echo The PR #${{ github.event.pull_request.number }} was merged on main branch!
            - name: Repository Dispatch
              uses: peter-evans/repository-dispatch@v3
              with:
                  token: ${{ secrets.AUTOSYNC_TOKEN }}
                  repository: ${{ secrets.AUTOSYNC_CH_URL }}
                  event-type: ${{ secrets.AUTOSYNC_PR_EVENT_TYPE }}
                  client-payload: >-
                      {
                        "ref": "${{ github.ref }}",
                        "prNumber": "${{ github.event.pull_request.number }}",
                        "prTitle": "${{ github.event.pull_request.title }}",
                        "prDescription": "",
                        "sha": "${{ github.sha }}"
                      }
@@ -1,36 +0,0 @@
name: autoSyncSingleCommit
on:
    push:
        branches:
            - main
jobs:
    doNotAutoSyncSingleCommit:
        if: github.event.commits[1] != null
        runs-on: ubuntu-latest
        steps:
            - uses: actions/checkout@v3
            - name: IGNORE autoSyncSingleCommit
              run: |
                  echo This single commit came from a merged commit, so we ignore it. This case is handled in the autoSyncMergedPullRequest workflow for merge commits coming from merged pull requests only! Beware, regular merge commits are not handled by any workflow for the moment.
    autoSyncSingleCommit:
        if: github.event.commits[1] == null
        runs-on: ubuntu-latest
        steps:
            - uses: actions/checkout@v3
            - name: autoSyncSingleCommit
              env:
                  GITHUB_CONTEXT: ${{ toJSON(github) }}
              run: |
                  echo Autosync a single commit with id: ${{ github.sha }} from openSource main branch towards cloud hosted version.
            - name: Repository Dispatch
              uses: peter-evans/repository-dispatch@v3
              with:
                  token: ${{ secrets.AUTOSYNC_TOKEN }}
                  repository: ${{ secrets.AUTOSYNC_CH_URL }}
                  event-type: ${{ secrets.AUTOSYNC_SC_EVENT_TYPE }}
                  client-payload: >-
                      {
                        "ref": "${{ github.ref }}",
                        "sha": "${{ github.sha }}",
                        "commitMessage": "${{ github.event.commits[0].id }}"
                      }
@@ -0,0 +1,72 @@
name: Docker Image CI - Docker Hub

on:
    workflow_dispatch:
        inputs:
            node_version:
                description: 'Node.js version to build this image with.'
                type: choice
                required: true
                default: '20'
                options:
                    - '20'
            tag_version:
                description: 'Tag version of the image to be pushed.'
                type: string
                required: true
                default: 'latest'

jobs:
    docker:
        runs-on: ubuntu-latest
        steps:
            - name: Set default values
              id: defaults
              run: |
                  echo "node_version=${{ github.event.inputs.node_version || '20' }}" >> $GITHUB_OUTPUT
                  echo "tag_version=${{ github.event.inputs.tag_version || 'latest' }}" >> $GITHUB_OUTPUT

            - name: Checkout
              uses: actions/checkout@v4.1.1

            - name: Set up QEMU
              uses: docker/setup-qemu-action@v3.0.0

            - name: Set up Docker Buildx
              uses: docker/setup-buildx-action@v3.0.0

            - name: Login to Docker Hub
              uses: docker/login-action@v3
              with:
                  username: ${{ secrets.DOCKERHUB_USERNAME }}
                  password: ${{ secrets.DOCKERHUB_TOKEN }}

            # -------------------------
            # Build and push main image
            # -------------------------
            - name: Build and push main image
              uses: docker/build-push-action@v5.3.0
              with:
                  context: .
                  file: ./docker/Dockerfile
                  build-args: |
                      NODE_VERSION=${{ steps.defaults.outputs.node_version }}
                  platforms: linux/amd64,linux/arm64
                  push: true
                  tags: |
                      flowiseai/flowise:${{ steps.defaults.outputs.tag_version }}

            # -------------------------
            # Build and push worker image
            # -------------------------
            - name: Build and push worker image
              uses: docker/build-push-action@v5.3.0
              with:
                  context: .
                  file: docker/worker/Dockerfile
                  build-args: |
                      NODE_VERSION=${{ steps.defaults.outputs.node_version }}
                  platforms: linux/amd64,linux/arm64
                  push: true
                  tags: |
                      flowiseai/flowise-worker:${{ steps.defaults.outputs.tag_version }}
@@ -0,0 +1,73 @@
name: Docker Image CI - AWS ECR

on:
    workflow_dispatch:
        inputs:
            environment:
                description: 'Environment to push the image to.'
                required: true
                default: 'dev'
                type: choice
                options:
                    - dev
                    - prod
            node_version:
                description: 'Node.js version to build this image with.'
                type: choice
                required: true
                default: '20'
                options:
                    - '20'
            tag_version:
                description: 'Tag version of the image to be pushed.'
                type: string
                required: true
                default: 'latest'

jobs:
    docker:
        runs-on: ubuntu-latest
        environment: ${{ github.event.inputs.environment }}
        steps:
            - name: Set default values
              id: defaults
              run: |
                  echo "node_version=${{ github.event.inputs.node_version || '20' }}" >> $GITHUB_OUTPUT
                  echo "tag_version=${{ github.event.inputs.tag_version || 'latest' }}" >> $GITHUB_OUTPUT

            - name: Checkout
              uses: actions/checkout@v4.1.1

            - name: Set up QEMU
              uses: docker/setup-qemu-action@v3.0.0

            - name: Set up Docker Buildx
              uses: docker/setup-buildx-action@v3.0.0

            - name: Configure AWS Credentials
              uses: aws-actions/configure-aws-credentials@v3
              with:
                  aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
                  aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
                  aws-region: ${{ secrets.AWS_REGION }}

            - name: Login to Amazon ECR
              uses: aws-actions/amazon-ecr-login@v1

            # -------------------------
            # Build and push main image
            # -------------------------
            - name: Build and push main image
              uses: docker/build-push-action@v5.3.0
              with:
                  context: .
                  file: Dockerfile
                  build-args: |
                      NODE_VERSION=${{ steps.defaults.outputs.node_version }}
                  platforms: linux/amd64,linux/arm64
                  push: true
                  tags: |
                      ${{ format('{0}.dkr.ecr.{1}.amazonaws.com/flowise:{2}', secrets.AWS_ACCOUNT_ID, secrets.AWS_REGION, steps.defaults.outputs.tag_version) }}
@@ -1,43 +0,0 @@
name: Docker Image CI

on:
    workflow_dispatch:
        inputs:
            node_version:
                description: 'Node.js version to build this image with.'
                type: choice
                required: true
                default: '20'
                options:
                    - '20'
            tag_version:
                description: 'Tag version of the image to be pushed.'
                type: string
                required: true
                default: 'latest'

jobs:
    docker:
        runs-on: ubuntu-latest
        steps:
            - name: Checkout
              uses: actions/checkout@v4.1.1
            - name: Set up QEMU
              uses: docker/setup-qemu-action@v3.0.0
            - name: Set up Docker Buildx
              uses: docker/setup-buildx-action@v3.0.0
            - name: Login to Docker Hub
              uses: docker/login-action@v3
              with:
                  username: ${{ secrets.DOCKERHUB_USERNAME }}
                  password: ${{ secrets.DOCKERHUB_TOKEN }}
            - name: Build and push
              uses: docker/build-push-action@v5.3.0
              with:
                  context: .
                  file: ./docker/Dockerfile
                  build-args: |
                      NODE_VERSION=${{github.event.inputs.node_version}}
                  platforms: linux/amd64,linux/arm64
                  push: true
                  tags: flowiseai/flowise:${{github.event.inputs.tag_version}}
@@ -6,6 +6,7 @@ on:
    pull_request:
        branches:
            - '*'
    workflow_dispatch:
permissions:
    contents: read
jobs:
@@ -31,6 +32,8 @@ jobs:
            - run: pnpm install
            - run: pnpm lint
            - run: pnpm build
              env:
                  NODE_OPTIONS: '--max_old_space_size=4096'
            - name: Cypress install
              run: pnpm cypress install
            - name: Install dependencies (Cypress Action)
@@ -8,13 +8,12 @@ on:
    pull_request:
        branches:
            - '*'

    workflow_dispatch:
jobs:
    build:
        runs-on: ubuntu-latest
        env:
            PUPPETEER_SKIP_DOWNLOAD: true
        steps:
            - uses: actions/checkout@v3
            - uses: actions/checkout@v4
            - run: docker build --no-cache -t flowise .
@@ -114,54 +114,52 @@ Flowise has 3 different modules in a single mono repository.

    to make sure everything works fine in production.

11. Commit code and submit Pull Request from forked branch pointing to [Flowise master](https://github.com/FlowiseAI/Flowise/tree/master).
11. Commit code and submit Pull Request from forked branch pointing to [Flowise main](https://github.com/FlowiseAI/Flowise/tree/main).

## 🌱 Env Variables

Flowise supports different environment variables to configure your instance. You can specify the following variables in the `.env` file inside the `packages/server` folder. Read [more](https://docs.flowiseai.com/environment-variables)

| Variable | Description | Type | Default |
| -------- | ----------- | ---- | ------- |
| PORT | The HTTP port Flowise runs on | Number | 3000 |
| CORS_ORIGINS | The allowed origins for all cross-origin HTTP calls | String | |
| IFRAME_ORIGINS | The allowed origins for iframe src embedding | String | |
| FLOWISE_USERNAME | Username to login | String | |
| FLOWISE_PASSWORD | Password to login | String | |
| FLOWISE_FILE_SIZE_LIMIT | Upload File Size Limit | String | 50mb |
| DEBUG | Print logs from components | Boolean | |
| LOG_PATH | Location where log files are stored | String | `your-path/Flowise/logs` |
| LOG_LEVEL | Different levels of logs | Enum String: `error`, `info`, `verbose`, `debug` | `info` |
| LOG_JSON_SPACES | Spaces to beautify JSON logs | | 2 |
| APIKEY_STORAGE_TYPE | To store api keys on a JSON file or database. Default is `json` | Enum String: `json`, `db` | `json` |
| APIKEY_PATH | Location where api keys are saved when `APIKEY_STORAGE_TYPE` is `json` | String | `your-path/Flowise/packages/server` |
| TOOL_FUNCTION_BUILTIN_DEP | NodeJS built-in modules to be used for Tool Function | String | |
| TOOL_FUNCTION_EXTERNAL_DEP | External modules to be used for Tool Function | String | |
| DATABASE_TYPE | Type of database to store the flowise data | Enum String: `sqlite`, `mysql`, `postgres` | `sqlite` |
| DATABASE_PATH | Location where database is saved (When DATABASE_TYPE is sqlite) | String | `your-home-dir/.flowise` |
| DATABASE_HOST | Host URL or IP address (When DATABASE_TYPE is not sqlite) | String | |
| DATABASE_PORT | Database port (When DATABASE_TYPE is not sqlite) | String | |
| DATABASE_USER | Database username (When DATABASE_TYPE is not sqlite) | String | |
| DATABASE_PASSWORD | Database password (When DATABASE_TYPE is not sqlite) | String | |
| DATABASE_NAME | Database name (When DATABASE_TYPE is not sqlite) | String | |
| DATABASE_SSL_KEY_BASE64 | Database SSL client cert in base64 (takes priority over DATABASE_SSL) | Boolean | false |
| DATABASE_SSL | Database connection over SSL (When DATABASE_TYPE is postgres) | Boolean | false |
| SECRETKEY_PATH | Location where encryption key (used to encrypt/decrypt credentials) is saved | String | `your-path/Flowise/packages/server` |
| FLOWISE_SECRETKEY_OVERWRITE | Encryption key to be used instead of the key stored in SECRETKEY_PATH | String | |
| MODEL_LIST_CONFIG_JSON | File path to load list of models from your local config file | String | `/your_model_list_config_file_path` |
| STORAGE_TYPE | Type of storage for uploaded files. Default is `local` | Enum String: `s3`, `local`, `gcs` | `local` |
| BLOB_STORAGE_PATH | Local folder path where uploaded files are stored when `STORAGE_TYPE` is `local` | String | `your-home-dir/.flowise/storage` |
| S3_STORAGE_BUCKET_NAME | Bucket name to hold the uploaded files when `STORAGE_TYPE` is `s3` | String | |
| S3_STORAGE_ACCESS_KEY_ID | AWS Access Key | String | |
| S3_STORAGE_SECRET_ACCESS_KEY | AWS Secret Key | String | |
| S3_STORAGE_REGION | Region for S3 bucket | String | |
| S3_ENDPOINT_URL | Custom Endpoint for S3 | String | |
| S3_FORCE_PATH_STYLE | Set this to true to force the request to use path-style addressing | Boolean | false |
| GOOGLE_CLOUD_STORAGE_PROJ_ID | The GCP project id for cloud storage & logging when `STORAGE_TYPE` is `gcs` | String | |
| GOOGLE_CLOUD_STORAGE_CREDENTIAL | The credential key file path when `STORAGE_TYPE` is `gcs` | String | |
| GOOGLE_CLOUD_STORAGE_BUCKET_NAME | Bucket name to hold the uploaded files when `STORAGE_TYPE` is `gcs` | String | |
| GOOGLE_CLOUD_UNIFORM_BUCKET_ACCESS | Enable uniform bucket level access when `STORAGE_TYPE` is `gcs` | Boolean | true |
| SHOW_COMMUNITY_NODES | Show nodes created by community | Boolean | |
| DISABLED_NODES | Hide nodes from UI (comma separated list of node names) | String | |

| Variable | Description | Type | Default |
| -------- | ----------- | ---- | ------- |
| PORT | The HTTP port Flowise runs on | Number | 3000 |
| CORS_ORIGINS | The allowed origins for all cross-origin HTTP calls | String | |
| IFRAME_ORIGINS | The allowed origins for iframe src embedding | String | |
| FLOWISE_FILE_SIZE_LIMIT | Upload File Size Limit | String | 50mb |
| DEBUG | Print logs from components | Boolean | |
| LOG_PATH | Location where log files are stored | String | `your-path/Flowise/logs` |
| LOG_LEVEL | Different levels of logs | Enum String: `error`, `info`, `verbose`, `debug` | `info` |
| LOG_JSON_SPACES | Spaces to beautify JSON logs | | 2 |
| TOOL_FUNCTION_BUILTIN_DEP | NodeJS built-in modules to be used for Custom Tool or Function | String | |
| TOOL_FUNCTION_EXTERNAL_DEP | External modules to be used for Custom Tool or Function | String | |
| ALLOW_BUILTIN_DEP | Allow project dependencies to be used for Custom Tool or Function | Boolean | false |
| DATABASE_TYPE | Type of database to store the flowise data | Enum String: `sqlite`, `mysql`, `postgres` | `sqlite` |
| DATABASE_PATH | Location where database is saved (When DATABASE_TYPE is sqlite) | String | `your-home-dir/.flowise` |
| DATABASE_HOST | Host URL or IP address (When DATABASE_TYPE is not sqlite) | String | |
| DATABASE_PORT | Database port (When DATABASE_TYPE is not sqlite) | String | |
| DATABASE_USER | Database username (When DATABASE_TYPE is not sqlite) | String | |
| DATABASE_PASSWORD | Database password (When DATABASE_TYPE is not sqlite) | String | |
| DATABASE_NAME | Database name (When DATABASE_TYPE is not sqlite) | String | |
| DATABASE_SSL_KEY_BASE64 | Database SSL client cert in base64 (takes priority over DATABASE_SSL) | Boolean | false |
| DATABASE_SSL | Database connection over SSL (When DATABASE_TYPE is postgres) | Boolean | false |
| SECRETKEY_PATH | Location where encryption key (used to encrypt/decrypt credentials) is saved | String | `your-path/Flowise/packages/server` |
| FLOWISE_SECRETKEY_OVERWRITE | Encryption key to be used instead of the key stored in SECRETKEY_PATH | String | |
| MODEL_LIST_CONFIG_JSON | File path to load list of models from your local config file | String | `/your_model_list_config_file_path` |
| STORAGE_TYPE | Type of storage for uploaded files. Default is `local` | Enum String: `s3`, `local`, `gcs` | `local` |
| BLOB_STORAGE_PATH | Local folder path where uploaded files are stored when `STORAGE_TYPE` is `local` | String | `your-home-dir/.flowise/storage` |
| S3_STORAGE_BUCKET_NAME | Bucket name to hold the uploaded files when `STORAGE_TYPE` is `s3` | String | |
| S3_STORAGE_ACCESS_KEY_ID | AWS Access Key | String | |
| S3_STORAGE_SECRET_ACCESS_KEY | AWS Secret Key | String | |
| S3_STORAGE_REGION | Region for S3 bucket | String | |
| S3_ENDPOINT_URL | Custom Endpoint for S3 | String | |
| S3_FORCE_PATH_STYLE | Set this to true to force the request to use path-style addressing | Boolean | false |
| GOOGLE_CLOUD_STORAGE_PROJ_ID | The GCP project id for cloud storage & logging when `STORAGE_TYPE` is `gcs` | String | |
| GOOGLE_CLOUD_STORAGE_CREDENTIAL | The credential key file path when `STORAGE_TYPE` is `gcs` | String | |
| GOOGLE_CLOUD_STORAGE_BUCKET_NAME | Bucket name to hold the uploaded files when `STORAGE_TYPE` is `gcs` | String | |
| GOOGLE_CLOUD_UNIFORM_BUCKET_ACCESS | Enable uniform bucket level access when `STORAGE_TYPE` is `gcs` | Boolean | true |
| SHOW_COMMUNITY_NODES | Show nodes created by community | Boolean | |
| DISABLED_NODES | Hide nodes from UI (comma separated list of node names) | String | |
| TRUST_PROXY | Configure proxy trust settings for proper IP detection. Values: 'true' (trust all), 'false' (disable), number (hop count), or Express proxy values (e.g., 'loopback', 'linklocal', 'uniquelocal', IP addresses). [Learn More](https://expressjs.com/en/guide/behind-proxies.html) | Boolean/String/Number | true |

You can also specify the env variables when using `npx`. For example:
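
(The example itself falls outside this hunk. A hedged sketch of what it presumably looks like, reusing the `--VAR=value` flag style that `npx flowise start` takes elsewhere in this commit; variable names come from the table above, values are illustrative:)

```bash
# Illustrative only: PORT and DEBUG are documented in the env table above
npx flowise start --PORT=3000 --DEBUG=true
```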
Dockerfile (37 changes)
@@ -5,33 +5,40 @@
# docker run -d -p 3000:3000 flowise

FROM node:20-alpine
RUN apk add --update libc6-compat python3 make g++
# needed for pdfjs-dist
RUN apk add --no-cache build-base cairo-dev pango-dev

# Install Chromium
RUN apk add --no-cache chromium

# Install curl for container-level health checks
# Fixes: https://github.com/FlowiseAI/Flowise/issues/4126
RUN apk add --no-cache curl

# Install PNPM globally
RUN npm install -g pnpm
# Install system dependencies and build tools
RUN apk update && \
    apk add --no-cache \
    libc6-compat \
    python3 \
    make \
    g++ \
    build-base \
    cairo-dev \
    pango-dev \
    chromium \
    curl && \
    npm install -g pnpm

ENV PUPPETEER_SKIP_DOWNLOAD=true
ENV PUPPETEER_EXECUTABLE_PATH=/usr/bin/chromium-browser

ENV NODE_OPTIONS=--max-old-space-size=8192

WORKDIR /usr/src
WORKDIR /usr/src/flowise

# Copy app source
COPY . .

RUN pnpm install
# Install dependencies and build
RUN pnpm install && \
    pnpm build

RUN pnpm build
# Give the node user ownership of the application files
RUN chown -R node:node .

# Switch to non-root user (node user already exists in node:20-alpine)
USER node

EXPOSE 3000
LICENSE.md (14 changes)
@@ -1,6 +1,14 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
Copyright (c) 2023-present FlowiseAI, Inc.

Portions of this software are licensed as follows:

- All content that resides under https://github.com/FlowiseAI/Flowise/tree/main/packages/server/src/enterprise directory and files with explicit copyright notice such as [IdentityManager.ts](https://github.com/FlowiseAI/Flowise/tree/main/packages/server/src/IdentityManager.ts) are licensed under [Commercial License](https://github.com/FlowiseAI/Flowise/tree/main/packages/server/src/enterprise/LICENSE.md).
- All third party components incorporated into the FlowiseAI Software are licensed under the original license provided by the owner of the applicable component.
- Content outside of the above mentioned directories or restrictions above is available under the "Apache 2.0" license as defined below.

Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/

TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
README.md (78 changes)
@@ -5,6 +5,8 @@
<img src="https://github.com/FlowiseAI/Flowise/blob/main/images/flowise_dark.svg#gh-dark-mode-only">
</p>

<div align="center">

[](https://github.com/FlowiseAI/Flowise/releases)
[](https://discord.gg/jbaHfsRVBW)
[](https://twitter.com/FlowiseAI)
@@ -13,10 +15,25 @@

English | [繁體中文](./i18n/README-TW.md) | [简体中文](./i18n/README-ZH.md) | [日本語](./i18n/README-JA.md) | [한국어](./i18n/README-KR.md)

</div>

<h3>Build AI Agents, Visually</h3>
<a href="https://github.com/FlowiseAI/Flowise">
<img width="100%" src="https://github.com/FlowiseAI/Flowise/blob/main/images/flowise_agentflow.gif?raw=true"></a>

## 📚 Table of Contents

- [⚡ Quick Start](#-quick-start)
- [🐳 Docker](#-docker)
- [👨‍💻 Developers](#-developers)
- [🌱 Env Variables](#-env-variables)
- [📖 Documentation](#-documentation)
- [🌐 Self Host](#-self-host)
- [☁️ Flowise Cloud](#️-flowise-cloud)
- [🙋 Support](#-support)
- [🙌 Contributing](#-contributing)
- [📄 License](#-license)

## ⚡Quick Start

Download and Install [NodeJS](https://nodejs.org/en/download) >= 18.15.0
@@ -31,12 +48,6 @@ Download and Install [NodeJS](https://nodejs.org/en/download) >= 18.15.0
    npx flowise start
    ```

    With username & password

    ```bash
    npx flowise start --FLOWISE_USERNAME=user --FLOWISE_PASSWORD=1234
    ```

3. Open [http://localhost:3000](http://localhost:3000)

## 🐳 Docker
@@ -53,9 +64,11 @@ Download and Install [NodeJS](https://nodejs.org/en/download) >= 18.15.0
### Docker Image

1. Build the image locally:

    ```bash
    docker build --no-cache -t flowise .
    ```

2. Run image:

    ```bash
@@ -63,6 +76,7 @@ Download and Install [NodeJS](https://nodejs.org/en/download) >= 18.15.0
    ```

3. Stop image:

    ```bash
    docker stop flowise
    ```
@@ -85,13 +99,13 @@ Flowise has 3 different modules in a single mono repository.

### Setup

1. Clone the repository
1. Clone the repository:

    ```bash
    git clone https://github.com/FlowiseAI/Flowise.git
    ```

2. Go into repository folder
2. Go into repository folder:

    ```bash
    cd Flowise
@@ -111,10 +125,24 @@ Flowise has 3 different modules in a single mono repository.

<details>
<summary>Exit code 134 (JavaScript heap out of memory)</summary>
If you get this error when running the above `build` script, try increasing the Node.js heap size and run the script again:

    export NODE_OPTIONS="--max-old-space-size=4096"
    pnpm build

```bash
# macOS / Linux / Git Bash
export NODE_OPTIONS="--max-old-space-size=4096"

# Windows PowerShell
$env:NODE_OPTIONS="--max-old-space-size=4096"

# Windows CMD
set NODE_OPTIONS=--max-old-space-size=4096
```

Then run:

```bash
pnpm build
```

</details>
@@ -130,7 +158,7 @@ Flowise has 3 different modules in a single mono repository.

- Create `.env` file and specify the `VITE_PORT` (refer to `.env.example`) in `packages/ui`
- Create `.env` file and specify the `PORT` (refer to `.env.example`) in `packages/server`
- Run
- Run:

    ```bash
    pnpm dev
@@ -138,22 +166,13 @@ Flowise has 3 different modules in a single mono repository.

Any code changes will reload the app automatically on [http://localhost:8080](http://localhost:8080)

## 🔒 Authentication

To enable app level authentication, add `FLOWISE_USERNAME` and `FLOWISE_PASSWORD` to the `.env` file in `packages/server`:

```
FLOWISE_USERNAME=user
FLOWISE_PASSWORD=1234
```

## 🌱 Env Variables

Flowise support different environment variables to configure your instance. You can specify the following variables in the `.env` file inside `packages/server` folder. Read [more](https://github.com/FlowiseAI/Flowise/blob/main/CONTRIBUTING.md#-env-variables)
Flowise supports different environment variables to configure your instance. You can specify the following variables in the `.env` file inside `packages/server` folder. Read [more](https://github.com/FlowiseAI/Flowise/blob/main/CONTRIBUTING.md#-env-variables)

## 📖 Documentation

[Flowise Docs](https://docs.flowiseai.com/)
You can view the Flowise Docs [here](https://docs.flowiseai.com/)

## 🌐 Self Host
@@ -171,6 +190,10 @@ Deploy Flowise self-hosted in your existing infrastructure, we support various [

[](https://railway.app/template/pn4G8S?referralCode=WVNPD9)

- [Northflank](https://northflank.com/stacks/deploy-flowiseai)

    [](https://northflank.com/stacks/deploy-flowiseai)

- [Render](https://docs.flowiseai.com/configuration/deployment/render)

    [](https://docs.flowiseai.com/configuration/deployment/render)
@@ -195,11 +218,11 @@ Deploy Flowise self-hosted in your existing infrastructure, we support various [

## ☁️ Flowise Cloud

[Get Started with Flowise Cloud](https://flowiseai.com/)
Get Started with [Flowise Cloud](https://flowiseai.com/).

## 🙋 Support

Feel free to ask any questions, raise problems, and request new features in [discussion](https://github.com/FlowiseAI/Flowise/discussions)
Feel free to ask any questions, raise problems, and request new features in [Discussion](https://github.com/FlowiseAI/Flowise/discussions).

## 🙌 Contributing
@@ -207,9 +230,10 @@ Thanks go to these awesome contributors

<a href="https://github.com/FlowiseAI/Flowise/graphs/contributors">
<img src="https://contrib.rocks/image?repo=FlowiseAI/Flowise" />
</a>
</a><br><br>

See [Contributing Guide](CONTRIBUTING.md). Reach out to us at [Discord](https://discord.gg/jbaHfsRVBW) if you have any questions or issues.
See [contributing guide](CONTRIBUTING.md). Reach out to us at [Discord](https://discord.gg/jbaHfsRVBW) if you have any questions or issues.
[](https://star-history.com/#FlowiseAI/Flowise&Date)

## 📄 License
SECURITY.md (46 changes)
@@ -2,39 +2,37 @@

At Flowise, we prioritize security and continuously work to safeguard our systems. However, vulnerabilities can still exist. If you identify a security issue, please report it to us so we can address it promptly. Your cooperation helps us better protect our platform and users.

### Vulnerabilities
### Out of scope vulnerabilities

The following types of issues are some of the most common vulnerabilities:

- Clickjacking on pages without sensitive actions
- CSRF on unauthenticated/logout/login pages
- Attacks requiring MITM (Man-in-the-Middle) or physical device access
- Social engineering attacks
- Activities that cause service disruption (DoS)
- Content spoofing and text injection without a valid attack vector
- Email spoofing
- Absence of DNSSEC, CAA, CSP headers
- Missing Secure or HTTP-only flag on non-sensitive cookies
- Dead links
- User enumeration

### Reporting Guidelines

- Submit your findings to https://github.com/FlowiseAI/Flowise/security
- Provide clear details to help us reproduce and fix the issue quickly.

### Disclosure Guidelines

- Do not publicly disclose vulnerabilities until we have assessed, resolved, and notified affected users.
- If you plan to present your research (e.g., at a conference or in a blog), share a draft with us at least **30 days in advance** for review.
- Avoid including:
    - Data from any Flowise customer projects
    - Flowise user/customer information
    - Details about Flowise employees, contractors, or partners

### Response to Reports

- We will acknowledge your report within **5 business days** and provide an estimated resolution timeline.
- Your report will be kept **confidential**, and your details will not be shared without your consent.

We appreciate your efforts in helping us maintain a secure platform and look forward to working together to resolve any issues responsibly.
@@ -1,16 +1,12 @@
PORT=3000

# APIKEY_PATH=/your_apikey_path/.flowise # (will be deprecated by end of 2025)

############################################################################################################
############################################## DATABASE ####################################################
############################################################################################################

DATABASE_PATH=/root/.flowise
APIKEY_PATH=/root/.flowise
SECRETKEY_PATH=/root/.flowise
LOG_PATH=/root/.flowise/logs
BLOB_STORAGE_PATH=/root/.flowise/storage

# APIKEY_STORAGE_TYPE=json (json | db)

# NUMBER_OF_PROXIES= 1
# CORS_ORIGINS=*
# IFRAME_ORIGINS=*

# DATABASE_TYPE=postgres
# DATABASE_PORT=5432
# DATABASE_HOST=""
@@ -18,36 +14,43 @@ BLOB_STORAGE_PATH=/root/.flowise/storage
# DATABASE_USER=root
# DATABASE_PASSWORD=mypassword
# DATABASE_SSL=true
# DATABASE_REJECT_UNAUTHORIZED=true
# DATABASE_SSL_KEY_BASE64=<Self signed certificate in BASE64>

############################################################################################################
############################################## SECRET KEYS #################################################
############################################################################################################

# SECRETKEY_STORAGE_TYPE=local #(local | aws)
# SECRETKEY_PATH=/your_api_key_path/.flowise
# FLOWISE_SECRETKEY_OVERWRITE=myencryptionkey
SECRETKEY_PATH=/root/.flowise
# FLOWISE_SECRETKEY_OVERWRITE=myencryptionkey # (if you want to overwrite the secret key)
# SECRETKEY_AWS_ACCESS_KEY=<your-access-key>
# SECRETKEY_AWS_SECRET_KEY=<your-secret-key>
# SECRETKEY_AWS_REGION=us-west-2
# SECRETKEY_AWS_NAME=FlowiseEncryptionKey

# FLOWISE_USERNAME=user
# FLOWISE_PASSWORD=1234
# FLOWISE_SECRETKEY_OVERWRITE=myencryptionkey
# FLOWISE_FILE_SIZE_LIMIT=50mb

############################################################################################################
############################################## LOGGING #####################################################
############################################################################################################

# DEBUG=true
# LOG_LEVEL=info (error | warn | info | verbose | debug)
LOG_PATH=/root/.flowise/logs
# LOG_LEVEL=info #(error | warn | info | verbose | debug)
# LOG_SANITIZE_BODY_FIELDS=password,pwd,pass,secret,token,apikey,api_key,accesstoken,access_token,refreshtoken,refresh_token,clientsecret,client_secret,privatekey,private_key,secretkey,secret_key,auth,authorization,credential,credentials
# LOG_SANITIZE_HEADER_FIELDS=authorization,x-api-key,x-auth-token,cookie
# TOOL_FUNCTION_BUILTIN_DEP=crypto,fs
# TOOL_FUNCTION_EXTERNAL_DEP=moment,lodash
# ALLOW_BUILTIN_DEP=false

# LANGCHAIN_TRACING_V2=true
# LANGCHAIN_ENDPOINT=https://api.smith.langchain.com
# LANGCHAIN_API_KEY=your_api_key
# LANGCHAIN_PROJECT=your_project

# Uncomment the following line to enable model list config, load the list of models from your local config file
# see https://raw.githubusercontent.com/FlowiseAI/Flowise/main/packages/components/models.json for the format
# MODEL_LIST_CONFIG_JSON=/your_model_list_config_file_path
############################################################################################################
############################################## STORAGE #####################################################
############################################################################################################

# STORAGE_TYPE=local (local | s3 | gcs)
# BLOB_STORAGE_PATH=/your_storage_path/.flowise/storage
BLOB_STORAGE_PATH=/root/.flowise/storage
# S3_STORAGE_BUCKET_NAME=flowise
# S3_STORAGE_ACCESS_KEY_ID=<your-access-key>
# S3_STORAGE_SECRET_ACCESS_KEY=<your-secret-key>
@@ -59,12 +62,70 @@ BLOB_STORAGE_PATH=/root/.flowise/storage
# GOOGLE_CLOUD_STORAGE_BUCKET_NAME=<the-bucket-name>
# GOOGLE_CLOUD_UNIFORM_BUCKET_ACCESS=true

# SHOW_COMMUNITY_NODES=true
# DISABLED_NODES=bufferMemory,chatOpenAI (comma separated list of node names to disable)

######################
# METRICS COLLECTION
#######################
############################################################################################################
############################################## SETTINGS ####################################################
############################################################################################################

# NUMBER_OF_PROXIES= 1
# CORS_ORIGINS=*
# IFRAME_ORIGINS=*
# FLOWISE_FILE_SIZE_LIMIT=50mb
# SHOW_COMMUNITY_NODES=true
# DISABLE_FLOWISE_TELEMETRY=true
# DISABLED_NODES=bufferMemory,chatOpenAI (comma separated list of node names to disable)
# Uncomment the following line to enable model list config, load the list of models from your local config file
# see https://raw.githubusercontent.com/FlowiseAI/Flowise/main/packages/components/models.json for the format
# MODEL_LIST_CONFIG_JSON=/your_model_list_config_file_path


############################################################################################################
############################################ AUTH PARAMETERS ###############################################
############################################################################################################

# APP_URL=http://localhost:3000

# SMTP_HOST=smtp.host.com
# SMTP_PORT=465
# SMTP_USER=smtp_user
# SMTP_PASSWORD=smtp_password
# SMTP_SECURE=true
# ALLOW_UNAUTHORIZED_CERTS=false
# SENDER_EMAIL=team@example.com

JWT_AUTH_TOKEN_SECRET='AABBCCDDAABBCCDDAABBCCDDAABBCCDDAABBCCDD'
JWT_REFRESH_TOKEN_SECRET='AABBCCDDAABBCCDDAABBCCDDAABBCCDDAABBCCDD'
JWT_ISSUER='ISSUER'
JWT_AUDIENCE='AUDIENCE'
JWT_TOKEN_EXPIRY_IN_MINUTES=360
JWT_REFRESH_TOKEN_EXPIRY_IN_MINUTES=43200
# EXPIRE_AUTH_TOKENS_ON_RESTART=true # (if you need to expire all tokens on app restart)
# EXPRESS_SESSION_SECRET=flowise
# SECURE_COOKIES=

# INVITE_TOKEN_EXPIRY_IN_HOURS=24
# PASSWORD_RESET_TOKEN_EXPIRY_IN_MINS=15
# PASSWORD_SALT_HASH_ROUNDS=10
# TOKEN_HASH_SECRET='popcorn'

# WORKSPACE_INVITE_TEMPLATE_PATH=/path/to/custom/workspace_invite.hbs


############################################################################################################
############################################# ENTERPRISE ###################################################
############################################################################################################

# LICENSE_URL=
# FLOWISE_EE_LICENSE_KEY=
# OFFLINE=


############################################################################################################
########################################### METRICS COLLECTION #############################################
############################################################################################################

# POSTHOG_PUBLIC_API_KEY=your_posthog_public_api_key

# ENABLE_METRICS=false
# METRICS_PROVIDER=prometheus # prometheus | open_telemetry
# METRICS_INCLUDE_NODE_METRICS=true # default is true
@@ -75,15 +136,21 @@ BLOB_STORAGE_PATH=/root/.flowise/storage
# METRICS_OPEN_TELEMETRY_PROTOCOL=http # http | grpc | proto (default is http)
# METRICS_OPEN_TELEMETRY_DEBUG=true # default is false

# Uncomment the following lines to enable global agent proxy
# see https://www.npmjs.com/package/global-agent for more details

############################################################################################################
############################################### PROXY ######################################################
############################################################################################################

# Uncomment the following lines to enable global agent proxy, see https://www.npmjs.com/package/global-agent for more details
# GLOBAL_AGENT_HTTP_PROXY=CorporateHttpProxyUrl
# GLOBAL_AGENT_HTTPS_PROXY=CorporateHttpsProxyUrl
# GLOBAL_AGENT_NO_PROXY=ExceptionHostsToBypassProxyIfNeeded

######################
# QUEUE CONFIGURATION
#######################

############################################################################################################
########################################### QUEUE CONFIGURATION ############################################
############################################################################################################

# MODE=queue #(queue | main)
# QUEUE_NAME=flowise-queue
# QUEUE_REDIS_EVENT_STREAM_MAX_LEN=100000
@@ -101,3 +168,13 @@ BLOB_STORAGE_PATH=/root/.flowise/storage
# REDIS_CA=
# REDIS_KEEP_ALIVE=
# ENABLE_BULLMQ_DASHBOARD=


############################################################################################################
############################################## SECURITY ####################################################
############################################################################################################

# HTTP_DENY_LIST=
# CUSTOM_MCP_SECURITY_CHECK=true
# CUSTOM_MCP_PROTOCOL=sse #(stdio | sse)
# TRUST_PROXY=true #(true | false | 1 | loopback | linklocal | uniquelocal | IP addresses | loopback, IP addresses)
@@ -9,28 +9,43 @@ Starts Flowise from [DockerHub Image](https://hub.docker.com/r/flowiseai/flowise

3. Open [http://localhost:3000](http://localhost:3000)
4. You can bring the containers down by `docker compose stop`

## 🔒 Authentication

1. Create `.env` file and specify the `PORT`, `FLOWISE_USERNAME`, and `FLOWISE_PASSWORD` (refer to `.env.example`)
2. Pass `FLOWISE_USERNAME` and `FLOWISE_PASSWORD` to the `docker-compose.yml` file:

    ```
    environment:
        - PORT=${PORT}
        - FLOWISE_USERNAME=${FLOWISE_USERNAME}
        - FLOWISE_PASSWORD=${FLOWISE_PASSWORD}
    ```

3. `docker compose up -d`
4. Open [http://localhost:3000](http://localhost:3000)
5. You can bring the containers down by `docker compose stop`

## 🌱 Env Variables

If you like to persist your data (flows, logs, apikeys, credentials), set these variables in the `.env` file inside `docker` folder:
If you like to persist your data (flows, logs, credentials, storage), set these variables in the `.env` file inside `docker` folder:

- DATABASE_PATH=/root/.flowise
- APIKEY_PATH=/root/.flowise
- LOG_PATH=/root/.flowise/logs
- SECRETKEY_PATH=/root/.flowise
- BLOB_STORAGE_PATH=/root/.flowise/storage
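
For these paths to actually survive container restarts, the container directory also needs a host mount — a minimal sketch, using the same `~/.flowise:/root/.flowise` mapping that the compose files added in this commit use:

```yaml
services:
    flowise:
        image: flowiseai/flowise
        volumes:
            # host folder ~/.flowise keeps the database, logs, secret key and uploaded files
            - ~/.flowise:/root/.flowise
```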
Flowise also supports different environment variables to configure your instance. Read [more](https://docs.flowiseai.com/environment-variables)
Flowise also supports different environment variables to configure your instance. Read [more](https://docs.flowiseai.com/configuration/environment-variables)

## Queue Mode:

### Building from source:

You can build the images for worker and main from scratch with:

```
docker compose -f docker-compose-queue-source.yml up -d
```

Monitor Health:

```
docker compose -f docker-compose-queue-source.yml ps
```

### From pre-built images:

You can also use the pre-built images:

```
docker compose -f docker-compose-queue-prebuilt.yml up -d
```

Monitor Health:

```
docker compose -f docker-compose-queue-prebuilt.yml ps
```
@@ -0,0 +1,316 @@
version: '3.1'

services:
    redis:
        image: redis:alpine
        container_name: flowise-redis
        ports:
            - '6379:6379'
        volumes:
            - redis_data:/data
        networks:
            - flowise-net
        restart: always

    flowise:
        image: flowiseai/flowise:latest
        container_name: flowise-main
        restart: always
        ports:
            - '${PORT:-3000}:${PORT:-3000}'
        volumes:
            - ~/.flowise:/root/.flowise
        environment:
            # --- Essential Flowise Vars ---
            - PORT=${PORT:-3000}
            - DATABASE_PATH=${DATABASE_PATH:-/root/.flowise}
            - DATABASE_TYPE=${DATABASE_TYPE}
            - DATABASE_PORT=${DATABASE_PORT}
            - DATABASE_HOST=${DATABASE_HOST}
            - DATABASE_NAME=${DATABASE_NAME}
            - DATABASE_USER=${DATABASE_USER}
            - DATABASE_PASSWORD=${DATABASE_PASSWORD}
            - DATABASE_SSL=${DATABASE_SSL}
            - DATABASE_SSL_KEY_BASE64=${DATABASE_SSL_KEY_BASE64}

            # SECRET KEYS
            - SECRETKEY_STORAGE_TYPE=${SECRETKEY_STORAGE_TYPE}
            - SECRETKEY_PATH=${SECRETKEY_PATH}
            - FLOWISE_SECRETKEY_OVERWRITE=${FLOWISE_SECRETKEY_OVERWRITE}
            - SECRETKEY_AWS_ACCESS_KEY=${SECRETKEY_AWS_ACCESS_KEY}
            - SECRETKEY_AWS_SECRET_KEY=${SECRETKEY_AWS_SECRET_KEY}
            - SECRETKEY_AWS_REGION=${SECRETKEY_AWS_REGION}
            - SECRETKEY_AWS_NAME=${SECRETKEY_AWS_NAME}

            # LOGGING
            - DEBUG=${DEBUG}
            - LOG_PATH=${LOG_PATH}
            - LOG_LEVEL=${LOG_LEVEL}
            - LOG_SANITIZE_BODY_FIELDS=${LOG_SANITIZE_BODY_FIELDS}
            - LOG_SANITIZE_HEADER_FIELDS=${LOG_SANITIZE_HEADER_FIELDS}

            # CUSTOM TOOL/FUNCTION DEPENDENCIES
            - TOOL_FUNCTION_BUILTIN_DEP=${TOOL_FUNCTION_BUILTIN_DEP}
            - TOOL_FUNCTION_EXTERNAL_DEP=${TOOL_FUNCTION_EXTERNAL_DEP}
            - ALLOW_BUILTIN_DEP=${ALLOW_BUILTIN_DEP}

            # STORAGE
            - STORAGE_TYPE=${STORAGE_TYPE}
            - BLOB_STORAGE_PATH=${BLOB_STORAGE_PATH}
            - S3_STORAGE_BUCKET_NAME=${S3_STORAGE_BUCKET_NAME}
            - S3_STORAGE_ACCESS_KEY_ID=${S3_STORAGE_ACCESS_KEY_ID}
            - S3_STORAGE_SECRET_ACCESS_KEY=${S3_STORAGE_SECRET_ACCESS_KEY}
            - S3_STORAGE_REGION=${S3_STORAGE_REGION}
            - S3_ENDPOINT_URL=${S3_ENDPOINT_URL}
            - S3_FORCE_PATH_STYLE=${S3_FORCE_PATH_STYLE}
            - GOOGLE_CLOUD_STORAGE_CREDENTIAL=${GOOGLE_CLOUD_STORAGE_CREDENTIAL}
            - GOOGLE_CLOUD_STORAGE_PROJ_ID=${GOOGLE_CLOUD_STORAGE_PROJ_ID}
            - GOOGLE_CLOUD_STORAGE_BUCKET_NAME=${GOOGLE_CLOUD_STORAGE_BUCKET_NAME}
            - GOOGLE_CLOUD_UNIFORM_BUCKET_ACCESS=${GOOGLE_CLOUD_UNIFORM_BUCKET_ACCESS}

            # SETTINGS
            - NUMBER_OF_PROXIES=${NUMBER_OF_PROXIES}
            - CORS_ORIGINS=${CORS_ORIGINS}
            - IFRAME_ORIGINS=${IFRAME_ORIGINS}
            - FLOWISE_FILE_SIZE_LIMIT=${FLOWISE_FILE_SIZE_LIMIT}
            - SHOW_COMMUNITY_NODES=${SHOW_COMMUNITY_NODES}
            - DISABLE_FLOWISE_TELEMETRY=${DISABLE_FLOWISE_TELEMETRY}
            - DISABLED_NODES=${DISABLED_NODES}
            - MODEL_LIST_CONFIG_JSON=${MODEL_LIST_CONFIG_JSON}

            # AUTH PARAMETERS
            - APP_URL=${APP_URL}
            - JWT_AUTH_TOKEN_SECRET=${JWT_AUTH_TOKEN_SECRET}
            - JWT_REFRESH_TOKEN_SECRET=${JWT_REFRESH_TOKEN_SECRET}
            - JWT_ISSUER=${JWT_ISSUER}
            - JWT_AUDIENCE=${JWT_AUDIENCE}
            - JWT_TOKEN_EXPIRY_IN_MINUTES=${JWT_TOKEN_EXPIRY_IN_MINUTES}
            - JWT_REFRESH_TOKEN_EXPIRY_IN_MINUTES=${JWT_REFRESH_TOKEN_EXPIRY_IN_MINUTES}
            - EXPIRE_AUTH_TOKENS_ON_RESTART=${EXPIRE_AUTH_TOKENS_ON_RESTART}
            - EXPRESS_SESSION_SECRET=${EXPRESS_SESSION_SECRET}
            - PASSWORD_RESET_TOKEN_EXPIRY_IN_MINS=${PASSWORD_RESET_TOKEN_EXPIRY_IN_MINS}
            - PASSWORD_SALT_HASH_ROUNDS=${PASSWORD_SALT_HASH_ROUNDS}
            - TOKEN_HASH_SECRET=${TOKEN_HASH_SECRET}
            - SECURE_COOKIES=${SECURE_COOKIES}

            # EMAIL
            - SMTP_HOST=${SMTP_HOST}
            - SMTP_PORT=${SMTP_PORT}
            - SMTP_USER=${SMTP_USER}
            - SMTP_PASSWORD=${SMTP_PASSWORD}
            - SMTP_SECURE=${SMTP_SECURE}
            - ALLOW_UNAUTHORIZED_CERTS=${ALLOW_UNAUTHORIZED_CERTS}
            - SENDER_EMAIL=${SENDER_EMAIL}

            # ENTERPRISE
            - LICENSE_URL=${LICENSE_URL}
            - FLOWISE_EE_LICENSE_KEY=${FLOWISE_EE_LICENSE_KEY}
            - OFFLINE=${OFFLINE}
            - INVITE_TOKEN_EXPIRY_IN_HOURS=${INVITE_TOKEN_EXPIRY_IN_HOURS}
            - WORKSPACE_INVITE_TEMPLATE_PATH=${WORKSPACE_INVITE_TEMPLATE_PATH}

            # METRICS COLLECTION
            - POSTHOG_PUBLIC_API_KEY=${POSTHOG_PUBLIC_API_KEY}
            - ENABLE_METRICS=${ENABLE_METRICS}
            - METRICS_PROVIDER=${METRICS_PROVIDER}
            - METRICS_INCLUDE_NODE_METRICS=${METRICS_INCLUDE_NODE_METRICS}
            - METRICS_SERVICE_NAME=${METRICS_SERVICE_NAME}
            - METRICS_OPEN_TELEMETRY_METRIC_ENDPOINT=${METRICS_OPEN_TELEMETRY_METRIC_ENDPOINT}
            - METRICS_OPEN_TELEMETRY_PROTOCOL=${METRICS_OPEN_TELEMETRY_PROTOCOL}
            - METRICS_OPEN_TELEMETRY_DEBUG=${METRICS_OPEN_TELEMETRY_DEBUG}

            # PROXY
            - GLOBAL_AGENT_HTTP_PROXY=${GLOBAL_AGENT_HTTP_PROXY}
            - GLOBAL_AGENT_HTTPS_PROXY=${GLOBAL_AGENT_HTTPS_PROXY}
            - GLOBAL_AGENT_NO_PROXY=${GLOBAL_AGENT_NO_PROXY}

            # --- Queue Configuration (Main Instance) ---
            - MODE=${MODE:-queue}
            - QUEUE_NAME=${QUEUE_NAME:-flowise-queue}
            - QUEUE_REDIS_EVENT_STREAM_MAX_LEN=${QUEUE_REDIS_EVENT_STREAM_MAX_LEN}
            - WORKER_CONCURRENCY=${WORKER_CONCURRENCY}
            - REMOVE_ON_AGE=${REMOVE_ON_AGE}
            - REMOVE_ON_COUNT=${REMOVE_ON_COUNT}
            - REDIS_URL=${REDIS_URL:-redis://redis:6379}
            - REDIS_HOST=${REDIS_HOST}
            - REDIS_PORT=${REDIS_PORT}
            - REDIS_USERNAME=${REDIS_USERNAME}
            - REDIS_PASSWORD=${REDIS_PASSWORD}
            - REDIS_TLS=${REDIS_TLS}
            - REDIS_CERT=${REDIS_CERT}
            - REDIS_KEY=${REDIS_KEY}
            - REDIS_CA=${REDIS_CA}
            - REDIS_KEEP_ALIVE=${REDIS_KEEP_ALIVE}
            - ENABLE_BULLMQ_DASHBOARD=${ENABLE_BULLMQ_DASHBOARD}

            # SECURITY
            - CUSTOM_MCP_SECURITY_CHECK=${CUSTOM_MCP_SECURITY_CHECK}
            - CUSTOM_MCP_PROTOCOL=${CUSTOM_MCP_PROTOCOL}
            - HTTP_DENY_LIST=${HTTP_DENY_LIST}
            - TRUST_PROXY=${TRUST_PROXY}
        healthcheck:
            test: ['CMD', 'curl', '-f', 'http://localhost:${PORT:-3000}/api/v1/ping']
            interval: 10s
            timeout: 5s
            retries: 5
            start_period: 30s
        entrypoint: /bin/sh -c "sleep 3; flowise start"
        depends_on:
            - redis
        networks:
            - flowise-net

    flowise-worker:
        image: flowiseai/flowise-worker:latest
        container_name: flowise-worker
        restart: always
        volumes:
            - ~/.flowise:/root/.flowise
        environment:
            # --- Essential Flowise Vars ---
            - WORKER_PORT=${WORKER_PORT:-5566}
            - DATABASE_PATH=${DATABASE_PATH:-/root/.flowise}
            - DATABASE_TYPE=${DATABASE_TYPE}
            - DATABASE_PORT=${DATABASE_PORT}
            - DATABASE_HOST=${DATABASE_HOST}
            - DATABASE_NAME=${DATABASE_NAME}
            - DATABASE_USER=${DATABASE_USER}
            - DATABASE_PASSWORD=${DATABASE_PASSWORD}
            - DATABASE_SSL=${DATABASE_SSL}
            - DATABASE_SSL_KEY_BASE64=${DATABASE_SSL_KEY_BASE64}

            # SECRET KEYS
            - SECRETKEY_STORAGE_TYPE=${SECRETKEY_STORAGE_TYPE}
            - SECRETKEY_PATH=${SECRETKEY_PATH}
            - FLOWISE_SECRETKEY_OVERWRITE=${FLOWISE_SECRETKEY_OVERWRITE}
            - SECRETKEY_AWS_ACCESS_KEY=${SECRETKEY_AWS_ACCESS_KEY}
            - SECRETKEY_AWS_SECRET_KEY=${SECRETKEY_AWS_SECRET_KEY}
            - SECRETKEY_AWS_REGION=${SECRETKEY_AWS_REGION}
            - SECRETKEY_AWS_NAME=${SECRETKEY_AWS_NAME}

            # LOGGING
            - DEBUG=${DEBUG}
            - LOG_PATH=${LOG_PATH}
            - LOG_LEVEL=${LOG_LEVEL}
            - LOG_SANITIZE_BODY_FIELDS=${LOG_SANITIZE_BODY_FIELDS}
            - LOG_SANITIZE_HEADER_FIELDS=${LOG_SANITIZE_HEADER_FIELDS}

            # CUSTOM TOOL/FUNCTION DEPENDENCIES
            - TOOL_FUNCTION_BUILTIN_DEP=${TOOL_FUNCTION_BUILTIN_DEP}
            - TOOL_FUNCTION_EXTERNAL_DEP=${TOOL_FUNCTION_EXTERNAL_DEP}
            - ALLOW_BUILTIN_DEP=${ALLOW_BUILTIN_DEP}

            # STORAGE
            - STORAGE_TYPE=${STORAGE_TYPE}
            - BLOB_STORAGE_PATH=${BLOB_STORAGE_PATH}
            - S3_STORAGE_BUCKET_NAME=${S3_STORAGE_BUCKET_NAME}
            - S3_STORAGE_ACCESS_KEY_ID=${S3_STORAGE_ACCESS_KEY_ID}
            - S3_STORAGE_SECRET_ACCESS_KEY=${S3_STORAGE_SECRET_ACCESS_KEY}
            - S3_STORAGE_REGION=${S3_STORAGE_REGION}
            - S3_ENDPOINT_URL=${S3_ENDPOINT_URL}
            - S3_FORCE_PATH_STYLE=${S3_FORCE_PATH_STYLE}
            - GOOGLE_CLOUD_STORAGE_CREDENTIAL=${GOOGLE_CLOUD_STORAGE_CREDENTIAL}
            - GOOGLE_CLOUD_STORAGE_PROJ_ID=${GOOGLE_CLOUD_STORAGE_PROJ_ID}
            - GOOGLE_CLOUD_STORAGE_BUCKET_NAME=${GOOGLE_CLOUD_STORAGE_BUCKET_NAME}
            - GOOGLE_CLOUD_UNIFORM_BUCKET_ACCESS=${GOOGLE_CLOUD_UNIFORM_BUCKET_ACCESS}

            # SETTINGS
            - NUMBER_OF_PROXIES=${NUMBER_OF_PROXIES}
            - CORS_ORIGINS=${CORS_ORIGINS}
            - IFRAME_ORIGINS=${IFRAME_ORIGINS}
            - FLOWISE_FILE_SIZE_LIMIT=${FLOWISE_FILE_SIZE_LIMIT}
            - SHOW_COMMUNITY_NODES=${SHOW_COMMUNITY_NODES}
|
||||
- DISABLE_FLOWISE_TELEMETRY=${DISABLE_FLOWISE_TELEMETRY}
|
||||
- DISABLED_NODES=${DISABLED_NODES}
|
||||
- MODEL_LIST_CONFIG_JSON=${MODEL_LIST_CONFIG_JSON}
|
||||
|
||||
# AUTH PARAMETERS
|
||||
- APP_URL=${APP_URL}
|
||||
- JWT_AUTH_TOKEN_SECRET=${JWT_AUTH_TOKEN_SECRET}
|
||||
- JWT_REFRESH_TOKEN_SECRET=${JWT_REFRESH_TOKEN_SECRET}
|
||||
- JWT_ISSUER=${JWT_ISSUER}
|
||||
- JWT_AUDIENCE=${JWT_AUDIENCE}
|
||||
- JWT_TOKEN_EXPIRY_IN_MINUTES=${JWT_TOKEN_EXPIRY_IN_MINUTES}
|
||||
- JWT_REFRESH_TOKEN_EXPIRY_IN_MINUTES=${JWT_REFRESH_TOKEN_EXPIRY_IN_MINUTES}
|
||||
- EXPIRE_AUTH_TOKENS_ON_RESTART=${EXPIRE_AUTH_TOKENS_ON_RESTART}
|
||||
- EXPRESS_SESSION_SECRET=${EXPRESS_SESSION_SECRET}
|
||||
- PASSWORD_RESET_TOKEN_EXPIRY_IN_MINS=${PASSWORD_RESET_TOKEN_EXPIRY_IN_MINS}
|
||||
- PASSWORD_SALT_HASH_ROUNDS=${PASSWORD_SALT_HASH_ROUNDS}
|
||||
- TOKEN_HASH_SECRET=${TOKEN_HASH_SECRET}
|
||||
- SECURE_COOKIES=${SECURE_COOKIES}
|
||||
|
||||
# EMAIL
|
||||
- SMTP_HOST=${SMTP_HOST}
|
||||
- SMTP_PORT=${SMTP_PORT}
|
||||
- SMTP_USER=${SMTP_USER}
|
||||
- SMTP_PASSWORD=${SMTP_PASSWORD}
|
||||
- SMTP_SECURE=${SMTP_SECURE}
|
||||
- ALLOW_UNAUTHORIZED_CERTS=${ALLOW_UNAUTHORIZED_CERTS}
|
||||
- SENDER_EMAIL=${SENDER_EMAIL}
|
||||
|
||||
# ENTERPRISE
|
||||
- LICENSE_URL=${LICENSE_URL}
|
||||
- FLOWISE_EE_LICENSE_KEY=${FLOWISE_EE_LICENSE_KEY}
|
||||
- OFFLINE=${OFFLINE}
|
||||
- INVITE_TOKEN_EXPIRY_IN_HOURS=${INVITE_TOKEN_EXPIRY_IN_HOURS}
|
||||
- WORKSPACE_INVITE_TEMPLATE_PATH=${WORKSPACE_INVITE_TEMPLATE_PATH}
|
||||
|
||||
# METRICS COLLECTION
|
||||
- POSTHOG_PUBLIC_API_KEY=${POSTHOG_PUBLIC_API_KEY}
|
||||
- ENABLE_METRICS=${ENABLE_METRICS}
|
||||
- METRICS_PROVIDER=${METRICS_PROVIDER}
|
||||
- METRICS_INCLUDE_NODE_METRICS=${METRICS_INCLUDE_NODE_METRICS}
|
||||
- METRICS_SERVICE_NAME=${METRICS_SERVICE_NAME}
|
||||
- METRICS_OPEN_TELEMETRY_METRIC_ENDPOINT=${METRICS_OPEN_TELEMETRY_METRIC_ENDPOINT}
|
||||
- METRICS_OPEN_TELEMETRY_PROTOCOL=${METRICS_OPEN_TELEMETRY_PROTOCOL}
|
||||
- METRICS_OPEN_TELEMETRY_DEBUG=${METRICS_OPEN_TELEMETRY_DEBUG}
|
||||
|
||||
# PROXY
|
||||
- GLOBAL_AGENT_HTTP_PROXY=${GLOBAL_AGENT_HTTP_PROXY}
|
||||
- GLOBAL_AGENT_HTTPS_PROXY=${GLOBAL_AGENT_HTTPS_PROXY}
|
||||
- GLOBAL_AGENT_NO_PROXY=${GLOBAL_AGENT_NO_PROXY}
|
||||
|
||||
# --- Queue Configuration (Worker Instance) ---
|
||||
- MODE=${MODE:-queue}
|
||||
- QUEUE_NAME=${QUEUE_NAME:-flowise-queue}
|
||||
- QUEUE_REDIS_EVENT_STREAM_MAX_LEN=${QUEUE_REDIS_EVENT_STREAM_MAX_LEN}
|
||||
- WORKER_CONCURRENCY=${WORKER_CONCURRENCY}
|
||||
- REMOVE_ON_AGE=${REMOVE_ON_AGE}
|
||||
- REMOVE_ON_COUNT=${REMOVE_ON_COUNT}
|
||||
- REDIS_URL=${REDIS_URL:-redis://redis:6379}
|
||||
- REDIS_HOST=${REDIS_HOST}
|
||||
- REDIS_PORT=${REDIS_PORT}
|
||||
- REDIS_USERNAME=${REDIS_USERNAME}
|
||||
- REDIS_PASSWORD=${REDIS_PASSWORD}
|
||||
- REDIS_TLS=${REDIS_TLS}
|
||||
- REDIS_CERT=${REDIS_CERT}
|
||||
- REDIS_KEY=${REDIS_KEY}
|
||||
- REDIS_CA=${REDIS_CA}
|
||||
- REDIS_KEEP_ALIVE=${REDIS_KEEP_ALIVE}
|
||||
- ENABLE_BULLMQ_DASHBOARD=${ENABLE_BULLMQ_DASHBOARD}
|
||||
|
||||
# SECURITY
|
||||
- CUSTOM_MCP_SECURITY_CHECK=${CUSTOM_MCP_SECURITY_CHECK}
|
||||
- CUSTOM_MCP_PROTOCOL=${CUSTOM_MCP_PROTOCOL}
|
||||
- HTTP_DENY_LIST=${HTTP_DENY_LIST}
|
||||
- TRUST_PROXY=${TRUST_PROXY}
|
||||
healthcheck:
|
||||
test: ['CMD', 'curl', '-f', 'http://localhost:${WORKER_PORT:-5566}/healthz']
|
||||
interval: 10s
|
||||
timeout: 5s
|
||||
retries: 5
|
||||
start_period: 30s
|
||||
entrypoint: /bin/sh -c "node /app/healthcheck/healthcheck.js & sleep 5 && pnpm run start-worker"
|
||||
depends_on:
|
||||
- redis
|
||||
- flowise
|
||||
networks:
|
||||
- flowise-net
|
||||
|
||||
volumes:
|
||||
redis_data:
|
||||
driver: local
|
||||
|
||||
networks:
|
||||
flowise-net:
|
||||
driver: bridge
|
||||
|
|
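With the services, volumes, and networks above in place, the stack is driven entirely by the surrounding `.env` file. A minimal sketch of the day-to-day commands (service names as defined above):

```bash
# Start Redis, the main server, and the worker in the background
docker compose up -d

# Follow logs for the main server and the worker
docker compose logs -f flowise flowise-worker

# Stop the stack without removing containers
docker compose stop
```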
@@ -0,0 +1,71 @@
version: '3.1'

services:
redis:
image: redis:alpine
container_name: flowise-redis
ports:
- '6379:6379'
volumes:
- redis_data:/data
networks:
- flowise-net

flowise:
container_name: flowise-main
build:
context: .. # Build using the Dockerfile in the root directory
dockerfile: docker/Dockerfile
ports:
- '${PORT}:${PORT}'
volumes:
# Mount local .flowise to container's default location
- ../.flowise:/root/.flowise
environment:
# --- Essential Flowise Vars ---
- PORT=${PORT:-3000}
- DATABASE_PATH=/root/.flowise
- SECRETKEY_PATH=/root/.flowise
- LOG_PATH=/root/.flowise/logs
- BLOB_STORAGE_PATH=/root/.flowise/storage
# --- Queue Vars (Main Instance) ---
- MODE=queue
- QUEUE_NAME=flowise-queue # Ensure this matches the worker
- REDIS_URL=redis://redis:6379 # Use service name 'redis'
depends_on:
- redis
networks:
- flowise-net

flowise-worker:
container_name: flowise-worker
build:
context: .. # Build context is still the root
dockerfile: docker/worker/Dockerfile # Ensure this path is correct
volumes:
# Mount same local .flowise to worker
- ../.flowise:/root/.flowise
environment:
# --- Essential Flowise Vars ---
- WORKER_PORT=${WORKER_PORT:-5566} # Port for worker healthcheck
- DATABASE_PATH=/root/.flowise
- SECRETKEY_PATH=/root/.flowise
- LOG_PATH=/root/.flowise/logs
- BLOB_STORAGE_PATH=/root/.flowise/storage
# --- Queue Vars (Worker Instance) ---
- MODE=queue
- QUEUE_NAME=flowise-queue # Ensure this matches the main server
- REDIS_URL=redis://redis:6379 # Use service name 'redis'
depends_on:
- redis
- flowise
networks:
- flowise-net

volumes:
redis_data:
driver: local

networks:
flowise-net:
driver: bridge
@@ -2,16 +2,12 @@ version: '3.1'

services:
flowise:
image: flowiseai/flowise
image: flowiseai/flowise:latest
restart: always
environment:
- PORT=${PORT}
- CORS_ORIGINS=${CORS_ORIGINS}
- IFRAME_ORIGINS=${IFRAME_ORIGINS}
- FLOWISE_USERNAME=${FLOWISE_USERNAME}
- FLOWISE_PASSWORD=${FLOWISE_PASSWORD}
- FLOWISE_FILE_SIZE_LIMIT=${FLOWISE_FILE_SIZE_LIMIT}
- DEBUG=${DEBUG}

# DATABASE
- DATABASE_PATH=${DATABASE_PATH}
- DATABASE_TYPE=${DATABASE_TYPE}
- DATABASE_PORT=${DATABASE_PORT}

@@ -21,35 +17,122 @@ services:
- DATABASE_PASSWORD=${DATABASE_PASSWORD}
- DATABASE_SSL=${DATABASE_SSL}
- DATABASE_SSL_KEY_BASE64=${DATABASE_SSL_KEY_BASE64}
- APIKEY_STORAGE_TYPE=${APIKEY_STORAGE_TYPE}
- APIKEY_PATH=${APIKEY_PATH}

# SECRET KEYS
- SECRETKEY_STORAGE_TYPE=${SECRETKEY_STORAGE_TYPE}
- SECRETKEY_PATH=${SECRETKEY_PATH}
- FLOWISE_SECRETKEY_OVERWRITE=${FLOWISE_SECRETKEY_OVERWRITE}
- LOG_LEVEL=${LOG_LEVEL}
- SECRETKEY_AWS_ACCESS_KEY=${SECRETKEY_AWS_ACCESS_KEY}
- SECRETKEY_AWS_SECRET_KEY=${SECRETKEY_AWS_SECRET_KEY}
- SECRETKEY_AWS_REGION=${SECRETKEY_AWS_REGION}
- SECRETKEY_AWS_NAME=${SECRETKEY_AWS_NAME}

# LOGGING
- DEBUG=${DEBUG}
- LOG_PATH=${LOG_PATH}
- LOG_LEVEL=${LOG_LEVEL}
- LOG_SANITIZE_BODY_FIELDS=${LOG_SANITIZE_BODY_FIELDS}
- LOG_SANITIZE_HEADER_FIELDS=${LOG_SANITIZE_HEADER_FIELDS}

# CUSTOM TOOL/FUNCTION DEPENDENCIES
- TOOL_FUNCTION_BUILTIN_DEP=${TOOL_FUNCTION_BUILTIN_DEP}
- TOOL_FUNCTION_EXTERNAL_DEP=${TOOL_FUNCTION_EXTERNAL_DEP}
- ALLOW_BUILTIN_DEP=${ALLOW_BUILTIN_DEP}

# STORAGE
- STORAGE_TYPE=${STORAGE_TYPE}
- BLOB_STORAGE_PATH=${BLOB_STORAGE_PATH}
- S3_STORAGE_BUCKET_NAME=${S3_STORAGE_BUCKET_NAME}
- S3_STORAGE_ACCESS_KEY_ID=${S3_STORAGE_ACCESS_KEY_ID}
- S3_STORAGE_SECRET_ACCESS_KEY=${S3_STORAGE_SECRET_ACCESS_KEY}
- S3_STORAGE_REGION=${S3_STORAGE_REGION}
- S3_ENDPOINT_URL=${S3_ENDPOINT_URL}
- S3_FORCE_PATH_STYLE=${S3_FORCE_PATH_STYLE}
- GOOGLE_CLOUD_STORAGE_CREDENTIAL=${GOOGLE_CLOUD_STORAGE_CREDENTIAL}
- GOOGLE_CLOUD_STORAGE_PROJ_ID=${GOOGLE_CLOUD_STORAGE_PROJ_ID}
- GOOGLE_CLOUD_STORAGE_BUCKET_NAME=${GOOGLE_CLOUD_STORAGE_BUCKET_NAME}
- GOOGLE_CLOUD_UNIFORM_BUCKET_ACCESS=${GOOGLE_CLOUD_UNIFORM_BUCKET_ACCESS}

# SETTINGS
- NUMBER_OF_PROXIES=${NUMBER_OF_PROXIES}
- CORS_ORIGINS=${CORS_ORIGINS}
- IFRAME_ORIGINS=${IFRAME_ORIGINS}
- FLOWISE_FILE_SIZE_LIMIT=${FLOWISE_FILE_SIZE_LIMIT}
- SHOW_COMMUNITY_NODES=${SHOW_COMMUNITY_NODES}
- DISABLE_FLOWISE_TELEMETRY=${DISABLE_FLOWISE_TELEMETRY}
- DISABLED_NODES=${DISABLED_NODES}
- MODEL_LIST_CONFIG_JSON=${MODEL_LIST_CONFIG_JSON}

# AUTH PARAMETERS
- APP_URL=${APP_URL}
- JWT_AUTH_TOKEN_SECRET=${JWT_AUTH_TOKEN_SECRET}
- JWT_REFRESH_TOKEN_SECRET=${JWT_REFRESH_TOKEN_SECRET}
- JWT_ISSUER=${JWT_ISSUER}
- JWT_AUDIENCE=${JWT_AUDIENCE}
- JWT_TOKEN_EXPIRY_IN_MINUTES=${JWT_TOKEN_EXPIRY_IN_MINUTES}
- JWT_REFRESH_TOKEN_EXPIRY_IN_MINUTES=${JWT_REFRESH_TOKEN_EXPIRY_IN_MINUTES}
- EXPIRE_AUTH_TOKENS_ON_RESTART=${EXPIRE_AUTH_TOKENS_ON_RESTART}
- EXPRESS_SESSION_SECRET=${EXPRESS_SESSION_SECRET}
- PASSWORD_RESET_TOKEN_EXPIRY_IN_MINS=${PASSWORD_RESET_TOKEN_EXPIRY_IN_MINS}
- PASSWORD_SALT_HASH_ROUNDS=${PASSWORD_SALT_HASH_ROUNDS}
- TOKEN_HASH_SECRET=${TOKEN_HASH_SECRET}
- SECURE_COOKIES=${SECURE_COOKIES}

# EMAIL
- SMTP_HOST=${SMTP_HOST}
- SMTP_PORT=${SMTP_PORT}
- SMTP_USER=${SMTP_USER}
- SMTP_PASSWORD=${SMTP_PASSWORD}
- SMTP_SECURE=${SMTP_SECURE}
- ALLOW_UNAUTHORIZED_CERTS=${ALLOW_UNAUTHORIZED_CERTS}
- SENDER_EMAIL=${SENDER_EMAIL}

# ENTERPRISE
- LICENSE_URL=${LICENSE_URL}
- FLOWISE_EE_LICENSE_KEY=${FLOWISE_EE_LICENSE_KEY}
- OFFLINE=${OFFLINE}
- INVITE_TOKEN_EXPIRY_IN_HOURS=${INVITE_TOKEN_EXPIRY_IN_HOURS}
- WORKSPACE_INVITE_TEMPLATE_PATH=${WORKSPACE_INVITE_TEMPLATE_PATH}

# METRICS COLLECTION
- POSTHOG_PUBLIC_API_KEY=${POSTHOG_PUBLIC_API_KEY}
- ENABLE_METRICS=${ENABLE_METRICS}
- METRICS_PROVIDER=${METRICS_PROVIDER}
- METRICS_INCLUDE_NODE_METRICS=${METRICS_INCLUDE_NODE_METRICS}
- METRICS_SERVICE_NAME=${METRICS_SERVICE_NAME}
- METRICS_OPEN_TELEMETRY_METRIC_ENDPOINT=${METRICS_OPEN_TELEMETRY_METRIC_ENDPOINT}
- METRICS_OPEN_TELEMETRY_PROTOCOL=${METRICS_OPEN_TELEMETRY_PROTOCOL}
- METRICS_OPEN_TELEMETRY_DEBUG=${METRICS_OPEN_TELEMETRY_DEBUG}

# PROXY
- GLOBAL_AGENT_HTTP_PROXY=${GLOBAL_AGENT_HTTP_PROXY}
- GLOBAL_AGENT_HTTPS_PROXY=${GLOBAL_AGENT_HTTPS_PROXY}
- GLOBAL_AGENT_NO_PROXY=${GLOBAL_AGENT_NO_PROXY}
- DISABLED_NODES=${DISABLED_NODES}

# QUEUE CONFIGURATION
- MODE=${MODE}
- WORKER_CONCURRENCY=${WORKER_CONCURRENCY}
- QUEUE_NAME=${QUEUE_NAME}
- QUEUE_REDIS_EVENT_STREAM_MAX_LEN=${QUEUE_REDIS_EVENT_STREAM_MAX_LEN}
- WORKER_CONCURRENCY=${WORKER_CONCURRENCY}
- REMOVE_ON_AGE=${REMOVE_ON_AGE}
- REMOVE_ON_COUNT=${REMOVE_ON_COUNT}
- REDIS_URL=${REDIS_URL}
- REDIS_HOST=${REDIS_HOST}
- REDIS_PORT=${REDIS_PORT}
- REDIS_PASSWORD=${REDIS_PASSWORD}
- REDIS_USERNAME=${REDIS_USERNAME}
- REDIS_PASSWORD=${REDIS_PASSWORD}
- REDIS_TLS=${REDIS_TLS}
- REDIS_CERT=${REDIS_CERT}
- REDIS_KEY=${REDIS_KEY}
- REDIS_CA=${REDIS_CA}
- REDIS_KEEP_ALIVE=${REDIS_KEEP_ALIVE}
- ENABLE_BULLMQ_DASHBOARD=${ENABLE_BULLMQ_DASHBOARD}

# SECURITY
- CUSTOM_MCP_SECURITY_CHECK=${CUSTOM_MCP_SECURITY_CHECK}
- CUSTOM_MCP_PROTOCOL=${CUSTOM_MCP_PROTOCOL}
- HTTP_DENY_LIST=${HTTP_DENY_LIST}
- TRUST_PROXY=${TRUST_PROXY}
ports:
- '${PORT}:${PORT}'
healthcheck:
@@ -0,0 +1,180 @@
WORKER_PORT=5566

# APIKEY_PATH=/your_apikey_path/.flowise # (will be deprecated by end of 2025)

############################################################################################################
############################################## DATABASE ####################################################
############################################################################################################

DATABASE_PATH=/root/.flowise
# DATABASE_TYPE=postgres
# DATABASE_PORT=5432
# DATABASE_HOST=""
# DATABASE_NAME=flowise
# DATABASE_USER=root
# DATABASE_PASSWORD=mypassword
# DATABASE_SSL=true
# DATABASE_REJECT_UNAUTHORIZED=true
# DATABASE_SSL_KEY_BASE64=<Self signed certificate in BASE64>

############################################################################################################
############################################## SECRET KEYS #################################################
############################################################################################################

# SECRETKEY_STORAGE_TYPE=local #(local | aws)
SECRETKEY_PATH=/root/.flowise
# FLOWISE_SECRETKEY_OVERWRITE=myencryptionkey # (if you want to overwrite the secret key)
# SECRETKEY_AWS_ACCESS_KEY=<your-access-key>
# SECRETKEY_AWS_SECRET_KEY=<your-secret-key>
# SECRETKEY_AWS_REGION=us-west-2
# SECRETKEY_AWS_NAME=FlowiseEncryptionKey

############################################################################################################
############################################## LOGGING #####################################################
############################################################################################################

# DEBUG=true
LOG_PATH=/root/.flowise/logs
# LOG_LEVEL=info #(error | warn | info | verbose | debug)
# LOG_SANITIZE_BODY_FIELDS=password,pwd,pass,secret,token,apikey,api_key,accesstoken,access_token,refreshtoken,refresh_token,clientsecret,client_secret,privatekey,private_key,secretkey,secret_key,auth,authorization,credential,credentials
# LOG_SANITIZE_HEADER_FIELDS=authorization,x-api-key,x-auth-token,cookie
# TOOL_FUNCTION_BUILTIN_DEP=crypto,fs
# TOOL_FUNCTION_EXTERNAL_DEP=moment,lodash
# ALLOW_BUILTIN_DEP=false

############################################################################################################
############################################## STORAGE #####################################################
############################################################################################################

# STORAGE_TYPE=local (local | s3 | gcs)
BLOB_STORAGE_PATH=/root/.flowise/storage
# S3_STORAGE_BUCKET_NAME=flowise
# S3_STORAGE_ACCESS_KEY_ID=<your-access-key>
# S3_STORAGE_SECRET_ACCESS_KEY=<your-secret-key>
# S3_STORAGE_REGION=us-west-2
# S3_ENDPOINT_URL=<custom-s3-endpoint-url>
# S3_FORCE_PATH_STYLE=false
# GOOGLE_CLOUD_STORAGE_CREDENTIAL=/the/keyfilename/path
# GOOGLE_CLOUD_STORAGE_PROJ_ID=<your-gcp-project-id>
# GOOGLE_CLOUD_STORAGE_BUCKET_NAME=<the-bucket-name>
# GOOGLE_CLOUD_UNIFORM_BUCKET_ACCESS=true

############################################################################################################
############################################## SETTINGS ####################################################
############################################################################################################

# NUMBER_OF_PROXIES=1
# CORS_ORIGINS=*
# IFRAME_ORIGINS=*
# FLOWISE_FILE_SIZE_LIMIT=50mb
# SHOW_COMMUNITY_NODES=true
# DISABLE_FLOWISE_TELEMETRY=true
# DISABLED_NODES=bufferMemory,chatOpenAI (comma separated list of node names to disable)
# Uncomment the following line to enable model list config, load the list of models from your local config file
# see https://raw.githubusercontent.com/FlowiseAI/Flowise/main/packages/components/models.json for the format
# MODEL_LIST_CONFIG_JSON=/your_model_list_config_file_path

############################################################################################################
############################################ AUTH PARAMETERS ###############################################
############################################################################################################

# APP_URL=http://localhost:3000

# SMTP_HOST=smtp.host.com
# SMTP_PORT=465
# SMTP_USER=smtp_user
# SMTP_PASSWORD=smtp_password
# SMTP_SECURE=true
# ALLOW_UNAUTHORIZED_CERTS=false
# SENDER_EMAIL=team@example.com

JWT_AUTH_TOKEN_SECRET='AABBCCDDAABBCCDDAABBCCDDAABBCCDDAABBCCDD'
JWT_REFRESH_TOKEN_SECRET='AABBCCDDAABBCCDDAABBCCDDAABBCCDDAABBCCDD'
JWT_ISSUER='ISSUER'
JWT_AUDIENCE='AUDIENCE'
JWT_TOKEN_EXPIRY_IN_MINUTES=360
JWT_REFRESH_TOKEN_EXPIRY_IN_MINUTES=43200
# EXPIRE_AUTH_TOKENS_ON_RESTART=true # (if you need to expire all tokens on app restart)
# EXPRESS_SESSION_SECRET=flowise
# SECURE_COOKIES=

# INVITE_TOKEN_EXPIRY_IN_HOURS=24
# PASSWORD_RESET_TOKEN_EXPIRY_IN_MINS=15
# PASSWORD_SALT_HASH_ROUNDS=10
# TOKEN_HASH_SECRET='popcorn'

# WORKSPACE_INVITE_TEMPLATE_PATH=/path/to/custom/workspace_invite.hbs

############################################################################################################
############################################# ENTERPRISE ###################################################
############################################################################################################

# LICENSE_URL=
# FLOWISE_EE_LICENSE_KEY=
# OFFLINE=

############################################################################################################
########################################### METRICS COLLECTION #############################################
############################################################################################################

# POSTHOG_PUBLIC_API_KEY=your_posthog_public_api_key

# ENABLE_METRICS=false
# METRICS_PROVIDER=prometheus # prometheus | open_telemetry
# METRICS_INCLUDE_NODE_METRICS=true # default is true
# METRICS_SERVICE_NAME=FlowiseAI

# ONLY NEEDED if METRICS_PROVIDER=open_telemetry
# METRICS_OPEN_TELEMETRY_METRIC_ENDPOINT=http://localhost:4318/v1/metrics
# METRICS_OPEN_TELEMETRY_PROTOCOL=http # http | grpc | proto (default is http)
# METRICS_OPEN_TELEMETRY_DEBUG=true # default is false

############################################################################################################
############################################### PROXY ######################################################
############################################################################################################

# Uncomment the following lines to enable global agent proxy, see https://www.npmjs.com/package/global-agent for more details
# GLOBAL_AGENT_HTTP_PROXY=CorporateHttpProxyUrl
# GLOBAL_AGENT_HTTPS_PROXY=CorporateHttpsProxyUrl
# GLOBAL_AGENT_NO_PROXY=ExceptionHostsToBypassProxyIfNeeded

############################################################################################################
########################################### QUEUE CONFIGURATION ############################################
############################################################################################################

# MODE=queue #(queue | main)
# QUEUE_NAME=flowise-queue
# QUEUE_REDIS_EVENT_STREAM_MAX_LEN=100000
# WORKER_CONCURRENCY=100000
# REMOVE_ON_AGE=86400
# REMOVE_ON_COUNT=10000
# REDIS_URL=
# REDIS_HOST=localhost
# REDIS_PORT=6379
# REDIS_USERNAME=
# REDIS_PASSWORD=
# REDIS_TLS=
# REDIS_CERT=
# REDIS_KEY=
# REDIS_CA=
# REDIS_KEEP_ALIVE=
# ENABLE_BULLMQ_DASHBOARD=

############################################################################################################
############################################## SECURITY ####################################################
############################################################################################################

# HTTP_DENY_LIST=
# CUSTOM_MCP_SECURITY_CHECK=true
# CUSTOM_MCP_PROTOCOL=sse #(stdio | sse)
# TRUST_PROXY=true #(true | false | 1 | loopback | linklocal | uniquelocal | IP addresses | loopback, IP addresses)
@@ -0,0 +1,49 @@
FROM node:20-alpine

RUN apk add --update libc6-compat python3 make g++
# needed for pdfjs-dist
RUN apk add --no-cache build-base cairo-dev pango-dev

# Install Chromium and curl for container-level health checks
RUN apk add --no-cache chromium curl

# Install PNPM globally
RUN npm install -g pnpm

ENV PUPPETEER_SKIP_DOWNLOAD=true
ENV PUPPETEER_EXECUTABLE_PATH=/usr/bin/chromium-browser

ENV NODE_OPTIONS=--max-old-space-size=8192

WORKDIR /usr/src

# Copy app source
COPY . .

RUN pnpm install

RUN pnpm build

# --- Healthcheck Setup ---

WORKDIR /app/healthcheck

COPY docker/worker/healthcheck/package.json .

RUN npm install --omit=dev

COPY docker/worker/healthcheck/healthcheck.js .

# --- End Healthcheck Setup ---

# Set the main working directory back
WORKDIR /usr/src

# Environment variables for port configuration
ENV WORKER_PORT=5566

# Expose port (can be overridden by env var)
EXPOSE ${WORKER_PORT}

# Start healthcheck in background and flowise worker in foreground
CMD ["/bin/sh", "-c", "node /app/healthcheck/healthcheck.js & sleep 5 && pnpm run start-worker"]
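The image the `CMD` above ships can also be built by hand; a minimal sketch from the repository root (the tag is illustrative):

```bash
docker build -f docker/worker/Dockerfile -t flowise-worker:local .
```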
@@ -18,7 +18,11 @@ Here’s an overview of the process:

## Setting up Worker:

1. Copy paste the same `.env` file used to setup main server. Change the `PORT` to other available port numbers. Ex: 5566
2. `docker compose up -d`
3. Open [http://localhost:5566](http://localhost:5566)
1. Navigate to the `docker/worker` folder
2. In `.env.example`, set up all the necessary env variables under `QUEUE CONFIGURATION`, as sketched below. The worker's env variables must match the main server's. Change `WORKER_PORT` to another available port number to listen for healthchecks. Ex: 5566
3. `docker compose up -d`
4. You can bring the worker container down with `docker compose stop`
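A minimal sketch of the queue-related entries such a worker `.env` might contain (values illustrative; the queue name and Redis URL must mirror the main server's):

```
MODE=queue
QUEUE_NAME=flowise-queue
REDIS_URL=redis://redis:6379
WORKER_PORT=5566
```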

## Entrypoint:

Unlike the main server image, which uses `flowise start`, the worker's entrypoint is `pnpm run start-worker`. This is because the worker's [Dockerfile](./Dockerfile) builds the image from source files via `pnpm build` instead of installing from the npm registry via `RUN npm install -g flowise`.
@@ -2,16 +2,12 @@ version: '3.1'

services:
flowise:
image: flowiseai/flowise
image: flowiseai/flowise-worker:latest
restart: always
environment:
- PORT=${PORT}
- CORS_ORIGINS=${CORS_ORIGINS}
- IFRAME_ORIGINS=${IFRAME_ORIGINS}
- FLOWISE_USERNAME=${FLOWISE_USERNAME}
- FLOWISE_PASSWORD=${FLOWISE_PASSWORD}
- FLOWISE_FILE_SIZE_LIMIT=${FLOWISE_FILE_SIZE_LIMIT}
- DEBUG=${DEBUG}
- WORKER_PORT=${WORKER_PORT:-5566}

# DATABASE
- DATABASE_PATH=${DATABASE_PATH}
- DATABASE_TYPE=${DATABASE_TYPE}
- DATABASE_PORT=${DATABASE_PORT}

@@ -21,37 +17,130 @@ services:
- DATABASE_PASSWORD=${DATABASE_PASSWORD}
- DATABASE_SSL=${DATABASE_SSL}
- DATABASE_SSL_KEY_BASE64=${DATABASE_SSL_KEY_BASE64}
- APIKEY_STORAGE_TYPE=${APIKEY_STORAGE_TYPE}
- APIKEY_PATH=${APIKEY_PATH}

# SECRET KEYS
- SECRETKEY_STORAGE_TYPE=${SECRETKEY_STORAGE_TYPE}
- SECRETKEY_PATH=${SECRETKEY_PATH}
- FLOWISE_SECRETKEY_OVERWRITE=${FLOWISE_SECRETKEY_OVERWRITE}
- LOG_LEVEL=${LOG_LEVEL}
- SECRETKEY_AWS_ACCESS_KEY=${SECRETKEY_AWS_ACCESS_KEY}
- SECRETKEY_AWS_SECRET_KEY=${SECRETKEY_AWS_SECRET_KEY}
- SECRETKEY_AWS_REGION=${SECRETKEY_AWS_REGION}
- SECRETKEY_AWS_NAME=${SECRETKEY_AWS_NAME}

# LOGGING
- DEBUG=${DEBUG}
- LOG_PATH=${LOG_PATH}
- LOG_LEVEL=${LOG_LEVEL}
- LOG_SANITIZE_BODY_FIELDS=${LOG_SANITIZE_BODY_FIELDS}
- LOG_SANITIZE_HEADER_FIELDS=${LOG_SANITIZE_HEADER_FIELDS}

# CUSTOM TOOL/FUNCTION DEPENDENCIES
- TOOL_FUNCTION_BUILTIN_DEP=${TOOL_FUNCTION_BUILTIN_DEP}
- TOOL_FUNCTION_EXTERNAL_DEP=${TOOL_FUNCTION_EXTERNAL_DEP}
- ALLOW_BUILTIN_DEP=${ALLOW_BUILTIN_DEP}

# STORAGE
- STORAGE_TYPE=${STORAGE_TYPE}
- BLOB_STORAGE_PATH=${BLOB_STORAGE_PATH}
- S3_STORAGE_BUCKET_NAME=${S3_STORAGE_BUCKET_NAME}
- S3_STORAGE_ACCESS_KEY_ID=${S3_STORAGE_ACCESS_KEY_ID}
- S3_STORAGE_SECRET_ACCESS_KEY=${S3_STORAGE_SECRET_ACCESS_KEY}
- S3_STORAGE_REGION=${S3_STORAGE_REGION}
- S3_ENDPOINT_URL=${S3_ENDPOINT_URL}
- S3_FORCE_PATH_STYLE=${S3_FORCE_PATH_STYLE}
- GOOGLE_CLOUD_STORAGE_CREDENTIAL=${GOOGLE_CLOUD_STORAGE_CREDENTIAL}
- GOOGLE_CLOUD_STORAGE_PROJ_ID=${GOOGLE_CLOUD_STORAGE_PROJ_ID}
- GOOGLE_CLOUD_STORAGE_BUCKET_NAME=${GOOGLE_CLOUD_STORAGE_BUCKET_NAME}
- GOOGLE_CLOUD_UNIFORM_BUCKET_ACCESS=${GOOGLE_CLOUD_UNIFORM_BUCKET_ACCESS}

# SETTINGS
- NUMBER_OF_PROXIES=${NUMBER_OF_PROXIES}
- CORS_ORIGINS=${CORS_ORIGINS}
- IFRAME_ORIGINS=${IFRAME_ORIGINS}
- FLOWISE_FILE_SIZE_LIMIT=${FLOWISE_FILE_SIZE_LIMIT}
- SHOW_COMMUNITY_NODES=${SHOW_COMMUNITY_NODES}
- DISABLE_FLOWISE_TELEMETRY=${DISABLE_FLOWISE_TELEMETRY}
- DISABLED_NODES=${DISABLED_NODES}
- MODEL_LIST_CONFIG_JSON=${MODEL_LIST_CONFIG_JSON}

# AUTH PARAMETERS
- APP_URL=${APP_URL}
- JWT_AUTH_TOKEN_SECRET=${JWT_AUTH_TOKEN_SECRET}
- JWT_REFRESH_TOKEN_SECRET=${JWT_REFRESH_TOKEN_SECRET}
- JWT_ISSUER=${JWT_ISSUER}
- JWT_AUDIENCE=${JWT_AUDIENCE}
- JWT_TOKEN_EXPIRY_IN_MINUTES=${JWT_TOKEN_EXPIRY_IN_MINUTES}
- JWT_REFRESH_TOKEN_EXPIRY_IN_MINUTES=${JWT_REFRESH_TOKEN_EXPIRY_IN_MINUTES}
- EXPIRE_AUTH_TOKENS_ON_RESTART=${EXPIRE_AUTH_TOKENS_ON_RESTART}
- EXPRESS_SESSION_SECRET=${EXPRESS_SESSION_SECRET}
- PASSWORD_RESET_TOKEN_EXPIRY_IN_MINS=${PASSWORD_RESET_TOKEN_EXPIRY_IN_MINS}
- PASSWORD_SALT_HASH_ROUNDS=${PASSWORD_SALT_HASH_ROUNDS}
- TOKEN_HASH_SECRET=${TOKEN_HASH_SECRET}
- SECURE_COOKIES=${SECURE_COOKIES}

# EMAIL
- SMTP_HOST=${SMTP_HOST}
- SMTP_PORT=${SMTP_PORT}
- SMTP_USER=${SMTP_USER}
- SMTP_PASSWORD=${SMTP_PASSWORD}
- SMTP_SECURE=${SMTP_SECURE}
- ALLOW_UNAUTHORIZED_CERTS=${ALLOW_UNAUTHORIZED_CERTS}
- SENDER_EMAIL=${SENDER_EMAIL}

# ENTERPRISE
- LICENSE_URL=${LICENSE_URL}
- FLOWISE_EE_LICENSE_KEY=${FLOWISE_EE_LICENSE_KEY}
- OFFLINE=${OFFLINE}
- INVITE_TOKEN_EXPIRY_IN_HOURS=${INVITE_TOKEN_EXPIRY_IN_HOURS}
- WORKSPACE_INVITE_TEMPLATE_PATH=${WORKSPACE_INVITE_TEMPLATE_PATH}

# METRICS COLLECTION
- POSTHOG_PUBLIC_API_KEY=${POSTHOG_PUBLIC_API_KEY}
- ENABLE_METRICS=${ENABLE_METRICS}
- METRICS_PROVIDER=${METRICS_PROVIDER}
- METRICS_INCLUDE_NODE_METRICS=${METRICS_INCLUDE_NODE_METRICS}
- METRICS_SERVICE_NAME=${METRICS_SERVICE_NAME}
- METRICS_OPEN_TELEMETRY_METRIC_ENDPOINT=${METRICS_OPEN_TELEMETRY_METRIC_ENDPOINT}
- METRICS_OPEN_TELEMETRY_PROTOCOL=${METRICS_OPEN_TELEMETRY_PROTOCOL}
- METRICS_OPEN_TELEMETRY_DEBUG=${METRICS_OPEN_TELEMETRY_DEBUG}

# PROXY
- GLOBAL_AGENT_HTTP_PROXY=${GLOBAL_AGENT_HTTP_PROXY}
- GLOBAL_AGENT_HTTPS_PROXY=${GLOBAL_AGENT_HTTPS_PROXY}
- GLOBAL_AGENT_NO_PROXY=${GLOBAL_AGENT_NO_PROXY}
- DISABLED_NODES=${DISABLED_NODES}

# QUEUE CONFIGURATION
- MODE=${MODE}
- WORKER_CONCURRENCY=${WORKER_CONCURRENCY}
- QUEUE_NAME=${QUEUE_NAME}
- QUEUE_REDIS_EVENT_STREAM_MAX_LEN=${QUEUE_REDIS_EVENT_STREAM_MAX_LEN}
- WORKER_CONCURRENCY=${WORKER_CONCURRENCY}
- REMOVE_ON_AGE=${REMOVE_ON_AGE}
- REMOVE_ON_COUNT=${REMOVE_ON_COUNT}
- REDIS_URL=${REDIS_URL}
- REDIS_HOST=${REDIS_HOST}
- REDIS_PORT=${REDIS_PORT}
- REDIS_PASSWORD=${REDIS_PASSWORD}
- REDIS_USERNAME=${REDIS_USERNAME}
- REDIS_PASSWORD=${REDIS_PASSWORD}
- REDIS_TLS=${REDIS_TLS}
- REDIS_CERT=${REDIS_CERT}
- REDIS_KEY=${REDIS_KEY}
- REDIS_CA=${REDIS_CA}
- REDIS_KEEP_ALIVE=${REDIS_KEEP_ALIVE}
- ENABLE_BULLMQ_DASHBOARD=${ENABLE_BULLMQ_DASHBOARD}

# SECURITY
- CUSTOM_MCP_SECURITY_CHECK=${CUSTOM_MCP_SECURITY_CHECK}
- CUSTOM_MCP_PROTOCOL=${CUSTOM_MCP_PROTOCOL}
- HTTP_DENY_LIST=${HTTP_DENY_LIST}
- TRUST_PROXY=${TRUST_PROXY}
ports:
- '${PORT}:${PORT}'
- '${WORKER_PORT}:${WORKER_PORT}'
healthcheck:
test: ['CMD', 'curl', '-f', 'http://localhost:${WORKER_PORT}/healthz']
interval: 10s
timeout: 5s
retries: 5
start_period: 30s
volumes:
- ~/.flowise:/root/.flowise
entrypoint: /bin/sh -c "sleep 3; flowise worker"
entrypoint: /bin/sh -c "node /app/healthcheck/healthcheck.js & sleep 5 && pnpm run start-worker"
@@ -0,0 +1,13 @@
const express = require('express')
const app = express()

const port = process.env.WORKER_PORT || 5566

app.get('/healthz', (req, res) => {
    res.status(200).send('OK')
})

app.listen(port, () => {
    // eslint-disable-next-line no-console
    console.log(`Healthcheck server listening on port ${port}`)
})
@@ -0,0 +1,13 @@
{
    "name": "flowise-worker-healthcheck",
    "version": "1.0.0",
    "description": "Simple healthcheck server for Flowise worker",
    "main": "healthcheck.js",
    "private": true,
    "scripts": {
        "start": "node healthcheck.js"
    },
    "dependencies": {
        "express": "^4.19.2"
    }
}
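With the server above running inside the worker container, liveness can be checked directly; a quick probe (default port 5566, as configured above):

```bash
curl -f http://localhost:5566/healthz
# prints: OK
```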
@@ -112,45 +112,41 @@ Flowise 在一个单一的单体存储库中有 3 个不同的模块。
pnpm start
```

11. 提交代码并从指向 [Flowise 主分支](https://github.com/FlowiseAI/Flowise/tree/master) 的分叉分支上提交 Pull Request。
11. 提交代码并从指向 [Flowise 主分支](https://github.com/FlowiseAI/Flowise/tree/main) 的分叉分支上提交 Pull Request。

## 🌱 环境变量

Flowise 支持不同的环境变量来配置您的实例。您可以在 `packages/server` 文件夹中的 `.env` 文件中指定以下变量。阅读[更多信息](https://docs.flowiseai.com/environment-variables)

| 变量名 | 描述 | 类型 | 默认值 |
| ---------------------------- | ------------------------------------------------------- | ----------------------------------------------- | ----------------------------------- | --- |
| PORT | Flowise 运行的 HTTP 端口 | 数字 | 3000 |
| FLOWISE_USERNAME | 登录用户名 | 字符串 | |
| FLOWISE_PASSWORD | 登录密码 | 字符串 | |
| FLOWISE_FILE_SIZE_LIMIT | 上传文件大小限制 | 字符串 | 50mb | |
| DEBUG | 打印组件的日志 | 布尔值 | |
| LOG_PATH | 存储日志文件的位置 | 字符串 | `your-path/Flowise/logs` |
| LOG_LEVEL | 日志的不同级别 | 枚举字符串: `error`, `info`, `verbose`, `debug` | `info` |
| APIKEY_STORAGE_TYPE | 存储 API 密钥的存储类型 | 枚举字符串: `json`, `db` | `json` |
| APIKEY_PATH | 存储 API 密钥的位置, 当`APIKEY_STORAGE_TYPE`是`json` | 字符串 | `your-path/Flowise/packages/server` |
| TOOL_FUNCTION_BUILTIN_DEP | 用于工具函数的 NodeJS 内置模块 | 字符串 | |
| TOOL_FUNCTION_EXTERNAL_DEP | 用于工具函数的外部模块 | 字符串 | |
| DATABASE_TYPE | 存储 flowise 数据的数据库类型 | 枚举字符串: `sqlite`, `mysql`, `postgres` | `sqlite` |
| DATABASE_PATH | 数据库保存的位置(当 DATABASE_TYPE 是 sqlite 时) | 字符串 | `your-home-dir/.flowise` |
| DATABASE_HOST | 主机 URL 或 IP 地址(当 DATABASE_TYPE 不是 sqlite 时) | 字符串 | |
| DATABASE_PORT | 数据库端口(当 DATABASE_TYPE 不是 sqlite 时) | 字符串 | |
| DATABASE_USERNAME | 数据库用户名(当 DATABASE_TYPE 不是 sqlite 时) | 字符串 | |
| DATABASE_PASSWORD | 数据库密码(当 DATABASE_TYPE 不是 sqlite 时) | 字符串 | |
| DATABASE_NAME | 数据库名称(当 DATABASE_TYPE 不是 sqlite 时) | 字符串 | |
| SECRETKEY_PATH | 保存加密密钥(用于加密/解密凭据)的位置 | 字符串 | `your-path/Flowise/packages/server` |
| FLOWISE_SECRETKEY_OVERWRITE | 加密密钥用于替代存储在 SECRETKEY_PATH 中的密钥 | 字符串 |
| MODEL_LIST_CONFIG_JSON | 加载模型的位置 | 字符 | `/your_model_list_config_file_path` |
| STORAGE_TYPE | 上传文件的存储类型 | 枚举字符串: `local`, `s3` | `local` |
| BLOB_STORAGE_PATH | 上传文件存储的本地文件夹路径, 当`STORAGE_TYPE`是`local` | 字符串 | `your-home-dir/.flowise/storage` |
| S3_STORAGE_BUCKET_NAME | S3 存储文件夹路径, 当`STORAGE_TYPE`是`s3` | 字符串 | |
| S3_STORAGE_ACCESS_KEY_ID | AWS 访问密钥 (Access Key) | 字符串 | |
| S3_STORAGE_SECRET_ACCESS_KEY | AWS 密钥 (Secret Key) | 字符串 | |
| S3_STORAGE_REGION | S3 存储地区 | 字符串 | |
| S3_ENDPOINT_URL | S3 端点 URL | 字符串 | |
| S3_FORCE_PATH_STYLE | 将其设置为 true 以强制请求使用路径样式寻址 | 布尔值 | false |
| SHOW_COMMUNITY_NODES | 显示由社区创建的节点 | 布尔值 | |
| DISABLED_NODES | 从界面中隐藏节点(以逗号分隔的节点名称列表) | 字符串 | |
| 变量名 | 描述 | 类型 | 默认值 |
| ------------------------------ | -------------------------------------------------------- | ----------------------------------------------- | ----------------------------------- |
| `PORT` | Flowise 运行的 HTTP 端口 | 数字 | 3000 |
| `FLOWISE_FILE_SIZE_LIMIT` | 上传文件大小限制 | 字符串 | 50mb |
| `DEBUG` | 打印组件的日志 | 布尔值 | |
| `LOG_PATH` | 存储日志文件的位置 | 字符串 | `your-path/Flowise/logs` |
| `LOG_LEVEL` | 日志的不同级别 | 枚举字符串: `error`, `info`, `verbose`, `debug` | `info` |
| `TOOL_FUNCTION_BUILTIN_DEP` | 用于工具函数的 NodeJS 内置模块 | 字符串 | |
| `TOOL_FUNCTION_EXTERNAL_DEP` | 用于工具函数的外部模块 | 字符串 | |
| `DATABASE_TYPE` | 存储 Flowise 数据的数据库类型 | 枚举字符串: `sqlite`, `mysql`, `postgres` | `sqlite` |
| `DATABASE_PATH` | 数据库保存的位置(当 `DATABASE_TYPE` 是 sqlite 时) | 字符串 | `your-home-dir/.flowise` |
| `DATABASE_HOST` | 主机 URL 或 IP 地址(当 `DATABASE_TYPE` 不是 sqlite 时) | 字符串 | |
| `DATABASE_PORT` | 数据库端口(当 `DATABASE_TYPE` 不是 sqlite 时) | 字符串 | |
| `DATABASE_USERNAME` | 数据库用户名(当 `DATABASE_TYPE` 不是 sqlite 时) | 字符串 | |
| `DATABASE_PASSWORD` | 数据库密码(当 `DATABASE_TYPE` 不是 sqlite 时) | 字符串 | |
| `DATABASE_NAME` | 数据库名称(当 `DATABASE_TYPE` 不是 sqlite 时) | 字符串 | |
| `SECRETKEY_PATH` | 保存加密密钥(用于加密/解密凭据)的位置 | 字符串 | `your-path/Flowise/packages/server` |
| `FLOWISE_SECRETKEY_OVERWRITE` | 加密密钥用于替代存储在 `SECRETKEY_PATH` 中的密钥 | 字符串 | |
| `MODEL_LIST_CONFIG_JSON` | 加载模型的位置 | 字符串 | `/your_model_list_config_file_path` |
| `STORAGE_TYPE` | 上传文件的存储类型 | 枚举字符串: `local`, `s3` | `local` |
| `BLOB_STORAGE_PATH` | 本地上传文件存储路径(当 `STORAGE_TYPE` 为 `local`) | 字符串 | `your-home-dir/.flowise/storage` |
| `S3_STORAGE_BUCKET_NAME` | S3 存储文件夹路径(当 `STORAGE_TYPE` 为 `s3`) | 字符串 | |
| `S3_STORAGE_ACCESS_KEY_ID` | AWS 访问密钥 (Access Key) | 字符串 | |
| `S3_STORAGE_SECRET_ACCESS_KEY` | AWS 密钥 (Secret Key) | 字符串 | |
| `S3_STORAGE_REGION` | S3 存储地区 | 字符串 | |
| `S3_ENDPOINT_URL` | S3 端点 URL | 字符串 | |
| `S3_FORCE_PATH_STYLE` | 设置为 true 以强制请求使用路径样式寻址 | 布尔值 | false |
| `SHOW_COMMUNITY_NODES` | 显示由社区创建的节点 | 布尔值 | |
| `DISABLED_NODES` | 从界面中隐藏节点(以逗号分隔的节点名称列表) | 字符串 | |

您也可以在使用 `npx` 时指定环境变量。例如:
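A minimal sketch of such an invocation (the flag and value are illustrative):

```bash
npx flowise start --PORT=3000
```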
@@ -31,12 +31,6 @@
npx flowise start
```

ユーザー名とパスワードを入力

```bash
npx flowise start --FLOWISE_USERNAME=user --FLOWISE_PASSWORD=1234
```

3. [http://localhost:3000](http://localhost:3000) を開く

## 🐳 Docker

@@ -127,15 +121,6 @@ Flowise には、3 つの異なるモジュールが 1 つの mono リポジト

コードの変更は [http://localhost:8080](http://localhost:8080) に自動的にアプリをリロードします

## 🔒 認証

アプリレベルの認証を有効にするには、 `FLOWISE_USERNAME` と `FLOWISE_PASSWORD` を `packages/server` の `.env` ファイルに追加します:

```
FLOWISE_USERNAME=user
FLOWISE_PASSWORD=1234
```

## 🌱 環境変数

Flowise は、インスタンスを設定するためのさまざまな環境変数をサポートしています。`packages/server` フォルダ内の `.env` ファイルで以下の変数を指定することができる。[続き](https://github.com/FlowiseAI/Flowise/blob/main/CONTRIBUTING.md#-env-variables)を読む

@@ -197,9 +182,9 @@ Flowise は、インスタンスを設定するためのさまざまな環境変
<img src="https://contrib.rocks/image?repo=FlowiseAI/Flowise" />
</a>

[コントリビューティングガイド](CONTRIBUTING.md)を参照してください。質問や問題があれば、[Discord](https://discord.gg/jbaHfsRVBW) までご連絡ください。
[コントリビューティングガイド](../CONTRIBUTING.md)を参照してください。質問や問題があれば、[Discord](https://discord.gg/jbaHfsRVBW) までご連絡ください。
[![Star History Chart](https://api.star-history.com/svg?repos=FlowiseAI/Flowise&type=Date)](https://star-history.com/#FlowiseAI/Flowise&Date)

## 📄 ライセンス

このリポジトリのソースコードは、[Apache License Version 2.0](LICENSE.md)の下で利用可能です。
このリポジトリのソースコードは、[Apache License Version 2.0](../LICENSE.md)の下で利用可能です。
@@ -31,12 +31,6 @@
npx flowise start
```

사용자 이름과 비밀번호로 시작하기

```bash
npx flowise start --FLOWISE_USERNAME=user --FLOWISE_PASSWORD=1234
```

3. [http://localhost:3000](http://localhost:3000) URL 열기

## 🐳 도커(Docker)를 활용하여 시작하기

@@ -127,15 +121,6 @@ Flowise는 단일 리포지토리에 3개의 서로 다른 모듈이 있습니

코드가 변경되면 [http://localhost:8080](http://localhost:8080)에서 자동으로 애플리케이션을 새로고침 합니다.

## 🔒 인증

애플리케이션 수준의 인증을 사용하려면 `packages/server`의 `.env` 파일에 `FLOWISE_USERNAME` 및 `FLOWISE_PASSWORD`를 추가합니다:

```
FLOWISE_USERNAME=user
FLOWISE_PASSWORD=1234
```

## 🌱 환경 변수

Flowise는 인스턴스 구성을 위한 다양한 환경 변수를 지원합니다. `packages/server` 폴더 내 `.env` 파일에 다양한 환경 변수를 지정할 수 있습니다. [자세히 보기](https://github.com/FlowiseAI/Flowise/blob/main/CONTRIBUTING.md#-env-variables)

@@ -197,9 +182,9 @@ Flowise는 인스턴스 구성을 위한 다양한 환경 변수를 지원합니
<img src="https://contrib.rocks/image?repo=FlowiseAI/Flowise" />
</a>

[contributing guide](CONTRIBUTING.md)를 살펴보세요. 디스코드 [Discord](https://discord.gg/jbaHfsRVBW) 채널에서도 이슈나 질의응답을 진행하실 수 있습니다.
[contributing guide](../CONTRIBUTING.md)를 살펴보세요. 디스코드 [Discord](https://discord.gg/jbaHfsRVBW) 채널에서도 이슈나 질의응답을 진행하실 수 있습니다.
[![Star History Chart](https://api.star-history.com/svg?repos=FlowiseAI/Flowise&type=Date)](https://star-history.com/#FlowiseAI/Flowise&Date)

## 📄 라이센스

본 리포지토리의 소스코드는 [Apache License Version 2.0](LICENSE.md) 라이센스가 적용됩니다.
본 리포지토리의 소스코드는 [Apache License Version 2.0](../LICENSE.md) 라이센스가 적용됩니다.
@@ -13,7 +13,7 @@

[English](../README.md) | 繁體中文 | [简体中文](./README-ZH.md) | [日本語](./README-JA.md) | [한국어](./README-KR.md)

<h3>可視化建構 AI/LLM 流程</h3>
<h3>可視化建置 AI/LLM 流程</h3>
<a href="https://github.com/FlowiseAI/Flowise">
<img width="100%" src="https://github.com/FlowiseAI/Flowise/blob/main/images/flowise_agentflow.gif?raw=true"></a>

@@ -31,28 +31,22 @@
npx flowise start
```

使用用戶名和密碼

```bash
npx flowise start --FLOWISE_USERNAME=user --FLOWISE_PASSWORD=1234
```

3. 打開 [http://localhost:3000](http://localhost:3000)

## 🐳 Docker

### Docker Compose

1. 克隆 Flowise 項目
2. 進入項目根目錄的 `docker` 文件夾
3. 複製 `.env.example` 文件,粘貼到相同位置,並重命名為 `.env` 文件
1. 複製 Flowise 專案
2. 進入專案根目錄的 `docker` 資料夾
3. 複製 `.env.example` 文件,貼到相同位置,並重新命名為 `.env` 文件
4. `docker compose up -d`
5. 打開 [http://localhost:3000](http://localhost:3000)
6. 您可以通過 `docker compose stop` 停止容器
6. 您可以透過 `docker compose stop` 停止容器

### Docker 映像

1. 本地構建映像:
1. 本地建置映像:
```bash
docker build --no-cache -t flowise .
```

@@ -69,7 +63,7 @@

## 👨‍💻 開發者

Flowise 在單個 mono 存儲庫中有 3 個不同的模塊。
Flowise 在單個 mono 儲存庫中有 3 個不同的模組。

- `server`: 提供 API 邏輯的 Node 後端
- `ui`: React 前端

@@ -85,33 +79,33 @@ Flowise 在單個 mono 存儲庫中有 3 個不同的模塊。

### 設置

1. 克隆存儲庫
1. 複製儲存庫

```bash
git clone https://github.com/FlowiseAI/Flowise.git
```

2. 進入存儲庫文件夾
2. 進入儲存庫文件夾

```bash
cd Flowise
```

3. 安裝所有模塊的所有依賴項:
3. 安裝所有模組的所有依賴項:

```bash
pnpm install
```

4. 構建所有代碼:
4. 建置所有程式碼:

```bash
pnpm build
```

<details>
<summary>退出代碼 134(JavaScript 堆內存不足)</summary>
如果在運行上述 `build` 腳本時遇到此錯誤,請嘗試增加 Node.js 堆大小並重新運行腳本:
<summary>Exit code 134 (JavaScript heap out of memory)</summary>
如果在運行上述 `build` 腳本時遇到此錯誤,請嘗試增加 Node.js 中的 Heap 記憶體大小並重新運行腳本:

export NODE_OPTIONS="--max-old-space-size=4096"
pnpm build

@@ -124,9 +118,9 @@ Flowise 在單個 mono 存儲庫中有 3 個不同的模塊。
pnpm start
```

您現在可以訪問 [http://localhost:3000](http://localhost:3000)
您現在可以開啟 [http://localhost:3000](http://localhost:3000)

6. 對於開發構建:
6. 對於開發建置:

- 在 `packages/ui` 中創建 `.env` 文件並指定 `VITE_PORT`(參考 `.env.example`)
- 在 `packages/server` 中創建 `.env` 文件並指定 `PORT`(參考 `.env.example`)

@@ -136,28 +130,19 @@ Flowise 在單個 mono 存儲庫中有 3 個不同的模塊。
pnpm dev
```

任何代碼更改都會自動重新加載應用程序 [http://localhost:8080](http://localhost:8080)
任何程式碼更改都會自動重新加載應用程式 [http://localhost:8080](http://localhost:8080)

## 🔒 認證
## 🌱 環境變數

要啟用應用級別的身份驗證,請在 `packages/server` 中的 `.env` 文件中添加 `FLOWISE_USERNAME` 和 `FLOWISE_PASSWORD`:

```
FLOWISE_USERNAME=user
FLOWISE_PASSWORD=1234
```

## 🌱 環境變量

Flowise 支持不同的環境變量來配置您的實例。您可以在 `packages/server` 文件夾中的 `.env` 文件中指定以下變量。閱讀 [更多](https://github.com/FlowiseAI/Flowise/blob/main/CONTRIBUTING.md#-env-variables)
Flowise 支持不同的環境變數來配置您的實例。您可以在 `packages/server` 文件夾中的 `.env` 文件中指定以下變數。閱讀 [更多](https://github.com/FlowiseAI/Flowise/blob/main/CONTRIBUTING.md#-env-variables)

## 📖 文檔

[Flowise 文檔](https://docs.flowiseai.com/)

## 🌐 自我托管
## 🌐 自行架設

在您現有的基礎設施中部署 Flowise 自我托管,我們支持各種 [部署](https://docs.flowiseai.com/configuration/deployment)
在您現有的基礎設施中部署 Flowise,我們支持各種自行架設選項 [部署](https://docs.flowiseai.com/configuration/deployment)

- [AWS](https://docs.flowiseai.com/configuration/deployment/aws)
- [Azure](https://docs.flowiseai.com/configuration/deployment/azure)

@@ -193,9 +178,9 @@ Flowise 支持不同的環境變量來配置您的實例。您可以在 `package

</details>

## ☁️ Flowise 雲
## ☁️ Flowise 雲端平台

[開始使用 Flowise 雲](https://flowiseai.com/)
[開始使用 Flowise 雲端平台](https://flowiseai.com/)

## 🙋 支持

@@ -209,9 +194,9 @@ Flowise 支持不同的環境變量來配置您的實例。您可以在 `package
<img src="https://contrib.rocks/image?repo=FlowiseAI/Flowise" />
</a>

請參閱 [貢獻指南](CONTRIBUTING.md)。如果您有任何問題或問題,請通過 [Discord](https://discord.gg/jbaHfsRVBW) 與我們聯繫。
請參閱 [貢獻指南](../CONTRIBUTING.md)。如果您有任何問題或問題,請透過 [Discord](https://discord.gg/jbaHfsRVBW) 與我們聯繫。
[![Star History Chart](https://api.star-history.com/svg?repos=FlowiseAI/Flowise&type=Date)](https://star-history.com/#FlowiseAI/Flowise&Date)

## 📄 許可證

此存儲庫中的源代碼根據 [Apache 許可證版本 2.0](LICENSE.md) 提供。
此儲存庫中的原始碼根據 [Apache 2.0 授權條款](../LICENSE.md) 授權使用。
@@ -31,12 +31,6 @@
npx flowise start
```

使用用户名和密码

```bash
npx flowise start --FLOWISE_USERNAME=user --FLOWISE_PASSWORD=1234
```

3. 打开 [http://localhost:3000](http://localhost:3000)

## 🐳 Docker

@@ -127,15 +121,6 @@ Flowise 在一个单一的代码库中有 3 个不同的模块。

任何代码更改都会自动重新加载应用程序,访问 [http://localhost:8080](http://localhost:8080)

## 🔒 认证

要启用应用程序级身份验证,在 `packages/server` 的 `.env` 文件中添加 `FLOWISE_USERNAME` 和 `FLOWISE_PASSWORD`:

```
FLOWISE_USERNAME=user
FLOWISE_PASSWORD=1234
```

## 🌱 环境变量

Flowise 支持不同的环境变量来配置您的实例。您可以在 `packages/server` 文件夹中的 `.env` 文件中指定以下变量。了解更多信息,请阅读[文档](https://github.com/FlowiseAI/Flowise/blob/main/CONTRIBUTING.md#-env-variables)

@@ -197,8 +182,8 @@ Flowise 支持不同的环境变量来配置您的实例。您可以在 `package
<img src="https://contrib.rocks/image?repo=FlowiseAI/Flowise" />
</a>

参见[贡献指南](CONTRIBUTING.md)。如果您有任何问题或问题,请在[Discord](https://discord.gg/jbaHfsRVBW)上与我们联系。
参见[贡献指南](CONTRIBUTING-ZH.md)。如果您有任何问题或问题,请在[Discord](https://discord.gg/jbaHfsRVBW)上与我们联系。

## 📄 许可证

此代码库中的源代码在[Apache License Version 2.0 许可证](LICENSE.md)下提供。
此代码库中的源代码在[Apache License Version 2.0 许可证](../LICENSE.md)下提供。
@@ -1,15 +1,17 @@
version: "2"
version: '2'
services:
otel-collector:
image: otel/opentelemetry-collector-contrib
command: ["--config=/etc/otelcol-contrib/config.yaml", "--feature-gates=-exporter.datadogexporter.DisableAPMStats", "${OTELCOL_ARGS}"]
volumes:
- ./otel.config.yml:/etc/otelcol-contrib/config.yaml
ports:
- 1888:1888 # pprof extension
- 8888:8888 # Prometheus metrics exposed by the Collector
- 8889:8889 # Prometheus exporter metrics
- 13133:13133 # health_check extension
- 4317:4317 # OTLP gRPC receiver
- 4318:4318 # OTLP http receiver
- 55679:55679 # zpages extension
otel-collector:
read_only: true
image: otel/opentelemetry-collector-contrib
command:
['--config=/etc/otelcol-contrib/config.yaml', '--feature-gates=-exporter.datadogexporter.DisableAPMStats', '${OTELCOL_ARGS}']
volumes:
- ./otel.config.yml:/etc/otelcol-contrib/config.yaml
ports:
- 1888:1888 # pprof extension
- 8888:8888 # Prometheus metrics exposed by the Collector
- 8889:8889 # Prometheus exporter metrics
- 13133:13133 # health_check extension
- 4317:4317 # OTLP gRPC receiver
- 4318:4318 # OTLP http receiver
- 55679:55679 # zpages extension
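For a Flowise instance reporting to the collector above, the matching entries from the `.env` documented earlier point at the OTLP http receiver on port 4318; a sketch:

```
ENABLE_METRICS=true
METRICS_PROVIDER=open_telemetry
METRICS_OPEN_TELEMETRY_METRIC_ENDPOINT=http://localhost:4318/v1/metrics
METRICS_OPEN_TELEMETRY_PROTOCOL=http
```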
package.json
@@ -1,6 +1,6 @@
{
    "name": "flowise",
    "version": "3.0.0",
    "version": "3.0.11",
    "private": true,
    "homepage": "https://flowiseai.com",
    "workspaces": [

@@ -20,6 +20,10 @@
    "start-worker": "run-script-os",
    "start-worker:windows": "cd packages/server/bin && run worker",
    "start-worker:default": "cd packages/server/bin && ./run worker",
    "user": "run-script-os",
    "user:windows": "cd packages/server/bin && run user",
    "user:default": "cd packages/server/bin && ./run user",
    "test": "turbo run test",
    "clean": "pnpm --filter \"./packages/**\" clean",
    "nuke": "pnpm --filter \"./packages/**\" nuke && rimraf node_modules .turbo",
    "format": "prettier --write \"**/*.{ts,tsx,md}\"",

@@ -62,20 +66,26 @@
    "sqlite3"
    ],
    "overrides": {
    "axios": "1.7.9",
    "axios": "1.12.0",
    "body-parser": "2.0.2",
    "braces": "3.0.3",
    "cross-spawn": "7.0.6",
    "form-data": "4.0.4",
    "glob-parent": "6.0.2",
    "http-proxy-middleware": "3.0.3",
    "json5": "2.2.3",
    "nth-check": "2.1.1",
    "path-to-regexp": "0.1.12",
    "prismjs": "1.29.0",
    "rollup": "4.45.0",
    "semver": "7.7.1",
    "set-value": "4.1.0",
    "solid-js": "1.9.7",
    "tar-fs": "3.1.0",
    "unset-value": "2.0.1",
    "webpack-dev-middleware": "7.4.2"
    "webpack-dev-middleware": "7.4.2",
    "ws": "8.18.3",
    "xlsx": "https://cdn.sheetjs.com/xlsx-0.20.3/xlsx-0.20.3.tgz"
    }
    },
    "engines": {

@@ -85,7 +95,7 @@
    "resolutions": {
    "@google/generative-ai": "^0.24.0",
    "@grpc/grpc-js": "^1.10.10",
    "@langchain/core": "0.3.37",
    "@langchain/core": "0.3.61",
    "@qdrant/openapi-typescript-fetch": "1.2.6",
    "openai": "4.96.0",
    "protobufjs": "7.4.0"

@@ -1,6 +1,6 @@
{
    "name": "flowise-api",
    "version": "1.0.2",
    "version": "1.0.3",
    "description": "Flowise API documentation server",
    "scripts": {
    "build": "tsc",
@ -1216,15 +1216,18 @@ paths:
|
|||
security:
|
||||
- bearerAuth: []
|
||||
operationId: createPrediction
|
||||
-   summary: Create a new prediction
-   description: Create a new prediction
+   summary: Send message to flow and get AI response
+   description: |
+       Send a message to your flow and receive an AI-generated response. This is the primary endpoint for interacting with your flows and assistants.
+
+       **Authentication**: API key may be required depending on flow settings.
    parameters:
        - in: path
          name: id
          required: true
          schema:
              type: string
-         description: Chatflow ID
+         description: Flow ID - the unique identifier of your flow
+         example: 'your-flow-id'
    requestBody:
        content:
            application/json:

@@ -1236,24 +1239,36 @@ paths:
                properties:
                    question:
                        type: string
-                       description: Question to ask during the prediction process
+                       description: Question/message to send to the flow
+                       example: 'Analyze this uploaded file and summarize its contents'
                    files:
                        type: array
                        items:
                            type: string
                            format: binary
-                       description: Files to be uploaded
-                   modelName:
+                       description: Files to be uploaded (images, audio, documents, etc.)
                    streaming:
                        type: boolean
                        description: Enable streaming responses
                        default: false
                    overrideConfig:
                        type: string
                        nullable: true
-                       example: ''
-                       description: Other override configurations
+                       description: JSON string of configuration overrides
+                       example: '{"sessionId":"user-123","temperature":0.7}'
                    history:
                        type: string
                        description: JSON string of conversation history
                        example: '[{"role":"userMessage","content":"Hello"},{"role":"apiMessage","content":"Hi there!"}]'
                    humanInput:
                        type: string
                        description: JSON string of human input for resuming execution
                        example: '{"type":"proceed","feedback":"Continue with the plan"}'
                required:
                    - question
        required: true
    responses:
        '200':
-           description: Prediction created successfully
+           description: Successful prediction response
            content:
                application/json:
                    schema:

@@ -1261,45 +1276,106 @@ paths:
                        properties:
                            text:
                                type: string
-                               description: The result of the prediction
+                               description: The AI-generated response text
+                               example: 'Artificial intelligence (AI) is a branch of computer science that focuses on creating systems capable of performing tasks that typically require human intelligence.'
                            json:
                                type: object
-                               description: The result of the prediction in JSON format if available
+                               description: The result in JSON format if available (for structured outputs)
                                nullable: true
                            question:
                                type: string
-                               description: The question asked during the prediction process
+                               description: The original question/message sent to the flow
+                               example: 'What is artificial intelligence?'
                            chatId:
                                type: string
-                               description: The chat ID associated with the prediction
+                               description: Unique identifier for the chat session
+                               example: 'chat-12345'
                            chatMessageId:
                                type: string
-                               description: The chat message ID associated with the prediction
+                               description: Unique identifier for this specific message
+                               example: 'msg-67890'
                            sessionId:
                                type: string
-                               description: The session ID associated with the prediction
+                               description: Session identifier for conversation continuity
+                               example: 'user-session-123'
                                nullable: true
                            memoryType:
                                type: string
-                               description: The memory type associated with the prediction
+                               description: Type of memory used for conversation context
+                               example: 'Buffer Memory'
                                nullable: true
                            sourceDocuments:
                                type: array
                                description: Documents retrieved from vector store (if RAG is enabled)
                                items:
                                    $ref: '#/components/schemas/Document'
                                nullable: true
                            usedTools:
                                type: array
                                description: Tools that were invoked during the response generation
                                items:
                                    $ref: '#/components/schemas/UsedTool'
                            fileAnnotations:
                                type: array
                                items:
                                    $ref: '#/components/schemas/FileAnnotation'
                                nullable: true
        '400':
-           description: Invalid input provided
+           description: Bad Request - Invalid input provided or request format is incorrect
            content:
                application/json:
                    schema:
                        type: object
                        properties:
                            error:
                                type: string
                                example: 'Invalid request format. Check required fields and parameter types.'
        '401':
            description: Unauthorized - API key required or invalid
            content:
                application/json:
                    schema:
                        type: object
                        properties:
                            error:
                                type: string
                                example: 'Unauthorized access. Please verify your API key.'
        '404':
-           description: Chatflow not found
+           description: Not Found - Chatflow with specified ID does not exist
            content:
                application/json:
                    schema:
                        type: object
                        properties:
                            error:
                                type: string
                                example: 'Chatflow not found. Please verify the chatflow ID.'
        '413':
            description: Payload Too Large - Request payload exceeds size limits
            content:
                application/json:
                    schema:
                        type: object
                        properties:
                            error:
                                type: string
                                example: 'Request payload too large. Please reduce file sizes or split large requests.'
        '422':
-           description: Validation error
+           description: Validation Error - Request validation failed
            content:
                application/json:
                    schema:
                        type: object
                        properties:
                            error:
                                type: string
                                example: 'Validation failed. Check parameter requirements and data types.'
        '500':
-           description: Internal server error
+           description: Internal Server Error - Flow configuration or execution error
            content:
                application/json:
                    schema:
                        type: object
                        properties:
                            error:
                                type: string
                                example: 'Internal server error. Check flow configuration and node settings.'
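For reference, a minimal sketch of calling this endpoint from TypeScript. The base URL, flow id, and API key below are placeholders, not values from this spec; the JSON body follows the components schema further down, where overrideConfig is an object rather than a JSON string.

// Minimal sketch: send a question to a flow and read the JSON response.
// Assumes Node 18+ (global fetch) and a locally running Flowise instance.
const FLOW_ID = 'your-flow-id' // placeholder, not a real id

async function predict(question: string): Promise<void> {
    const response = await fetch(`http://localhost:3000/api/v1/prediction/${FLOW_ID}`, {
        method: 'POST',
        headers: {
            'Content-Type': 'application/json',
            Authorization: 'Bearer <your-api-key>' // only needed if the flow requires an API key
        },
        body: JSON.stringify({
            question,
            streaming: false,
            overrideConfig: { sessionId: 'user-123', temperature: 0.7 }
        })
    })
    if (!response.ok) {
        // 400/401/404/413/422/500 all return { error: string } per the spec above
        const { error } = (await response.json()) as { error: string }
        throw new Error(`Prediction failed (${response.status}): ${error}`)
    }
    const result: any = await response.json()
    console.log(result.text) // the AI-generated response text
    console.log(result.chatId, result.sessionId) // identifiers for conversation continuity
}

predict('What is artificial intelligence?').catch(console.error)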
/tools:
    post:
        tags:

@@ -2011,13 +2087,33 @@ components:
            properties:
                question:
                    type: string
-                   description: The question being asked
+                   description: The question/message to send to the flow
+                   example: 'What is artificial intelligence?'
                form:
                    type: object
                    description: The form object to send to the flow (alternative to question for Agentflow V2)
                    additionalProperties: true
                    example:
                        title: 'Example'
                        count: 1
                streaming:
                    type: boolean
                    description: Enable streaming responses for real-time output
                    default: false
                    example: false
                overrideConfig:
                    type: object
-                   description: The configuration to override the default prediction settings (optional)
+                   description: Override flow configuration and pass variables at runtime
                    additionalProperties: true
                    example:
                        sessionId: 'user-session-123'
                        temperature: 0.7
                        maxTokens: 500
                        vars:
                            user_name: 'Alice'
                history:
                    type: array
-                   description: The history messages to be prepended (optional)
+                   description: Previous conversation messages for context
                    items:
                        type: object
                        properties:

@@ -2030,8 +2126,14 @@ components:
                                type: string
                                description: The content of the message
-                               example: 'Hello, how can I help you?'
+                   example:
+                       - role: 'apiMessage'
+                         content: "Hello! I'm an AI assistant. How can I help you today?"
+                       - role: 'userMessage'
+                         content: "Hi, my name is Sarah and I'm learning about AI"
                uploads:
                    type: array
                    description: Files to upload (images, audio, documents, etc.)
                    items:
                        type: object
                        properties:

@@ -2051,7 +2153,42 @@ components:
                            mime:
                                type: string
                                description: The MIME type of the file or resource
                                enum:
                                    [
                                        'image/png',
                                        'image/jpeg',
                                        'image/jpg',
                                        'image/gif',
                                        'image/webp',
                                        'audio/mp4',
                                        'audio/webm',
                                        'audio/wav',
                                        'audio/mpeg',
                                        'audio/ogg',
                                        'audio/aac'
                                    ]
                                example: 'image/png'
                    example:
                        - type: 'file'
                          name: 'example.png'
                          data: 'data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABgAAAAYCAYAAADgdz34AAABjElEQVRIS+2Vv0oDQRDG'
                          mime: 'image/png'
                humanInput:
                    type: object
                    description: Return human feedback and resume execution from a stopped checkpoint
                    properties:
                        type:
                            type: string
                            enum: [proceed, reject]
                            description: Type of human input response
                            example: 'reject'
                        feedback:
                            type: string
                            description: Feedback to the last output
                            example: 'Include more emoji'
                    example:
                        type: 'reject'
                        feedback: 'Include more emoji'
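Two illustrative JSON bodies for the schema above. All values are placeholders; the humanInput variant assumes a flow that has stopped at a human-in-the-loop checkpoint.

// A plain question with prior history, an uploaded image, and runtime variables:
const askBody = {
    question: 'Describe the attached image',
    history: [
        { role: 'apiMessage', content: 'Hello! How can I help you today?' },
        { role: 'userMessage', content: 'Hi there' }
    ],
    uploads: [{ type: 'file', name: 'example.png', data: 'data:image/png;base64,<...>', mime: 'image/png' }],
    overrideConfig: { sessionId: 'user-session-123', vars: { user_name: 'Alice' } }
}

// Resuming a stopped checkpoint with human feedback instead of a new question:
const resumeBody = {
    humanInput: { type: 'reject', feedback: 'Include more emoji' },
    overrideConfig: { sessionId: 'user-session-123' }
}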
Tool:
    type: object
@@ -0,0 +1,23 @@
import { INodeParams, INodeCredential } from '../src/Interface'

class AgentflowApi implements INodeCredential {
    label: string
    name: string
    version: number
    inputs: INodeParams[]

    constructor() {
        this.label = 'Agentflow API'
        this.name = 'agentflowApi'
        this.version = 1.0
        this.inputs = [
            {
                label: 'Agentflow Api Key',
                name: 'agentflowApiKey',
                type: 'password'
            }
        ]
    }
}

module.exports = { credClass: AgentflowApi }
@@ -0,0 +1,23 @@
import { INodeCredential, INodeParams } from '../src/Interface'

class CometApi implements INodeCredential {
    label: string
    name: string
    version: number
    inputs: INodeParams[]

    constructor() {
        this.label = 'Comet API'
        this.name = 'cometApi'
        this.version = 1.0
        this.inputs = [
            {
                label: 'Comet API Key',
                name: 'cometApiKey',
                type: 'password'
            }
        ]
    }
}

module.exports = { credClass: CometApi }
@@ -0,0 +1,26 @@
import { INodeParams, INodeCredential } from '../src/Interface'

class ElevenLabsApi implements INodeCredential {
    label: string
    name: string
    version: number
    description: string
    inputs: INodeParams[]

    constructor() {
        this.label = 'Eleven Labs API'
        this.name = 'elevenLabsApi'
        this.version = 1.0
        this.description =
            'Sign up for an Eleven Labs account and <a target="_blank" href="https://elevenlabs.io/app/settings/api-keys">create an API Key</a>.'
        this.inputs = [
            {
                label: 'Eleven Labs API Key',
                name: 'elevenLabsApiKey',
                type: 'password'
            }
        ]
    }
}

module.exports = { credClass: ElevenLabsApi }
@@ -0,0 +1,63 @@
import { INodeParams, INodeCredential } from '../src/Interface'
const scopes = [
    'https://www.googleapis.com/auth/gmail.readonly',
    'https://www.googleapis.com/auth/gmail.compose',
    'https://www.googleapis.com/auth/gmail.modify',
    'https://www.googleapis.com/auth/gmail.labels'
]

class GmailOAuth2 implements INodeCredential {
    label: string
    name: string
    version: number
    inputs: INodeParams[]
    description: string

    constructor() {
        this.label = 'Gmail OAuth2'
        this.name = 'gmailOAuth2'
        this.version = 1.0
        this.description =
            'You can find the setup instructions <a target="_blank" href="https://docs.flowiseai.com/integrations/langchain/tools/gmail">here</a>'
        this.inputs = [
            {
                label: 'Authorization URL',
                name: 'authorizationUrl',
                type: 'string',
                default: 'https://accounts.google.com/o/oauth2/v2/auth'
            },
            {
                label: 'Access Token URL',
                name: 'accessTokenUrl',
                type: 'string',
                default: 'https://oauth2.googleapis.com/token'
            },
            {
                label: 'Client ID',
                name: 'clientId',
                type: 'string'
            },
            {
                label: 'Client Secret',
                name: 'clientSecret',
                type: 'password'
            },
            {
                label: 'Additional Parameters',
                name: 'additionalParameters',
                type: 'string',
                default: 'access_type=offline&prompt=consent',
                hidden: true
            },
            {
                label: 'Scope',
                name: 'scope',
                type: 'string',
                hidden: true,
                default: scopes.join(' ')
            }
        ]
    }
}

module.exports = { credClass: GmailOAuth2 }
@@ -0,0 +1,58 @@
import { INodeParams, INodeCredential } from '../src/Interface'
const scopes = ['https://www.googleapis.com/auth/calendar', 'https://www.googleapis.com/auth/calendar.events']

class GoogleCalendarOAuth2 implements INodeCredential {
    label: string
    name: string
    version: number
    inputs: INodeParams[]
    description: string

    constructor() {
        this.label = 'Google Calendar OAuth2'
        this.name = 'googleCalendarOAuth2'
        this.version = 1.0
        this.description =
            'You can find the setup instructions <a target="_blank" href="https://docs.flowiseai.com/integrations/langchain/tools/google-calendar">here</a>'
        this.inputs = [
            {
                label: 'Authorization URL',
                name: 'authorizationUrl',
                type: 'string',
                default: 'https://accounts.google.com/o/oauth2/v2/auth'
            },
            {
                label: 'Access Token URL',
                name: 'accessTokenUrl',
                type: 'string',
                default: 'https://oauth2.googleapis.com/token'
            },
            {
                label: 'Client ID',
                name: 'clientId',
                type: 'string'
            },
            {
                label: 'Client Secret',
                name: 'clientSecret',
                type: 'password'
            },
            {
                label: 'Additional Parameters',
                name: 'additionalParameters',
                type: 'string',
                default: 'access_type=offline&prompt=consent',
                hidden: true
            },
            {
                label: 'Scope',
                name: 'scope',
                type: 'string',
                hidden: true,
                default: scopes.join(' ')
            }
        ]
    }
}

module.exports = { credClass: GoogleCalendarOAuth2 }
@@ -0,0 +1,62 @@
import { INodeParams, INodeCredential } from '../src/Interface'
const scopes = [
    'https://www.googleapis.com/auth/documents',
    'https://www.googleapis.com/auth/drive',
    'https://www.googleapis.com/auth/drive.file'
]

class GoogleDocsOAuth2 implements INodeCredential {
    label: string
    name: string
    version: number
    inputs: INodeParams[]
    description: string

    constructor() {
        this.label = 'Google Docs OAuth2'
        this.name = 'googleDocsOAuth2'
        this.version = 1.0
        this.description =
            'You can find the setup instructions <a target="_blank" href="https://docs.flowiseai.com/integrations/langchain/tools/google-sheets">here</a>'
        this.inputs = [
            {
                label: 'Authorization URL',
                name: 'authorizationUrl',
                type: 'string',
                default: 'https://accounts.google.com/o/oauth2/v2/auth'
            },
            {
                label: 'Access Token URL',
                name: 'accessTokenUrl',
                type: 'string',
                default: 'https://oauth2.googleapis.com/token'
            },
            {
                label: 'Client ID',
                name: 'clientId',
                type: 'string'
            },
            {
                label: 'Client Secret',
                name: 'clientSecret',
                type: 'password'
            },
            {
                label: 'Additional Parameters',
                name: 'additionalParameters',
                type: 'string',
                default: 'access_type=offline&prompt=consent',
                hidden: true
            },
            {
                label: 'Scope',
                name: 'scope',
                type: 'string',
                hidden: true,
                default: scopes.join(' ')
            }
        ]
    }
}

module.exports = { credClass: GoogleDocsOAuth2 }
@@ -0,0 +1,62 @@
import { INodeParams, INodeCredential } from '../src/Interface'
const scopes = [
    'https://www.googleapis.com/auth/drive',
    'https://www.googleapis.com/auth/drive.appdata',
    'https://www.googleapis.com/auth/drive.photos.readonly'
]

class GoogleDriveOAuth2 implements INodeCredential {
    label: string
    name: string
    version: number
    inputs: INodeParams[]
    description: string

    constructor() {
        this.label = 'Google Drive OAuth2'
        this.name = 'googleDriveOAuth2'
        this.version = 1.0
        this.description =
            'You can find the setup instructions <a target="_blank" href="https://docs.flowiseai.com/integrations/langchain/tools/google-drive">here</a>'
        this.inputs = [
            {
                label: 'Authorization URL',
                name: 'authorizationUrl',
                type: 'string',
                default: 'https://accounts.google.com/o/oauth2/v2/auth'
            },
            {
                label: 'Access Token URL',
                name: 'accessTokenUrl',
                type: 'string',
                default: 'https://oauth2.googleapis.com/token'
            },
            {
                label: 'Client ID',
                name: 'clientId',
                type: 'string'
            },
            {
                label: 'Client Secret',
                name: 'clientSecret',
                type: 'password'
            },
            {
                label: 'Additional Parameters',
                name: 'additionalParameters',
                type: 'string',
                default: 'access_type=offline&prompt=consent',
                hidden: true
            },
            {
                label: 'Scope',
                name: 'scope',
                type: 'string',
                hidden: true,
                default: scopes.join(' ')
            }
        ]
    }
}

module.exports = { credClass: GoogleDriveOAuth2 }
@@ -0,0 +1,62 @@
import { INodeParams, INodeCredential } from '../src/Interface'
const scopes = [
    'https://www.googleapis.com/auth/drive.file',
    'https://www.googleapis.com/auth/spreadsheets',
    'https://www.googleapis.com/auth/drive.metadata'
]

class GoogleSheetsOAuth2 implements INodeCredential {
    label: string
    name: string
    version: number
    inputs: INodeParams[]
    description: string

    constructor() {
        this.label = 'Google Sheets OAuth2'
        this.name = 'googleSheetsOAuth2'
        this.version = 1.0
        this.description =
            'You can find the setup instructions <a target="_blank" href="https://docs.flowiseai.com/integrations/langchain/tools/google-sheets">here</a>'
        this.inputs = [
            {
                label: 'Authorization URL',
                name: 'authorizationUrl',
                type: 'string',
                default: 'https://accounts.google.com/o/oauth2/v2/auth'
            },
            {
                label: 'Access Token URL',
                name: 'accessTokenUrl',
                type: 'string',
                default: 'https://oauth2.googleapis.com/token'
            },
            {
                label: 'Client ID',
                name: 'clientId',
                type: 'string'
            },
            {
                label: 'Client Secret',
                name: 'clientSecret',
                type: 'password'
            },
            {
                label: 'Additional Parameters',
                name: 'additionalParameters',
                type: 'string',
                default: 'access_type=offline&prompt=consent',
                hidden: true
            },
            {
                label: 'Scope',
                name: 'scope',
                type: 'string',
                hidden: true,
                default: scopes.join(' ')
            }
        ]
    }
}

module.exports = { credClass: GoogleSheetsOAuth2 }
@@ -0,0 +1,66 @@
import { INodeParams, INodeCredential } from '../src/Interface'

const scopes = [
    'openid',
    'offline_access',
    'Contacts.Read',
    'Contacts.ReadWrite',
    'Calendars.Read',
    'Calendars.Read.Shared',
    'Calendars.ReadWrite',
    'Mail.Read',
    'Mail.ReadWrite',
    'Mail.ReadWrite.Shared',
    'Mail.Send',
    'Mail.Send.Shared',
    'MailboxSettings.Read'
]

class MsoftOutlookOAuth2 implements INodeCredential {
    label: string
    name: string
    version: number
    description: string
    inputs: INodeParams[]

    constructor() {
        this.label = 'Microsoft Outlook OAuth2'
        this.name = 'microsoftOutlookOAuth2'
        this.version = 1.0
        this.description =
            'You can find the setup instructions <a target="_blank" href="https://docs.flowiseai.com/integrations/langchain/tools/microsoft-outlook">here</a>'
        this.inputs = [
            {
                label: 'Authorization URL',
                name: 'authorizationUrl',
                type: 'string',
                default: 'https://login.microsoftonline.com/<tenantId>/oauth2/v2.0/authorize'
            },
            {
                label: 'Access Token URL',
                name: 'accessTokenUrl',
                type: 'string',
                default: 'https://login.microsoftonline.com/<tenantId>/oauth2/v2.0/token'
            },
            {
                label: 'Client ID',
                name: 'clientId',
                type: 'string'
            },
            {
                label: 'Client Secret',
                name: 'clientSecret',
                type: 'password'
            },
            {
                label: 'Scope',
                name: 'scope',
                type: 'string',
                hidden: true,
                default: scopes.join(' ')
            }
        ]
    }
}

module.exports = { credClass: MsoftOutlookOAuth2 }
@@ -0,0 +1,87 @@
import { INodeParams, INodeCredential } from '../src/Interface'

// Comprehensive scopes for Microsoft Teams operations
const scopes = [
    // Basic authentication
    'openid',
    'offline_access',

    // User permissions
    'User.Read',
    'User.ReadWrite.All',

    // Teams and Groups
    'Group.ReadWrite.All',
    'Team.ReadBasic.All',
    'Team.Create',
    'TeamMember.ReadWrite.All',

    // Channels
    'Channel.ReadBasic.All',
    'Channel.Create',
    'Channel.Delete.All',
    'ChannelMember.ReadWrite.All',

    // Chat operations
    'Chat.ReadWrite',
    'Chat.Create',
    'ChatMember.ReadWrite',

    // Messages
    'ChatMessage.Send',
    'ChatMessage.Read',
    'ChannelMessage.Send',
    'ChannelMessage.Read.All',

    // Reactions and advanced features
    'TeamsActivity.Send'
]

class MsoftTeamsOAuth2 implements INodeCredential {
    label: string
    name: string
    version: number
    inputs: INodeParams[]
    description: string

    constructor() {
        this.label = 'Microsoft Teams OAuth2'
        this.name = 'microsoftTeamsOAuth2'
        this.version = 1.0
        this.description =
            'You can find the setup instructions <a target="_blank" href="https://docs.flowiseai.com/integrations/langchain/tools/microsoft-teams">here</a>'
        this.inputs = [
            {
                label: 'Authorization URL',
                name: 'authorizationUrl',
                type: 'string',
                default: 'https://login.microsoftonline.com/<tenantId>/oauth2/v2.0/authorize'
            },
            {
                label: 'Access Token URL',
                name: 'accessTokenUrl',
                type: 'string',
                default: 'https://login.microsoftonline.com/<tenantId>/oauth2/v2.0/token'
            },
            {
                label: 'Client ID',
                name: 'clientId',
                type: 'string'
            },
            {
                label: 'Client Secret',
                name: 'clientSecret',
                type: 'password'
            },
            {
                label: 'Scope',
                name: 'scope',
                type: 'string',
                hidden: true,
                default: scopes.join(' ')
            }
        ]
    }
}

module.exports = { credClass: MsoftTeamsOAuth2 }
@@ -0,0 +1,30 @@
import { INodeParams, INodeCredential } from '../src/Interface'

class OxylabsApiCredential implements INodeCredential {
    label: string
    name: string
    version: number
    description: string
    inputs: INodeParams[]

    constructor() {
        this.label = 'Oxylabs API'
        this.name = 'oxylabsApi'
        this.version = 1.0
        this.description = 'Oxylabs API credentials description, to add more info'
        this.inputs = [
            {
                label: 'Oxylabs Username',
                name: 'username',
                type: 'string'
            },
            {
                label: 'Oxylabs Password',
                name: 'password',
                type: 'password'
            }
        ]
    }
}

module.exports = { credClass: OxylabsApiCredential }
@@ -0,0 +1,23 @@
import { INodeParams, INodeCredential } from '../src/Interface'

class SambanovaApi implements INodeCredential {
    label: string
    name: string
    version: number
    inputs: INodeParams[]

    constructor() {
        this.label = 'Sambanova API'
        this.name = 'sambanovaApi'
        this.version = 1.0
        this.inputs = [
            {
                label: 'Sambanova Api Key',
                name: 'sambanovaApiKey',
                type: 'password'
            }
        ]
    }
}

module.exports = { credClass: SambanovaApi }
@@ -0,0 +1,26 @@
import { INodeParams, INodeCredential } from '../src/Interface'

class TeradataBearerTokenCredential implements INodeCredential {
    label: string
    name: string
    description: string
    version: number
    inputs: INodeParams[]

    constructor() {
        this.label = 'Teradata Bearer Token'
        this.name = 'teradataBearerToken'
        this.version = 1.0
        this.description =
            'Refer to <a target="_blank" href="https://docs.teradata.com/r/Enterprise_IntelliFlex_VMware/Teradata-Vector-Store-User-Guide/Setting-up-Vector-Store/Importing-Modules-Required-for-Vector-Store">official guide</a> on how to get Teradata Bearer Token'
        this.inputs = [
            {
                label: 'Token',
                name: 'token',
                type: 'password'
            }
        ]
    }
}

module.exports = { credClass: TeradataBearerTokenCredential }
@@ -0,0 +1,28 @@
import { INodeParams, INodeCredential } from '../src/Interface'

class TeradataTD2Credential implements INodeCredential {
    label: string
    name: string
    version: number
    inputs: INodeParams[]

    constructor() {
        this.label = 'Teradata TD2 Auth'
        this.name = 'teradataTD2Auth'
        this.version = 1.0
        this.inputs = [
            {
                label: 'Teradata TD2 Auth Username',
                name: 'tdUsername',
                type: 'string'
            },
            {
                label: 'Teradata TD2 Auth Password',
                name: 'tdPassword',
                type: 'password'
            }
        ]
    }
}

module.exports = { credClass: TeradataTD2Credential }
@@ -0,0 +1,47 @@
import { INodeParams, INodeCredential } from '../src/Interface'

class TeradataVectorStoreApiCredentials implements INodeCredential {
    label: string
    name: string
    version: number
    inputs: INodeParams[]

    constructor() {
        this.label = 'Teradata Vector Store API Credentials'
        this.name = 'teradataVectorStoreApiCredentials'
        this.version = 1.0
        this.inputs = [
            {
                label: 'Teradata Host IP',
                name: 'tdHostIp',
                type: 'string'
            },
            {
                label: 'Username',
                name: 'tdUsername',
                type: 'string'
            },
            {
                label: 'Password',
                name: 'tdPassword',
                type: 'password'
            },
            {
                label: 'Vector_Store_Base_URL',
                name: 'baseURL',
                description: 'Teradata Vector Store Base URL',
                placeholder: `Base_URL`,
                type: 'string'
            },
            {
                label: 'JWT Token',
                name: 'jwtToken',
                type: 'password',
                description: 'Bearer token for JWT authentication',
                optional: true
            }
        ]
    }
}

module.exports = { credClass: TeradataVectorStoreApiCredentials }
@@ -0,0 +1,165 @@
import { RunCollectorCallbackHandler } from '@langchain/core/tracers/run_collector'
import { Run } from '@langchain/core/tracers/base'
import { EvaluationRunner } from './EvaluationRunner'
import { encoding_for_model, get_encoding } from '@dqbd/tiktoken'

export class EvaluationRunTracer extends RunCollectorCallbackHandler {
    evaluationRunId: string
    model: string

    constructor(id: string) {
        super()
        this.evaluationRunId = id
    }

    async persistRun(run: Run): Promise<void> {
        return super.persistRun(run)
    }

    countPromptTokens = (encoding: any, run: Run): number => {
        let promptTokenCount = 0
        if (encoding) {
            if (run.inputs?.messages?.length > 0 && run.inputs?.messages[0]?.length > 0) {
                run.inputs.messages[0].map((message: any) => {
                    let content = message.content
                        ? message.content
                        : message.SystemMessage?.content
                        ? message.SystemMessage.content
                        : message.HumanMessage?.content
                        ? message.HumanMessage.content
                        : message.AIMessage?.content
                        ? message.AIMessage.content
                        : undefined
                    promptTokenCount += content ? encoding.encode(content).length : 0
                })
            }
            if (run.inputs?.prompts?.length > 0) {
                const content = run.inputs.prompts[0]
                promptTokenCount += content ? encoding.encode(content).length : 0
            }
        }
        return promptTokenCount
    }

    countCompletionTokens = (encoding: any, run: Run): number => {
        let completionTokenCount = 0
        if (encoding) {
            if (run.outputs?.generations?.length > 0 && run.outputs?.generations[0]?.length > 0) {
                run.outputs?.generations[0].map((chunk: any) => {
                    let content = chunk.text ? chunk.text : chunk.message?.content ? chunk.message?.content : undefined
                    completionTokenCount += content ? encoding.encode(content).length : 0
                })
            }
        }
        return completionTokenCount
    }

    extractModelName = (run: Run): string => {
        return (
            (run?.serialized as any)?.kwargs?.model ||
            (run?.serialized as any)?.kwargs?.model_name ||
            (run?.extra as any)?.metadata?.ls_model_name ||
            (run?.extra as any)?.metadata?.fw_model_name
        )
    }

    onLLMEnd?(run: Run): void | Promise<void> {
        if (run.name) {
            let provider = run.name
            if (provider === 'BedrockChat') {
                provider = 'awsChatBedrock'
            }
            EvaluationRunner.addMetrics(
                this.evaluationRunId,
                JSON.stringify({
                    provider: provider
                })
            )
        }

        let model = this.extractModelName(run)
        if (run.outputs?.llmOutput?.tokenUsage) {
            const tokenUsage = run.outputs?.llmOutput?.tokenUsage
            if (tokenUsage) {
                const metric = {
                    completionTokens: tokenUsage.completionTokens,
                    promptTokens: tokenUsage.promptTokens,
                    model: model,
                    totalTokens: tokenUsage.totalTokens
                }
                EvaluationRunner.addMetrics(this.evaluationRunId, JSON.stringify(metric))
            }
        } else if (
            run.outputs?.generations?.length > 0 &&
            run.outputs?.generations[0].length > 0 &&
            run.outputs?.generations[0][0]?.message?.usage_metadata?.total_tokens
        ) {
            const usage_metadata = run.outputs?.generations[0][0]?.message?.usage_metadata
            if (usage_metadata) {
                const metric = {
                    completionTokens: usage_metadata.output_tokens,
                    promptTokens: usage_metadata.input_tokens,
                    model: model || this.model,
                    totalTokens: usage_metadata.total_tokens
                }
                EvaluationRunner.addMetrics(this.evaluationRunId, JSON.stringify(metric))
            }
        } else {
            let encoding: any = undefined
            let promptInputTokens = 0
            let completionTokenCount = 0
            try {
                encoding = encoding_for_model(model as any)
                promptInputTokens = this.countPromptTokens(encoding, run)
                completionTokenCount = this.countCompletionTokens(encoding, run)
            } catch (e) {
                try {
                    // as tiktoken will fail for non-OpenAI models, assume 'cl100k_base'
                    encoding = get_encoding('cl100k_base')
                    promptInputTokens = this.countPromptTokens(encoding, run)
                    completionTokenCount = this.countCompletionTokens(encoding, run)
                } catch (e) {
                    // stay silent
                }
            }
            const metric = {
                completionTokens: completionTokenCount,
                promptTokens: promptInputTokens,
                model: model,
                totalTokens: promptInputTokens + completionTokenCount
            }
            EvaluationRunner.addMetrics(this.evaluationRunId, JSON.stringify(metric))
            // cleanup
            this.model = ''
        }
    }

    async onRunUpdate(run: Run): Promise<void> {
        const json = {
            [run.run_type]: elapsed(run)
        }
        let metric = JSON.stringify(json)
        if (metric) {
            EvaluationRunner.addMetrics(this.evaluationRunId, metric)
        }

        if (run.run_type === 'llm') {
            let model = this.extractModelName(run)
            if (model) {
                EvaluationRunner.addMetrics(this.evaluationRunId, JSON.stringify({ model: model }))
                this.model = model
            }
            // OpenAI non-streaming models
            const estimatedTokenUsage = run.outputs?.llmOutput?.estimatedTokenUsage
            if (estimatedTokenUsage && typeof estimatedTokenUsage === 'object' && Object.keys(estimatedTokenUsage).length > 0) {
                EvaluationRunner.addMetrics(this.evaluationRunId, estimatedTokenUsage)
            }
        }
    }
}

function elapsed(run: Run) {
    if (!run.end_time) return ''
    const elapsed = run.end_time - run.start_time
    return `${elapsed.toFixed(2)}`
}
@@ -0,0 +1,186 @@
import { ChatMessage, LLMEndEvent, LLMStartEvent, LLMStreamEvent, MessageContentTextDetail, RetrievalEndEvent, Settings } from 'llamaindex'
import { EvaluationRunner } from './EvaluationRunner'
import { additionalCallbacks, ICommonObject, INodeData } from '../src'
import { RetrievalStartEvent } from 'llamaindex/dist/type/llm/types'
import { AgentEndEvent, AgentStartEvent } from 'llamaindex/dist/type/agent/types'
import { encoding_for_model } from '@dqbd/tiktoken'
import { MessageContent } from '@langchain/core/messages'

export class EvaluationRunTracerLlama {
    evaluationRunId: string
    static cbInit = false
    static startTimes = new Map<string, number>()
    static models = new Map<string, string>()
    static tokenCounts = new Map<string, number>()

    constructor(id: string) {
        this.evaluationRunId = id
        EvaluationRunTracerLlama.constructCallBacks()
    }

    static constructCallBacks = () => {
        if (!EvaluationRunTracerLlama.cbInit) {
            Settings.callbackManager.on('llm-start', (event: LLMStartEvent) => {
                const evalID = (event as any).reason.parent?.caller?.evaluationRunId || (event as any).reason.caller?.evaluationRunId
                if (!evalID) return
                const model = (event as any).reason?.caller?.model
                if (model) {
                    EvaluationRunTracerLlama.models.set(evalID, model)
                    try {
                        const encoding = encoding_for_model(model)
                        if (encoding) {
                            const { messages } = event.detail.payload
                            let tokenCount = messages.reduce((count: number, message: ChatMessage) => {
                                return count + encoding.encode(extractText(message.content)).length
                            }, 0)
                            EvaluationRunTracerLlama.tokenCounts.set(evalID + '_promptTokens', tokenCount)
                            EvaluationRunTracerLlama.tokenCounts.set(evalID + '_outputTokens', 0)
                        }
                    } catch (e) {
                        // catch the error and continue to work.
                    }
                }
                EvaluationRunTracerLlama.startTimes.set(evalID + '_llm', event.timeStamp)
            })
            Settings.callbackManager.on('llm-end', (event: LLMEndEvent) => {
                this.calculateAndSetMetrics(event, 'llm')
            })
            Settings.callbackManager.on('llm-stream', (event: LLMStreamEvent) => {
                const evalID = (event as any).reason.parent?.caller?.evaluationRunId || (event as any).reason.caller?.evaluationRunId
                if (!evalID) return
                const { chunk } = event.detail.payload
                const { delta } = chunk
                const model = (event as any).reason?.caller?.model
                try {
                    const encoding = encoding_for_model(model)
                    if (encoding) {
                        let tokenCount = EvaluationRunTracerLlama.tokenCounts.get(evalID + '_outputTokens') || 0
                        tokenCount += encoding.encode(extractText(delta)).length
                        EvaluationRunTracerLlama.tokenCounts.set(evalID + '_outputTokens', tokenCount)
                    }
                } catch (e) {
                    // catch the error and continue to work.
                }
            })
            Settings.callbackManager.on('retrieve-start', (event: RetrievalStartEvent) => {
                const evalID = (event as any).reason.parent?.caller?.evaluationRunId || (event as any).reason.caller?.evaluationRunId
                if (evalID) {
                    EvaluationRunTracerLlama.startTimes.set(evalID + '_retriever', event.timeStamp)
                }
            })
            Settings.callbackManager.on('retrieve-end', (event: RetrievalEndEvent) => {
                this.calculateAndSetMetrics(event, 'retriever')
            })
            Settings.callbackManager.on('agent-start', (event: AgentStartEvent) => {
                const evalID = (event as any).reason.parent?.caller?.evaluationRunId || (event as any).reason.caller?.evaluationRunId
                if (evalID) {
                    EvaluationRunTracerLlama.startTimes.set(evalID + '_agent', event.timeStamp)
                }
            })
            Settings.callbackManager.on('agent-end', (event: AgentEndEvent) => {
                this.calculateAndSetMetrics(event, 'agent')
            })
            EvaluationRunTracerLlama.cbInit = true
        }
    }

    private static calculateAndSetMetrics(event: any, label: string) {
        const evalID = event.reason.parent?.caller?.evaluationRunId || event.reason.caller?.evaluationRunId
        if (!evalID) return
        const startTime = EvaluationRunTracerLlama.startTimes.get(evalID + '_' + label) as number
        let model =
            (event as any).reason?.caller?.model || (event as any).reason?.caller?.llm?.model || EvaluationRunTracerLlama.models.get(evalID)

        if (event.detail.payload?.response?.message && model) {
            try {
                const encoding = encoding_for_model(model)
                if (encoding) {
                    let tokenCount = EvaluationRunTracerLlama.tokenCounts.get(evalID + '_outputTokens') || 0
                    tokenCount += encoding.encode(event.detail.payload.response?.message?.content || '').length
                    EvaluationRunTracerLlama.tokenCounts.set(evalID + '_outputTokens', tokenCount)
                }
            } catch (e) {
                // catch the error and continue to work.
            }
        }

        // Anthropic
        if (event.detail?.payload?.response?.raw?.usage) {
            const usage = event.detail.payload.response.raw.usage
            if (usage.output_tokens) {
                const metric = {
                    completionTokens: usage.output_tokens,
                    promptTokens: usage.input_tokens,
                    model: model,
                    totalTokens: usage.input_tokens + usage.output_tokens
                }
                EvaluationRunner.addMetrics(evalID, JSON.stringify(metric))
            } else if (usage.completion_tokens) {
                const metric = {
                    completionTokens: usage.completion_tokens,
                    promptTokens: usage.prompt_tokens,
                    model: model,
                    totalTokens: usage.total_tokens
                }
                EvaluationRunner.addMetrics(evalID, JSON.stringify(metric))
            }
        } else if (event.detail?.payload?.response?.raw['amazon-bedrock-invocationMetrics']) {
            const usage = event.detail?.payload?.response?.raw['amazon-bedrock-invocationMetrics']
            const metric = {
                completionTokens: usage.outputTokenCount,
                promptTokens: usage.inputTokenCount,
                model: event.detail?.payload?.response?.raw.model,
                totalTokens: usage.inputTokenCount + usage.outputTokenCount
            }
            EvaluationRunner.addMetrics(evalID, JSON.stringify(metric))
        } else {
            const metric = {
                [label]: (event.timeStamp - startTime).toFixed(2),
                completionTokens: EvaluationRunTracerLlama.tokenCounts.get(evalID + '_outputTokens'),
                promptTokens: EvaluationRunTracerLlama.tokenCounts.get(evalID + '_promptTokens'),
                model: model || EvaluationRunTracerLlama.models.get(evalID) || '',
                totalTokens:
                    (EvaluationRunTracerLlama.tokenCounts.get(evalID + '_outputTokens') || 0) +
                    (EvaluationRunTracerLlama.tokenCounts.get(evalID + '_promptTokens') || 0)
            }
            EvaluationRunner.addMetrics(evalID, JSON.stringify(metric))
        }

        // cleanup (the token counters live in tokenCounts, not startTimes)
        EvaluationRunTracerLlama.startTimes.delete(evalID + '_' + label)
        EvaluationRunTracerLlama.tokenCounts.delete(evalID + '_outputTokens')
        EvaluationRunTracerLlama.tokenCounts.delete(evalID + '_promptTokens')
        EvaluationRunTracerLlama.models.delete(evalID)
    }

    static async injectEvaluationMetadata(nodeData: INodeData, options: ICommonObject, callerObj: any) {
        if (options.evaluationRunId && callerObj) {
            // these are needed for evaluation runs
            options.llamaIndex = true
            await additionalCallbacks(nodeData, options)
            Object.defineProperty(callerObj, 'evaluationRunId', {
                enumerable: true,
                configurable: true,
                writable: true,
                value: options.evaluationRunId
            })
        }
    }
}

// from https://github.com/run-llama/LlamaIndexTS/blob/main/packages/core/src/llm/utils.ts
export function extractText(message: MessageContent): string {
    if (typeof message !== 'string' && !Array.isArray(message)) {
        console.warn('extractText called with non-MessageContent message, this is likely a bug.')
        return `${message}`
    } else if (typeof message !== 'string' && Array.isArray(message)) {
        // message is of type MessageContentDetail[] - retrieve just the text parts and concatenate them
        // so we can pass them to the context generator
        return message
            .filter((c): c is MessageContentTextDetail => c.type === 'text')
            .map((c) => c.text)
            .join('\n\n')
    } else {
        return message
    }
}
@@ -0,0 +1,226 @@
import axios from 'axios'
import { v4 as uuidv4 } from 'uuid'
import { ICommonObject } from '../src'

import { getModelConfigByModelName, MODEL_TYPE } from '../src/modelLoader'

export class EvaluationRunner {
    static metrics = new Map<string, string[]>()

    static getCostMetrics = async (selectedProvider: string, selectedModel: string) => {
        let modelConfig = await getModelConfigByModelName(MODEL_TYPE.CHAT, selectedProvider, selectedModel)
        if (modelConfig) {
            if (modelConfig['cost_values']) {
                return modelConfig.cost_values
            }
            return { cost_values: modelConfig }
        } else {
            modelConfig = await getModelConfigByModelName(MODEL_TYPE.LLM, selectedProvider, selectedModel)
            if (modelConfig) {
                if (modelConfig['cost_values']) {
                    return modelConfig.cost_values
                }
                return { cost_values: modelConfig }
            }
        }
        return undefined
    }

    static async getAndDeleteMetrics(id: string) {
        const val = EvaluationRunner.metrics.get(id)
        if (val) {
            try {
                // first, get the provider and model
                let selectedModel = undefined
                let selectedProvider = undefined
                if (val && val.length > 0) {
                    let modelName = ''
                    let providerName = ''
                    for (let i = 0; i < val.length; i++) {
                        const metric = val[i]
                        if (typeof metric === 'object') {
                            modelName = metric['model']
                            providerName = metric['provider']
                        } else {
                            modelName = JSON.parse(metric)['model']
                            providerName = JSON.parse(metric)['provider']
                        }

                        if (modelName) {
                            selectedModel = modelName
                        }
                        if (providerName) {
                            selectedProvider = providerName
                        }
                    }
                }
                if (selectedProvider && selectedModel) {
                    const modelConfig = await EvaluationRunner.getCostMetrics(selectedProvider, selectedModel)
                    if (modelConfig) {
                        val.push(JSON.stringify({ cost_values: modelConfig }))
                    }
                }
            } catch (error) {
                // stay silent
            }
        }
        EvaluationRunner.metrics.delete(id)
        return val
    }

    static addMetrics(id: string, metric: string) {
        if (EvaluationRunner.metrics.has(id)) {
            EvaluationRunner.metrics.get(id)?.push(metric)
        } else {
            EvaluationRunner.metrics.set(id, [metric])
        }
    }

    baseURL = ''

    constructor(baseURL: string) {
        this.baseURL = baseURL
    }

    getChatflowApiKey(chatflowId: string, apiKeys: { chatflowId: string; apiKey: string }[] = []) {
        return apiKeys.find((item) => item.chatflowId === chatflowId)?.apiKey || ''
    }

    public async runEvaluations(data: ICommonObject) {
        const chatflowIds = JSON.parse(data.chatflowId)
        const returnData: ICommonObject = {}
        returnData.evaluationId = data.evaluationId
        returnData.runDate = new Date()
        returnData.rows = []
        for (let i = 0; i < data.dataset.rows.length; i++) {
            returnData.rows.push({
                input: data.dataset.rows[i].input,
                expectedOutput: data.dataset.rows[i].output,
                itemNo: data.dataset.rows[i].sequenceNo,
                evaluations: [],
                status: 'pending'
            })
        }
        for (let i = 0; i < chatflowIds.length; i++) {
            const chatflowId = chatflowIds[i]
            await this.evaluateChatflow(chatflowId, this.getChatflowApiKey(chatflowId, data.apiKeys), data, returnData)
        }
        return returnData
    }

    async evaluateChatflow(chatflowId: string, apiKey: string, data: any, returnData: any) {
        for (let i = 0; i < data.dataset.rows.length; i++) {
            const item = data.dataset.rows[i]
            const uuid = uuidv4()

            const headers: any = {
                'X-Request-ID': uuid,
                'X-Flowise-Evaluation': 'true'
            }
            if (apiKey) {
                headers['Authorization'] = `Bearer ${apiKey}`
            }
            let axiosConfig = {
                headers: headers
            }
            let startTime = performance.now()
            const runData: any = {}
            runData.chatflowId = chatflowId
            runData.startTime = startTime
            const postData: any = { question: item.input, evaluationRunId: uuid, evaluation: true }
            if (data.sessionId) {
                postData.overrideConfig = { sessionId: data.sessionId }
            }
            try {
                let response = await axios.post(`${this.baseURL}/api/v1/prediction/${chatflowId}`, postData, axiosConfig)
                let agentFlowMetrics: any[] = []
                if (response?.data?.agentFlowExecutedData) {
                    for (let i = 0; i < response.data.agentFlowExecutedData.length; i++) {
                        const agentFlowExecutedData = response.data.agentFlowExecutedData[i]
                        const input_tokens = agentFlowExecutedData?.data?.output?.usageMetadata?.input_tokens || 0
                        const output_tokens = agentFlowExecutedData?.data?.output?.usageMetadata?.output_tokens || 0
                        const total_tokens =
                            agentFlowExecutedData?.data?.output?.usageMetadata?.total_tokens || input_tokens + output_tokens
                        const metrics: any = {
                            promptTokens: input_tokens,
                            completionTokens: output_tokens,
                            totalTokens: total_tokens,
                            provider:
                                agentFlowExecutedData.data?.input?.llmModelConfig?.llmModel ||
                                agentFlowExecutedData.data?.input?.agentModelConfig?.agentModel,
                            model:
                                agentFlowExecutedData.data?.input?.llmModelConfig?.modelName ||
                                agentFlowExecutedData.data?.input?.agentModelConfig?.modelName,
                            nodeLabel: agentFlowExecutedData?.nodeLabel,
                            nodeId: agentFlowExecutedData?.nodeId
                        }
                        if (metrics.provider && metrics.model) {
                            const modelConfig = await EvaluationRunner.getCostMetrics(metrics.provider, metrics.model)
                            if (modelConfig) {
                                metrics.cost_values = {
                                    input_cost: (modelConfig.cost_values.input_cost || 0) * (input_tokens / 1000),
                                    output_cost: (modelConfig.cost_values.output_cost || 0) * (output_tokens / 1000)
                                }
                                metrics.cost_values.total_cost = metrics.cost_values.input_cost + metrics.cost_values.output_cost
                            }
                        }
                        agentFlowMetrics.push(metrics)
                    }
                }
                const endTime = performance.now()
                const timeTaken = (endTime - startTime).toFixed(2)
                if (response?.data?.metrics) {
                    runData.metrics = response.data.metrics
                    runData.metrics.push({
                        apiLatency: timeTaken
                    })
                } else {
                    runData.metrics = [
                        {
                            apiLatency: timeTaken
                        }
                    ]
                }
                if (agentFlowMetrics.length > 0) {
                    runData.nested_metrics = agentFlowMetrics
                }
                runData.status = 'complete'
                let resultText = ''
                if (response.data.text) resultText = response.data.text
                else if (response.data.json) resultText = '```json\n' + JSON.stringify(response.data.json, null, 2) + '\n```' // close the markdown fence
                else resultText = JSON.stringify(response.data, null, 2)

                runData.actualOutput = resultText
                runData.latency = timeTaken
                runData.error = ''
            } catch (error: any) {
                runData.status = 'error'
                runData.actualOutput = ''
                runData.error = error?.response?.data?.message
                    ? error.response.data.message
                    : error?.message
                    ? error.message
                    : 'Unknown error'
                try {
                    if (runData.error.indexOf('-') > -1) {
                        // if there is a dash, remove all content before it
                        runData.error = 'Error: ' + runData.error.substr(runData.error.indexOf('-') + 1).trim()
                    }
                } catch (error) {
                    // stay silent
                }
                const endTime = performance.now()
                const timeTaken = (endTime - startTime).toFixed(2)
                runData.metrics = [
                    {
                        apiLatency: timeTaken
                    }
                ]
                runData.latency = timeTaken
            }
            runData.uuid = uuid
            returnData.rows[i].evaluations.push(runData)
        }
        return returnData
    }
}
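A hypothetical driver for the class above. The ids, API key, and dataset row are placeholders; the payload shape mirrors what runEvaluations reads (note that chatflowId is a JSON-encoded array of ids).

import { EvaluationRunner } from './EvaluationRunner'

async function main() {
    const runner = new EvaluationRunner('http://localhost:3000') // base URL of the Flowise server
    const results = await runner.runEvaluations({
        evaluationId: 'eval-1',
        chatflowId: JSON.stringify(['<chatflow-id>']), // JSON string, parsed inside runEvaluations
        apiKeys: [{ chatflowId: '<chatflow-id>', apiKey: '<api-key>' }],
        dataset: {
            rows: [{ input: 'What is artificial intelligence?', output: 'Expected answer', sequenceNo: 1 }]
        }
    })
    // each row carries per-chatflow runs with latency/token metrics attached
    console.log(JSON.stringify(results.rows, null, 2))
}

main().catch(console.error)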
@@ -0,0 +1,15 @@
module.exports = {
    preset: 'ts-jest',
    testEnvironment: 'node',
    roots: ['<rootDir>/nodes'],
    transform: {
        '^.+\\.tsx?$': 'ts-jest'
    },
    testRegex: '(/__tests__/.*|(\\.|/)(test|spec))\\.tsx?$',
    moduleFileExtensions: ['ts', 'tsx', 'js', 'jsx', 'json', 'node'],
    verbose: true,
    testPathIgnorePatterns: ['/node_modules/', '/dist/'],
    moduleNameMapper: {
        '^../../../src/(.*)$': '<rootDir>/src/$1'
    }
}
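A minimal spec file this config would pick up; the require path is an assumption for illustration, and any file under nodes/ matching the testRegex above works the same way.

// nodes/__tests__/credential.test.ts (hypothetical location)
describe('credential modules', () => {
    it('exposes a credClass with label, name and inputs', () => {
        const { credClass } = require('../../credentials/AgentflowApi.credential') // path is an assumption
        const cred = new credClass()
        expect(cred.label).toBe('Agentflow API')
        expect(Array.isArray(cred.inputs)).toBe(true)
    })
})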
@@ -3,6 +3,41 @@
    {
        "name": "awsChatBedrock",
        "models": [
            {
                "label": "anthropic.claude-sonnet-4-5-20250929-v1:0",
                "name": "anthropic.claude-sonnet-4-5-20250929-v1:0",
                "description": "Claude 4.5 Sonnet",
                "input_cost": 0.000003,
                "output_cost": 0.000015
            },
            {
                "label": "anthropic.claude-haiku-4-5-20251001-v1:0",
                "name": "anthropic.claude-haiku-4-5-20251001-v1:0",
                "description": "Claude 4.5 Haiku",
                "input_cost": 0.000001,
                "output_cost": 0.000005
            },
            {
                "label": "openai.gpt-oss-20b-1:0",
                "name": "openai.gpt-oss-20b-1:0",
                "description": "21B parameters model optimized for lower latency, local, and specialized use cases",
                "input_cost": 0.00007,
                "output_cost": 0.0003
            },
            {
                "label": "openai.gpt-oss-120b-1:0",
                "name": "openai.gpt-oss-120b-1:0",
                "description": "120B parameters model optimized for production, general purpose, and high-reasoning use cases",
                "input_cost": 0.00015,
                "output_cost": 0.0006
            },
            {
                "label": "anthropic.claude-opus-4-1-20250805-v1:0",
                "name": "anthropic.claude-opus-4-1-20250805-v1:0",
                "description": "Claude 4.1 Opus",
                "input_cost": 0.000015,
                "output_cost": 0.000075
            },
            {
                "label": "anthropic.claude-sonnet-4-20250514-v1:0",
                "name": "anthropic.claude-sonnet-4-20250514-v1:0",
@@ -280,6 +315,30 @@
    {
        "name": "azureChatOpenAI",
        "models": [
            {
                "label": "gpt-5.1",
                "name": "gpt-5.1",
                "input_cost": 0.00000125,
                "output_cost": 0.00001
            },
            {
                "label": "gpt-5",
                "name": "gpt-5",
                "input_cost": 0.00000125,
                "output_cost": 0.00001
            },
            {
                "label": "gpt-5-mini",
                "name": "gpt-5-mini",
                "input_cost": 0.00000025,
                "output_cost": 0.000002
            },
            {
                "label": "gpt-5-nano",
                "name": "gpt-5-nano",
                "input_cost": 0.00000005,
                "output_cost": 0.0000004
            },
            {
                "label": "gpt-4.1",
                "name": "gpt-4.1",
@@ -357,6 +416,18 @@
                "name": "gpt-4.5-preview",
                "input_cost": 0.000075,
                "output_cost": 0.00015
            },
            {
                "label": "gpt-4.1-mini",
                "name": "gpt-4.1-mini",
                "input_cost": 0.0000004,
                "output_cost": 0.0000016
            },
            {
                "label": "gpt-5-chat-latest",
                "name": "gpt-5-chat-latest",
                "input_cost": 0.00000125,
                "output_cost": 0.00001
            }
        ]
    },
@@ -416,12 +487,38 @@
                "name": "gpt-4-1106-preview",
                "input_cost": 0.00001,
                "output_cost": 0.00003
            },
            {
                "label": "gpt-4.1-mini",
                "name": "gpt-4.1-mini",
                "input_cost": 0.0000004,
                "output_cost": 0.0000016
            },
            {
                "label": "gpt-5-chat-latest",
                "name": "gpt-5-chat-latest",
                "input_cost": 0.00000125,
                "output_cost": 0.00001
            }
        ]
    },
    {
        "name": "chatAnthropic",
        "models": [
            {
                "label": "claude-sonnet-4-5",
                "name": "claude-sonnet-4-5",
                "description": "Claude 4.5 Sonnet",
                "input_cost": 0.000003,
                "output_cost": 0.000015
            },
            {
                "label": "claude-haiku-4-5",
                "name": "claude-haiku-4-5",
                "description": "Claude 4.5 Haiku",
                "input_cost": 0.000001,
                "output_cost": 0.000005
            },
            {
                "label": "claude-sonnet-4-0",
                "name": "claude-sonnet-4-0",
@@ -429,6 +526,13 @@
                "input_cost": 0.000003,
                "output_cost": 0.000015
            },
            {
                "label": "claude-opus-4-1",
                "name": "claude-opus-4-1",
                "description": "Claude 4.1 Opus",
                "input_cost": 0.000015,
                "output_cost": 0.000075
            },
            {
                "label": "claude-opus-4-0",
                "name": "claude-opus-4-0",
@@ -524,17 +628,29 @@
        "name": "chatGoogleGenerativeAI",
        "models": [
            {
-               "label": "gemini-2.5-flash-preview-05-20",
-               "name": "gemini-2.5-flash-preview-05-20",
-               "input_cost": 0.15e-6,
-               "output_cost": 6e-7
+               "label": "gemini-3-pro-preview",
+               "name": "gemini-3-pro-preview",
+               "input_cost": 0.00002,
+               "output_cost": 0.00012
            },
            {
-               "label": "gemini-2.5-pro-preview-03-25",
-               "name": "gemini-2.5-pro-preview-03-25",
+               "label": "gemini-2.5-pro",
+               "name": "gemini-2.5-pro",
                "input_cost": 0.3e-6,
                "output_cost": 0.000025
            },
            {
                "label": "gemini-2.5-flash",
                "name": "gemini-2.5-flash",
                "input_cost": 1.25e-6,
                "output_cost": 0.00001
            },
            {
                "label": "gemini-2.5-flash-lite",
                "name": "gemini-2.5-flash-lite",
                "input_cost": 1e-7,
                "output_cost": 4e-7
            },
            {
                "label": "gemini-2.0-flash",
                "name": "gemini-2.0-flash",
@@ -581,6 +697,42 @@
    {
        "name": "chatGoogleVertexAI",
        "models": [
            {
                "label": "gemini-3-pro-preview",
                "name": "gemini-3-pro-preview",
                "input_cost": 0.00002,
                "output_cost": 0.00012
            },
            {
                "label": "gemini-2.5-pro",
                "name": "gemini-2.5-pro",
                "input_cost": 0.3e-6,
                "output_cost": 0.000025
            },
            {
                "label": "gemini-2.5-flash",
                "name": "gemini-2.5-flash",
                "input_cost": 1.25e-6,
                "output_cost": 0.00001
            },
            {
                "label": "gemini-2.5-flash-lite",
                "name": "gemini-2.5-flash-lite",
                "input_cost": 1e-7,
                "output_cost": 4e-7
            },
            {
                "label": "gemini-2.0-flash",
                "name": "gemini-2.0-flash-001",
                "input_cost": 1e-7,
                "output_cost": 4e-7
            },
            {
                "label": "gemini-2.0-flash-lite",
                "name": "gemini-2.0-flash-lite-001",
                "input_cost": 7.5e-8,
                "output_cost": 3e-7
            },
            {
                "label": "gemini-1.5-flash-002",
                "name": "gemini-1.5-flash-002",
@@ -617,6 +769,27 @@
                "input_cost": 1.25e-7,
                "output_cost": 3.75e-7
            },
            {
                "label": "claude-sonnet-4-5@20250929",
                "name": "claude-sonnet-4-5@20250929",
                "description": "Claude 4.5 Sonnet",
                "input_cost": 0.000003,
                "output_cost": 0.000015
            },
            {
                "label": "claude-haiku-4-5@20251001",
                "name": "claude-haiku-4-5@20251001",
                "description": "Claude 4.5 Haiku",
                "input_cost": 0.000001,
                "output_cost": 0.000005
            },
            {
                "label": "claude-opus-4-1@20250805",
                "name": "claude-opus-4-1@20250805",
                "description": "Claude 4.1 Opus",
                "input_cost": 0.000015,
                "output_cost": 0.000075
            },
            {
                "label": "claude-sonnet-4@20250514",
                "name": "claude-sonnet-4@20250514",
@@ -673,11 +846,63 @@
                "input_cost": 2.5e-7,
                "output_cost": 1.25e-6
            }
        ],
        "regions": [
            { "label": "us-east1", "name": "us-east1" },
            { "label": "us-east4", "name": "us-east4" },
            { "label": "us-central1", "name": "us-central1" },
            { "label": "us-west1", "name": "us-west1" },
            { "label": "europe-west4", "name": "europe-west4" },
            { "label": "europe-west1", "name": "europe-west1" },
            { "label": "europe-west3", "name": "europe-west3" },
            { "label": "europe-west2", "name": "europe-west2" },
            { "label": "asia-east1", "name": "asia-east1" },
            { "label": "asia-southeast1", "name": "asia-southeast1" },
            { "label": "asia-northeast1", "name": "asia-northeast1" },
            { "label": "asia-south1", "name": "asia-south1" },
            { "label": "australia-southeast1", "name": "australia-southeast1" },
            { "label": "southamerica-east1", "name": "southamerica-east1" },
            { "label": "africa-south1", "name": "africa-south1" },
            { "label": "asia-east2", "name": "asia-east2" },
            { "label": "asia-northeast2", "name": "asia-northeast2" },
            { "label": "asia-northeast3", "name": "asia-northeast3" },
            { "label": "asia-south2", "name": "asia-south2" },
            { "label": "asia-southeast2", "name": "asia-southeast2" },
            { "label": "australia-southeast2", "name": "australia-southeast2" },
            { "label": "europe-central2", "name": "europe-central2" },
            { "label": "europe-north1", "name": "europe-north1" },
            { "label": "europe-north2", "name": "europe-north2" },
            { "label": "europe-southwest1", "name": "europe-southwest1" },
            { "label": "europe-west10", "name": "europe-west10" },
            { "label": "europe-west12", "name": "europe-west12" },
            { "label": "europe-west6", "name": "europe-west6" },
            { "label": "europe-west8", "name": "europe-west8" },
            { "label": "europe-west9", "name": "europe-west9" },
            { "label": "me-central1", "name": "me-central1" },
            { "label": "me-central2", "name": "me-central2" },
            { "label": "me-west1", "name": "me-west1" },
            { "label": "northamerica-northeast1", "name": "northamerica-northeast1" },
            { "label": "northamerica-northeast2", "name": "northamerica-northeast2" },
            { "label": "northamerica-south1", "name": "northamerica-south1" },
            { "label": "southamerica-west1", "name": "southamerica-west1" },
            { "label": "us-east5", "name": "us-east5" },
            { "label": "us-south1", "name": "us-south1" },
            { "label": "us-west2", "name": "us-west2" },
            { "label": "us-west3", "name": "us-west3" },
            { "label": "us-west4", "name": "us-west4" }
        ]
    },
    {
        "name": "groqChat",
        "models": [
            {
                "label": "openai/gpt-oss-20b",
                "name": "openai/gpt-oss-20b"
            },
            {
                "label": "openai/gpt-oss-120b",
                "name": "openai/gpt-oss-120b"
            },
            {
                "label": "meta-llama/llama-4-maverick-17b-128e-instruct",
                "name": "meta-llama/llama-4-maverick-17b-128e-instruct"
@@ -789,6 +1014,30 @@
    {
        "name": "chatOpenAI",
        "models": [
            {
                "label": "gpt-5.1",
                "name": "gpt-5.1",
                "input_cost": 0.00000125,
                "output_cost": 0.00001
            },
            {
                "label": "gpt-5",
                "name": "gpt-5",
                "input_cost": 0.00000125,
                "output_cost": 0.00001
            },
            {
                "label": "gpt-5-mini",
                "name": "gpt-5-mini",
                "input_cost": 0.00000025,
                "output_cost": 0.000002
            },
            {
                "label": "gpt-5-nano",
                "name": "gpt-5-nano",
                "input_cost": 0.00000005,
                "output_cost": 0.0000004
            },
            {
                "label": "gpt-4.1",
                "name": "gpt-4.1",
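The `input_cost` and `output_cost` fields in these entries are unit prices. A minimal sketch of how such fields are typically consumed, assuming they are USD per token (so gpt-5's 0.00000125 input cost corresponds to $1.25 per million tokens); `estimateCostUSD` is an illustrative name, not a function from this repository:

```ts
// Multiply token counts by the per-token rates from models.json (assumed USD/token).
function estimateCostUSD(inputTokens: number, outputTokens: number, inputCost: number, outputCost: number): number {
    return inputTokens * inputCost + outputTokens * outputCost
}

// e.g. 10k prompt tokens + 2k completion tokens on gpt-5:
// estimateCostUSD(10_000, 2_000, 0.00000125, 0.00001) === 0.0325  ($0.0325)
```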
@@ -1217,6 +1466,18 @@
                "name": "mistral-large-2402",
                "input_cost": 0.002,
                "output_cost": 0.006
            },
            {
                "label": "codestral-latest",
                "name": "codestral-latest",
                "input_cost": 0.0002,
                "output_cost": 0.0006
            },
            {
                "label": "devstral-small-2505",
                "name": "devstral-small-2505",
                "input_cost": 0.0001,
                "output_cost": 0.0003
            }
        ]
    },
@@ -1511,6 +1772,18 @@
                "name": "gpt-4-32k",
                "input_cost": 0.00006,
                "output_cost": 0.00012
            },
            {
                "label": "gpt-4.1-mini",
                "name": "gpt-4.1-mini",
                "input_cost": 0.0000004,
                "output_cost": 0.0000016
            },
            {
                "label": "gpt-5-chat-latest",
                "name": "gpt-5-chat-latest",
                "input_cost": 0.00000125,
                "output_cost": 0.00001
            }
        ]
    },
@@ -1711,29 +1984,65 @@
        "name": "googlevertexaiEmbeddings",
        "models": [
            {
-               "label": "multimodalembedding",
-               "name": "multimodalembedding"
+               "label": "gemini-embedding-001",
+               "name": "gemini-embedding-001"
            },
            {
                "label": "text-embedding-004",
                "name": "text-embedding-004"
            },
            {
                "label": "text-embedding-005",
                "name": "text-embedding-005"
            },
            {
                "label": "text-multilingual-embedding-002",
                "name": "text-multilingual-embedding-002"
            },
            {
                "label": "textembedding-gecko@001",
                "name": "textembedding-gecko@001"
            },
            {
                "label": "textembedding-gecko@latest",
                "name": "textembedding-gecko@latest"
            },
            {
                "label": "textembedding-gecko-multilingual@latest",
                "name": "textembedding-gecko-multilingual@latest"
            }
        ],
        "regions": [
            { "label": "us-east1", "name": "us-east1" },
            { "label": "us-east4", "name": "us-east4" },
            { "label": "us-central1", "name": "us-central1" },
            { "label": "us-west1", "name": "us-west1" },
            { "label": "europe-west4", "name": "europe-west4" },
            { "label": "europe-west1", "name": "europe-west1" },
            { "label": "europe-west3", "name": "europe-west3" },
            { "label": "europe-west2", "name": "europe-west2" },
            { "label": "asia-east1", "name": "asia-east1" },
            { "label": "asia-southeast1", "name": "asia-southeast1" },
            { "label": "asia-northeast1", "name": "asia-northeast1" },
            { "label": "asia-south1", "name": "asia-south1" },
            { "label": "australia-southeast1", "name": "australia-southeast1" },
            { "label": "southamerica-east1", "name": "southamerica-east1" },
            { "label": "africa-south1", "name": "africa-south1" },
            { "label": "asia-east2", "name": "asia-east2" },
            { "label": "asia-northeast2", "name": "asia-northeast2" },
            { "label": "asia-northeast3", "name": "asia-northeast3" },
            { "label": "asia-south2", "name": "asia-south2" },
            { "label": "asia-southeast2", "name": "asia-southeast2" },
            { "label": "australia-southeast2", "name": "australia-southeast2" },
            { "label": "europe-central2", "name": "europe-central2" },
            { "label": "europe-north1", "name": "europe-north1" },
            { "label": "europe-north2", "name": "europe-north2" },
            { "label": "europe-southwest1", "name": "europe-southwest1" },
            { "label": "europe-west10", "name": "europe-west10" },
            { "label": "europe-west12", "name": "europe-west12" },
            { "label": "europe-west6", "name": "europe-west6" },
            { "label": "europe-west8", "name": "europe-west8" },
            { "label": "europe-west9", "name": "europe-west9" },
            { "label": "me-central1", "name": "me-central1" },
            { "label": "me-central2", "name": "me-central2" },
            { "label": "me-west1", "name": "me-west1" },
            { "label": "northamerica-northeast1", "name": "northamerica-northeast1" },
            { "label": "northamerica-northeast2", "name": "northamerica-northeast2" },
            { "label": "northamerica-south1", "name": "northamerica-south1" },
            { "label": "southamerica-west1", "name": "southamerica-west1" },
            { "label": "us-east5", "name": "us-east5" },
            { "label": "us-south1", "name": "us-south1" },
            { "label": "us-west2", "name": "us-west2" },
            { "label": "us-west3", "name": "us-west3" },
            { "label": "us-west4", "name": "us-west4" }
        ]
    },
    {
File diff suppressed because it is too large
@@ -1,4 +1,5 @@
import { CommonType, ICommonObject, ICondition, INode, INodeData, INodeOutputsValue, INodeParams } from '../../../src/Interface'
+import removeMarkdown from 'remove-markdown'

class Condition_Agentflow implements INode {
    label: string
@@ -300,8 +301,8 @@ class Condition_Agentflow implements INode {
                value2 = parseFloat(_value2 as string) || 0
                break
            default: // string
-               value1 = _value1 as string
-               value2 = _value2 as string
+               value1 = removeMarkdown((_value1 as string) || '')
+               value2 = removeMarkdown((_value2 as string) || '')
        }

        const compareOperationResult = compareOperationFunctions[operation](value1, value2)
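The switch above now strips markdown before comparing strings. A minimal sketch of why, using the same `remove-markdown` package imported at the top of the file (the sample values are hypothetical):

```ts
import removeMarkdown from 'remove-markdown'

// An LLM may wrap the expected value in formatting (e.g. "**approved**"),
// which would otherwise fail an exact-match condition.
const llmAnswer = '**approved**' // formatted model output
const expected = 'approved'      // condition value configured in the UI

const equals = removeMarkdown(llmAnswer || '') === removeMarkdown(expected || '')
console.log(equals) // true: the markdown emphasis no longer breaks the match
```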
@@ -316,7 +317,7 @@ class Condition_Agentflow implements INode {
            }
        }

-       // If no condition is fullfilled, add isFulfilled to the ELSE condition
+       // If no condition is fulfilled, add isFulfilled to the ELSE condition
        const dummyElseConditionData = {
            type: 'string',
            value1: '',
@@ -27,7 +27,7 @@ class ConditionAgent_Agentflow implements INode {
    constructor() {
        this.label = 'Condition Agent'
        this.name = 'conditionAgentAgentflow'
-       this.version = 1.0
+       this.version = 1.1
        this.type = 'ConditionAgent'
        this.category = 'Agent Flows'
        this.description = `Utilize an agent to split flows based on dynamic conditions`
@@ -80,6 +80,26 @@ class ConditionAgent_Agentflow implements INode {
                        scenario: ''
                    }
                ]
            },
+           {
+               label: 'Override System Prompt',
+               name: 'conditionAgentOverrideSystemPrompt',
+               type: 'boolean',
+               description: 'Override initial system prompt for Condition Agent',
+               optional: true
+           },
+           {
+               label: 'Node System Prompt',
+               name: 'conditionAgentSystemPrompt',
+               type: 'string',
+               rows: 4,
+               optional: true,
+               acceptVariable: true,
+               default: CONDITION_AGENT_SYSTEM_PROMPT,
+               description: 'Expert use only. Modifying this can significantly alter agent behavior. Leave default if unsure',
+               show: {
+                   conditionAgentOverrideSystemPrompt: true
+               }
+           }
            /*{
                label: 'Enable Memory',
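The new pair of inputs gates the custom prompt behind an explicit boolean. A sketch of the precedence rule (the constant name stands in for CONDITION_AGENT_SYSTEM_PROMPT, which is defined elsewhere in the package):

```ts
const DEFAULT_PROMPT = 'You are a classifier...' // placeholder for CONDITION_AGENT_SYSTEM_PROMPT

// The custom prompt only takes effect when the override flag is also set,
// so a prompt that was edited but left disabled cannot silently change behavior.
function resolveSystemPrompt(customPrompt?: string, override?: boolean): string {
    return customPrompt && override ? customPrompt : DEFAULT_PROMPT
}
```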
@@ -242,6 +262,12 @@ class ConditionAgent_Agentflow implements INode {
        const conditionAgentInput = nodeData.inputs?.conditionAgentInput as string
        let input = conditionAgentInput || question
        const conditionAgentInstructions = nodeData.inputs?.conditionAgentInstructions as string
+       const conditionAgentSystemPrompt = nodeData.inputs?.conditionAgentSystemPrompt as string
+       const conditionAgentOverrideSystemPrompt = nodeData.inputs?.conditionAgentOverrideSystemPrompt as boolean
+       let systemPrompt = CONDITION_AGENT_SYSTEM_PROMPT
+       if (conditionAgentSystemPrompt && conditionAgentOverrideSystemPrompt) {
+           systemPrompt = conditionAgentSystemPrompt
+       }

        // Extract memory and configuration options
        const enableMemory = nodeData.inputs?.conditionAgentEnableMemory as boolean
@@ -277,31 +303,15 @@ class ConditionAgent_Agentflow implements INode {
        const messages: BaseMessageLike[] = [
            {
                role: 'system',
-               content: CONDITION_AGENT_SYSTEM_PROMPT
+               content: systemPrompt
            },
            {
                role: 'user',
-               content: `{"input": "Hello", "scenarios": ["user is asking about AI", "default"], "instruction": "Your task is to check and see if user is asking topic about AI"}`
+               content: `{"input": "Hello", "scenarios": ["user is asking about AI", "user is not asking about AI"], "instruction": "Your task is to check if the user is asking about AI."}`
            },
            {
                role: 'assistant',
-               content: `\`\`\`json\n{"output": "default"}\n\`\`\``
-           },
-           {
-               role: 'user',
-               content: `{"input": "What is AIGC?", "scenarios": ["user is asking about AI", "default"], "instruction": "Your task is to check and see if user is asking topic about AI"}`
-           },
-           {
-               role: 'assistant',
-               content: `\`\`\`json\n{"output": "user is asking about AI"}\n\`\`\``
-           },
-           {
-               role: 'user',
-               content: `{"input": "Can you explain deep learning?", "scenarios": ["user is interested in AI topics", "default"], "instruction": "Determine if the user is interested in learning about AI"}`
-           },
-           {
-               role: 'assistant',
-               content: `\`\`\`json\n{"output": "user is interested in AI topics"}\n\`\`\``
+               content: `\`\`\`json\n{"output": "user is not asking about AI"}\n\`\`\``
            }
        ]
        // Use to store messages with image file references as we do not want to store the base64 data into database
@@ -374,15 +384,19 @@ class ConditionAgent_Agentflow implements INode {
            )
        }

-       let calledOutputName = 'default'
+       let calledOutputName: string
        try {
            const parsedResponse = this.parseJsonMarkdown(response.content as string)
-           if (!parsedResponse.output) {
-               throw new Error('Missing "output" key in response')
+           if (!parsedResponse.output || typeof parsedResponse.output !== 'string') {
+               throw new Error('LLM response is missing the "output" key or it is not a string.')
            }
            calledOutputName = parsedResponse.output
        } catch (error) {
-           console.warn(`Failed to parse LLM response: ${error}. Using default output.`)
+           throw new Error(
+               `Failed to parse a valid scenario from the LLM's response. Please check if the model is capable of following JSON output instructions. Raw LLM Response: "${
+                   response.content as string
+               }"`
+           )
        }

        // Clean up empty inputs
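The node now fails loudly instead of silently routing to a "default" branch when the model does not return valid JSON. A minimal sketch of that stricter contract; `parseJsonMarkdown` itself is not shown in this diff, so the extraction step here is an assumption:

```ts
// Pull the outermost JSON object out of the raw reply (it is often wrapped in
// a fenced code block) and reject anything without a string "output" key.
function parseScenario(raw: string): string {
    const start = raw.indexOf('{')
    const end = raw.lastIndexOf('}')
    if (start === -1 || end === -1) throw new Error(`No JSON found in LLM response: "${raw}"`)
    const parsed = JSON.parse(raw.slice(start, end + 1))
    if (!parsed.output || typeof parsed.output !== 'string') {
        throw new Error(`No valid scenario in LLM response: "${raw}"`)
    }
    return parsed.output
}
```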
@@ -8,8 +8,7 @@ import {
    INodeParams,
    IServerSideEventStreamer
} from '../../../src/Interface'
-import { availableDependencies, defaultAllowBuiltInDep, getVars, prepareSandboxVars } from '../../../src/utils'
-import { NodeVM } from '@flowiseai/nodevm'
+import { getVars, executeJavaScriptCode, createCodeExecutionSandbox, processTemplateVariables } from '../../../src/utils'
import { updateFlowState } from '../utils'

interface ICustomFunctionInputVariables {
@@ -19,9 +18,9 @@ interface ICustomFunctionInputVariables {

const exampleFunc = `/*
* You can use any libraries imported in Flowise
-* You can use properties specified in Input Schema as variables. Ex: Property = userid, Variable = $userid
+* You can use properties specified in Input Variables with the prefix $. For example: $foo
* You can get default flow config: $flow.sessionId, $flow.chatId, $flow.chatflowId, $flow.input, $flow.state
-* You can get custom variables: $vars.<variable-name>
+* You can get global variables: $vars.<variable-name>
* Must return a string value at the end of function
*/
@@ -146,77 +145,51 @@ class CustomFunction_Agentflow implements INode {
        const appDataSource = options.appDataSource as DataSource
        const databaseEntities = options.databaseEntities as IDatabaseEntity

-       // Update flow state if needed
-       let newState = { ...state }
-       if (_customFunctionUpdateState && Array.isArray(_customFunctionUpdateState) && _customFunctionUpdateState.length > 0) {
-           newState = updateFlowState(state, _customFunctionUpdateState)
-       }
-
-       const variables = await getVars(appDataSource, databaseEntities, nodeData)
+       const variables = await getVars(appDataSource, databaseEntities, nodeData, options)
        const flow = {
            chatflowId: options.chatflowid,
            sessionId: options.sessionId,
            chatId: options.chatId,
-           input
+           input,
+           state
        }

-       let sandbox: any = {
-           $input: input,
-           util: undefined,
-           Symbol: undefined,
-           child_process: undefined,
-           fs: undefined,
-           process: undefined
-       }
-       sandbox['$vars'] = prepareSandboxVars(variables)
-       sandbox['$flow'] = flow
-
+       // Create additional sandbox variables for custom function inputs
+       const additionalSandbox: ICommonObject = {}
        for (const item of functionInputVariables) {
            const variableName = item.variableName
            const variableValue = item.variableValue
-           sandbox[`$${variableName}`] = variableValue
+           additionalSandbox[`$${variableName}`] = variableValue
        }

-       const builtinDeps = process.env.TOOL_FUNCTION_BUILTIN_DEP
-           ? defaultAllowBuiltInDep.concat(process.env.TOOL_FUNCTION_BUILTIN_DEP.split(','))
-           : defaultAllowBuiltInDep
-       const externalDeps = process.env.TOOL_FUNCTION_EXTERNAL_DEP ? process.env.TOOL_FUNCTION_EXTERNAL_DEP.split(',') : []
-       const deps = availableDependencies.concat(externalDeps)
+       const sandbox = createCodeExecutionSandbox(input, variables, flow, additionalSandbox)

-       const nodeVMOptions = {
-           console: 'inherit',
-           sandbox,
-           require: {
-               external: { modules: deps },
-               builtin: builtinDeps
-           },
-           eval: false,
-           wasm: false,
-           timeout: 10000
-       } as any
+       // Setup streaming function if needed
+       const streamOutput = isStreamable
+           ? (output: string) => {
+                 const sseStreamer: IServerSideEventStreamer = options.sseStreamer
+                 sseStreamer.streamTokenEvent(chatId, output)
+             }
+           : undefined

-       const vm = new NodeVM(nodeVMOptions)
        try {
-           const response = await vm.run(`module.exports = async function() {${javascriptFunction}}()`, __dirname)
+           const response = await executeJavaScriptCode(javascriptFunction, sandbox, {
+               libraries: ['axios'],
+               streamOutput
+           })

            let finalOutput = response
            if (typeof response === 'object') {
                finalOutput = JSON.stringify(response, null, 2)
            }

-           if (isStreamable) {
-               const sseStreamer: IServerSideEventStreamer = options.sseStreamer
-               sseStreamer.streamTokenEvent(chatId, finalOutput)
+           // Update flow state if needed
+           let newState = { ...state }
+           if (_customFunctionUpdateState && Array.isArray(_customFunctionUpdateState) && _customFunctionUpdateState.length > 0) {
+               newState = updateFlowState(state, _customFunctionUpdateState)
            }

            // Process template variables in state
-           if (newState && Object.keys(newState).length > 0) {
-               for (const key in newState) {
-                   if (newState[key].toString().includes('{{ output }}')) {
-                       newState[key] = finalOutput
-                   }
-               }
-           }
+           newState = processTemplateVariables(newState, finalOutput)

            const returnOutput = {
                id: nodeData.id,
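The refactor replaces the hand-rolled NodeVM setup with the shared `createCodeExecutionSandbox`/`executeJavaScriptCode` helpers from src/utils. Those helpers are not shown in this diff, so the sketch below only illustrates the sandbox shape they plausibly assemble; the exact internals are an assumption:

```ts
// Every node now builds $input/$vars/$flow the same way instead of duplicating
// NodeVM options; extra per-node variables are merged in last.
interface Flow {
    chatflowId: string
    sessionId?: string
    chatId?: string
    input: string
    state?: Record<string, any>
}

function buildSandbox(input: string, vars: Record<string, any>, flow: Flow, extra: Record<string, any> = {}) {
    return { $input: input, $vars: vars, $flow: flow, ...extra }
}
```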
@@ -8,7 +8,7 @@ import {
    IServerSideEventStreamer
} from '../../../src/Interface'
import axios, { AxiosRequestConfig } from 'axios'
-import { getCredentialData, getCredentialParam } from '../../../src/utils'
+import { getCredentialData, getCredentialParam, processTemplateVariables, parseJsonBody } from '../../../src/utils'
import { DataSource } from 'typeorm'
import { BaseMessageLike } from '@langchain/core/messages'
import { updateFlowState } from '../utils'
@@ -30,7 +30,7 @@ class ExecuteFlow_Agentflow implements INode {
    constructor() {
        this.label = 'Execute Flow'
        this.name = 'executeFlowAgentflow'
-       this.version = 1.0
+       this.version = 1.1
        this.type = 'ExecuteFlow'
        this.category = 'Agent Flows'
        this.description = 'Execute another flow'
@@ -62,7 +62,8 @@ class ExecuteFlow_Agentflow implements INode {
                name: 'executeFlowOverrideConfig',
                description: 'Override the config passed to the flow',
                type: 'json',
-               optional: true
+               optional: true,
+               acceptVariable: true
            },
            {
                label: 'Base URL',
@@ -127,7 +128,8 @@ class ExecuteFlow_Agentflow implements INode {
            return returnData
        }

-       const chatflows = await appDataSource.getRepository(databaseEntities['ChatFlow']).find()
+       const searchOptions = options.searchOptions || {}
+       const chatflows = await appDataSource.getRepository(databaseEntities['ChatFlow']).findBy(searchOptions)

        for (let i = 0; i < chatflows.length; i += 1) {
            let cfType = 'Chatflow'
@@ -161,12 +163,15 @@ class ExecuteFlow_Agentflow implements INode {
        const flowInput = nodeData.inputs?.executeFlowInput as string
        const returnResponseAs = nodeData.inputs?.executeFlowReturnResponseAs as string
        const _executeFlowUpdateState = nodeData.inputs?.executeFlowUpdateState
-       const overrideConfig =
-           typeof nodeData.inputs?.executeFlowOverrideConfig === 'string' &&
-           nodeData.inputs.executeFlowOverrideConfig.startsWith('{') &&
-           nodeData.inputs.executeFlowOverrideConfig.endsWith('}')
-               ? JSON.parse(nodeData.inputs.executeFlowOverrideConfig)
-               : nodeData.inputs?.executeFlowOverrideConfig
+
+       let overrideConfig = nodeData.inputs?.executeFlowOverrideConfig
+       if (typeof overrideConfig === 'string' && overrideConfig.startsWith('{') && overrideConfig.endsWith('}')) {
+           try {
+               overrideConfig = parseJsonBody(overrideConfig)
+           } catch (parseError) {
+               throw new Error(`Invalid JSON in executeFlowOverrideConfig: ${parseError.message}`)
+           }
+       }

        const state = options.agentflowRuntime?.state as ICommonObject
        const runtimeChatHistory = (options.agentflowRuntime?.chatHistory as BaseMessageLike[]) ?? []
@@ -180,7 +185,8 @@ class ExecuteFlow_Agentflow implements INode {
        if (selectedFlowId === options.chatflowid) throw new Error('Cannot call the same agentflow!')

        let headers: Record<string, string> = {
-           'Content-Type': 'application/json'
+           'Content-Type': 'application/json',
+           'flowise-tool': 'true'
        }
        if (chatflowApiKey) headers = { ...headers, Authorization: `Bearer ${chatflowApiKey}` }
@@ -214,13 +220,7 @@ class ExecuteFlow_Agentflow implements INode {
        }

        // Process template variables in state
-       if (newState && Object.keys(newState).length > 0) {
-           for (const key in newState) {
-               if (newState[key].toString().includes('{{ output }}')) {
-                   newState[key] = resultText
-               }
-           }
-       }
+       newState = processTemplateVariables(newState, resultText)

        // Only add to runtime chat history if this is the first node
        const inputMessages = []
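The same inline loop was repeated across several nodes and is now collapsed into `processTemplateVariables`. That helper lives in src/utils and is not part of this diff, so this is only a sketch of its observable behavior:

```ts
// Replace every state value carrying the "{{ output }}" placeholder with the
// node's final output, leaving other keys untouched.
function fillOutputPlaceholders(state: Record<string, any>, output: string): Record<string, any> {
    const next = { ...state }
    for (const key in next) {
        if (String(next[key]).includes('{{ output }}')) next[key] = output
    }
    return next
}
```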
@@ -1,8 +1,9 @@
import { ICommonObject, INode, INodeData, INodeParams } from '../../../src/Interface'
-import axios, { AxiosRequestConfig, Method, ResponseType } from 'axios'
+import { AxiosRequestConfig, Method, ResponseType } from 'axios'
import FormData from 'form-data'
import * as querystring from 'querystring'
-import { getCredentialData, getCredentialParam } from '../../../src/utils'
+import { getCredentialData, getCredentialParam, parseJsonBody } from '../../../src/utils'
+import { secureAxiosRequest } from '../../../src/httpSecurity'

class HTTP_Agentflow implements INode {
    label: string
@@ -21,7 +22,7 @@ class HTTP_Agentflow implements INode {
    constructor() {
        this.label = 'HTTP'
        this.name = 'httpAgentflow'
-       this.version = 1.0
+       this.version = 1.1
        this.type = 'HTTP'
        this.category = 'Agent Flows'
        this.description = 'Send a HTTP request'
@@ -66,12 +67,14 @@ class HTTP_Agentflow implements INode {
            {
                label: 'URL',
                name: 'url',
-               type: 'string'
+               type: 'string',
+               acceptVariable: true
            },
            {
                label: 'Headers',
                name: 'headers',
                type: 'array',
+               acceptVariable: true,
                array: [
                    {
                        label: 'Key',
@@ -83,7 +86,8 @@ class HTTP_Agentflow implements INode {
                        label: 'Value',
                        name: 'value',
                        type: 'string',
-                       default: ''
+                       default: '',
+                       acceptVariable: true
                    }
                ],
                optional: true
@@ -92,6 +96,7 @@ class HTTP_Agentflow implements INode {
                label: 'Query Params',
                name: 'queryParams',
                type: 'array',
+               acceptVariable: true,
                array: [
                    {
                        label: 'Key',
@@ -103,7 +108,8 @@ class HTTP_Agentflow implements INode {
                        label: 'Value',
                        name: 'value',
                        type: 'string',
-                       default: ''
+                       default: '',
+                       acceptVariable: true
                    }
                ],
                optional: true
@@ -147,6 +153,7 @@ class HTTP_Agentflow implements INode {
                label: 'Body',
                name: 'body',
                type: 'array',
+               acceptVariable: true,
                show: {
                    bodyType: ['xWwwFormUrlencoded', 'formData']
                },
@@ -161,7 +168,8 @@ class HTTP_Agentflow implements INode {
                        label: 'Value',
                        name: 'value',
                        type: 'string',
-                       default: ''
+                       default: '',
+                       acceptVariable: true
                    }
                ],
                optional: true
@@ -220,14 +228,14 @@ class HTTP_Agentflow implements INode {
        // Add credentials if provided
        const credentialData = await getCredentialData(nodeData.credential ?? '', options)
        if (credentialData && Object.keys(credentialData).length !== 0) {
-           const basicAuthUsername = getCredentialParam('username', credentialData, nodeData)
-           const basicAuthPassword = getCredentialParam('password', credentialData, nodeData)
+           const basicAuthUsername = getCredentialParam('basicAuthUsername', credentialData, nodeData)
+           const basicAuthPassword = getCredentialParam('basicAuthPassword', credentialData, nodeData)
            const bearerToken = getCredentialParam('token', credentialData, nodeData)
            const apiKeyName = getCredentialParam('key', credentialData, nodeData)
            const apiKeyValue = getCredentialParam('value', credentialData, nodeData)

            // Determine which type of auth to use based on available credentials
-           if (basicAuthUsername && basicAuthPassword) {
+           if (basicAuthUsername || basicAuthPassword) {
                // Basic Auth
                const auth = Buffer.from(`${basicAuthUsername}:${basicAuthPassword}`).toString('base64')
                requestHeaders['Authorization'] = `Basic ${auth}`
@@ -266,10 +274,11 @@ class HTTP_Agentflow implements INode {
        // Handle request body based on body type
        if (method !== 'GET' && body) {
            switch (bodyType) {
-               case 'json':
-                   requestConfig.data = typeof body === 'string' ? JSON.parse(body) : body
+               case 'json': {
+                   requestConfig.data = typeof body === 'string' ? parseJsonBody(body) : body
                    requestHeaders['Content-Type'] = 'application/json'
                    break
+               }
                case 'raw':
                    requestConfig.data = body
                    break
@@ -284,14 +293,14 @@ class HTTP_Agentflow implements INode {
                    break
                }
                case 'xWwwFormUrlencoded':
-                   requestConfig.data = querystring.stringify(typeof body === 'string' ? JSON.parse(body) : body)
+                   requestConfig.data = querystring.stringify(typeof body === 'string' ? parseJsonBody(body) : body)
                    requestHeaders['Content-Type'] = 'application/x-www-form-urlencoded'
                    break
            }
        }

-       // Make the HTTP request
-       const response = await axios(requestConfig)
+       // Make the secure HTTP request that validates all URLs in redirect chains
+       const response = await secureAxiosRequest(requestConfig)

        // Process response based on response type
        let responseData
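Swapping the bare `axios(requestConfig)` call for `secureAxiosRequest` closes an SSRF hole: a public URL could previously redirect the request into a private address. The real implementation lives in src/httpSecurity and is not part of this hunk; the sketch below shows the typical shape of such a wrapper, with the blocklist and hop limit as assumptions:

```ts
import axios, { AxiosRequestConfig } from 'axios'

const BLOCKED_HOSTS = [/^localhost$/i, /^127\./, /^10\./, /^192\.168\./, /^169\.254\./]

// Disable automatic redirects and re-validate the hostname of every hop.
async function followSafely(config: AxiosRequestConfig, maxHops = 5) {
    let url = config.url as string
    for (let hop = 0; hop <= maxHops; hop++) {
        const host = new URL(url).hostname
        if (BLOCKED_HOSTS.some((re) => re.test(host))) throw new Error(`Blocked host: ${host}`)
        const res = await axios({ ...config, url, maxRedirects: 0, validateStatus: (s: number) => s < 400 })
        if (res.status >= 300 && res.headers.location) {
            url = new URL(res.headers.location, url).toString() // next hop gets validated too
            continue
        }
        return res
    }
    throw new Error('Too many redirects')
}
```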
@@ -330,6 +339,9 @@ class HTTP_Agentflow implements INode {
        } catch (error) {
            console.error('HTTP Request Error:', error)

+           const errorMessage =
+               error.response?.data?.message || error.response?.data?.error || error.message || 'An error occurred during the HTTP request'
+
            // Format error response
            const errorResponse: any = {
                id: nodeData.id,
@@ -347,7 +359,7 @@ class HTTP_Agentflow implements INode {
                },
                error: {
                    name: error.name || 'Error',
-                   message: error.message || 'An error occurred during the HTTP request'
+                   message: errorMessage
                },
                state
            }
@@ -360,7 +372,7 @@ class HTTP_Agentflow implements INode {
                errorResponse.error.headers = error.response.headers
            }

-           throw new Error(error)
+           throw new Error(errorMessage)
        }
    }
}
@@ -208,7 +208,7 @@ class HumanInput_Agentflow implements INode {
            humanInputDescription = (nodeData.inputs?.humanInputDescription as string) || 'Do you want to proceed?'
            const messages = [...pastChatHistory, ...runtimeChatHistory]
            // Find the last message in the messages array
-           const lastMessage = (messages[messages.length - 1] as any).content || ''
+           const lastMessage = messages.length > 0 ? (messages[messages.length - 1] as any).content || '' : ''
            humanInputDescription = `${lastMessage}\n\n${humanInputDescription}`
            if (isStreamable) {
                const sseStreamer: IServerSideEventStreamer = options.sseStreamer as IServerSideEventStreamer
@@ -241,8 +241,11 @@ class HumanInput_Agentflow implements INode {
            if (isStreamable) {
                const sseStreamer: IServerSideEventStreamer = options.sseStreamer as IServerSideEventStreamer
                for await (const chunk of await llmNodeInstance.stream(messages)) {
-                   sseStreamer.streamTokenEvent(chatId, chunk.content.toString())
-                   response = response.concat(chunk)
+                   const content = typeof chunk === 'string' ? chunk : chunk.content.toString()
+                   sseStreamer.streamTokenEvent(chatId, content)
+
+                   const messageChunk = typeof chunk === 'string' ? new AIMessageChunk(chunk) : chunk
+                   response = response.concat(messageChunk)
                }
                humanInputDescription = response.content as string
            } else {
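A sketch of the chunk-normalization fix applied here (and again in the LLM node further down): some providers yield plain strings while LangChain chat models yield `AIMessageChunk` objects, and `concat()` only accepts the latter, so string chunks are wrapped first:

```ts
import { AIMessageChunk } from '@langchain/core/messages'

// Wrap raw string chunks so both shapes can be accumulated with concat().
function toMessageChunk(chunk: string | AIMessageChunk): AIMessageChunk {
    return typeof chunk === 'string' ? new AIMessageChunk(chunk) : chunk
}
```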
@@ -1,4 +1,5 @@
import { ICommonObject, INode, INodeData, INodeParams } from '../../../src/Interface'
+import { parseJsonBody } from '../../../src/utils'

class Iteration_Agentflow implements INode {
    label: string
@@ -39,12 +40,17 @@ class Iteration_Agentflow implements INode {
        const iterationInput = nodeData.inputs?.iterationInput

        // Helper function to clean JSON strings with redundant backslashes
-       const cleanJsonString = (str: string): string => {
-           return str.replace(/\\(["'[\]{}])/g, '$1')
+       const safeParseJson = (str: string): string => {
+           try {
+               return parseJsonBody(str)
+           } catch {
+               // Try parsing after cleaning
+               return parseJsonBody(str.replace(/\\(["'[\]{}])/g, '$1'))
+           }
        }

        const iterationInputArray =
-           typeof iterationInput === 'string' && iterationInput !== '' ? JSON.parse(cleanJsonString(iterationInput)) : iterationInput
+           typeof iterationInput === 'string' && iterationInput !== '' ? safeParseJson(iterationInput) : iterationInput

        if (!iterationInputArray || !Array.isArray(iterationInputArray)) {
            throw new Error('Invalid input array')
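The old helper always stripped backslashes, which could corrupt inputs that were already valid JSON. The new flow tries a strict parse first and only falls back to cleaning; a minimal sketch with plain `JSON.parse` standing in for `parseJsonBody`:

```ts
// Strict parse first; strip redundant backslash-escapes (e.g. \["a"\]) only
// when the first attempt fails, instead of always mutating the input.
function parseLenient(str: string): any {
    try {
        return JSON.parse(str)
    } catch {
        return JSON.parse(str.replace(/\\(["'[\]{}])/g, '$1'))
    }
}
```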
@@ -1,10 +1,9 @@
import { BaseChatModel } from '@langchain/core/language_models/chat_models'
-import { ICommonObject, INode, INodeData, INodeOptionsValue, INodeParams, IServerSideEventStreamer } from '../../../src/Interface'
+import { ICommonObject, IMessage, INode, INodeData, INodeOptionsValue, INodeParams, IServerSideEventStreamer } from '../../../src/Interface'
import { AIMessageChunk, BaseMessageLike, MessageContentText } from '@langchain/core/messages'
import { DEFAULT_SUMMARIZER_TEMPLATE } from '../prompt'
-import { z } from 'zod'
import { AnalyticHandler } from '../../../src/handler'
-import { ILLMMessage, IStructuredOutput } from '../Interface.Agentflow'
+import { ILLMMessage } from '../Interface.Agentflow'
import {
    getPastChatHistoryImageMessages,
    getUniqueImageMessages,
@@ -12,7 +11,8 @@ import {
    replaceBase64ImagesWithFileReferences,
    updateFlowState
} from '../utils'
-import { get } from 'lodash'
+import { processTemplateVariables, configureStructuredOutput } from '../../../src/utils'
+import { flatten } from 'lodash'

class LLM_Agentflow implements INode {
    label: string
@@ -262,6 +262,7 @@ class LLM_Agentflow implements INode {
                }`,
                description: 'JSON schema for the structured output',
                optional: true,
+               hideCodeExecute: true,
                show: {
                    'llmStructuredOutput[$index].type': 'jsonArray'
                }
@@ -358,6 +359,7 @@ class LLM_Agentflow implements INode {
        const state = options.agentflowRuntime?.state as ICommonObject
        const pastChatHistory = (options.pastChatHistory as BaseMessageLike[]) ?? []
        const runtimeChatHistory = (options.agentflowRuntime?.chatHistory as BaseMessageLike[]) ?? []
+       const prependedChatHistory = options.prependedChatHistory as IMessage[]
        const chatId = options.chatId as string

        // Initialize the LLM model instance
@@ -381,11 +383,27 @@ class LLM_Agentflow implements INode {
        // Use to keep track of past messages with image file references
        let pastImageMessagesWithFileRef: BaseMessageLike[] = []

+       // Prepend history ONLY if it is the first node
+       if (prependedChatHistory.length > 0 && !runtimeChatHistory.length) {
+           for (const msg of prependedChatHistory) {
+               const role: string = msg.role === 'apiMessage' ? 'assistant' : 'user'
+               const content: string = msg.content ?? ''
+               messages.push({
+                   role,
+                   content
+               })
+           }
+       }
+
        for (const msg of llmMessages) {
            const role = msg.role
            const content = msg.content
            if (role && content) {
-               messages.push({ role, content })
+               if (role === 'system') {
+                   messages.unshift({ role, content })
+               } else {
+                   messages.push({ role, content })
+               }
            }
        }
@@ -410,7 +428,7 @@ class LLM_Agentflow implements INode {
        /*
         * If this is the first node:
         * - Add images to messages if exist
-        * - Add user message
+        * - Add user message if it does not exist in the llmMessages array
         */
        if (options.uploads) {
            const imageContents = await getUniqueImageMessages(options, messages, modelConfig)
@@ -421,7 +439,7 @@ class LLM_Agentflow implements INode {
            }
        }

-       if (input && typeof input === 'string') {
+       if (input && typeof input === 'string' && !llmMessages.some((msg) => msg.role === 'user')) {
            messages.push({
                role: 'user',
                content: input
@@ -433,7 +451,7 @@ class LLM_Agentflow implements INode {
        // Configure structured output if specified
        const isStructuredOutput = _llmStructuredOutput && Array.isArray(_llmStructuredOutput) && _llmStructuredOutput.length > 0
        if (isStructuredOutput) {
-           llmNodeInstance = this.configureStructuredOutput(llmNodeInstance, _llmStructuredOutput)
+           llmNodeInstance = configureStructuredOutput(llmNodeInstance, _llmStructuredOutput)
        }

        // Initialize response and determine if streaming is possible
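The structured-output setup moves from a private method (deleted further down) to a shared `configureStructuredOutput` in src/utils. A condensed sketch of what that helper does, mirroring the removed method; the shared version's exact signature is an assumption:

```ts
import { z } from 'zod'
import { BaseChatModel } from '@langchain/core/language_models/chat_models'

type StructuredField = { key: string; type: 'string' | 'number' | 'boolean'; description?: string }

// Build a Zod object from the per-key schema config and bind it to the model;
// withStructuredOutput returns a runnable that yields the parsed object.
function withStructured(llm: BaseChatModel, fields: StructuredField[]) {
    const shape: Record<string, z.ZodTypeAny> = {}
    for (const f of fields) {
        const base = f.type === 'number' ? z.number() : f.type === 'boolean' ? z.boolean() : z.string()
        shape[f.key] = base.describe(f.description || '')
    }
    return llm.withStructuredOutput(z.object(shape))
}
```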
@@ -460,11 +478,15 @@ class LLM_Agentflow implements INode {
            // Stream whole response back to UI if this is the last node
            if (isLastNode && options.sseStreamer) {
                const sseStreamer: IServerSideEventStreamer = options.sseStreamer as IServerSideEventStreamer
-               let responseContent = JSON.stringify(response, null, 2)
-               if (typeof response.content === 'string') {
-                   responseContent = response.content
+               let finalResponse = ''
+               if (response.content && Array.isArray(response.content)) {
+                   finalResponse = response.content.map((item: any) => item.text).join('\n')
+               } else if (response.content && typeof response.content === 'string') {
+                   finalResponse = response.content
+               } else {
+                   finalResponse = JSON.stringify(response, null, 2)
                }
-               sseStreamer.streamTokenEvent(chatId, responseContent)
+               sseStreamer.streamTokenEvent(chatId, finalResponse)
            }
        }
@@ -486,8 +508,15 @@ class LLM_Agentflow implements INode {
        }

        // Prepare final response and output object
-       const finalResponse = (response.content as string) ?? JSON.stringify(response, null, 2)
-       const output = this.prepareOutputObject(response, finalResponse, startTime, endTime, timeDelta)
+       let finalResponse = ''
+       if (response.content && Array.isArray(response.content)) {
+           finalResponse = response.content.map((item: any) => item.text).join('\n')
+       } else if (response.content && typeof response.content === 'string') {
+           finalResponse = response.content
+       } else {
+           finalResponse = JSON.stringify(response, null, 2)
+       }
+       const output = this.prepareOutputObject(response, finalResponse, startTime, endTime, timeDelta, isStructuredOutput)

        // End analytics tracking
        if (analyticHandlers && llmIds) {
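The same array-vs-string normalization now appears in both the streaming path and the final-response path above. A small helper capturing it, as an editorial sketch rather than a function from the diff:

```ts
// Multi-part content (arrays of { text }) is joined line by line; plain string
// content passes through; anything else is serialized for display.
function normalizeContent(response: { content?: unknown }): string {
    if (Array.isArray(response.content)) {
        return response.content.map((item: any) => item.text).join('\n')
    }
    if (typeof response.content === 'string') return response.content
    return JSON.stringify(response, null, 2)
}
```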
@@ -500,36 +529,7 @@ class LLM_Agentflow implements INode {
        }

        // Process template variables in state
-       if (newState && Object.keys(newState).length > 0) {
-           for (const key in newState) {
-               const stateValue = newState[key].toString()
-               if (stateValue.includes('{{ output')) {
-                   // Handle simple output replacement
-                   if (stateValue === '{{ output }}') {
-                       newState[key] = finalResponse
-                       continue
-                   }
-
-                   // Handle JSON path expressions like {{ output.item1 }}
-                   // eslint-disable-next-line
-                   const match = stateValue.match(/{{[\s]*output\.([\w\.]+)[\s]*}}/)
-                   if (match) {
-                       try {
-                           // Parse the response if it's JSON
-                           const jsonResponse = typeof finalResponse === 'string' ? JSON.parse(finalResponse) : finalResponse
-                           // Get the value using lodash get
-                           const path = match[1]
-                           const value = get(jsonResponse, path)
-                           newState[key] = value ?? stateValue // Fall back to original if path not found
-                       } catch (e) {
-                           // If JSON parsing fails, keep original template
-                           console.warn(`Failed to parse JSON or find path in output: ${e}`)
-                           newState[key] = stateValue
-                       }
-                   }
-               }
-           }
-       }
+       newState = processTemplateVariables(newState, finalResponse)

        // Replace the actual messages array with one that includes the file references for images instead of base64 data
        const messagesWithFileReferences = replaceBase64ImagesWithFileReferences(
@@ -545,7 +545,19 @@ class LLM_Agentflow implements INode {
                inputMessages.push(...runtimeImageMessagesWithFileRef)
            }
            if (input && typeof input === 'string') {
-               inputMessages.push({ role: 'user', content: input })
+               if (!enableMemory) {
+                   if (!llmMessages.some((msg) => msg.role === 'user')) {
+                       inputMessages.push({ role: 'user', content: input })
+                   } else {
+                       llmMessages.map((msg) => {
+                           if (msg.role === 'user') {
+                               inputMessages.push({ role: 'user', content: msg.content })
+                           }
+                       })
+                   }
+               } else {
+                   inputMessages.push({ role: 'user', content: input })
+               }
            }
        }
@@ -742,59 +754,6 @@ class LLM_Agentflow implements INode {
        }
    }

-   /**
-    * Configures structured output for the LLM
-    */
-   private configureStructuredOutput(llmNodeInstance: BaseChatModel, llmStructuredOutput: IStructuredOutput[]): BaseChatModel {
-       try {
-           const zodObj: ICommonObject = {}
-           for (const sch of llmStructuredOutput) {
-               if (sch.type === 'string') {
-                   zodObj[sch.key] = z.string().describe(sch.description || '')
-               } else if (sch.type === 'stringArray') {
-                   zodObj[sch.key] = z.array(z.string()).describe(sch.description || '')
-               } else if (sch.type === 'number') {
-                   zodObj[sch.key] = z.number().describe(sch.description || '')
-               } else if (sch.type === 'boolean') {
-                   zodObj[sch.key] = z.boolean().describe(sch.description || '')
-               } else if (sch.type === 'enum') {
-                   const enumValues = sch.enumValues?.split(',').map((item: string) => item.trim()) || []
-                   zodObj[sch.key] = z
-                       .enum(enumValues.length ? (enumValues as [string, ...string[]]) : ['default'])
-                       .describe(sch.description || '')
-               } else if (sch.type === 'jsonArray') {
-                   const jsonSchema = sch.jsonSchema
-                   if (jsonSchema) {
-                       try {
-                           // Parse the JSON schema
-                           const schemaObj = JSON.parse(jsonSchema)
-
-                           // Create a Zod schema from the JSON schema
-                           const itemSchema = this.createZodSchemaFromJSON(schemaObj)
-
-                           // Create an array schema of the item schema
-                           zodObj[sch.key] = z.array(itemSchema).describe(sch.description || '')
-                       } catch (err) {
-                           console.error(`Error parsing JSON schema for ${sch.key}:`, err)
-                           // Fallback to generic array of records
-                           zodObj[sch.key] = z.array(z.record(z.any())).describe(sch.description || '')
-                       }
-                   } else {
-                       // If no schema provided, use generic array of records
-                       zodObj[sch.key] = z.array(z.record(z.any())).describe(sch.description || '')
-                   }
-               }
-           }
-           const structuredOutput = z.object(zodObj)
-
-           // @ts-ignore
-           return llmNodeInstance.withStructuredOutput(structuredOutput)
-       } catch (exception) {
-           console.error(exception)
-           return llmNodeInstance
-       }
-   }
-
    /**
     * Handles streaming response from the LLM
     */
@@ -811,16 +770,20 @@ class LLM_Agentflow implements INode {
            for await (const chunk of await llmNodeInstance.stream(messages, { signal: abortController?.signal })) {
                if (sseStreamer) {
                    let content = ''
-                   if (Array.isArray(chunk.content) && chunk.content.length > 0) {
+
+                   if (typeof chunk === 'string') {
+                       content = chunk
+                   } else if (Array.isArray(chunk.content) && chunk.content.length > 0) {
                        const contents = chunk.content as MessageContentText[]
                        content = contents.map((item) => item.text).join('')
-                   } else {
+                   } else if (chunk.content) {
                        content = chunk.content.toString()
                    }
                    sseStreamer.streamTokenEvent(chatId, content)
                }

-               response = response.concat(chunk)
+               const messageChunk = typeof chunk === 'string' ? new AIMessageChunk(chunk) : chunk
+               response = response.concat(messageChunk)
            }
        } catch (error) {
            console.error('Error during streaming:', error)
@@ -841,7 +804,8 @@ class LLM_Agentflow implements INode {
        finalResponse: string,
        startTime: number,
        endTime: number,
-       timeDelta: number
+       timeDelta: number,
+       isStructuredOutput: boolean
    ): any {
        const output: any = {
            content: finalResponse,
@@ -860,6 +824,15 @@ class LLM_Agentflow implements INode {
            output.usageMetadata = response.usage_metadata
        }

+       if (isStructuredOutput && typeof response === 'object') {
+           const structuredOutput = response as Record<string, any>
+           for (const key in structuredOutput) {
+               if (structuredOutput[key] !== undefined && structuredOutput[key] !== null) {
+                   output[key] = structuredOutput[key]
+               }
+           }
+       }
+
        return output
    }
@@ -870,7 +843,12 @@ class LLM_Agentflow implements INode {
        const sseStreamer: IServerSideEventStreamer = options.sseStreamer as IServerSideEventStreamer

        if (response.tool_calls) {
-           sseStreamer.streamCalledToolsEvent(chatId, response.tool_calls)
+           const formattedToolCalls = response.tool_calls.map((toolCall: any) => ({
+               tool: toolCall.name || 'tool',
+               toolInput: toolCall.args,
+               toolOutput: ''
+           }))
+           sseStreamer.streamCalledToolsEvent(chatId, flatten(formattedToolCalls))
        }

        if (response.usage_metadata) {
@@ -879,107 +857,6 @@ class LLM_Agentflow implements INode {

        sseStreamer.streamEndEvent(chatId)
    }
-
-   /**
-    * Creates a Zod schema from a JSON schema object
-    * @param jsonSchema The JSON schema object
-    * @returns A Zod schema
-    */
-   private createZodSchemaFromJSON(jsonSchema: any): z.ZodTypeAny {
-       // If the schema is an object with properties, create an object schema
-       if (typeof jsonSchema === 'object' && jsonSchema !== null) {
-           const schemaObj: Record<string, z.ZodTypeAny> = {}
-
-           // Process each property in the schema
-           for (const [key, value] of Object.entries(jsonSchema)) {
-               if (value === null) {
-                   // Handle null values
-                   schemaObj[key] = z.null()
-               } else if (typeof value === 'object' && !Array.isArray(value)) {
-                   // Check if the property has a type definition
-                   if ('type' in value) {
-                       const type = value.type as string
-                       const description = ('description' in value ? (value.description as string) : '') || ''
-
-                       // Create the appropriate Zod type based on the type property
-                       if (type === 'string') {
-                           schemaObj[key] = z.string().describe(description)
-                       } else if (type === 'number') {
-                           schemaObj[key] = z.number().describe(description)
-                       } else if (type === 'boolean') {
-                           schemaObj[key] = z.boolean().describe(description)
-                       } else if (type === 'array') {
-                           // If it's an array type, check if items is defined
-                           if ('items' in value && value.items) {
-                               const itemSchema = this.createZodSchemaFromJSON(value.items)
-                               schemaObj[key] = z.array(itemSchema).describe(description)
-                           } else {
-                               // Default to array of any if items not specified
-                               schemaObj[key] = z.array(z.any()).describe(description)
-                           }
-                       } else if (type === 'object') {
-                           // If it's an object type, check if properties is defined
-                           if ('properties' in value && value.properties) {
-                               const nestedSchema = this.createZodSchemaFromJSON(value.properties)
-                               schemaObj[key] = nestedSchema.describe(description)
-                           } else {
-                               // Default to record of any if properties not specified
-                               schemaObj[key] = z.record(z.any()).describe(description)
-                           }
-                       } else {
-                           // Default to any for unknown types
-                           schemaObj[key] = z.any().describe(description)
-                       }
-
-                       // Check if the property is optional
-                       if ('optional' in value && value.optional === true) {
-                           schemaObj[key] = schemaObj[key].optional()
-                       }
-                   } else if (Array.isArray(value)) {
-                       // Array values without a type property
-                       if (value.length > 0) {
-                           // If the array has items, recursively create a schema for the first item
-                           const itemSchema = this.createZodSchemaFromJSON(value[0])
-                           schemaObj[key] = z.array(itemSchema)
-                       } else {
-                           // Empty array, allow any array
-                           schemaObj[key] = z.array(z.any())
-                       }
-                   } else {
-                       // It's a nested object without a type property, recursively create schema
-                       schemaObj[key] = this.createZodSchemaFromJSON(value)
-                   }
-               } else if (Array.isArray(value)) {
-                   // Array values
-                   if (value.length > 0) {
-                       // If the array has items, recursively create a schema for the first item
-                       const itemSchema = this.createZodSchemaFromJSON(value[0])
-                       schemaObj[key] = z.array(itemSchema)
-                   } else {
-                       // Empty array, allow any array
-                       schemaObj[key] = z.array(z.any())
-                   }
-               } else {
-                   // For primitive values (which shouldn't be in the schema directly)
-                   // Use the corresponding Zod type
-                   if (typeof value === 'string') {
-                       schemaObj[key] = z.string()
-                   } else if (typeof value === 'number') {
-                       schemaObj[key] = z.number()
-                   } else if (typeof value === 'boolean') {
-                       schemaObj[key] = z.boolean()
-                   } else {
-                       schemaObj[key] = z.any()
-                   }
-               }
-           }
-
-           return z.object(schemaObj)
-       }
-
-       // Fallback to any for unknown types
-       return z.any()
-   }
}

module.exports = { nodeClass: LLM_Agentflow }
@@ -1,4 +1,5 @@
import { ICommonObject, INode, INodeData, INodeOptionsValue, INodeParams } from '../../../src/Interface'
+import { updateFlowState } from '../utils'

class Loop_Agentflow implements INode {
    label: string
@@ -19,7 +20,7 @@ class Loop_Agentflow implements INode {
    constructor() {
        this.label = 'Loop'
        this.name = 'loopAgentflow'
-       this.version = 1.0
+       this.version = 1.1
        this.type = 'Loop'
        this.category = 'Agent Flows'
        this.description = 'Loop back to a previous node'
@@ -40,6 +41,40 @@ class Loop_Agentflow implements INode {
                name: 'maxLoopCount',
                type: 'number',
                default: 5
+           },
+           {
+               label: 'Fallback Message',
+               name: 'fallbackMessage',
+               type: 'string',
+               description: 'Message to display if the loop count is exceeded',
+               placeholder: 'Enter your fallback message here',
+               rows: 4,
+               acceptVariable: true,
+               optional: true
+           },
+           {
+               label: 'Update Flow State',
+               name: 'loopUpdateState',
+               description: 'Update runtime state during the execution of the workflow',
+               type: 'array',
+               optional: true,
+               acceptVariable: true,
+               array: [
+                   {
+                       label: 'Key',
+                       name: 'key',
+                       type: 'asyncOptions',
+                       loadMethod: 'listRuntimeStateKeys',
+                       freeSolo: true
+                   },
+                   {
+                       label: 'Value',
+                       name: 'value',
+                       type: 'string',
+                       acceptVariable: true,
+                       acceptNodeOutputAsVariable: true
+                   }
+               ]
            }
        ]
    }
@@ -58,12 +93,20 @@ class Loop_Agentflow implements INode {
                })
            }
            return returnOptions
+       },
+       async listRuntimeStateKeys(_: INodeData, options: ICommonObject): Promise<INodeOptionsValue[]> {
+           const previousNodes = options.previousNodes as ICommonObject[]
+           const startAgentflowNode = previousNodes.find((node) => node.name === 'startAgentflow')
+           const state = startAgentflowNode?.inputs?.startState as ICommonObject[]
+           return state.map((item) => ({ label: item.key, name: item.key }))
        }
    }

    async run(nodeData: INodeData, _: string, options: ICommonObject): Promise<any> {
        const loopBackToNode = nodeData.inputs?.loopBackToNode as string
        const _maxLoopCount = nodeData.inputs?.maxLoopCount as string
+       const fallbackMessage = nodeData.inputs?.fallbackMessage as string
+       const _loopUpdateState = nodeData.inputs?.loopUpdateState

        const state = options.agentflowRuntime?.state as ICommonObject
@@ -75,16 +118,34 @@ class Loop_Agentflow implements INode {
            maxLoopCount: _maxLoopCount ? parseInt(_maxLoopCount) : 5
        }

+       const finalOutput = 'Loop back to ' + `${loopBackToNodeLabel} (${loopBackToNodeId})`
+
+       // Update flow state if needed
+       let newState = { ...state }
+       if (_loopUpdateState && Array.isArray(_loopUpdateState) && _loopUpdateState.length > 0) {
+           newState = updateFlowState(state, _loopUpdateState)
+       }
+
+       // Process template variables in state
+       if (newState && Object.keys(newState).length > 0) {
+           for (const key in newState) {
+               if (newState[key].toString().includes('{{ output }}')) {
+                   newState[key] = finalOutput
+               }
+           }
+       }
+
        const returnOutput = {
            id: nodeData.id,
            name: this.name,
            input: data,
            output: {
-               content: 'Loop back to ' + `${loopBackToNodeLabel} (${loopBackToNodeId})`,
+               content: finalOutput,
                nodeID: loopBackToNodeId,
-               maxLoopCount: _maxLoopCount ? parseInt(_maxLoopCount) : 5
+               maxLoopCount: _maxLoopCount ? parseInt(_maxLoopCount) : 5,
+               fallbackMessage
            },
-           state
+           state: newState
        }

        return returnOutput
@@ -8,6 +8,7 @@ import {
    IServerSideEventStreamer
} from '../../../src/Interface'
import { updateFlowState } from '../utils'
+import { processTemplateVariables } from '../../../src/utils'
import { DataSource } from 'typeorm'
import { BaseRetriever } from '@langchain/core/retrievers'
import { Document } from '@langchain/core/documents'
@@ -119,7 +120,8 @@ class Retriever_Agentflow implements INode {
            return returnData
        }

-       const stores = await appDataSource.getRepository(databaseEntities['DocumentStore']).find()
+       const searchOptions = options.searchOptions || {}
+       const stores = await appDataSource.getRepository(databaseEntities['DocumentStore']).findBy(searchOptions)
        for (const store of stores) {
            if (store.status === 'UPSERTED') {
                const obj = {
@@ -196,14 +198,7 @@ class Retriever_Agentflow implements INode {
                sseStreamer.streamTokenEvent(chatId, finalOutput)
            }

            // Process template variables in state
-           if (newState && Object.keys(newState).length > 0) {
-               for (const key in newState) {
-                   if (newState[key].toString().includes('{{ output }}')) {
-                       newState[key] = finalOutput
-                   }
-               }
-           }
+           newState = processTemplateVariables(newState, finalOutput)

            const returnOutput = {
                id: nodeData.id,
@@ -18,7 +18,7 @@ class Start_Agentflow implements INode {
    constructor() {
        this.label = 'Start'
        this.name = 'startAgentflow'
-       this.version = 1.0
+       this.version = 1.1
        this.type = 'Start'
        this.category = 'Agent Flows'
        this.description = 'Starting point of the agentflow'
@@ -153,6 +153,13 @@ class Start_Agentflow implements INode {
                        optional: true
                    }
                ]
+           },
+           {
+               label: 'Persist State',
+               name: 'startPersistState',
+               type: 'boolean',
+               description: 'Persist the state in the same session',
+               optional: true
            }
        ]
    }
@@ -161,6 +168,7 @@ class Start_Agentflow implements INode {
        const _flowState = nodeData.inputs?.startState as string
        const startInputType = nodeData.inputs?.startInputType as string
        const startEphemeralMemory = nodeData.inputs?.startEphemeralMemory as boolean
+       const startPersistState = nodeData.inputs?.startPersistState as boolean

        let flowStateArray = []
        if (_flowState) {
@@ -176,6 +184,13 @@ class Start_Agentflow implements INode {
            flowState[state.key] = state.value
        }

+       const runtimeState = options.agentflowRuntime?.state as ICommonObject
+       if (startPersistState === true && runtimeState && Object.keys(runtimeState).length) {
+           for (const state in runtimeState) {
+               flowState[state] = runtimeState[state]
+           }
+       }
+
        const inputData: ICommonObject = {}
        const outputData: ICommonObject = {}
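A sketch of the persistence semantics introduced here: when Persist State is on, values from the previous run's runtime state overwrite the node's declared defaults, so a session keeps counters and flags across executions (the helper name is illustrative):

```ts
// Runtime values win over declared defaults when persistence is enabled.
function mergePersistedState(defaults: Record<string, any>, runtime?: Record<string, any>): Record<string, any> {
    return runtime && Object.keys(runtime).length ? { ...defaults, ...runtime } : { ...defaults }
}
```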
@@ -202,6 +217,10 @@ class Start_Agentflow implements INode {
            outputData.ephemeralMemory = true
        }

+       if (startPersistState) {
+           outputData.persistState = true
+       }
+
        const returnOutput = {
            id: nodeData.id,
            name: this.name,
|
|||
import { ICommonObject, INode, INodeData, INodeOptionsValue, INodeParams, IServerSideEventStreamer } from '../../../src/Interface'
|
||||
import { updateFlowState } from '../utils'
|
||||
import { processTemplateVariables } from '../../../src/utils'
|
||||
import { Tool } from '@langchain/core/tools'
|
||||
import { ARTIFACTS_PREFIX } from '../../../src/agents'
|
||||
import { ARTIFACTS_PREFIX, TOOL_ARGS_PREFIX } from '../../../src/agents'
|
||||
import zodToJsonSchema from 'zod-to-json-schema'
|
||||
|
||||
interface IToolInputArgs {
|
||||
|
|
@@ -28,7 +29,7 @@ class Tool_Agentflow implements INode {
    constructor() {
        this.label = 'Tool'
        this.name = 'toolAgentflow'
-       this.version = 1.0
+       this.version = 1.1
        this.type = 'Tool'
        this.category = 'Agent Flows'
        this.description = 'Tools allow LLM to interact with external systems'
@@ -37,7 +38,7 @@ class Tool_Agentflow implements INode {
        this.inputs = [
            {
                label: 'Tool',
-               name: 'selectedTool',
+               name: 'toolAgentflowSelectedTool',
                type: 'asyncOptions',
                loadMethod: 'listTools',
                loadConfig: true
@@ -64,7 +65,7 @@ class Tool_Agentflow implements INode {
                    }
                ],
                show: {
-                   selectedTool: '.+'
+                   toolAgentflowSelectedTool: '.+'
                }
            },
            {
@@ -124,8 +125,11 @@ class Tool_Agentflow implements INode {
        },
        async listToolInputArgs(nodeData: INodeData, options: ICommonObject): Promise<INodeOptionsValue[]> {
            const currentNode = options.currentNode as ICommonObject
-           const selectedTool = currentNode?.inputs?.selectedTool as string
-           const selectedToolConfig = currentNode?.inputs?.selectedToolConfig as ICommonObject
+           const selectedTool = (currentNode?.inputs?.selectedTool as string) || (currentNode?.inputs?.toolAgentflowSelectedTool as string)
+           const selectedToolConfig =
+               (currentNode?.inputs?.selectedToolConfig as ICommonObject) ||
+               (currentNode?.inputs?.toolAgentflowSelectedToolConfig as ICommonObject) ||
+               {}

            const nodeInstanceFilePath = options.componentNodes[selectedTool].filePath as string
@@ -158,7 +162,7 @@ class Tool_Agentflow implements INode {
                toolInputArgs = { properties: allProperties }
            } else {
                // Handle single tool instance
-               toolInputArgs = toolInstance.schema ? zodToJsonSchema(toolInstance.schema) : {}
+               toolInputArgs = toolInstance.schema ? zodToJsonSchema(toolInstance.schema as any) : {}
            }

            if (toolInputArgs && Object.keys(toolInputArgs).length > 0) {
@ -183,8 +187,11 @@ class Tool_Agentflow implements INode {
|
|||
}
|
||||
|
||||
async run(nodeData: INodeData, input: string, options: ICommonObject): Promise<any> {
|
||||
const selectedTool = nodeData.inputs?.selectedTool as string
|
||||
const selectedToolConfig = nodeData.inputs?.selectedToolConfig as ICommonObject
|
||||
const selectedTool = (nodeData.inputs?.selectedTool as string) || (nodeData.inputs?.toolAgentflowSelectedTool as string)
|
||||
const selectedToolConfig =
|
||||
(nodeData?.inputs?.selectedToolConfig as ICommonObject) ||
|
||||
(nodeData?.inputs?.toolAgentflowSelectedToolConfig as ICommonObject) ||
|
||||
{}
|
||||
|
||||
const toolInputArgs = nodeData.inputs?.toolInputArgs as IToolInputArgs[]
|
||||
const _toolUpdateState = nodeData.inputs?.toolUpdateState
|
||||
|
|
@@ -220,13 +227,55 @@ class Tool_Agentflow implements INode {
         const toolInstance = (await newToolNodeInstance.init(newNodeData, '', options)) as Tool | Tool[]

         let toolCallArgs: Record<string, any> = {}
+
+        const parseInputValue = (value: string): any => {
+            if (typeof value !== 'string') {
+                return value
+            }
+
+            // Remove escape characters (backslashes before special characters)
+            // ex: \["a", "b", "c", "d", "e"\]
+            let cleanedValue = value
+                .replace(/\\"/g, '"') // \" -> "
+                .replace(/\\\\/g, '\\') // \\ -> \
+                .replace(/\\\[/g, '[') // \[ -> [
+                .replace(/\\\]/g, ']') // \] -> ]
+                .replace(/\\\{/g, '{') // \{ -> {
+                .replace(/\\\}/g, '}') // \} -> }
+
+            // Try to parse as JSON if it looks like JSON/array
+            if (
+                (cleanedValue.startsWith('[') && cleanedValue.endsWith(']')) ||
+                (cleanedValue.startsWith('{') && cleanedValue.endsWith('}'))
+            ) {
+                try {
+                    return JSON.parse(cleanedValue)
+                } catch (e) {
+                    // If parsing fails, return the cleaned value
+                    return cleanedValue
+                }
+            }
+
+            return cleanedValue
+        }
+
+        if (newToolNodeInstance.transformNodeInputsToToolArgs) {
+            const defaultParams = newToolNodeInstance.transformNodeInputsToToolArgs(newNodeData)
+            toolCallArgs = {
+                ...defaultParams,
+                ...toolCallArgs
+            }
+        }
+
         for (const item of toolInputArgs) {
             const variableName = item.inputArgName
             const variableValue = item.inputArgValue
-            toolCallArgs[variableName] = variableValue
+            toolCallArgs[variableName] = parseInputValue(variableValue)
         }

         const flowConfig = {
             chatflowId: options.chatflowid,
             sessionId: options.sessionId,
             chatId: options.chatId,
             input: input,
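Note: the effect of the new `parseInputValue` helper is easiest to see on concrete inputs (values invented for illustration):

    parseInputValue('\\["a", "b"\\]')   // -> ['a', 'b']: escapes stripped, then JSON.parse succeeds
    parseInputValue('{"key": "value"}') // -> { key: 'value' }
    parseInputValue('{broken json')     // -> '{broken json': no closing brace, so no parse attempt
    parseInputValue('plain text')       // -> 'plain text': returned unchanged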
@@ -262,6 +311,17 @@ class Tool_Agentflow implements INode {
             }
         }

+        let toolInput
+        if (typeof toolOutput === 'string' && toolOutput.includes(TOOL_ARGS_PREFIX)) {
+            const [output, args] = toolOutput.split(TOOL_ARGS_PREFIX)
+            toolOutput = output
+            try {
+                toolInput = JSON.parse(args)
+            } catch (e) {
+                console.error('Error parsing tool input from tool:', e)
+            }
+        }
+
         if (typeof toolOutput === 'object') {
             toolOutput = JSON.stringify(toolOutput, null, 2)
         }
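Note: tools can now report the exact arguments they were invoked with by appending them to their string output behind `TOOL_ARGS_PREFIX`; the block above strips that suffix back off and surfaces it as `toolInput`. A rough sketch of both sides of the contract (the prefix value below is a placeholder, the real constant is defined in `src/agents`):

    const TOOL_ARGS_PREFIX = '\n__TOOL_ARGS__\n' // placeholder value for illustration

    // tool side: embed the args after the visible output
    const emit = (output: string, args: Record<string, any>) =>
        output + TOOL_ARGS_PREFIX + JSON.stringify(args)

    // node side: split them back apart, as the hunk above does
    const [visibleOutput, rawArgs] = emit('42 results found', { query: 'cats' }).split(TOOL_ARGS_PREFIX)
    // visibleOutput -> '42 results found'; JSON.parse(rawArgs) -> { query: 'cats' }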
@@ -271,20 +331,13 @@ class Tool_Agentflow implements INode {
             sseStreamer.streamTokenEvent(chatId, toolOutput)
         }

-        // Process template variables in state
-        if (newState && Object.keys(newState).length > 0) {
-            for (const key in newState) {
-                if (newState[key].toString().includes('{{ output }}')) {
-                    newState[key] = toolOutput
-                }
-            }
-        }
+        newState = processTemplateVariables(newState, toolOutput)

         const returnOutput = {
             id: nodeData.id,
             name: this.name,
             input: {
-                toolInputArgs: toolInputArgs,
+                toolInputArgs: toolInput ?? toolInputArgs,
                 selectedTool: selectedTool
             },
             output: {
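Note: the inlined `{{ output }}` loop is replaced by the shared `processTemplateVariables` helper imported from `src/utils`. Its real implementation is not shown in this diff; assuming it only substitutes the output placeholder, a simplified mental model is:

    // Sketch only; the helper in src/utils may handle more template variables than this.
    const processTemplateVariablesSketch = (state: Record<string, any>, output: string) => {
        if (!state || !Object.keys(state).length) return state
        const next: Record<string, any> = {}
        for (const key in state) {
            next[key] = state[key]?.toString().includes('{{ output }}') ? output : state[key]
        }
        return next
    }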
@@ -39,37 +39,38 @@ export const DEFAULT_HUMAN_INPUT_DESCRIPTION_HTML = `<p>Summarize the conversati
 </ul>
 `

-export const CONDITION_AGENT_SYSTEM_PROMPT = `You are part of a multi-agent system designed to make agent coordination and execution easy. Your task is to analyze the given input and select one matching scenario from a provided set of scenarios. If none of the scenarios match the input, you should return "default."
-
-- **Input**: A string representing the user's query or message.
-- **Scenarios**: A list of predefined scenarios that relate to the input.
-- **Instruction**: Determine if the input fits any of the scenarios.
-
-## Steps
-
-1. **Read the input string** and the list of scenarios.
-2. **Analyze the content of the input** to identify its main topic or intention.
-3. **Compare the input with each scenario**:
-    - If a scenario matches the main topic of the input, select that scenario.
-    - If no scenarios match, prepare to output "\`\`\`json\n{"output": "default"}\`\`\`"
-4. **Output the result**: If a match is found, return the corresponding scenario in JSON; otherwise, return "\`\`\`json\n{"output": "default"}\`\`\`"
-
-## Output Format
-
-Output should be a JSON object that either names the matching scenario or returns "\`\`\`json\n{"output": "default"}\`\`\`" if no scenarios match. No explanation is needed.
-
-## Examples
-
-1. **Input**: {"input": "Hello", "scenarios": ["user is asking about AI", "default"], "instruction": "Your task is to check and see if user is asking topic about AI"}
-   **Output**: "\`\`\`json\n{"output": "default"}\`\`\`"
-
-2. **Input**: {"input": "What is AIGC?", "scenarios": ["user is asking about AI", "default"], "instruction": "Your task is to check and see if user is asking topic about AI"}
-   **Output**: "\`\`\`json\n{"output": "user is asking about AI"}\`\`\`"
-
-3. **Input**: {"input": "Can you explain deep learning?", "scenarios": ["user is interested in AI topics", "default"], "instruction": "Determine if the user is interested in learning about AI"}
-   **Output**: "\`\`\`json\n{"output": "user is interested in AI topics"}\`\`\`"
-
-## Note
-- Ensure that the input scenarios align well with potential user queries for accurate matching
-- DO NOT include anything other than the JSON in your response.
-`
+export const CONDITION_AGENT_SYSTEM_PROMPT = `<p>You are part of a multi-agent system designed to make agent coordination and execution easy. Your task is to analyze the given input and select one matching scenario from a provided set of scenarios.</p>
+<ul>
+<li><strong>Input</strong>: A string representing the user's query, message or data.</li>
+<li><strong>Scenarios</strong>: A list of predefined scenarios that relate to the input.</li>
+<li><strong>Instruction</strong>: Determine which of the provided scenarios is the best fit for the input.</li>
+</ul>
+<h2>Steps</h2>
+<ol>
+<li><strong>Read the input string</strong> and the list of scenarios.</li>
+<li><strong>Analyze the content of the input</strong> to identify its main topic or intention.</li>
+<li><strong>Compare the input with each scenario</strong>: Evaluate how well the input's topic or intention aligns with each of the provided scenarios and select the one that is the best fit.</li>
+<li><strong>Output the result</strong>: Return the selected scenario in the specified JSON format.</li>
+</ol>
+<h2>Output Format</h2>
+<p>Output should be a JSON object that names the selected scenario, like this: <code>{"output": "<selected_scenario_name>"}</code>. No explanation is needed.</p>
+<h2>Examples</h2>
+<ol>
+<li>
+<p><strong>Input</strong>: <code>{"input": "Hello", "scenarios": ["user is asking about AI", "user is not asking about AI"], "instruction": "Your task is to check if the user is asking about AI."}</code></p>
+<p><strong>Output</strong>: <code>{"output": "user is not asking about AI"}</code></p>
+</li>
+<li>
+<p><strong>Input</strong>: <code>{"input": "What is AIGC?", "scenarios": ["user is asking about AI", "user is asking about the weather"], "instruction": "Your task is to check and see if the user is asking a topic about AI."}</code></p>
+<p><strong>Output</strong>: <code>{"output": "user is asking about AI"}</code></p>
+</li>
+<li>
+<p><strong>Input</strong>: <code>{"input": "Can you explain deep learning?", "scenarios": ["user is interested in AI topics", "user wants to order food"], "instruction": "Determine if the user is interested in learning about AI."}</code></p>
+<p><strong>Output</strong>: <code>{"output": "user is interested in AI topics"}</code></p>
+</li>
+</ol>
+<h2>Note</h2>
+<ul>
+<li>Ensure that the input scenarios align well with potential user queries for accurate matching.</li>
+<li>DO NOT include anything other than the JSON in your response.</li>
+</ul>`
@@ -4,7 +4,7 @@ import { getFileFromStorage } from '../../src/storageUtils'
 import { ICommonObject, IFileUpload } from '../../src/Interface'
 import { BaseMessageLike } from '@langchain/core/messages'
 import { IFlowState } from './Interface.Agentflow'
-import { mapMimeTypeToInputField } from '../../src/utils'
+import { handleEscapeCharacters, mapMimeTypeToInputField } from '../../src/utils'

 export const addImagesToMessages = async (
     options: ICommonObject,
@@ -18,7 +18,7 @@ export const addImagesToMessages = async (
     for (const upload of imageUploads) {
         let bf = upload.data
         if (upload.type == 'stored-file') {
-            const contents = await getFileFromStorage(upload.name, options.chatflowid, options.chatId)
+            const contents = await getFileFromStorage(upload.name, options.orgId, options.chatflowid, options.chatId)
             // as the image is stored in the server, read the file and convert it to base64
             bf = 'data:' + upload.mime + ';base64,' + contents.toString('base64')

@@ -90,7 +90,7 @@ export const processMessagesWithImages = async (
                 hasImageReferences = true
                 try {
                     // Get file contents from storage
-                    const contents = await getFileFromStorage(item.name, options.chatflowid, options.chatId)
+                    const contents = await getFileFromStorage(item.name, options.orgId, options.chatflowid, options.chatId)

                     // Create base64 data URL
                     const base64Data = 'data:' + item.mime + ';base64,' + contents.toString('base64')
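Note: every `getFileFromStorage` call in this PR gains an `orgId` as its second argument, so stored files resolve under an organization-scoped path. The signature change, as these call sites imply it (the real declaration lives in `src/storageUtils`):

    // Inferred from the call sites, not copied from src/storageUtils.
    // Before: getFileFromStorage(fileName, chatflowid, chatId?)
    // After:  getFileFromStorage(fileName, orgId, chatflowid, chatId?)
    declare function getFileFromStorage(fileName: string, orgId: string, ...paths: string[]): Promise<Buffer>

    // inside an async function:
    const contents = await getFileFromStorage(upload.name, options.orgId, options.chatflowid, options.chatId)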
@@ -313,13 +313,16 @@ export const getPastChatHistoryImageMessages = async (
         if (message.additional_kwargs && message.additional_kwargs.fileUploads) {
             // example: [{"type":"stored-file","name":"0_DiXc4ZklSTo3M8J4.jpg","mime":"image/jpeg"}]
             const fileUploads = message.additional_kwargs.fileUploads
+            const artifacts = message.additional_kwargs.artifacts
+            const fileAnnotations = message.additional_kwargs.fileAnnotations
+            const usedTools = message.additional_kwargs.usedTools
             try {
                 let messageWithFileUploads = ''
                 const uploads: IFileUpload[] = typeof fileUploads === 'string' ? JSON.parse(fileUploads) : fileUploads
                 const imageContents: MessageContentImageUrl[] = []
                 for (const upload of uploads) {
                     if (upload.type === 'stored-file' && upload.mime.startsWith('image/')) {
-                        const fileData = await getFileFromStorage(upload.name, options.chatflowid, options.chatId)
+                        const fileData = await getFileFromStorage(upload.name, options.orgId, options.chatflowid, options.chatId)
                         // as the image is stored in the server, read the file and convert it to base64
                         const bf = 'data:' + upload.mime + ';base64,' + fileData.toString('base64')

@@ -343,7 +346,8 @@ export const getPastChatHistoryImageMessages = async (
                         const nodeOptions = {
                             retrieveAttachmentChatId: true,
                             chatflowid: options.chatflowid,
-                            chatId: options.chatId
+                            chatId: options.chatId,
+                            orgId: options.orgId
                         }
                         let fileInputFieldFromMimeType = 'txtFile'
                         fileInputFieldFromMimeType = mapMimeTypeToInputField(upload.mime)

@@ -353,26 +357,87 @@ export const getPastChatHistoryImageMessages = async (
                             }
                         }
                         const documents: string = await fileLoaderNodeInstance.init(nodeData, '', nodeOptions)
-                        messageWithFileUploads += `<doc name='${upload.name}'>${documents}</doc>\n\n`
+                        messageWithFileUploads += `<doc name='${upload.name}'>${handleEscapeCharacters(documents, true)}</doc>\n\n`
                     }
                 }
                 const messageContent = messageWithFileUploads ? `${messageWithFileUploads}\n\n${message.content}` : message.content
+                const hasArtifacts = artifacts && Array.isArray(artifacts) && artifacts.length > 0
+                const hasFileAnnotations = fileAnnotations && Array.isArray(fileAnnotations) && fileAnnotations.length > 0
+                const hasUsedTools = usedTools && Array.isArray(usedTools) && usedTools.length > 0
+
                 if (imageContents.length > 0) {
-                    chatHistory.push({
+                    const imageMessage: any = {
                         role: messageRole,
                         content: imageContents
-                    })
+                    }
+                    if (hasArtifacts || hasFileAnnotations || hasUsedTools) {
+                        imageMessage.additional_kwargs = {}
+                        if (hasArtifacts) imageMessage.additional_kwargs.artifacts = artifacts
+                        if (hasFileAnnotations) imageMessage.additional_kwargs.fileAnnotations = fileAnnotations
+                        if (hasUsedTools) imageMessage.additional_kwargs.usedTools = usedTools
+                    }
+                    chatHistory.push(imageMessage)
                     transformedPastMessages.push({
                         role: messageRole,
                        content: [...JSON.parse((pastChatHistory[i] as any).additional_kwargs.fileUploads)]
                    })
                }
-                chatHistory.push({
+
+                const contentMessage: any = {
                     role: messageRole,
                     content: messageContent
-                })
+                }
+                if (hasArtifacts || hasFileAnnotations || hasUsedTools) {
+                    contentMessage.additional_kwargs = {}
+                    if (hasArtifacts) contentMessage.additional_kwargs.artifacts = artifacts
+                    if (hasFileAnnotations) contentMessage.additional_kwargs.fileAnnotations = fileAnnotations
+                    if (hasUsedTools) contentMessage.additional_kwargs.usedTools = usedTools
+                }
+                chatHistory.push(contentMessage)
             } catch (e) {
                 // failed to parse fileUploads, continue with text only
+                const hasArtifacts = artifacts && Array.isArray(artifacts) && artifacts.length > 0
+                const hasFileAnnotations = fileAnnotations && Array.isArray(fileAnnotations) && fileAnnotations.length > 0
+                const hasUsedTools = usedTools && Array.isArray(usedTools) && usedTools.length > 0
+
+                const errorMessage: any = {
+                    role: messageRole,
+                    content: message.content
+                }
+                if (hasArtifacts || hasFileAnnotations || hasUsedTools) {
+                    errorMessage.additional_kwargs = {}
+                    if (hasArtifacts) errorMessage.additional_kwargs.artifacts = artifacts
+                    if (hasFileAnnotations) errorMessage.additional_kwargs.fileAnnotations = fileAnnotations
+                    if (hasUsedTools) errorMessage.additional_kwargs.usedTools = usedTools
+                }
+                chatHistory.push(errorMessage)
             }
+        } else if (message.additional_kwargs) {
+            const hasArtifacts =
+                message.additional_kwargs.artifacts &&
+                Array.isArray(message.additional_kwargs.artifacts) &&
+                message.additional_kwargs.artifacts.length > 0
+            const hasFileAnnotations =
+                message.additional_kwargs.fileAnnotations &&
+                Array.isArray(message.additional_kwargs.fileAnnotations) &&
+                message.additional_kwargs.fileAnnotations.length > 0
+            const hasUsedTools =
+                message.additional_kwargs.usedTools &&
+                Array.isArray(message.additional_kwargs.usedTools) &&
+                message.additional_kwargs.usedTools.length > 0
+
+            if (hasArtifacts || hasFileAnnotations || hasUsedTools) {
+                const messageAdditionalKwargs: any = {}
+                if (hasArtifacts) messageAdditionalKwargs.artifacts = message.additional_kwargs.artifacts
+                if (hasFileAnnotations) messageAdditionalKwargs.fileAnnotations = message.additional_kwargs.fileAnnotations
+                if (hasUsedTools) messageAdditionalKwargs.usedTools = message.additional_kwargs.usedTools
+
+                chatHistory.push({
+                    role: messageRole,
+                    content: message.content,
+                    additional_kwargs: messageAdditionalKwargs
+                })
+            } else {
+                chatHistory.push({
+                    role: messageRole,
+                    content: message.content
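Note: the guarded copy of `artifacts`, `fileAnnotations`, and `usedTools` now appears in several branches above; `additional_kwargs` is only attached when at least one of the arrays is non-empty. A compact sketch of that repeated pattern (hypothetical helper, the PR inlines the logic instead):

    const withAdditionalKwargs = (msg: any, kwargs: Record<string, unknown>) => {
        // keep only keys whose value is a non-empty array
        const present = Object.entries(kwargs).filter(([, v]) => Array.isArray(v) && v.length > 0)
        if (present.length) msg.additional_kwargs = Object.fromEntries(present)
        return msg
    }

    // withAdditionalKwargs({ role, content }, { artifacts, fileAnnotations, usedTools })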
@@ -394,9 +459,9 @@ export const getPastChatHistoryImageMessages = async (
 /**
  * Updates the flow state with new values
  */
-export const updateFlowState = (state: ICommonObject, llmUpdateState: IFlowState[]): ICommonObject => {
+export const updateFlowState = (state: ICommonObject, updateState: IFlowState[]): ICommonObject => {
     let newFlowState: Record<string, any> = {}
-    for (const state of llmUpdateState) {
+    for (const state of updateState) {
         newFlowState[state.key] = state.value
     }
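Note: the parameter rename from `llmUpdateState` to `updateState` reflects that the helper serves any Agentflow node, not just LLM nodes; callers are unchanged. Example input, assuming `IFlowState` is the `{ key, value }` shape the loop implies:

    const next = updateFlowState({ topic: 'none' }, [
        { key: 'topic', value: 'AI' },
        { key: 'turns', value: '1' }
    ])
    // The hunk shows newFlowState being seeded from the update list; how the existing
    // state is folded in happens in the part of the function this diff does not touch.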
@@ -128,7 +128,7 @@ class Airtable_Agents implements INode {

         let base64String = Buffer.from(JSON.stringify(airtableData)).toString('base64')

-        const loggerHandler = new ConsoleCallbackHandler(options.logger)
+        const loggerHandler = new ConsoleCallbackHandler(options.logger, options?.orgId)
         const callbacks = await additionalCallbacks(nodeData, options)

         const pyodide = await LoadPyodide()

@@ -163,7 +163,7 @@ json.dumps(my_dict)`
         const chain = new LLMChain({
             llm: model,
             prompt: PromptTemplate.fromTemplate(systemPrompt),
-            verbose: process.env.DEBUG === 'true'
+            verbose: process.env.DEBUG === 'true' ? true : false
         })
         const inputs = {
             dict: dataframeColDict,

@@ -183,7 +183,7 @@ json.dumps(my_dict)`
             // TODO: get print console output
             finalResult = await pyodide.runPythonAsync(code)
         } catch (error) {
-            throw new Error(`Sorry, I'm unable to find answer for question: "${input}" using follwoing code: "${pythonCode}"`)
+            throw new Error(`Sorry, I'm unable to find answer for question: "${input}" using following code: "${pythonCode}"`)
         }
     }

@@ -192,7 +192,7 @@ json.dumps(my_dict)`
         const chain = new LLMChain({
             llm: model,
             prompt: PromptTemplate.fromTemplate(finalSystemPrompt),
-            verbose: process.env.DEBUG === 'true'
+            verbose: process.env.DEBUG === 'true' ? true : false
         })
         const inputs = {
             question: input,
@@ -23,6 +23,7 @@ class AutoGPT_Agents implements INode {
     category: string
     baseClasses: string[]
     inputs: INodeParams[]
+    badge: string

     constructor() {
         this.label = 'AutoGPT'

@@ -30,6 +31,7 @@ class AutoGPT_Agents implements INode {
         this.version = 2.0
         this.type = 'AutoGPT'
         this.category = 'Agents'
+        this.badge = 'DEPRECATING'
         this.icon = 'autogpt.svg'
         this.description = 'Autonomous agent with chain of thoughts for self-guided task completion'
         this.baseClasses = ['AutoGPT']
@@ -15,6 +15,7 @@ class BabyAGI_Agents implements INode {
     category: string
     baseClasses: string[]
     inputs: INodeParams[]
+    badge: string

     constructor() {
         this.label = 'BabyAGI'

@@ -23,6 +24,7 @@ class BabyAGI_Agents implements INode {
         this.type = 'BabyAGI'
         this.category = 'Agents'
         this.icon = 'babyagi.svg'
+        this.badge = 'DEPRECATING'
         this.description = 'Task Driven Autonomous Agent which creates new task and reprioritizes task list based on objective'
         this.baseClasses = ['BabyAGI']
         this.inputs = [
@@ -97,7 +97,7 @@ class CSV_Agents implements INode {
         }
     }

-    const loggerHandler = new ConsoleCallbackHandler(options.logger)
+    const loggerHandler = new ConsoleCallbackHandler(options.logger, options?.orgId)
     const shouldStreamResponse = options.shouldStreamResponse
     const sseStreamer: IServerSideEventStreamer = options.sseStreamer as IServerSideEventStreamer
     const chatId = options.chatId

@@ -114,11 +114,12 @@ class CSV_Agents implements INode {
         } else {
             files = [fileName]
         }
+        const orgId = options.orgId
         const chatflowid = options.chatflowid

         for (const file of files) {
             if (!file) continue
-            const fileData = await getFileFromStorage(file, chatflowid)
+            const fileData = await getFileFromStorage(file, orgId, chatflowid)
             base64String += fileData.toString('base64')
         }
     } else {

@@ -170,7 +171,7 @@ json.dumps(my_dict)`
         const chain = new LLMChain({
             llm: model,
             prompt: PromptTemplate.fromTemplate(systemPrompt),
-            verbose: process.env.DEBUG === 'true'
+            verbose: process.env.DEBUG === 'true' ? true : false
         })
         const inputs = {
             dict: dataframeColDict,

@@ -201,7 +202,7 @@ json.dumps(my_dict)`
             prompt: PromptTemplate.fromTemplate(
                 systemMessagePrompt ? `${systemMessagePrompt}\n${finalSystemPrompt}` : finalSystemPrompt
             ),
-            verbose: process.env.DEBUG === 'true'
+            verbose: process.env.DEBUG === 'true' ? true : false
         })
         const inputs = {
             question: input,
@@ -132,7 +132,7 @@ class ConversationalAgent_Agents implements INode {
         }
         const executor = await prepareAgent(nodeData, options, { sessionId: this.sessionId, chatId: options.chatId, input })

-        const loggerHandler = new ConsoleCallbackHandler(options.logger)
+        const loggerHandler = new ConsoleCallbackHandler(options.logger, options?.orgId)
         const callbacks = await additionalCallbacks(nodeData, options)

         let res: ChainValues = {}
@@ -5,7 +5,7 @@ import { RunnableSequence } from '@langchain/core/runnables'
 import { BaseChatModel } from '@langchain/core/language_models/chat_models'
 import { ChatPromptTemplate, MessagesPlaceholder, HumanMessagePromptTemplate, PromptTemplate } from '@langchain/core/prompts'
 import { formatToOpenAIToolMessages } from 'langchain/agents/format_scratchpad/openai_tools'
-import { getBaseClasses, transformBracesWithColon } from '../../../src/utils'
+import { getBaseClasses, transformBracesWithColon, convertChatHistoryToText, convertBaseMessagetoIMessage } from '../../../src/utils'
 import { type ToolsAgentStep } from 'langchain/agents/openai/output_parser'
 import {
     FlowiseMemory,

@@ -23,8 +23,10 @@ import { Moderation, checkInputs, streamResponse } from '../../moderation/Modera
 import { formatResponse } from '../../outputparsers/OutputParserHelpers'
 import type { Document } from '@langchain/core/documents'
 import { BaseRetriever } from '@langchain/core/retrievers'
-import { RESPONSE_TEMPLATE } from '../../chains/ConversationalRetrievalQAChain/prompts'
+import { RESPONSE_TEMPLATE, REPHRASE_TEMPLATE } from '../../chains/ConversationalRetrievalQAChain/prompts'
 import { addImagesToMessages, llmSupportsVision } from '../../../src/multiModalUtils'
+import { StringOutputParser } from '@langchain/core/output_parsers'
+import { Tool } from '@langchain/core/tools'

 class ConversationalRetrievalToolAgent_Agents implements INode {
     label: string

@@ -42,7 +44,7 @@ class ConversationalRetrievalToolAgent_Agents implements INode {
     constructor(fields?: { sessionId?: string }) {
         this.label = 'Conversational Retrieval Tool Agent'
         this.name = 'conversationalRetrievalToolAgent'
-        this.author = 'niztal(falkor)'
+        this.author = 'niztal(falkor) and nikitas-novatix'
         this.version = 1.0
         this.type = 'AgentExecutor'
         this.category = 'Agents'

@@ -79,6 +81,26 @@ class ConversationalRetrievalToolAgent_Agents implements INode {
                 optional: true,
                 default: RESPONSE_TEMPLATE
             },
+            {
+                label: 'Rephrase Prompt',
+                name: 'rephrasePrompt',
+                type: 'string',
+                description: 'Using previous chat history, rephrase question into a standalone question',
+                warning: 'Prompt must include input variables: {chat_history} and {question}',
+                rows: 4,
+                additionalParams: true,
+                optional: true,
+                default: REPHRASE_TEMPLATE
+            },
+            {
+                label: 'Rephrase Model',
+                name: 'rephraseModel',
+                type: 'BaseChatModel',
+                description:
+                    'Optional: Use a different (faster/cheaper) model for rephrasing. If not specified, uses the main Tool Calling Chat Model.',
+                optional: true,
+                additionalParams: true
+            },
             {
                 label: 'Input Moderation',
                 description: 'Detect text that could generate harmful output and prevent it from being sent to the language model',
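Note: the new Rephrase Prompt must expose the `{chat_history}` and `{question}` input variables, as its warning states; the node defaults to the chain's `REPHRASE_TEMPLATE`. A custom prompt in the same spirit might look like this (illustrative wording, not the shipped default):

    const customRephrasePrompt = `Given the following conversation and a follow up question,
    rephrase the follow up question to be a standalone question.

    Chat History:
    {chat_history}
    Follow Up Input: {question}
    Standalone Question:`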
@@ -103,8 +125,9 @@ class ConversationalRetrievalToolAgent_Agents implements INode {
         this.sessionId = fields?.sessionId
     }

-    async init(nodeData: INodeData, input: string, options: ICommonObject): Promise<any> {
-        return prepareAgent(nodeData, options, { sessionId: this.sessionId, chatId: options.chatId, input })
+    // The agent will be prepared in run() with the correct user message - it needs the actual runtime input for rephrasing
+    async init(_nodeData: INodeData, _input: string, _options: ICommonObject): Promise<any> {
+        return null
     }

     async run(nodeData: INodeData, input: string, options: ICommonObject): Promise<string | ICommonObject> {

@@ -130,7 +153,7 @@ class ConversationalRetrievalToolAgent_Agents implements INode {

         const executor = await prepareAgent(nodeData, options, { sessionId: this.sessionId, chatId: options.chatId, input })

-        const loggerHandler = new ConsoleCallbackHandler(options.logger)
+        const loggerHandler = new ConsoleCallbackHandler(options.logger, options?.orgId)
         const callbacks = await additionalCallbacks(nodeData, options)

         let res: ChainValues = {}

@@ -148,6 +171,23 @@ class ConversationalRetrievalToolAgent_Agents implements INode {
                 sseStreamer.streamUsedToolsEvent(chatId, res.usedTools)
                 usedTools = res.usedTools
             }
+
+            // If the tool is set to returnDirect, stream the output to the client
+            if (res.usedTools && res.usedTools.length) {
+                let inputTools = nodeData.inputs?.tools
+                inputTools = flatten(inputTools)
+                for (const tool of res.usedTools) {
+                    const inputTool = inputTools.find((inputTool: Tool) => inputTool.name === tool.tool)
+                    if (inputTool && (inputTool as any).returnDirect && shouldStreamResponse) {
+                        sseStreamer.streamTokenEvent(chatId, tool.toolOutput)
+                        // Prevent CustomChainHandler from streaming the same output again
+                        if (res.output === tool.toolOutput) {
+                            res.output = ''
+                        }
+                    }
+                }
+            }
+            // The CustomChainHandler will send the stream end event
         } else {
             res = await executor.invoke({ input }, { callbacks: [loggerHandler, ...callbacks] })
             if (res.sourceDocuments) {
@@ -210,9 +250,11 @@ const prepareAgent = async (
     flowObj: { sessionId?: string; chatId?: string; input?: string }
 ) => {
     const model = nodeData.inputs?.model as BaseChatModel
+    const rephraseModel = (nodeData.inputs?.rephraseModel as BaseChatModel) || model // Use main model if not specified
     const maxIterations = nodeData.inputs?.maxIterations as string
     const memory = nodeData.inputs?.memory as FlowiseMemory
     let systemMessage = nodeData.inputs?.systemMessage as string
+    let rephrasePrompt = nodeData.inputs?.rephrasePrompt as string
     let tools = nodeData.inputs?.tools
     tools = flatten(tools)
     const memoryKey = memory.memoryKey ? memory.memoryKey : 'chat_history'

@@ -220,6 +262,9 @@ const prepareAgent = async (
     const vectorStoreRetriever = nodeData.inputs?.vectorStoreRetriever as BaseRetriever

     systemMessage = transformBracesWithColon(systemMessage)
+    if (rephrasePrompt) {
+        rephrasePrompt = transformBracesWithColon(rephrasePrompt)
+    }

     const prompt = ChatPromptTemplate.fromMessages([
         ['system', systemMessage ? systemMessage : `You are a helpful AI assistant.`],

@@ -263,6 +308,37 @@ const prepareAgent = async (

     const modelWithTools = model.bindTools(tools)

+    // Function to get standalone question (either rephrased or original)
+    const getStandaloneQuestion = async (input: string): Promise<string> => {
+        // If no rephrase prompt, return the original input
+        if (!rephrasePrompt) {
+            return input
+        }
+
+        // Get chat history (use empty string if none)
+        const messages = (await memory.getChatMessages(flowObj?.sessionId, true)) as BaseMessage[]
+        const iMessages = convertBaseMessagetoIMessage(messages)
+        const chatHistoryString = convertChatHistoryToText(iMessages)
+
+        // Always rephrase to normalize/expand user queries for better retrieval
+        try {
+            const CONDENSE_QUESTION_PROMPT = PromptTemplate.fromTemplate(rephrasePrompt)
+            const condenseQuestionChain = RunnableSequence.from([CONDENSE_QUESTION_PROMPT, rephraseModel, new StringOutputParser()])
+            const res = await condenseQuestionChain.invoke({
+                question: input,
+                chat_history: chatHistoryString
+            })
+            return res
+        } catch (error) {
+            console.error('Error rephrasing question:', error)
+            // On error, fall back to original input
+            return input
+        }
+    }
+
+    // Get standalone question before creating runnable
+    const standaloneQuestion = await getStandaloneQuestion(flowObj?.input || '')
+
     const runnableAgent = RunnableSequence.from([
         {
             [inputKey]: (i: { input: string; steps: ToolsAgentStep[] }) => i.input,
@@ -272,7 +348,9 @@ const prepareAgent = async (
             return messages ?? []
         },
         context: async (i: { input: string; chatHistory?: string }) => {
-            const relevantDocs = await vectorStoreRetriever.invoke(i.input)
+            // Use the standalone question (rephrased or original) for retrieval
+            const retrievalQuery = standaloneQuestion || i.input
+            const relevantDocs = await vectorStoreRetriever.invoke(retrievalQuery)
             const formattedDocs = formatDocs(relevantDocs)
             return formattedDocs
         }

@@ -288,11 +366,13 @@ const prepareAgent = async (
         sessionId: flowObj?.sessionId,
         chatId: flowObj?.chatId,
         input: flowObj?.input,
-        verbose: process.env.DEBUG === 'true',
+        verbose: process.env.DEBUG === 'true' ? true : false,
         maxIterations: maxIterations ? parseFloat(maxIterations) : undefined
     })

     return executor
 }

-module.exports = { nodeClass: ConversationalRetrievalToolAgent_Agents }
+module.exports = {
+    nodeClass: ConversationalRetrievalToolAgent_Agents
+}
@@ -2,6 +2,7 @@ import { flatten } from 'lodash'
 import { MessageContentTextDetail, ChatMessage, AnthropicAgent, Anthropic } from 'llamaindex'
 import { getBaseClasses } from '../../../../src/utils'
 import { FlowiseMemory, ICommonObject, IMessage, INode, INodeData, INodeParams, IUsedTool } from '../../../../src/Interface'
+import { EvaluationRunTracerLlama } from '../../../../evaluation/EvaluationRunTracerLlama'

 class AnthropicAgent_LlamaIndex_Agents implements INode {
     label: string

@@ -96,13 +97,16 @@ class AnthropicAgent_LlamaIndex_Agents implements INode {
             tools,
             llm: model,
             chatHistory: chatHistory,
-            verbose: process.env.DEBUG === 'true'
+            verbose: process.env.DEBUG === 'true' ? true : false
         })

+        // these are needed for evaluation runs
+        await EvaluationRunTracerLlama.injectEvaluationMetadata(nodeData, options, agent)
+
         let text = ''
         const usedTools: IUsedTool[] = []

-        const response = await agent.chat({ message: input, chatHistory, verbose: process.env.DEBUG === 'true' })
+        const response = await agent.chat({ message: input, chatHistory, verbose: process.env.DEBUG === 'true' ? true : false })

         if (response.sources.length) {
             for (const sourceTool of response.sources) {
@@ -1,6 +1,7 @@
 import { flatten } from 'lodash'
 import { ChatMessage, OpenAI, OpenAIAgent } from 'llamaindex'
 import { getBaseClasses } from '../../../../src/utils'
+import { EvaluationRunTracerLlama } from '../../../../evaluation/EvaluationRunTracerLlama'
 import {
     FlowiseMemory,
     ICommonObject,

@@ -107,9 +108,12 @@ class OpenAIFunctionAgent_LlamaIndex_Agents implements INode {
             tools,
             llm: model,
             chatHistory: chatHistory,
-            verbose: process.env.DEBUG === 'true'
+            verbose: process.env.DEBUG === 'true' ? true : false
         })

+        // these are needed for evaluation runs
+        await EvaluationRunTracerLlama.injectEvaluationMetadata(nodeData, options, agent)
+
         let text = ''
         let isStreamingStarted = false
         const usedTools: IUsedTool[] = []

@@ -119,10 +123,9 @@ class OpenAIFunctionAgent_LlamaIndex_Agents implements INode {
                 message: input,
                 chatHistory,
                 stream: true,
-                verbose: process.env.DEBUG === 'true'
+                verbose: process.env.DEBUG === 'true' ? true : false
             })
             for await (const chunk of stream) {
-                //console.log('chunk', chunk)
                 text += chunk.response.delta
                 if (!isStreamingStarted) {
                     isStreamingStarted = true

@@ -147,7 +150,7 @@ class OpenAIFunctionAgent_LlamaIndex_Agents implements INode {
                 }
             }
         } else {
-            const response = await agent.chat({ message: input, chatHistory, verbose: process.env.DEBUG === 'true' })
+            const response = await agent.chat({ message: input, chatHistory, verbose: process.env.DEBUG === 'true' ? true : false })
             if (response.sources.length) {
                 for (const sourceTool of response.sources) {
                     usedTools.push({
@@ -107,7 +107,11 @@ class OpenAIAssistant_Agents implements INode {
             return returnData
         }

-        const assistants = await appDataSource.getRepository(databaseEntities['Assistant']).find()
+        const searchOptions = options.searchOptions || {}
+        const assistants = await appDataSource.getRepository(databaseEntities['Assistant']).findBy({
+            ...searchOptions,
+            type: 'OPENAI'
+        })

         for (let i = 0; i < assistants.length; i += 1) {
             const assistantDetails = JSON.parse(assistants[i].details)

@@ -130,13 +134,14 @@ class OpenAIAssistant_Agents implements INode {
         const selectedAssistantId = nodeData.inputs?.selectedAssistant as string
         const appDataSource = options.appDataSource as DataSource
         const databaseEntities = options.databaseEntities as IDatabaseEntity
+        const orgId = options.orgId

         const assistant = await appDataSource.getRepository(databaseEntities['Assistant']).findOneBy({
             id: selectedAssistantId
         })

         if (!assistant) {
-            options.logger.error(`Assistant ${selectedAssistantId} not found`)
+            options.logger.error(`[${orgId}]: Assistant ${selectedAssistantId} not found`)
             return
         }
@@ -149,7 +154,7 @@ class OpenAIAssistant_Agents implements INode {
                 chatId
             })
             if (!chatmsg) {
-                options.logger.error(`Chat Message with Chat Id: ${chatId} not found`)
+                options.logger.error(`[${orgId}]: Chat Message with Chat Id: ${chatId} not found`)
                 return
             }
             sessionId = chatmsg.sessionId

@@ -160,21 +165,21 @@ class OpenAIAssistant_Agents implements INode {
         const credentialData = await getCredentialData(assistant.credential ?? '', options)
         const openAIApiKey = getCredentialParam('openAIApiKey', credentialData, nodeData)
         if (!openAIApiKey) {
-            options.logger.error(`OpenAI ApiKey not found`)
+            options.logger.error(`[${orgId}]: OpenAI ApiKey not found`)
             return
         }

         const openai = new OpenAI({ apiKey: openAIApiKey })
-        options.logger.info(`Clearing OpenAI Thread ${sessionId}`)
+        options.logger.info(`[${orgId}]: Clearing OpenAI Thread ${sessionId}`)
         try {
             if (sessionId && sessionId.startsWith('thread_')) {
                 await openai.beta.threads.del(sessionId)
-                options.logger.info(`Successfully cleared OpenAI Thread ${sessionId}`)
+                options.logger.info(`[${orgId}]: Successfully cleared OpenAI Thread ${sessionId}`)
             } else {
-                options.logger.error(`Error clearing OpenAI Thread ${sessionId}`)
+                options.logger.error(`[${orgId}]: Error clearing OpenAI Thread ${sessionId}`)
             }
         } catch (e) {
-            options.logger.error(`Error clearing OpenAI Thread ${sessionId}`)
+            options.logger.error(`[${orgId}]: Error clearing OpenAI Thread ${sessionId}`)
         }
     }

@@ -190,6 +195,17 @@ class OpenAIAssistant_Agents implements INode {
         const shouldStreamResponse = options.shouldStreamResponse
         const sseStreamer: IServerSideEventStreamer = options.sseStreamer as IServerSideEventStreamer
         const chatId = options.chatId
+        const checkStorage = options.checkStorage
+            ? (options.checkStorage as (orgId: string, subscriptionId: string, usageCacheManager: any) => Promise<void>)
+            : undefined
+        const updateStorageUsage = options.updateStorageUsage
+            ? (options.updateStorageUsage as (
+                  orgId: string,
+                  workspaceId: string,
+                  totalSize: number,
+                  usageCacheManager: any
+              ) => Promise<void>)
+            : undefined

         if (moderations && moderations.length > 0) {
             try {
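Note: every file download below is now bracketed by an optional quota check before the write and a usage update after it. The pattern, reduced to its essentials (both callbacks are injected through `options` by the server, so only their call shape is shown here):

    // Minimal sketch of the storage-metering pattern used around each download.
    async function meteredDownload(options: any, download: () => Promise<{ path: string; totalSize: number }>) {
        // refuse the download up front if the org is over quota
        if (options.checkStorage) {
            await options.checkStorage(options.orgId, options.subscriptionId, options.usageCacheManager)
        }
        const { path, totalSize } = await download()
        // record the bytes written against the workspace afterwards
        if (options.updateStorageUsage) {
            await options.updateStorageUsage(options.orgId, options.workspaceId, totalSize, options.usageCacheManager)
        }
        return path
    }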
@@ -380,17 +396,30 @@ class OpenAIAssistant_Agents implements INode {
                     // eslint-disable-next-line no-useless-escape
                     const fileName = cited_file.filename.split(/[\/\\]/).pop() ?? cited_file.filename
                     if (!disableFileDownload) {
-                        filePath = await downloadFile(
+                        if (checkStorage)
+                            await checkStorage(options.orgId, options.subscriptionId, options.usageCacheManager)
+
+                        const { path, totalSize } = await downloadFile(
                             openAIApiKey,
                             cited_file,
                             fileName,
+                            options.orgId,
                             options.chatflowid,
                             options.chatId
                         )
+                        filePath = path
                         fileAnnotations.push({
                             filePath,
                             fileName
                         })
+
+                        if (updateStorageUsage)
+                            await updateStorageUsage(
+                                options.orgId,
+                                options.workspaceId,
+                                totalSize,
+                                options.usageCacheManager
+                            )
                     }
                 } else {
                     const file_path = (annotation as OpenAI.Beta.Threads.Messages.FilePathAnnotation).file_path

@@ -399,17 +428,30 @@ class OpenAIAssistant_Agents implements INode {
                     // eslint-disable-next-line no-useless-escape
                     const fileName = cited_file.filename.split(/[\/\\]/).pop() ?? cited_file.filename
                     if (!disableFileDownload) {
-                        filePath = await downloadFile(
+                        if (checkStorage)
+                            await checkStorage(options.orgId, options.subscriptionId, options.usageCacheManager)
+
+                        const { path, totalSize } = await downloadFile(
                             openAIApiKey,
                             cited_file,
                             fileName,
+                            options.orgId,
                             options.chatflowid,
                             options.chatId
                         )
+                        filePath = path
                         fileAnnotations.push({
                             filePath,
                             fileName
                         })
+
+                        if (updateStorageUsage)
+                            await updateStorageUsage(
+                                options.orgId,
+                                options.workspaceId,
+                                totalSize,
+                                options.usageCacheManager
+                            )
                     }
                 }
             }
@@ -467,15 +509,21 @@ class OpenAIAssistant_Agents implements INode {
                                 const fileId = chunk.image_file.file_id
                                 const fileObj = await openai.files.retrieve(fileId)

-                                const filePath = await downloadImg(
+                                if (checkStorage) await checkStorage(options.orgId, options.subscriptionId, options.usageCacheManager)
+
+                                const { filePath, totalSize } = await downloadImg(
                                     openai,
                                     fileId,
                                     `${fileObj.filename}.png`,
+                                    options.orgId,
                                     options.chatflowid,
                                     options.chatId
                                 )
                                 artifacts.push({ type: 'png', data: filePath })

+                                if (updateStorageUsage)
+                                    await updateStorageUsage(options.orgId, options.workspaceId, totalSize, options.usageCacheManager)
+
                                 if (!isStreamingStarted) {
                                     isStreamingStarted = true
                                     if (sseStreamer) {

@@ -530,7 +578,7 @@ class OpenAIAssistant_Agents implements INode {
                                         toolOutput
                                     })
                                 } catch (e) {
-                                    await analyticHandlers.onToolEnd(toolIds, e)
+                                    await analyticHandlers.onToolError(toolIds, e)
                                     console.error('Error executing tool', e)
                                     throw new Error(
                                         `Error executing tool. Tool: ${tool.name}. Thread ID: ${threadId}. Run ID: ${runThreadId}`

@@ -655,7 +703,7 @@ class OpenAIAssistant_Agents implements INode {
                                     toolOutput
                                 })
                             } catch (e) {
-                                await analyticHandlers.onToolEnd(toolIds, e)
+                                await analyticHandlers.onToolError(toolIds, e)
                                 console.error('Error executing tool', e)
                                 clearInterval(timeout)
                                 reject(
@@ -776,7 +824,21 @@ class OpenAIAssistant_Agents implements INode {
                 // eslint-disable-next-line no-useless-escape
                 const fileName = cited_file.filename.split(/[\/\\]/).pop() ?? cited_file.filename
                 if (!disableFileDownload) {
-                    filePath = await downloadFile(openAIApiKey, cited_file, fileName, options.chatflowid, options.chatId)
+                    if (checkStorage) await checkStorage(options.orgId, options.subscriptionId, options.usageCacheManager)
+
+                    const { path, totalSize } = await downloadFile(
+                        openAIApiKey,
+                        cited_file,
+                        fileName,
+                        options.orgId,
+                        options.chatflowid,
+                        options.chatId
+                    )
+                    filePath = path
+
+                    if (updateStorageUsage)
+                        await updateStorageUsage(options.orgId, options.workspaceId, totalSize, options.usageCacheManager)
+
                     fileAnnotations.push({
                         filePath,
                         fileName

@@ -789,13 +851,27 @@ class OpenAIAssistant_Agents implements INode {
                 // eslint-disable-next-line no-useless-escape
                 const fileName = cited_file.filename.split(/[\/\\]/).pop() ?? cited_file.filename
                 if (!disableFileDownload) {
-                    filePath = await downloadFile(
+                    if (checkStorage)
+                        await checkStorage(options.orgId, options.subscriptionId, options.usageCacheManager)
+
+                    const { path, totalSize } = await downloadFile(
                         openAIApiKey,
                         cited_file,
                         fileName,
+                        options.orgId,
                         options.chatflowid,
                         options.chatId
                     )
+                    filePath = path
+
+                    if (updateStorageUsage)
+                        await updateStorageUsage(
+                            options.orgId,
+                            options.workspaceId,
+                            totalSize,
+                            options.usageCacheManager
+                        )
+
                     fileAnnotations.push({
                         filePath,
                         fileName

@@ -822,7 +898,20 @@ class OpenAIAssistant_Agents implements INode {
                 const fileId = content.image_file.file_id
                 const fileObj = await openai.files.retrieve(fileId)

-                const filePath = await downloadImg(openai, fileId, `${fileObj.filename}.png`, options.chatflowid, options.chatId)
+                if (checkStorage) await checkStorage(options.orgId, options.subscriptionId, options.usageCacheManager)
+
+                const { filePath, totalSize } = await downloadImg(
+                    openai,
+                    fileId,
+                    `${fileObj.filename}.png`,
+                    options.orgId,
+                    options.chatflowid,
+                    options.chatId
+                )
+
+                if (updateStorageUsage)
+                    await updateStorageUsage(options.orgId, options.workspaceId, totalSize, options.usageCacheManager)
+
                 artifacts.push({ type: 'png', data: filePath })
             }
         }
@@ -847,7 +936,13 @@ class OpenAIAssistant_Agents implements INode {
     }
 }

-const downloadImg = async (openai: OpenAI, fileId: string, fileName: string, ...paths: string[]) => {
+const downloadImg = async (
+    openai: OpenAI,
+    fileId: string,
+    fileName: string,
+    orgId: string,
+    ...paths: string[]
+): Promise<{ filePath: string; totalSize: number }> => {
     const response = await openai.files.content(fileId)

     // Extract the binary data from the Response object

@@ -857,12 +952,18 @@ const downloadImg = async (openai: OpenAI, fileId: string, fileName: string, ...
     const image_data_buffer = Buffer.from(image_data)
     const mime = 'image/png'

-    const res = await addSingleFileToStorage(mime, image_data_buffer, fileName, ...paths)
+    const { path, totalSize } = await addSingleFileToStorage(mime, image_data_buffer, fileName, orgId, ...paths)

-    return res
+    return { filePath: path, totalSize }
 }

-const downloadFile = async (openAIApiKey: string, fileObj: any, fileName: string, ...paths: string[]) => {
+const downloadFile = async (
+    openAIApiKey: string,
+    fileObj: any,
+    fileName: string,
+    orgId: string,
+    ...paths: string[]
+): Promise<{ path: string; totalSize: number }> => {
     try {
         const response = await fetch(`https://api.openai.com/v1/files/${fileObj.id}/content`, {
             method: 'GET',

@@ -880,10 +981,12 @@ const downloadFile = async (openAIApiKey: string, fileObj: any, fileName: string
         const data_buffer = Buffer.from(data)
         const mime = 'application/octet-stream'

-        return await addSingleFileToStorage(mime, data_buffer, fileName, ...paths)
+        const { path, totalSize } = await addSingleFileToStorage(mime, data_buffer, fileName, orgId, ...paths)
+
+        return { path, totalSize }
     } catch (error) {
         console.error('Error downloading or writing the file:', error)
-        return ''
+        return { path: '', totalSize: 0 }
     }
 }
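Note: both helpers now return the stored path together with the byte count, which is what lets callers feed `updateStorageUsage`. A call now looks like this (argument values invented):

    // inside an async context:
    const { path, totalSize } = await downloadFile(
        openAIApiKey,
        citedFile,       // the OpenAI file object being cited
        'report.pdf',
        options.orgId,   // new: files are stored under the organization
        options.chatflowid,
        options.chatId
    )
    // path === '' with totalSize === 0 signals a failed download rather than a throw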
@@ -993,7 +1096,7 @@ async function handleToolSubmission(params: ToolSubmissionParams): Promise<ToolS
                 toolOutput
             })
         } catch (e) {
-            await analyticHandlers.onToolEnd(toolIds, e)
+            await analyticHandlers.onToolError(toolIds, e)
             console.error('Error executing tool', e)
             throw new Error(`Error executing tool. Tool: ${tool.name}. Thread ID: ${threadId}. Run ID: ${runThreadId}`)
         }
@@ -97,7 +97,7 @@ class ReActAgentLLM_Agents implements INode {
         const executor = new AgentExecutor({
             agent,
             tools,
-            verbose: process.env.DEBUG === 'true',
+            verbose: process.env.DEBUG === 'true' ? true : false,
             maxIterations: maxIterations ? parseFloat(maxIterations) : undefined
         })
@@ -143,7 +143,7 @@ class ToolAgent_Agents implements INode {

         const executor = await prepareAgent(nodeData, options, { sessionId: this.sessionId, chatId: options.chatId, input })

-        const loggerHandler = new ConsoleCallbackHandler(options.logger)
+        const loggerHandler = new ConsoleCallbackHandler(options.logger, options?.orgId)
         const callbacks = await additionalCallbacks(nodeData, options)

         // Add custom streaming handler if detailed streaming is enabled

@@ -370,7 +370,7 @@ const prepareAgent = async (
         sessionId: flowObj?.sessionId,
         chatId: flowObj?.chatId,
         input: flowObj?.input,
-        verbose: process.env.DEBUG === 'true',
+        verbose: process.env.DEBUG === 'true' ? true : false,
         maxIterations: maxIterations ? parseFloat(maxIterations) : undefined
     })
@@ -138,7 +138,7 @@ class XMLAgent_Agents implements INode {
         }
         const executor = await prepareAgent(nodeData, options, { sessionId: this.sessionId, chatId: options.chatId, input })

-        const loggerHandler = new ConsoleCallbackHandler(options.logger)
+        const loggerHandler = new ConsoleCallbackHandler(options.logger, options?.orgId)
         const callbacks = await additionalCallbacks(nodeData, options)

         let res: ChainValues = {}

@@ -278,7 +278,7 @@ const prepareAgent = async (
         chatId: flowObj?.chatId,
         input: flowObj?.input,
         isXML: true,
-        verbose: process.env.DEBUG === 'true',
+        verbose: process.env.DEBUG === 'true' ? true : false,
         maxIterations: maxIterations ? parseFloat(maxIterations) : undefined
     })
@@ -26,6 +26,7 @@ class GETApiChain_Chains implements INode {
     baseClasses: string[]
     description: string
     inputs: INodeParams[]
+    badge: string

     constructor() {
         this.label = 'GET API Chain'

@@ -34,6 +35,7 @@ class GETApiChain_Chains implements INode {
         this.type = 'GETApiChain'
         this.icon = 'get.svg'
         this.category = 'Chains'
+        this.badge = 'DEPRECATING'
         this.description = 'Chain to run queries against GET API'
         this.baseClasses = [this.type, ...getBaseClasses(APIChain)]
         this.inputs = [

@@ -98,7 +100,7 @@ class GETApiChain_Chains implements INode {
         const ansPrompt = nodeData.inputs?.ansPrompt as string

         const chain = await getAPIChain(apiDocs, model, headers, urlPrompt, ansPrompt)
-        const loggerHandler = new ConsoleCallbackHandler(options.logger)
+        const loggerHandler = new ConsoleCallbackHandler(options.logger, options?.orgId)
         const callbacks = await additionalCallbacks(nodeData, options)
         const shouldStreamResponse = options.shouldStreamResponse
         const sseStreamer: IServerSideEventStreamer = options.sseStreamer as IServerSideEventStreamer

@@ -129,7 +131,7 @@ const getAPIChain = async (documents: string, llm: BaseLanguageModel, headers: s
     const chain = APIChain.fromLLMAndAPIDocs(llm, documents, {
         apiUrlPrompt,
         apiResponsePrompt,
-        verbose: process.env.DEBUG === 'true',
+        verbose: process.env.DEBUG === 'true' ? true : false,
         headers: typeof headers === 'object' ? headers : headers ? JSON.parse(headers) : {}
     })
     return chain
@@ -17,6 +17,7 @@ class OpenApiChain_Chains implements INode {
     baseClasses: string[]
     description: string
     inputs: INodeParams[]
+    badge: string

     constructor() {
         this.label = 'OpenAPI Chain'

@@ -25,6 +26,7 @@ class OpenApiChain_Chains implements INode {
         this.type = 'OpenAPIChain'
         this.icon = 'openapi.svg'
         this.category = 'Chains'
+        this.badge = 'DEPRECATING'
         this.description = 'Chain that automatically select and call APIs based only on an OpenAPI spec'
         this.baseClasses = [this.type, ...getBaseClasses(APIChain)]
         this.inputs = [

@@ -71,7 +73,7 @@ class OpenApiChain_Chains implements INode {

     async run(nodeData: INodeData, input: string, options: ICommonObject): Promise<string | object> {
         const chain = await initChain(nodeData, options)
-        const loggerHandler = new ConsoleCallbackHandler(options.logger)
+        const loggerHandler = new ConsoleCallbackHandler(options.logger, options?.orgId)
         const callbacks = await additionalCallbacks(nodeData, options)
         const moderations = nodeData.inputs?.inputModeration as Moderation[]
         const shouldStreamResponse = options.shouldStreamResponse

@@ -114,8 +116,9 @@ const initChain = async (nodeData: INodeData, options: ICommonObject) => {
     } else {
         if (yamlFileBase64.startsWith('FILE-STORAGE::')) {
             const file = yamlFileBase64.replace('FILE-STORAGE::', '')
+            const orgId = options.orgId
             const chatflowid = options.chatflowid
-            const fileData = await getFileFromStorage(file, chatflowid)
+            const fileData = await getFileFromStorage(file, orgId, chatflowid)
             yamlString = fileData.toString()
         } else {
             const splitDataURI = yamlFileBase64.split(',')

@@ -128,7 +131,7 @@ const initChain = async (nodeData: INodeData, options: ICommonObject) => {
     return await createOpenAPIChain(yamlString, {
         llm: model,
         headers: typeof headers === 'object' ? headers : headers ? JSON.parse(headers) : {},
-        verbose: process.env.DEBUG === 'true'
+        verbose: process.env.DEBUG === 'true' ? true : false
     })
 }
@@ -15,6 +15,7 @@ class POSTApiChain_Chains implements INode {
     baseClasses: string[]
     description: string
     inputs: INodeParams[]
+    badge: string

     constructor() {
         this.label = 'POST API Chain'

@@ -23,6 +24,7 @@ class POSTApiChain_Chains implements INode {
         this.type = 'POSTApiChain'
         this.icon = 'post.svg'
         this.category = 'Chains'
+        this.badge = 'DEPRECATING'
         this.description = 'Chain to run queries against POST API'
         this.baseClasses = [this.type, ...getBaseClasses(APIChain)]
         this.inputs = [

@@ -87,7 +89,7 @@ class POSTApiChain_Chains implements INode {
         const ansPrompt = nodeData.inputs?.ansPrompt as string

         const chain = await getAPIChain(apiDocs, model, headers, urlPrompt, ansPrompt)
-        const loggerHandler = new ConsoleCallbackHandler(options.logger)
+        const loggerHandler = new ConsoleCallbackHandler(options.logger, options?.orgId)
         const callbacks = await additionalCallbacks(nodeData, options)

         const shouldStreamResponse = options.shouldStreamResponse

@@ -119,7 +121,7 @@ const getAPIChain = async (documents: string, llm: BaseLanguageModel, headers: s
     const chain = APIChain.fromLLMAndAPIDocs(llm, documents, {
         apiUrlPrompt,
         apiResponsePrompt,
-        verbose: process.env.DEBUG === 'true',
+        verbose: process.env.DEBUG === 'true' ? true : false,
         headers: typeof headers === 'object' ? headers : headers ? JSON.parse(headers) : {}
     })
     return chain
@@ -132,7 +132,7 @@ class ConversationChain_Chains implements INode {
         }
     }

-    const loggerHandler = new ConsoleCallbackHandler(options.logger)
+    const loggerHandler = new ConsoleCallbackHandler(options.logger, options?.orgId)
     const additionalCallback = await additionalCallbacks(nodeData, options)

     let res = ''
@@ -185,6 +185,7 @@ class ConversationalRetrievalQAChain_Chains implements INode {
         const shouldStreamResponse = options.shouldStreamResponse
         const sseStreamer: IServerSideEventStreamer = options.sseStreamer as IServerSideEventStreamer
         const chatId = options.chatId
+        const orgId = options.orgId

         let customResponsePrompt = responsePrompt
         // If the deprecated systemMessagePrompt still exists
@@ -200,7 +201,8 @@ class ConversationalRetrievalQAChain_Chains implements INode {
                 memoryKey: 'chat_history',
                 appDataSource,
                 databaseEntities,
-                chatflowid
+                chatflowid,
+                orgId
             })
         }

@@ -220,7 +222,7 @@ class ConversationalRetrievalQAChain_Chains implements INode {

         const history = ((await memory.getChatMessages(this.sessionId, false, prependMessages)) as IMessage[]) ?? []

-        const loggerHandler = new ConsoleCallbackHandler(options.logger)
+        const loggerHandler = new ConsoleCallbackHandler(options.logger, options?.orgId)
         const additionalCallback = await additionalCallbacks(nodeData, options)

         let callbacks = [loggerHandler, ...additionalCallback]
@@ -407,18 +409,21 @@ interface BufferMemoryExtendedInput {
     appDataSource: DataSource
     databaseEntities: IDatabaseEntity
     chatflowid: string
+    orgId: string
 }

 class BufferMemory extends FlowiseMemory implements MemoryMethods {
     appDataSource: DataSource
     databaseEntities: IDatabaseEntity
     chatflowid: string
+    orgId: string

     constructor(fields: BufferMemoryInput & BufferMemoryExtendedInput) {
         super(fields)
         this.appDataSource = fields.appDataSource
         this.databaseEntities = fields.databaseEntities
         this.chatflowid = fields.chatflowid
+        this.orgId = fields.orgId
     }

     async getChatMessages(
@@ -443,7 +448,7 @@ class BufferMemory extends FlowiseMemory implements MemoryMethods {
         }

         if (returnBaseMessages) {
-            return await mapChatMessageToBaseMessage(chatMessage)
+            return await mapChatMessageToBaseMessage(chatMessage, this.orgId)
         }

         let returnIMessages: IMessage[] = []

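Note: the same `orgId` threading reaches the memory layer here — the extended-input interface and the class each gain the field, and message reads pass it down to `mapChatMessageToBaseMessage`. A compressed sketch of the pattern, using simplified stand-in types rather than Flowise's real ones:

```ts
// Simplified stand-ins; the real types also carry a data source and entities.
interface OrgScopedFields {
    chatflowid: string
    orgId: string
}

class OrgScopedMemory {
    chatflowid: string
    orgId: string

    constructor(fields: OrgScopedFields) {
        this.chatflowid = fields.chatflowid
        this.orgId = fields.orgId
    }

    async getChatMessages(fetchRows: (chatflowid: string) => Promise<{ content: string }[]>) {
        const rows = await fetchRows(this.chatflowid)
        // The org id rides along on every mapping call so downstream lookups
        // stay scoped to the right tenant.
        return rows.map((row) => ({ ...row, orgId: this.orgId }))
    }
}
```
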
@@ -215,7 +215,7 @@ class GraphCypherQA_Chain implements INode {
             query: input
         }

-        const loggerHandler = new ConsoleCallbackHandler(options.logger)
+        const loggerHandler = new ConsoleCallbackHandler(options.logger, options?.orgId)
         const callbackHandlers = await additionalCallbacks(nodeData, options)
         let callbacks = [loggerHandler, ...callbackHandlers]

@@ -167,7 +167,7 @@ const runPrediction = async (
     nodeData: INodeData,
     disableStreaming?: boolean
 ) => {
-    const loggerHandler = new ConsoleCallbackHandler(options.logger)
+    const loggerHandler = new ConsoleCallbackHandler(options.logger, options?.orgId)
     const callbacks = await additionalCallbacks(nodeData, options)

     const moderations = nodeData.inputs?.inputModeration as Moderation[]

@@ -16,11 +16,13 @@ class MultiPromptChain_Chains implements INode {
     baseClasses: string[]
     description: string
     inputs: INodeParams[]
+    badge: string

     constructor() {
         this.label = 'Multi Prompt Chain'
         this.name = 'multiPromptChain'
         this.version = 2.0
+        this.badge = 'DEPRECATING'
         this.type = 'MultiPromptChain'
         this.icon = 'prompt.svg'
         this.category = 'Chains'
@@ -66,7 +68,7 @@ class MultiPromptChain_Chains implements INode {
             promptNames,
             promptDescriptions,
             promptTemplates,
-            llmChainOpts: { verbose: process.env.DEBUG === 'true' }
+            llmChainOpts: { verbose: process.env.DEBUG === 'true' ? true : false }
         })

         return chain
@@ -95,7 +97,7 @@ class MultiPromptChain_Chains implements INode {
         }
         const obj = { input }

-        const loggerHandler = new ConsoleCallbackHandler(options.logger)
+        const loggerHandler = new ConsoleCallbackHandler(options.logger, options?.orgId)
         const callbacks = await additionalCallbacks(nodeData, options)

         if (shouldStreamResponse) {

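Note: this deprecating node wraps LangChain's legacy `MultiPromptChain.fromLLMAndPrompts`, visible above via the `promptNames` / `promptDescriptions` / `promptTemplates` / `llmChainOpts` fields. A hedged usage sketch — the prompt names, templates, and model are illustrative, not taken from the diff:

```ts
import { MultiPromptChain } from 'langchain/chains'
import { ChatOpenAI } from '@langchain/openai'

// Routes an input to whichever prompt best matches its description.
const chain = MultiPromptChain.fromLLMAndPrompts(new ChatOpenAI({ temperature: 0 }), {
    promptNames: ['physics', 'history'],
    promptDescriptions: ['Good for physics questions', 'Good for history questions'],
    promptTemplates: [
        'You are a physics professor. Answer concisely:\n{input}',
        'You are a historian. Answer concisely:\n{input}'
    ],
    llmChainOpts: { verbose: process.env.DEBUG === 'true' }
})

// const res = await chain.call({ input: 'Why is the sky blue?' })
```
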
@@ -15,12 +15,14 @@ class MultiRetrievalQAChain_Chains implements INode {
     category: string
     baseClasses: string[]
     description: string
+    badge: string
     inputs: INodeParams[]

     constructor() {
         this.label = 'Multi Retrieval QA Chain'
         this.name = 'multiRetrievalQAChain'
         this.version = 2.0
+        this.badge = 'DEPRECATING'
         this.type = 'MultiRetrievalQAChain'
         this.icon = 'qa.svg'
         this.category = 'Chains'
@@ -74,7 +76,7 @@ class MultiRetrievalQAChain_Chains implements INode {
             retrieverNames,
             retrieverDescriptions,
             retrievers,
-            retrievalQAChainOpts: { verbose: process.env.DEBUG === 'true', returnSourceDocuments }
+            retrievalQAChainOpts: { verbose: process.env.DEBUG === 'true' ? true : false, returnSourceDocuments }
         })
         return chain
     }
@@ -101,7 +103,7 @@ class MultiRetrievalQAChain_Chains implements INode {
             }
         }
         const obj = { input }
-        const loggerHandler = new ConsoleCallbackHandler(options.logger)
+        const loggerHandler = new ConsoleCallbackHandler(options.logger, options?.orgId)
         const callbacks = await additionalCallbacks(nodeData, options)

         if (shouldStreamResponse) {

@@ -17,6 +17,7 @@ class RetrievalQAChain_Chains implements INode {
     baseClasses: string[]
     description: string
     inputs: INodeParams[]
+    badge: string

     constructor() {
         this.label = 'Retrieval QA Chain'
@@ -24,6 +25,7 @@ class RetrievalQAChain_Chains implements INode {
         this.version = 2.0
         this.type = 'RetrievalQAChain'
         this.icon = 'qa.svg'
+        this.badge = 'DEPRECATING'
         this.category = 'Chains'
         this.description = 'QA chain to answer a question based on the retrieved documents'
         this.baseClasses = [this.type, ...getBaseClasses(RetrievalQAChain)]
@@ -53,7 +55,7 @@ class RetrievalQAChain_Chains implements INode {
         const model = nodeData.inputs?.model as BaseLanguageModel
         const vectorStoreRetriever = nodeData.inputs?.vectorStoreRetriever as BaseRetriever

-        const chain = RetrievalQAChain.fromLLM(model, vectorStoreRetriever, { verbose: process.env.DEBUG === 'true' })
+        const chain = RetrievalQAChain.fromLLM(model, vectorStoreRetriever, { verbose: process.env.DEBUG === 'true' ? true : false })
         return chain
     }

@@ -80,7 +82,7 @@ class RetrievalQAChain_Chains implements INode {
         const obj = {
             query: input
         }
-        const loggerHandler = new ConsoleCallbackHandler(options.logger)
+        const loggerHandler = new ConsoleCallbackHandler(options.logger, options?.orgId)
         const callbacks = await additionalCallbacks(nodeData, options)

         if (shouldStreamResponse) {

@@ -194,7 +194,7 @@ class SqlDatabaseChain_Chains implements INode {
             topK,
             customPrompt
         )
-        const loggerHandler = new ConsoleCallbackHandler(options.logger)
+        const loggerHandler = new ConsoleCallbackHandler(options.logger, options?.orgId)
         const callbacks = await additionalCallbacks(nodeData, options)

         if (shouldStreamResponse) {
@@ -241,7 +241,7 @@ const getSQLDBChain = async (
     const obj: SqlDatabaseChainInput = {
         llm,
         database: db,
-        verbose: process.env.DEBUG === 'true',
+        verbose: process.env.DEBUG === 'true' ? true : false,
         topK: topK
     }

@@ -17,6 +17,7 @@ class VectorDBQAChain_Chains implements INode {
     baseClasses: string[]
     description: string
     inputs: INodeParams[]
+    badge: string

     constructor() {
         this.label = 'VectorDB QA Chain'
@@ -25,6 +26,7 @@ class VectorDBQAChain_Chains implements INode {
         this.type = 'VectorDBQAChain'
         this.icon = 'vectordb.svg'
         this.category = 'Chains'
+        this.badge = 'DEPRECATING'
         this.description = 'QA chain for vector databases'
         this.baseClasses = [this.type, ...getBaseClasses(VectorDBQAChain)]
         this.inputs = [
@@ -55,7 +57,7 @@ class VectorDBQAChain_Chains implements INode {

         const chain = VectorDBQAChain.fromLLM(model, vectorStore, {
             k: (vectorStore as any)?.k ?? 4,
-            verbose: process.env.DEBUG === 'true'
+            verbose: process.env.DEBUG === 'true' ? true : false
         })
         return chain
     }
@@ -84,7 +86,7 @@ class VectorDBQAChain_Chains implements INode {
             query: input
         }

-        const loggerHandler = new ConsoleCallbackHandler(options.logger)
+        const loggerHandler = new ConsoleCallbackHandler(options.logger, options?.orgId)
         const callbacks = await additionalCallbacks(nodeData, options)

         if (shouldStreamResponse) {

@@ -23,7 +23,7 @@ class AWSChatBedrock_ChatModels implements INode {
     constructor() {
         this.label = 'AWS ChatBedrock'
         this.name = 'awsChatBedrock'
-        this.version = 6.0
+        this.version = 6.1
         this.type = 'AWSChatBedrock'
         this.icon = 'aws.svg'
         this.category = 'Chat Models'
@@ -100,6 +100,16 @@ class AWSChatBedrock_ChatModels implements INode {
                     'Allow image input. Refer to the <a href="https://docs.flowiseai.com/using-flowise/uploads#image" target="_blank">docs</a> for more details.',
                 default: false,
                 optional: true
             },
+            {
+                label: 'Latency Optimized',
+                name: 'latencyOptimized',
+                type: 'boolean',
+                description:
+                    'Enable latency optimized configuration for supported models. Refer to the supported <a href="https://docs.aws.amazon.com/bedrock/latest/userguide/latency-optimized-inference.html" target="_blank">latency optimized models</a> for more details.',
+                default: false,
+                optional: true,
+                additionalParams: true
+            }
         ]
     }
@@ -122,6 +132,7 @@ class AWSChatBedrock_ChatModels implements INode {
         const iMax_tokens_to_sample = nodeData.inputs?.max_tokens_to_sample as string
         const cache = nodeData.inputs?.cache as BaseCache
         const streaming = nodeData.inputs?.streaming as boolean
+        const latencyOptimized = nodeData.inputs?.latencyOptimized as boolean

         const obj: ChatBedrockConverseInput = {
             region: iRegion,
@@ -131,6 +142,10 @@ class AWSChatBedrock_ChatModels implements INode {
             streaming: streaming ?? true
         }

+        if (latencyOptimized) {
+            obj.performanceConfig = { latency: 'optimized' }
+        }
+
         /**
          * Long-term credentials specified in LLM configuration are optional.
          * Bedrock's credential provider falls back to the AWS SDK to fetch

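Note: what the new toggle does downstream, grounded in the diff above — when 'Latency Optimized' is enabled, the `ChatBedrockConverse` input gains a `performanceConfig`. A usage sketch; the model id and region are illustrative:

```ts
import { ChatBedrockConverse } from '@langchain/aws'

const model = new ChatBedrockConverse({
    model: 'us.anthropic.claude-3-5-haiku-20241022-v1:0', // illustrative model id
    region: 'us-east-2',
    streaming: true,
    // Only honoured for models on AWS's latency-optimized inference list
    performanceConfig: { latency: 'optimized' }
})
```
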
@@ -1,9 +1,10 @@
-import { AzureOpenAIInput, AzureChatOpenAI as LangchainAzureChatOpenAI, ChatOpenAIFields, OpenAIClient } from '@langchain/openai'
+import { AzureOpenAIInput, AzureChatOpenAI as LangchainAzureChatOpenAI, ChatOpenAIFields } from '@langchain/openai'
 import { BaseCache } from '@langchain/core/caches'
 import { ICommonObject, IMultiModalOption, INode, INodeData, INodeOptionsValue, INodeParams } from '../../../src/Interface'
 import { getBaseClasses, getCredentialData, getCredentialParam } from '../../../src/utils'
 import { getModels, MODEL_TYPE } from '../../../src/modelLoader'
 import { AzureChatOpenAI } from './FlowiseAzureChatOpenAI'
+import { OpenAI as OpenAIClient } from 'openai'

 const serverCredentialsExists =
     !!process.env.AZURE_OPENAI_API_KEY &&
@@ -26,7 +27,7 @@ class AzureChatOpenAI_ChatModels implements INode {
     constructor() {
         this.label = 'Azure ChatOpenAI'
         this.name = 'azureChatOpenAI'
-        this.version = 7.0
+        this.version = 7.1
         this.type = 'AzureChatOpenAI'
         this.icon = 'Azure.svg'
         this.category = 'Chat Models'
@@ -154,6 +155,15 @@ class AzureChatOpenAI_ChatModels implements INode {
                 optional: false,
                 additionalParams: true
             },
+            {
+                label: 'Reasoning',
+                description: 'Whether the model supports reasoning. Only applicable for reasoning models.',
+                name: 'reasoning',
+                type: 'boolean',
+                default: false,
+                optional: true,
+                additionalParams: true
+            },
             {
                 label: 'Reasoning Effort',
                 description: 'Constrains effort on reasoning for reasoning models. Only applicable for o1 and o3 models.',
@@ -173,9 +183,34 @@ class AzureChatOpenAI_ChatModels implements INode {
                         name: 'high'
                     }
                 ],
-                default: 'medium',
-                optional: false,
-                additionalParams: true
+                additionalParams: true,
+                show: {
+                    reasoning: true
+                }
             },
+            {
+                label: 'Reasoning Summary',
+                description: `A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process`,
+                name: 'reasoningSummary',
+                type: 'options',
+                options: [
+                    {
+                        label: 'Auto',
+                        name: 'auto'
+                    },
+                    {
+                        label: 'Concise',
+                        name: 'concise'
+                    },
+                    {
+                        label: 'Detailed',
+                        name: 'detailed'
+                    }
+                ],
+                additionalParams: true,
+                show: {
+                    reasoning: true
+                }
+            }
         ]
     }
@@ -199,7 +234,8 @@ class AzureChatOpenAI_ChatModels implements INode {
         const topP = nodeData.inputs?.topP as string
         const basePath = nodeData.inputs?.basepath as string
         const baseOptions = nodeData.inputs?.baseOptions
-        const reasoningEffort = nodeData.inputs?.reasoningEffort as OpenAIClient.Chat.ChatCompletionReasoningEffort
+        const reasoningEffort = nodeData.inputs?.reasoningEffort as OpenAIClient.Chat.ChatCompletionReasoningEffort | null
+        const reasoningSummary = nodeData.inputs?.reasoningSummary as 'auto' | 'concise' | 'detailed' | null

         const credentialData = await getCredentialData(nodeData.credential ?? '', options)
         const azureOpenAIApiKey = getCredentialParam('azureOpenAIApiKey', credentialData, nodeData)
@@ -237,11 +273,22 @@ class AzureChatOpenAI_ChatModels implements INode {
                 console.error('Error parsing base options', exception)
             }
         }
-        if (modelName === 'o3-mini' || modelName.includes('o1')) {
+        if (modelName.includes('o1') || modelName.includes('o3') || modelName.includes('gpt-5')) {
             delete obj.temperature
-        }
-        if ((modelName.includes('o1') || modelName.includes('o3')) && reasoningEffort) {
-            obj.reasoningEffort = reasoningEffort
+            delete obj.stop
+            const reasoning: OpenAIClient.Reasoning = {}
+            if (reasoningEffort) {
+                reasoning.effort = reasoningEffort
+            }
+            if (reasoningSummary) {
+                reasoning.summary = reasoningSummary
+            }
+            obj.reasoning = reasoning
+
+            if (maxTokens) {
+                delete obj.maxTokens
+                obj.maxCompletionTokens = parseInt(maxTokens, 10)
+            }
         }

         const multiModalOption: IMultiModalOption = {

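Note: a compressed sketch of the new reasoning branch above — for o1/o3/gpt-5 deployments the node strips sampling controls and forwards an OpenAI `Reasoning` object, and budgets output via `maxCompletionTokens` instead of `maxTokens`. The values below are illustrative stand-ins, not the full node code:

```ts
import { OpenAI as OpenAIClient } from 'openai'

const modelName = 'o3-mini' // illustrative deployment name
const reasoningEffort: OpenAIClient.Chat.ChatCompletionReasoningEffort | null = 'medium'
const reasoningSummary: 'auto' | 'concise' | 'detailed' | null = 'auto'

const fields: Record<string, unknown> = { temperature: 1, stop: ['###'], maxTokens: 2048 }

if (modelName.includes('o1') || modelName.includes('o3') || modelName.includes('gpt-5')) {
    // Reasoning models reject sampling controls, so they are removed outright
    delete fields.temperature
    delete fields.stop

    const reasoning: OpenAIClient.Reasoning = {}
    if (reasoningEffort) reasoning.effort = reasoningEffort
    if (reasoningSummary) reasoning.summary = reasoningSummary
    fields.reasoning = reasoning

    // These models take max_completion_tokens rather than max_tokens
    fields.maxCompletionTokens = fields.maxTokens
    delete fields.maxTokens
}
```
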
@@ -6,6 +6,7 @@ export class AzureChatOpenAI extends LangchainAzureChatOpenAI implements IVision
     configuredModel: string
     configuredMaxToken?: number
     multiModalOption: IMultiModalOption
+    builtInTools: Record<string, any>[] = []
     id: string

     constructor(
@@ -27,7 +28,7 @@ export class AzureChatOpenAI extends LangchainAzureChatOpenAI implements IVision
     }

     revertToOriginalModel(): void {
-        this.modelName = this.configuredModel
+        this.model = this.configuredModel
         this.maxTokens = this.configuredMaxToken
     }

@@ -38,4 +39,8 @@ export class AzureChatOpenAI extends LangchainAzureChatOpenAI implements IVision
     setVisionModel(): void {
         // pass
     }
+
+    addBuiltInTools(builtInTool: Record<string, any>): void {
+        this.builtInTools.push(builtInTool)
+    }
 }

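Note: the `revertToOriginalModel` change writes the canonical `model` field rather than the deprecated `modelName` alias that LangChain's OpenAI wrappers keep for backwards compatibility. An illustrative sketch of the swap-and-revert shape, with simplified types:

```ts
// Simplified sketch, not the Flowise class: a model that can be temporarily
// swapped (e.g. to a vision-capable deployment) and then restored.
class ModelSwapSketch {
    model = 'gpt-4o' // canonical field read by newer LangChain code paths
    maxTokens?: number = 1024
    private readonly configuredModel = 'gpt-4o'
    private readonly configuredMaxToken?: number = 1024

    revertToOriginalModel(): void {
        this.model = this.configuredModel // the field this diff switches to
        this.maxTokens = this.configuredMaxToken
    }
}
```
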
@@ -4,12 +4,12 @@ Azure OpenAI Chat Model integration for Flowise

 ## 🌱 Env Variables

-| Variable | Description | Type | Default |
-| ---------------------------- | ----------------------------------------------------------------------------------------------- | ------------------------------------------------ | ----------------------------------- |
-| AZURE_OPENAI_API_KEY | Default `credential.azureOpenAIApiKey` for Azure OpenAI Model | String | |
-| AZURE_OPENAI_API_INSTANCE_NAME | Default `credential.azureOpenAIApiInstanceName` for Azure OpenAI Model | String | |
-| AZURE_OPENAI_API_DEPLOYMENT_NAME | Default `credential.azureOpenAIApiDeploymentName` for Azure OpenAI Model | String | |
-| AZURE_OPENAI_API_VERSION | Default `credential.azureOpenAIApiVersion` for Azure OpenAI Model | String | |
+| Variable                         | Description                                                               | Type   | Default |
+| -------------------------------- | ------------------------------------------------------------------------- | ------ | ------- |
+| AZURE_OPENAI_API_KEY             | Default `credential.azureOpenAIApiKey` for Azure OpenAI Model            | String |         |
+| AZURE_OPENAI_API_INSTANCE_NAME   | Default `credential.azureOpenAIApiInstanceName` for Azure OpenAI Model   | String |         |
+| AZURE_OPENAI_API_DEPLOYMENT_NAME | Default `credential.azureOpenAIApiDeploymentName` for Azure OpenAI Model | String |         |
+| AZURE_OPENAI_API_VERSION         | Default `credential.azureOpenAIApiVersion` for Azure OpenAI Model        | String |         |

 ## License

@@ -91,7 +91,7 @@ class ChatAnthropic_ChatModels implements INode {
                 label: 'Extended Thinking',
                 name: 'extendedThinking',
                 type: 'boolean',
-                description: 'Enable extended thinking for reasoning models such as Claude Sonnet 3.7',
+                description: 'Enable extended thinking for reasoning models such as Claude Sonnet 3.7 and Claude 4',
                 optional: true,
                 additionalParams: true
             },

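Note: a hedged usage sketch of what the 'Extended Thinking' toggle maps to on the underlying LangChain Anthropic model, assuming a recent `@langchain/anthropic`; the model id and token budget are illustrative, and `maxTokens` must exceed the thinking budget:

```ts
import { ChatAnthropic } from '@langchain/anthropic'

const model = new ChatAnthropic({
    model: 'claude-sonnet-4-20250514', // illustrative model id
    maxTokens: 4096,
    // Extended thinking: the model reasons in a scratchpad capped by budget_tokens
    thinking: { type: 'enabled', budget_tokens: 1024 }
})
```
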
Some files were not shown because too many files have changed in this diff.