Compare commits

...

11 Commits

Author SHA1 Message Date
Henry Heng 465005a503
Bugfix/Remove postgres vector store data on deletion (#5536)
Remove postgres vector store data on deletion

- Introduced a new `doc_id` column in MySQL, Postgres, and SQLite record managers to support document identification.
- Updated the `update` method to handle both string and object formats for keys, allowing for better flexibility in document updates.
- Enhanced `listKeys` method to filter by `doc_id` when provided in options.
- Updated vector store integrations to utilize the new `doc_id` filtering capability.
2025-11-30 12:01:36 +00:00
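
The `doc_id` change above can be pictured with a short sketch. This is a hedged illustration in TypeScript: the table name, column names, and options shape are assumptions for clarity, not the exact Flowise record-manager schema.

// Illustrative only: the real record managers implement this per driver (MySQL, Postgres, SQLite).
interface ListKeysOptions {
    groupIds?: string[]
    docId?: string // new: restrict returned keys to a single document
}

async function listKeys(
    query: (sql: string, params: unknown[]) => Promise<Array<{ key: string }>>,
    options: ListKeysOptions
): Promise<string[]> {
    const clauses: string[] = []
    const params: unknown[] = []
    if (options.docId) {
        params.push(options.docId)
        clauses.push(`doc_id = $${params.length}`)
    }
    if (options.groupIds?.length) {
        params.push(options.groupIds)
        clauses.push(`group_id = ANY($${params.length})`)
    }
    const where = clauses.length ? `WHERE ${clauses.join(' AND ')}` : ''
    const rows = await query(`SELECT key FROM upsertion_records ${where}`, params)
    return rows.map((row) => row.key)
}

This lets a vector store deletion look up exactly the record keys belonging to one document before removing the underlying rows.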
Henry Heng e6e0c2d07b
Feature/Gemini Code Interpreter (#5531)
add gemini code interpreter
2025-11-28 19:49:57 +00:00
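
The Agent node changes in this compare render Gemini code-interpreter parts into markdown text. Condensed from the `executableCode` / `codeExecutionResult` branches in the diff further below, the per-part rendering looks roughly like this (the helper name is invented here; in the diff the same logic is inlined in map callbacks):

const renderGeminiPart = (item: any): string => {
    if (item.type === 'executableCode' && item.executableCode) {
        // Show the generated code as a fenced block in the model's language
        const language = item.executableCode.language?.toLowerCase() || 'python'
        return `\n\`\`\`${language}\n${item.executableCode.code}\n\`\`\`\n`
    }
    if (item.type === 'codeExecutionResult' && item.codeExecutionResult) {
        const outcome = item.codeExecutionResult.outcome || 'OUTCOME_OK'
        const output = item.codeExecutionResult.output || ''
        if (outcome === 'OUTCOME_OK' && output) return `**Code Output:**\n\`\`\`\n${output}\n\`\`\`\n`
        if (outcome !== 'OUTCOME_OK') return `**Code Execution Error:**\n\`\`\`\n${output}\n\`\`\`\n`
        return ''
    }
    // Plain text parts pass through; other part types are handled elsewhere
    return item.text && (!item.type || item.type === 'text') ? item.text : ''
}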
Henry Heng 660a8e357a
Chore/Remove freeSolo for update state (#5530)
remove freeSolo for update state
2025-11-28 15:08:37 +00:00
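
For context: `freeSolo` let users type arbitrary values into the Update State key dropdown. With it removed, the Key input only accepts keys returned by the node's load method, so the parameter now reads the same way across the Agentflow node diffs below:

{
    label: 'Key',
    name: 'key',
    type: 'asyncOptions',
    loadMethod: 'listRuntimeStateKeys'
}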
Henry Heng 113180d03b
Feature/Gemini Nano Banana (#5529)
* add ability to support gemini nano banana image generation

* increment Agent node version
2025-11-28 13:10:14 +00:00
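
The image-generation support added here saves Gemini `inlineData` parts to storage and swaps the base64 payload for a file reference. A condensed sketch of the persistence step, based on the `saveGeminiInlineImage` helper in the utils diff below (the `save` callback stands in for `addSingleFileToStorage`):

const persistInlineImage = async (
    part: any,
    save: (buffer: Buffer, fileName: string, mimeType: string) => Promise<string>
) => {
    if (!part?.inlineData?.data || !part.inlineData.mimeType) return null
    const buffer = Buffer.from(part.inlineData.data, 'base64')
    const extension = part.inlineData.mimeType.includes('jpeg') ? 'jpg' : 'png'
    const fileName = `gemini_generated_image_${Date.now()}.${extension}`
    const filePath = await save(buffer, fileName, part.inlineData.mimeType)
    // The message content is later rewritten to this stored-file reference
    return { type: 'stored-file', name: fileName, mime: part.inlineData.mimeType, path: filePath }
}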
Nikitas Papadopoulos 069ba28bc0
redis_keep_alive fix on usagecachemanager using keyv/redis (#5519) 2025-11-27 13:36:34 +00:00
dependabot[bot] 20db1597a4
chore(deps): bump nodemailer from 6.9.15 to 7.0.7 in /packages/server in the npm_and_yarn group across 1 directory (#5521)
* chore(deps): bump nodemailer

Bumps the npm_and_yarn group with 1 update in the /packages/server directory: [nodemailer](https://github.com/nodemailer/nodemailer).


Updates `nodemailer` from 6.9.15 to 7.0.7
- [Release notes](https://github.com/nodemailer/nodemailer/releases)
- [Changelog](https://github.com/nodemailer/nodemailer/blob/master/CHANGELOG.md)
- [Commits](https://github.com/nodemailer/nodemailer/compare/v6.9.15...v7.0.7)

---
updated-dependencies:
- dependency-name: nodemailer
  dependency-version: 7.0.7
  dependency-type: direct:production
  dependency-group: npm_and_yarn
...

Signed-off-by: dependabot[bot] <support@github.com>

* update pnpm lock

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Henry <hzj94@hotmail.com>
2025-11-27 13:16:28 +00:00
dependabot[bot] 478a294095
chore(deps): bump multer from 1.4.5-lts.1 to 2.0.2 in /packages/server in the npm_and_yarn group across 1 directory (#5522)
* chore(deps): bump multer

Bumps the npm_and_yarn group with 1 update in the /packages/server directory: [multer](https://github.com/expressjs/multer).


Updates `multer` from 1.4.5-lts.1 to 2.0.2
- [Release notes](https://github.com/expressjs/multer/releases)
- [Changelog](https://github.com/expressjs/multer/blob/main/CHANGELOG.md)
- [Commits](https://github.com/expressjs/multer/compare/v1.4.5-lts.1...v2.0.2)

---
updated-dependencies:
- dependency-name: multer
  dependency-version: 2.0.2
  dependency-type: direct:production
  dependency-group: npm_and_yarn
...

Signed-off-by: dependabot[bot] <support@github.com>

* update pnpm lock

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Henry <hzj94@hotmail.com>
2025-11-27 13:01:31 +00:00
Nikitas Papadopoulos 6a59af11e6
Feature: Add access to chat history and other useful variables in post-processing (#5511)
* access chat history and other useful variables in post-processing

* cloning data to prevent mutations in post-processing

* Enhance post-processing capabilities by adding support for additional variables and improving the UI for available variables display. Update CustomFunction implementations to utilize post-processing options consistently across components.

---------

Co-authored-by: Henry <hzj94@hotmail.com>
2025-11-27 12:59:00 +00:00
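
Based on the CustomFunction diff further below, the `flow` object exposed to post-processing functions now carries roughly these fields (shape reconstructed from the diff; treat it as illustrative):

interface PostProcessingFlow {
    input: string
    state: Record<string, any>
    chatflowId: string
    sessionId: string
    chatId: string
    rawOutput: string // the LLM response before post-processing
    chatHistory: Array<{ role: string; content: string }> // assumed message shape
    sourceDocuments?: any[]
    usedTools?: any[]
    artifacts?: any[]
    fileAnnotations?: any[]
}

// Example post-processing body: annotate the raw output with source counts
const postProcess = ($flow: PostProcessingFlow): string => {
    const sources = $flow.sourceDocuments?.length ?? 0
    return `${$flow.rawOutput}\n\n(${sources} sources, ${$flow.chatHistory.length} prior messages)`
}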
simon-song-wd 562370b8e2
Pin kill-port package version (#5523)
* Update package.json

* Update package.json

* Update pnpm-lock.yaml

* update pnpm lock

---------

Co-authored-by: Simon Song <simon.song@evisort.com>
Co-authored-by: Henry <hzj94@hotmail.com>
2025-11-27 11:25:58 +00:00
Ilango 4e92db6910
feat: handle 429 errors and redirect to rate-limited page (#5440)
* feat: handle 429 errors and redirect to rate-limited page

* fix: simplify rate-limited page and better 429 error handling

* fix: status code in quotaUsage

* update: add back to home button rate-limited page

* chore: fix typos in docker/worker/Dockerfile (#5435)

Fix typos in docker/worker/Dockerfile

* chore: fix typos in packages/components/nodes/agentflow/Condition/Condition.ts (#5436)

Fix typos in packages/components/nodes/agentflow/Condition/Condition.ts

* chore: fix typos in packages/components/nodes/chatmodels/ChatHuggingFace/ChatHuggingFace.ts (#5437)

Fix typos in packages/components/nodes/chatmodels/ChatHuggingFace/ChatHuggingFace.ts

* chore: fix typos in packages/components/nodes/prompts/ChatPromptTemplate/ChatPromptTemplate.ts (#5438)

Fix typos in packages/components/nodes/prompts/ChatPromptTemplate/ChatPromptTemplate.ts

* docs: fix typos in packages/ui/src/layout/MainLayout/Sidebar/MenuList/NavGroup/index.jsx (#5444)

Fix typos in packages/ui/src/layout/MainLayout/Sidebar/MenuList/NavGroup/index.jsx

* docs: fix typos in packages/components/nodes/engine/SubQuestionQueryEngine/SubQuestionQueryEngine.ts (#5446)

Fix typos in packages/components/nodes/engine/SubQuestionQueryEngine/SubQuestionQueryEngine.ts

* docs: fix typos in packages/components/nodes/embeddings/AWSBedrockEmbedding/AWSBedrockEmbedding.ts (#5447)

Fix typos in packages/components/nodes/embeddings/AWSBedrockEmbedding/AWSBedrockEmbedding.ts

* docs: fix typos in packages/server/README.md (#5445)

Fix typos in packages/server/README.md

* Bugfix/Supervisor Node AzureChatOpenAI (#5448)

Integrate AzureChatOpenAI into the Supervisor node to handle user requests alongside ChatOpenAI. This enhancement allows for improved multi-agent conversation management.

* Chore/JSON Array (#5467)

* add separate by JSON object

* add file check for Unstructured

* Enhance JSON DocumentLoader: Update label and description for 'Separate by JSON Object' option, and add type check for JSON objects in array processing.

* Chore/Remove Deprecated File Path Unstructured (#5478)

* Refactor UnstructuredFile and UnstructuredFolder loaders to remove deprecated file path handling and enhance folder path validation. Ensure folder paths are sanitized and validated against path traversal attacks.

* Update UnstructuredFolder.ts

* feat(security): enhance file path validation and implement non-root D… (#5474)

* feat(security): enhance file path validation and implement non-root Docker user

- Validate resolved full file paths including workspace boundaries in SecureFileStore
- Resolve paths before validation in readFile and writeFile operations
- Run Docker container as non-root flowise user (uid/gid 1001)
- Apply proper file ownership and permissions for application files

Prevents path traversal attacks and follows container security best practices
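
A minimal sketch of the resolve-then-validate check described above, assuming a single workspace root; the helper name is illustrative and the real SecureFileStore logic is more involved:

import path from 'path'

// Resolve first, then validate: symlink-free traversal like '../../etc/passwd'
// collapses during resolution and is caught by the boundary check.
const isInsideWorkspace = (workspaceRoot: string, requestedPath: string): boolean => {
    const root = path.resolve(workspaceRoot)
    const resolved = path.resolve(root, requestedPath)
    // path.relative yields '..'-prefixed segments when resolved escapes the root
    const relative = path.relative(root, resolved)
    return relative !== '' && !relative.startsWith('..') && !path.isAbsolute(relative)
}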

* Add sensitive system directory validation and Flowise internal file protection

* Update Dockerfile to use default node user

* update validation patterns to include additional system binary directories (/usr/bin, /usr/sbin, /usr/local/bin)

* added isSafeBrowserExecutable function to validate browser executable paths for Playwright and Puppeteer loaders

---------

Co-authored-by: taraka-vishnumolakala <taraka.vishnumolakala@workday.com>
Co-authored-by: Henry Heng <henryheng@flowiseai.com>
Co-authored-by: Henry <hzj94@hotmail.com>

* Chore/docker file non root (#5479)

* update dockerfile

* Update Dockerfile

* remove read write file tools and imports (#5480)

* Bugfix/Custom Function Libraries (#5472)

* Updated the executeJavaScriptCode function to automatically detect and install required libraries from import/require statements in the provided code.

* Update utils.ts

* lint-fix
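
A hedged sketch of the import/require detection described in the Custom Function bugfix above; the regexes and helper name are illustrative, not the exact utils.ts implementation:

// Collect bare module names referenced by require() or import statements,
// skipping relative and absolute paths, which need no installation.
const detectRequiredModules = (code: string): string[] => {
    const found = new Set<string>()
    const patterns = [
        /require\(\s*['"]([^'"]+)['"]\s*\)/g,
        /import\s+(?:[\w*{}\s,]+\s+from\s+)?['"]([^'"]+)['"]/g
    ]
    for (const re of patterns) {
        for (const match of code.matchAll(re)) {
            const name = match[1]
            if (!name.startsWith('.') && !name.startsWith('/')) found.add(name)
        }
    }
    return [...found]
}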

* Release/3.0.11 (#5481)

flowise@3.0.11

* flowise@3.0.11

* Chore/Disable Unstructured Folder (#5483)

* commented out unstructured folder node

* Update packages/components/nodes/documentloaders/Unstructured/UnstructuredFolder.ts

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

---------

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* update: condition for handling 429 errors

* update: handle rate limit errors in auth pages

* fix: crash due to missing import

---------

Co-authored-by: Lê Nam Khánh <55955273+khanhkhanhlele@users.noreply.github.com>
Co-authored-by: Henry Heng <henryheng@flowiseai.com>
Co-authored-by: Taraka Vishnumolakala <tvishnumolakala@gmail.com>
Co-authored-by: taraka-vishnumolakala <taraka.vishnumolakala@workday.com>
Co-authored-by: Henry <hzj94@hotmail.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-11-27 11:23:49 +00:00
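
The 429 handling above amounts to intercepting rate-limit responses client-side and redirecting. A hedged sketch, assuming an axios-style interceptor; the route name and registration point are assumptions, not the exact Flowise UI wiring:

import axios from 'axios'

axios.interceptors.response.use(
    (response) => response,
    (error) => {
        if (error?.response?.status === 429) {
            // Route the user to the dedicated rate-limited page instead of surfacing a raw error
            window.location.assign('/rate-limited')
        }
        return Promise.reject(error)
    }
)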
Henry Heng 7cc2c13694
Chore/Opus 4.5 (#5520)
* add gemini flash

* add gemini flash to vertex

* add gemini-1.5-flash-preview to vertex

* add azure gpt 4o

* add claude 3.5 sonnet

* add mistral nemo

* add groq llama3.1

* add gpt4o-mini to azure

* o1 mini

* add groq llama 3.2

* update anthropic models

* add 3.5 haiku

* update vertex embedding models

* add azure o1 models

* add o3 mini

* add wolframalpha tool

* Update pnpm-lock.yaml

* add claude sonnet 3.7 to vertex and bedrock

* Update pnpm-lock.yaml

* update gemini

* Update pnpm-lock.yaml

* add opus 4.5

* Update CONTRIBUTING-ZH.md

* Update compose.yaml
2025-11-26 15:59:30 +00:00
51 changed files with 1846 additions and 588 deletions

View File

@ -189,7 +189,7 @@ Deploy Flowise self-hosted in your existing infrastructure, we support various [
- [Railway](https://docs.flowiseai.com/configuration/deployment/railway)
[![Deploy on Railway](https://railway.app/button.svg)](https://railway.app/template/pn4G8S?referralCode=WVNPD9)
- [Northflank](https://northflank.com/stacks/deploy-flowiseai)
[![Deploy to Northflank](https://assets.northflank.com/deploy_to_northflank_smm_36700fb050.svg)](https://northflank.com/stacks/deploy-flowiseai)

View File

@ -1,38 +1,38 @@
### Responsible Disclosure Policy
At Flowise, we prioritize security and continuously work to safeguard our systems. However, vulnerabilities can still exist. If you identify a security issue, please report it to us so we can address it promptly. Your cooperation helps us better protect our platform and users.
### Out of scope vulnerabilities
- Clickjacking on pages without sensitive actions
- CSRF on unauthenticated/logout/login pages
- Attacks requiring MITM (Man-in-the-Middle) or physical device access
- Social engineering attacks
- Activities that cause service disruption (DoS)
- Content spoofing and text injection without a valid attack vector
- Email spoofing
- Absence of DNSSEC, CAA, CSP headers
- Missing Secure or HTTP-only flag on non-sensitive cookies
- Dead links
- User enumeration
### Reporting Guidelines
- Submit your findings to https://github.com/FlowiseAI/Flowise/security
- Provide clear details to help us reproduce and fix the issue quickly.
### Disclosure Guidelines
- Do not publicly disclose vulnerabilities until we have assessed, resolved, and notified affected users.
- If you plan to present your research (e.g., at a conference or in a blog), share a draft with us at least **30 days in advance** for review.
- Avoid including:
  - Data from any Flowise customer projects
  - Flowise user/customer information
  - Details about Flowise employees, contractors, or partners
### Response to Reports
- We will acknowledge your report within **5 business days** and provide an estimated resolution timeline.
- Your report will be kept **confidential**, and your details will not be shared without your consent.

We appreciate your efforts in helping us maintain a secure platform and look forward to working together to resolve any issues responsibly.

View File

@ -51,7 +51,7 @@
"eslint-plugin-react-hooks": "^4.6.0",
"eslint-plugin-unused-imports": "^2.0.0",
"husky": "^8.0.1",
"kill-port": "^2.0.1",
"kill-port": "2.0.1",
"lint-staged": "^13.0.3",
"prettier": "^2.7.1",
"pretty-quick": "^3.1.3",

View File

@ -3,6 +3,13 @@
{
"name": "awsChatBedrock",
"models": [
{
"label": "anthropic.claude-opus-4-5-20251101-v1:0",
"name": "anthropic.claude-opus-4-5-20251101-v1:0",
"description": "Claude 4.5 Opus",
"input_cost": 0.000005,
"output_cost": 0.000025
},
{
"label": "anthropic.claude-sonnet-4-5-20250929-v1:0",
"name": "anthropic.claude-sonnet-4-5-20250929-v1:0",
@ -505,6 +512,13 @@
{
"name": "chatAnthropic",
"models": [
{
"label": "claude-opus-4-5",
"name": "claude-opus-4-5",
"description": "Claude 4.5 Opus",
"input_cost": 0.000005,
"output_cost": 0.000025
},
{
"label": "claude-sonnet-4-5",
"name": "claude-sonnet-4-5",
@ -633,6 +647,12 @@
"input_cost": 0.00002,
"output_cost": 0.00012
},
{
"label": "gemini-3-pro-image-preview",
"name": "gemini-3-pro-image-preview",
"input_cost": 0.00002,
"output_cost": 0.00012
},
{
"label": "gemini-2.5-pro",
"name": "gemini-2.5-pro",
@ -645,6 +665,12 @@
"input_cost": 1.25e-6,
"output_cost": 0.00001
},
{
"label": "gemini-2.5-flash-image",
"name": "gemini-2.5-flash-image",
"input_cost": 1.25e-6,
"output_cost": 0.00001
},
{
"label": "gemini-2.5-flash-lite",
"name": "gemini-2.5-flash-lite",
@ -769,6 +795,13 @@
"input_cost": 1.25e-7,
"output_cost": 3.75e-7
},
{
"label": "claude-opus-4-5@20251101",
"name": "claude-opus-4-5@20251101",
"description": "Claude 4.5 Opus",
"input_cost": 0.000005,
"output_cost": 0.000025
},
{
"label": "claude-sonnet-4-5@20250929",
"name": "claude-sonnet-4-5@20250929",

View File

@ -22,21 +22,16 @@ import zodToJsonSchema from 'zod-to-json-schema'
import { getErrorMessage } from '../../../src/error'
import { DataSource } from 'typeorm'
import {
addImageArtifactsToMessages,
extractArtifactsFromResponse,
getPastChatHistoryImageMessages,
getUniqueImageMessages,
processMessagesWithImages,
replaceBase64ImagesWithFileReferences,
replaceInlineDataWithFileReferences,
updateFlowState
} from '../utils'
import {
convertMultiOptionsToStringArray,
getCredentialData,
getCredentialParam,
processTemplateVariables,
configureStructuredOutput
} from '../../../src/utils'
import { addSingleFileToStorage } from '../../../src/storageUtils'
import fetch from 'node-fetch'
import { convertMultiOptionsToStringArray, processTemplateVariables, configureStructuredOutput } from '../../../src/utils'
interface ITool {
agentSelectedTool: string
@ -87,7 +82,7 @@ class Agent_Agentflow implements INode {
constructor() {
this.label = 'Agent'
this.name = 'agentAgentflow'
this.version = 2.2
this.version = 3.2
this.type = 'Agent'
this.category = 'Agent Flows'
this.description = 'Dynamically choose and utilize tools during runtime, enabling multi-step reasoning'
@ -182,6 +177,11 @@ class Agent_Agentflow implements INode {
label: 'Google Search',
name: 'googleSearch',
description: 'Search real-time web content'
},
{
label: 'Code Execution',
name: 'codeExecution',
description: 'Write and run Python code in a sandboxed environment'
}
],
show: {
@ -514,8 +514,7 @@ class Agent_Agentflow implements INode {
label: 'Key',
name: 'key',
type: 'asyncOptions',
loadMethod: 'listRuntimeStateKeys',
freeSolo: true
loadMethod: 'listRuntimeStateKeys'
},
{
label: 'Value',
@ -1072,12 +1071,6 @@ class Agent_Agentflow implements INode {
llmIds = await analyticHandlers.onLLMStart(llmLabel, messages, options.parentTraceIds)
}
// Track execution time
const startTime = Date.now()
// Get initial response from LLM
const sseStreamer: IServerSideEventStreamer | undefined = options.sseStreamer
// Handle tool calls with support for recursion
let usedTools: IUsedTool[] = []
let sourceDocuments: Array<any> = []
@ -1090,12 +1083,24 @@ class Agent_Agentflow implements INode {
const messagesBeforeToolCalls = [...messages]
let _toolCallMessages: BaseMessageLike[] = []
/**
* Add image artifacts from previous assistant responses as user messages
* Images are converted from FILE-STORAGE::<image_path> to base64 image_url format
*/
await addImageArtifactsToMessages(messages, options)
// Check if this is humanInput for tool calls
const _humanInput = nodeData.inputs?.humanInput
const humanInput: IHumanInput = typeof _humanInput === 'string' ? JSON.parse(_humanInput) : _humanInput
const humanInputAction = options.humanInputAction
const iterationContext = options.iterationContext
// Track execution time
const startTime = Date.now()
// Get initial response from LLM
const sseStreamer: IServerSideEventStreamer | undefined = options.sseStreamer
if (humanInput) {
if (humanInput.type !== 'proceed' && humanInput.type !== 'reject') {
throw new Error(`Invalid human input type. Expected 'proceed' or 'reject', but got '${humanInput.type}'`)
@ -1205,7 +1210,15 @@ class Agent_Agentflow implements INode {
// Skip this if structured output is enabled - it will be streamed after conversion
let finalResponse = ''
if (response.content && Array.isArray(response.content)) {
finalResponse = response.content.map((item: any) => item.text).join('\n')
finalResponse = response.content
.map((item: any) => {
if ((item.text && !item.type) || (item.type === 'text' && item.text)) {
return item.text
}
return ''
})
.filter((text: string) => text)
.join('\n')
} else if (response.content && typeof response.content === 'string') {
finalResponse = response.content
} else {
@ -1234,9 +1247,53 @@ class Agent_Agentflow implements INode {
// Prepare final response and output object
let finalResponse = ''
if (response.content && Array.isArray(response.content)) {
finalResponse = response.content.map((item: any) => item.text).join('\n')
// Process items and concatenate consecutive text items
const processedParts: string[] = []
let currentTextBuffer = ''
for (const item of response.content) {
const itemAny = item as any
const isTextItem = (itemAny.text && !itemAny.type) || (itemAny.type === 'text' && itemAny.text)
if (isTextItem) {
// Accumulate consecutive text items
currentTextBuffer += itemAny.text
} else {
// Flush accumulated text before processing other types
if (currentTextBuffer) {
processedParts.push(currentTextBuffer)
currentTextBuffer = ''
}
// Process non-text items
if (itemAny.type === 'executableCode' && itemAny.executableCode) {
// Format executable code as a code block
const language = itemAny.executableCode.language?.toLowerCase() || 'python'
processedParts.push(`\n\`\`\`${language}\n${itemAny.executableCode.code}\n\`\`\`\n`)
} else if (itemAny.type === 'codeExecutionResult' && itemAny.codeExecutionResult) {
// Format code execution result
const outcome = itemAny.codeExecutionResult.outcome || 'OUTCOME_OK'
const output = itemAny.codeExecutionResult.output || ''
if (outcome === 'OUTCOME_OK' && output) {
processedParts.push(`**Code Output:**\n\`\`\`\n${output}\n\`\`\`\n`)
} else if (outcome !== 'OUTCOME_OK') {
processedParts.push(`**Code Execution Error:**\n\`\`\`\n${output}\n\`\`\`\n`)
}
}
}
}
// Flush any remaining text
if (currentTextBuffer) {
processedParts.push(currentTextBuffer)
}
finalResponse = processedParts.filter((text) => text).join('\n')
} else if (response.content && typeof response.content === 'string') {
finalResponse = response.content
} else if (response.content === '') {
// Empty response content, this could happen when there is only image data
finalResponse = ''
} else {
finalResponse = JSON.stringify(response, null, 2)
}
@ -1252,10 +1309,13 @@ class Agent_Agentflow implements INode {
}
}
// Extract artifacts from annotations in response metadata
// Extract artifacts from annotations in response metadata and replace inline data
if (response.response_metadata) {
const { artifacts: extractedArtifacts, fileAnnotations: extractedFileAnnotations } =
await this.extractArtifactsFromResponse(response.response_metadata, newNodeData, options)
const {
artifacts: extractedArtifacts,
fileAnnotations: extractedFileAnnotations,
savedInlineImages
} = await extractArtifactsFromResponse(response.response_metadata, newNodeData, options)
if (extractedArtifacts.length > 0) {
artifacts = [...artifacts, ...extractedArtifacts]
@ -1273,6 +1333,11 @@ class Agent_Agentflow implements INode {
sseStreamer.streamFileAnnotationsEvent(chatId, fileAnnotations)
}
}
// Replace inlineData base64 with file references in the response
if (savedInlineImages && savedInlineImages.length > 0) {
replaceInlineDataWithFileReferences(response, savedInlineImages)
}
}
// Replace sandbox links with proper download URLs. Example: [Download the script](sandbox:/mnt/data/dummy_bar_graph.py)
@ -1331,9 +1396,15 @@ class Agent_Agentflow implements INode {
// Process template variables in state
newState = processTemplateVariables(newState, finalResponse)
/**
* Remove the temporarily added image artifact messages before storing
* This is to avoid storing the actual base64 data into database
*/
const messagesToStore = messages.filter((msg: any) => !msg._isTemporaryImageMessage)
// Replace the actual messages array with one that includes the file references for images instead of base64 data
const messagesWithFileReferences = replaceBase64ImagesWithFileReferences(
messages,
messagesToStore,
runtimeImageMessagesWithFileRef,
pastImageMessagesWithFileRef
)
@ -1472,7 +1543,12 @@ class Agent_Agentflow implements INode {
// Handle Gemini googleSearch tool
if (groundingMetadata && groundingMetadata.webSearchQueries && Array.isArray(groundingMetadata.webSearchQueries)) {
// Check for duplicates
if (!builtInUsedTools.find((tool) => tool.tool === 'googleSearch')) {
const isDuplicate = builtInUsedTools.find(
(tool) =>
tool.tool === 'googleSearch' &&
JSON.stringify((tool.toolInput as any)?.queries) === JSON.stringify(groundingMetadata.webSearchQueries)
)
if (!isDuplicate) {
builtInUsedTools.push({
tool: 'googleSearch',
toolInput: {
@ -1486,7 +1562,12 @@ class Agent_Agentflow implements INode {
// Handle Gemini urlContext tool
if (urlContextMetadata && urlContextMetadata.urlMetadata && Array.isArray(urlContextMetadata.urlMetadata)) {
// Check for duplicates
if (!builtInUsedTools.find((tool) => tool.tool === 'urlContext')) {
const isDuplicate = builtInUsedTools.find(
(tool) =>
tool.tool === 'urlContext' &&
JSON.stringify((tool.toolInput as any)?.urlMetadata) === JSON.stringify(urlContextMetadata.urlMetadata)
)
if (!isDuplicate) {
builtInUsedTools.push({
tool: 'urlContext',
toolInput: {
@ -1497,45 +1578,53 @@ class Agent_Agentflow implements INode {
}
}
return builtInUsedTools
}
// Handle Gemini codeExecution tool
if (response.content && Array.isArray(response.content)) {
for (let i = 0; i < response.content.length; i++) {
const item = response.content[i]
/**
* Saves base64 image data to storage and returns file information
*/
private async saveBase64Image(
outputItem: any,
options: ICommonObject
): Promise<{ filePath: string; fileName: string; totalSize: number } | null> {
try {
if (!outputItem.result) {
return null
if (item.type === 'executableCode' && item.executableCode) {
const language = item.executableCode.language || 'PYTHON'
const code = item.executableCode.code || ''
let toolOutput = ''
// Check for duplicates
const isDuplicate = builtInUsedTools.find(
(tool) =>
tool.tool === 'codeExecution' &&
(tool.toolInput as any)?.language === language &&
(tool.toolInput as any)?.code === code
)
if (isDuplicate) {
continue
}
// Check the next item for the output
const nextItem = i + 1 < response.content.length ? response.content[i + 1] : null
if (nextItem) {
if (nextItem.type === 'codeExecutionResult' && nextItem.codeExecutionResult) {
const outcome = nextItem.codeExecutionResult.outcome
const output = nextItem.codeExecutionResult.output || ''
toolOutput = outcome === 'OUTCOME_OK' ? output : `Error: ${output}`
} else if (nextItem.type === 'inlineData') {
toolOutput = 'Generated image data'
}
}
builtInUsedTools.push({
tool: 'codeExecution',
toolInput: {
language,
code
},
toolOutput
})
}
}
// Extract base64 data and create buffer
const base64Data = outputItem.result
const imageBuffer = Buffer.from(base64Data, 'base64')
// Determine file extension and MIME type
const outputFormat = outputItem.output_format || 'png'
const fileName = `generated_image_${outputItem.id || Date.now()}.${outputFormat}`
const mimeType = outputFormat === 'png' ? 'image/png' : 'image/jpeg'
// Save the image using the existing storage utility
const { path, totalSize } = await addSingleFileToStorage(
mimeType,
imageBuffer,
fileName,
options.orgId,
options.chatflowid,
options.chatId
)
return { filePath: path, fileName, totalSize }
} catch (error) {
console.error('Error saving base64 image:', error)
return null
}
return builtInUsedTools
}
/**
@ -1713,8 +1802,26 @@ class Agent_Agentflow implements INode {
if (typeof chunk === 'string') {
content = chunk
} else if (Array.isArray(chunk.content) && chunk.content.length > 0) {
const contents = chunk.content as MessageContentText[]
content = contents.map((item) => item.text).join('')
content = chunk.content
.map((item: any) => {
if ((item.text && !item.type) || (item.type === 'text' && item.text)) {
return item.text
} else if (item.type === 'executableCode' && item.executableCode) {
const language = item.executableCode.language?.toLowerCase() || 'python'
return `\n\`\`\`${language}\n${item.executableCode.code}\n\`\`\`\n`
} else if (item.type === 'codeExecutionResult' && item.codeExecutionResult) {
const outcome = item.codeExecutionResult.outcome || 'OUTCOME_OK'
const output = item.codeExecutionResult.output || ''
if (outcome === 'OUTCOME_OK' && output) {
return `**Code Output:**\n\`\`\`\n${output}\n\`\`\`\n`
} else if (outcome !== 'OUTCOME_OK') {
return `**Code Execution Error:**\n\`\`\`\n${output}\n\`\`\`\n`
}
}
return ''
})
.filter((text: string) => text)
.join('')
} else if (chunk.content) {
content = chunk.content.toString()
}
@ -1728,9 +1835,16 @@ class Agent_Agentflow implements INode {
console.error('Error during streaming:', error)
throw error
}
// Only convert to string if all content items are text (no inlineData or other special types)
if (Array.isArray(response.content) && response.content.length > 0) {
const responseContents = response.content as MessageContentText[]
response.content = responseContents.map((item) => item.text).join('')
const hasNonTextContent = response.content.some(
(item: any) => item.type === 'inlineData' || item.type === 'executableCode' || item.type === 'codeExecutionResult'
)
if (!hasNonTextContent) {
const responseContents = response.content as MessageContentText[]
response.content = responseContents.map((item) => item.text).join('')
}
}
return response
}
@ -2484,190 +2598,6 @@ class Agent_Agentflow implements INode {
return { response: newResponse, usedTools, sourceDocuments, artifacts, totalTokens, isWaitingForHumanInput }
}
/**
* Extracts artifacts from response metadata (both annotations and built-in tools)
*/
private async extractArtifactsFromResponse(
responseMetadata: any,
modelNodeData: INodeData,
options: ICommonObject
): Promise<{ artifacts: any[]; fileAnnotations: any[] }> {
const artifacts: any[] = []
const fileAnnotations: any[] = []
if (!responseMetadata?.output || !Array.isArray(responseMetadata.output)) {
return { artifacts, fileAnnotations }
}
for (const outputItem of responseMetadata.output) {
// Handle container file citations from annotations
if (outputItem.type === 'message' && outputItem.content && Array.isArray(outputItem.content)) {
for (const contentItem of outputItem.content) {
if (contentItem.annotations && Array.isArray(contentItem.annotations)) {
for (const annotation of contentItem.annotations) {
if (annotation.type === 'container_file_citation' && annotation.file_id && annotation.filename) {
try {
// Download and store the file content
const downloadResult = await this.downloadContainerFile(
annotation.container_id,
annotation.file_id,
annotation.filename,
modelNodeData,
options
)
if (downloadResult) {
const fileType = this.getArtifactTypeFromFilename(annotation.filename)
if (fileType === 'png' || fileType === 'jpeg' || fileType === 'jpg') {
const artifact = {
type: fileType,
data: downloadResult.filePath
}
artifacts.push(artifact)
} else {
fileAnnotations.push({
filePath: downloadResult.filePath,
fileName: annotation.filename
})
}
}
} catch (error) {
console.error('Error processing annotation:', error)
}
}
}
}
}
}
// Handle built-in tool artifacts (like image generation)
if (outputItem.type === 'image_generation_call' && outputItem.result) {
try {
const savedImageResult = await this.saveBase64Image(outputItem, options)
if (savedImageResult) {
// Replace the base64 result with the file path in the response metadata
outputItem.result = savedImageResult.filePath
// Create artifact in the same format as other image artifacts
const fileType = this.getArtifactTypeFromFilename(savedImageResult.fileName)
artifacts.push({
type: fileType,
data: savedImageResult.filePath
})
}
} catch (error) {
console.error('Error processing image generation artifact:', error)
}
}
}
return { artifacts, fileAnnotations }
}
/**
* Downloads file content from container file citation
*/
private async downloadContainerFile(
containerId: string,
fileId: string,
filename: string,
modelNodeData: INodeData,
options: ICommonObject
): Promise<{ filePath: string; totalSize: number } | null> {
try {
const credentialData = await getCredentialData(modelNodeData.credential ?? '', options)
const openAIApiKey = getCredentialParam('openAIApiKey', credentialData, modelNodeData)
if (!openAIApiKey) {
console.warn('No OpenAI API key available for downloading container file')
return null
}
// Download the file using OpenAI Container API
const response = await fetch(`https://api.openai.com/v1/containers/${containerId}/files/${fileId}/content`, {
method: 'GET',
headers: {
Accept: '*/*',
Authorization: `Bearer ${openAIApiKey}`
}
})
if (!response.ok) {
console.warn(
`Failed to download container file ${fileId} from container ${containerId}: ${response.status} ${response.statusText}`
)
return null
}
// Extract the binary data from the Response object
const data = await response.arrayBuffer()
const dataBuffer = Buffer.from(data)
const mimeType = this.getMimeTypeFromFilename(filename)
// Store the file using the same storage utility as OpenAIAssistant
const { path, totalSize } = await addSingleFileToStorage(
mimeType,
dataBuffer,
filename,
options.orgId,
options.chatflowid,
options.chatId
)
return { filePath: path, totalSize }
} catch (error) {
console.error('Error downloading container file:', error)
return null
}
}
/**
* Gets MIME type from filename extension
*/
private getMimeTypeFromFilename(filename: string): string {
const extension = filename.toLowerCase().split('.').pop()
const mimeTypes: { [key: string]: string } = {
png: 'image/png',
jpg: 'image/jpeg',
jpeg: 'image/jpeg',
gif: 'image/gif',
pdf: 'application/pdf',
txt: 'text/plain',
csv: 'text/csv',
json: 'application/json',
html: 'text/html',
xml: 'application/xml'
}
return mimeTypes[extension || ''] || 'application/octet-stream'
}
/**
* Gets artifact type from filename extension for UI rendering
*/
private getArtifactTypeFromFilename(filename: string): string {
const extension = filename.toLowerCase().split('.').pop()
const artifactTypes: { [key: string]: string } = {
png: 'png',
jpg: 'jpeg',
jpeg: 'jpeg',
html: 'html',
htm: 'html',
md: 'markdown',
markdown: 'markdown',
json: 'json',
js: 'javascript',
javascript: 'javascript',
tex: 'latex',
latex: 'latex',
txt: 'text',
csv: 'text',
pdf: 'text'
}
return artifactTypes[extension || ''] || 'text'
}
/**
* Processes sandbox links in the response text and converts them to file annotations
*/

View File

@ -60,7 +60,7 @@ class CustomFunction_Agentflow implements INode {
constructor() {
this.label = 'Custom Function'
this.name = 'customFunctionAgentflow'
this.version = 1.0
this.version = 1.1
this.type = 'CustomFunction'
this.category = 'Agent Flows'
this.description = 'Execute custom function'
@ -107,8 +107,7 @@ class CustomFunction_Agentflow implements INode {
label: 'Key',
name: 'key',
type: 'asyncOptions',
loadMethod: 'listRuntimeStateKeys',
freeSolo: true
loadMethod: 'listRuntimeStateKeys'
},
{
label: 'Value',
@ -134,7 +133,7 @@ class CustomFunction_Agentflow implements INode {
async run(nodeData: INodeData, input: string, options: ICommonObject): Promise<any> {
const javascriptFunction = nodeData.inputs?.customFunctionJavascriptFunction as string
const functionInputVariables = nodeData.inputs?.customFunctionInputVariables as ICustomFunctionInputVariables[]
const functionInputVariables = (nodeData.inputs?.customFunctionInputVariables as ICustomFunctionInputVariables[]) ?? []
const _customFunctionUpdateState = nodeData.inputs?.customFunctionUpdateState
const state = options.agentflowRuntime?.state as ICommonObject
@ -147,11 +146,17 @@ class CustomFunction_Agentflow implements INode {
const variables = await getVars(appDataSource, databaseEntities, nodeData, options)
const flow = {
input,
state,
chatflowId: options.chatflowid,
sessionId: options.sessionId,
chatId: options.chatId,
input,
state
rawOutput: options.postProcessing?.rawOutput || '',
chatHistory: options.postProcessing?.chatHistory || [],
sourceDocuments: options.postProcessing?.sourceDocuments,
usedTools: options.postProcessing?.usedTools,
artifacts: options.postProcessing?.artifacts,
fileAnnotations: options.postProcessing?.fileAnnotations
}
// Create additional sandbox variables for custom function inputs

View File

@ -30,7 +30,7 @@ class ExecuteFlow_Agentflow implements INode {
constructor() {
this.label = 'Execute Flow'
this.name = 'executeFlowAgentflow'
this.version = 1.1
this.version = 1.2
this.type = 'ExecuteFlow'
this.category = 'Agent Flows'
this.description = 'Execute another flow'
@ -102,8 +102,7 @@ class ExecuteFlow_Agentflow implements INode {
label: 'Key',
name: 'key',
type: 'asyncOptions',
loadMethod: 'listRuntimeStateKeys',
freeSolo: true
loadMethod: 'listRuntimeStateKeys'
},
{
label: 'Value',

View File

@ -5,10 +5,13 @@ import { DEFAULT_SUMMARIZER_TEMPLATE } from '../prompt'
import { AnalyticHandler } from '../../../src/handler'
import { ILLMMessage } from '../Interface.Agentflow'
import {
addImageArtifactsToMessages,
extractArtifactsFromResponse,
getPastChatHistoryImageMessages,
getUniqueImageMessages,
processMessagesWithImages,
replaceBase64ImagesWithFileReferences,
replaceInlineDataWithFileReferences,
updateFlowState
} from '../utils'
import { processTemplateVariables, configureStructuredOutput } from '../../../src/utils'
@ -31,7 +34,7 @@ class LLM_Agentflow implements INode {
constructor() {
this.label = 'LLM'
this.name = 'llmAgentflow'
this.version = 1.0
this.version = 1.1
this.type = 'LLM'
this.category = 'Agent Flows'
this.description = 'Large language models to analyze user-provided inputs and generate responses'
@ -287,8 +290,7 @@ class LLM_Agentflow implements INode {
label: 'Key',
name: 'key',
type: 'asyncOptions',
loadMethod: 'listRuntimeStateKeys',
freeSolo: true
loadMethod: 'listRuntimeStateKeys'
},
{
label: 'Value',
@ -448,6 +450,12 @@ class LLM_Agentflow implements INode {
}
delete nodeData.inputs?.llmMessages
/**
* Add image artifacts from previous assistant responses as user messages
* Images are converted from FILE-STORAGE::<image_path> to base64 image_url format
*/
await addImageArtifactsToMessages(messages, options)
// Configure structured output if specified
const isStructuredOutput = _llmStructuredOutput && Array.isArray(_llmStructuredOutput) && _llmStructuredOutput.length > 0
if (isStructuredOutput) {
@ -467,9 +475,11 @@ class LLM_Agentflow implements INode {
// Track execution time
const startTime = Date.now()
const sseStreamer: IServerSideEventStreamer | undefined = options.sseStreamer
/*
* Invoke LLM
*/
if (isStreamable) {
response = await this.handleStreamingResponse(sseStreamer, llmNodeInstance, messages, chatId, abortController)
} else {
@ -494,6 +504,40 @@ class LLM_Agentflow implements INode {
const endTime = Date.now()
const timeDelta = endTime - startTime
// Extract artifacts and file annotations from response metadata
let artifacts: any[] = []
let fileAnnotations: any[] = []
if (response.response_metadata) {
const {
artifacts: extractedArtifacts,
fileAnnotations: extractedFileAnnotations,
savedInlineImages
} = await extractArtifactsFromResponse(response.response_metadata, newNodeData, options)
if (extractedArtifacts.length > 0) {
artifacts = extractedArtifacts
// Stream artifacts if this is the last node
if (isLastNode && sseStreamer) {
sseStreamer.streamArtifactsEvent(chatId, artifacts)
}
}
if (extractedFileAnnotations.length > 0) {
fileAnnotations = extractedFileAnnotations
// Stream file annotations if this is the last node
if (isLastNode && sseStreamer) {
sseStreamer.streamFileAnnotationsEvent(chatId, fileAnnotations)
}
}
// Replace inlineData base64 with file references in the response
if (savedInlineImages && savedInlineImages.length > 0) {
replaceInlineDataWithFileReferences(response, savedInlineImages)
}
}
// Update flow state if needed
let newState = { ...state }
if (_llmUpdateState && Array.isArray(_llmUpdateState) && _llmUpdateState.length > 0) {
@ -513,10 +557,22 @@ class LLM_Agentflow implements INode {
finalResponse = response.content.map((item: any) => item.text).join('\n')
} else if (response.content && typeof response.content === 'string') {
finalResponse = response.content
} else if (response.content === '') {
// Empty response content, this could happen when there is only image data
finalResponse = ''
} else {
finalResponse = JSON.stringify(response, null, 2)
}
const output = this.prepareOutputObject(response, finalResponse, startTime, endTime, timeDelta, isStructuredOutput)
const output = this.prepareOutputObject(
response,
finalResponse,
startTime,
endTime,
timeDelta,
isStructuredOutput,
artifacts,
fileAnnotations
)
// End analytics tracking
if (analyticHandlers && llmIds) {
@ -528,12 +584,23 @@ class LLM_Agentflow implements INode {
this.sendStreamingEvents(options, chatId, response)
}
// Stream file annotations if any were extracted
if (fileAnnotations.length > 0 && isLastNode && sseStreamer) {
sseStreamer.streamFileAnnotationsEvent(chatId, fileAnnotations)
}
// Process template variables in state
newState = processTemplateVariables(newState, finalResponse)
/**
* Remove the temporarily added image artifact messages before storing
* This is to avoid storing the actual base64 data into database
*/
const messagesToStore = messages.filter((msg: any) => !msg._isTemporaryImageMessage)
// Replace the actual messages array with one that includes the file references for images instead of base64 data
const messagesWithFileReferences = replaceBase64ImagesWithFileReferences(
messages,
messagesToStore,
runtimeImageMessagesWithFileRef,
pastImageMessagesWithFileRef
)
@ -584,7 +651,13 @@ class LLM_Agentflow implements INode {
{
role: returnRole,
content: finalResponse,
name: nodeData?.label ? nodeData?.label.toLowerCase().replace(/\s/g, '_').trim() : nodeData?.id
name: nodeData?.label ? nodeData?.label.toLowerCase().replace(/\s/g, '_').trim() : nodeData?.id,
...(((artifacts && artifacts.length > 0) || (fileAnnotations && fileAnnotations.length > 0)) && {
additional_kwargs: {
...(artifacts && artifacts.length > 0 && { artifacts }),
...(fileAnnotations && fileAnnotations.length > 0 && { fileAnnotations })
}
})
}
]
}
@ -805,7 +878,9 @@ class LLM_Agentflow implements INode {
startTime: number,
endTime: number,
timeDelta: number,
isStructuredOutput: boolean
isStructuredOutput: boolean,
artifacts: any[] = [],
fileAnnotations: any[] = []
): any {
const output: any = {
content: finalResponse,
@ -824,6 +899,10 @@ class LLM_Agentflow implements INode {
output.usageMetadata = response.usage_metadata
}
if (response.response_metadata) {
output.responseMetadata = response.response_metadata
}
if (isStructuredOutput && typeof response === 'object') {
const structuredOutput = response as Record<string, any>
for (const key in structuredOutput) {
@ -833,6 +912,14 @@ class LLM_Agentflow implements INode {
}
}
if (artifacts && artifacts.length > 0) {
output.artifacts = flatten(artifacts)
}
if (fileAnnotations && fileAnnotations.length > 0) {
output.fileAnnotations = fileAnnotations
}
return output
}

View File

@ -20,7 +20,7 @@ class Loop_Agentflow implements INode {
constructor() {
this.label = 'Loop'
this.name = 'loopAgentflow'
this.version = 1.1
this.version = 1.2
this.type = 'Loop'
this.category = 'Agent Flows'
this.description = 'Loop back to a previous node'
@ -64,8 +64,7 @@ class Loop_Agentflow implements INode {
label: 'Key',
name: 'key',
type: 'asyncOptions',
loadMethod: 'listRuntimeStateKeys',
freeSolo: true
loadMethod: 'listRuntimeStateKeys'
},
{
label: 'Value',

View File

@ -36,7 +36,7 @@ class Retriever_Agentflow implements INode {
constructor() {
this.label = 'Retriever'
this.name = 'retrieverAgentflow'
this.version = 1.0
this.version = 1.1
this.type = 'Retriever'
this.category = 'Agent Flows'
this.description = 'Retrieve information from vector database'
@ -87,8 +87,7 @@ class Retriever_Agentflow implements INode {
label: 'Key',
name: 'key',
type: 'asyncOptions',
loadMethod: 'listRuntimeStateKeys',
freeSolo: true
loadMethod: 'listRuntimeStateKeys'
},
{
label: 'Value',

View File

@ -29,7 +29,7 @@ class Tool_Agentflow implements INode {
constructor() {
this.label = 'Tool'
this.name = 'toolAgentflow'
this.version = 1.1
this.version = 1.2
this.type = 'Tool'
this.category = 'Agent Flows'
this.description = 'Tools allow LLM to interact with external systems'
@ -80,8 +80,7 @@ class Tool_Agentflow implements INode {
label: 'Key',
name: 'key',
type: 'asyncOptions',
loadMethod: 'listRuntimeStateKeys',
freeSolo: true
loadMethod: 'listRuntimeStateKeys'
},
{
label: 'Value',

View File

@ -1,10 +1,11 @@
import { BaseMessage, MessageContentImageUrl } from '@langchain/core/messages'
import { BaseMessage, MessageContentImageUrl, AIMessageChunk } from '@langchain/core/messages'
import { getImageUploads } from '../../src/multiModalUtils'
import { getFileFromStorage } from '../../src/storageUtils'
import { ICommonObject, IFileUpload } from '../../src/Interface'
import { addSingleFileToStorage, getFileFromStorage } from '../../src/storageUtils'
import { ICommonObject, IFileUpload, INodeData } from '../../src/Interface'
import { BaseMessageLike } from '@langchain/core/messages'
import { IFlowState } from './Interface.Agentflow'
import { handleEscapeCharacters, mapMimeTypeToInputField } from '../../src/utils'
import { getCredentialData, getCredentialParam, handleEscapeCharacters, mapMimeTypeToInputField } from '../../src/utils'
import fetch from 'node-fetch'
export const addImagesToMessages = async (
options: ICommonObject,
@ -18,7 +19,8 @@ export const addImagesToMessages = async (
for (const upload of imageUploads) {
let bf = upload.data
if (upload.type == 'stored-file') {
const contents = await getFileFromStorage(upload.name, options.orgId, options.chatflowid, options.chatId)
const fileName = upload.name.replace(/^FILE-STORAGE::/, '')
const contents = await getFileFromStorage(fileName, options.orgId, options.chatflowid, options.chatId)
// as the image is stored in the server, read the file and convert it to base64
bf = 'data:' + upload.mime + ';base64,' + contents.toString('base64')
@ -89,8 +91,9 @@ export const processMessagesWithImages = async (
if (item.type === 'stored-file' && item.name && item.mime.startsWith('image/')) {
hasImageReferences = true
try {
const fileName = item.name.replace(/^FILE-STORAGE::/, '')
// Get file contents from storage
const contents = await getFileFromStorage(item.name, options.orgId, options.chatflowid, options.chatId)
const contents = await getFileFromStorage(fileName, options.orgId, options.chatflowid, options.chatId)
// Create base64 data URL
const base64Data = 'data:' + item.mime + ';base64,' + contents.toString('base64')
@ -322,7 +325,8 @@ export const getPastChatHistoryImageMessages = async (
const imageContents: MessageContentImageUrl[] = []
for (const upload of uploads) {
if (upload.type === 'stored-file' && upload.mime.startsWith('image/')) {
const fileData = await getFileFromStorage(upload.name, options.orgId, options.chatflowid, options.chatId)
const fileName = upload.name.replace(/^FILE-STORAGE::/, '')
const fileData = await getFileFromStorage(fileName, options.orgId, options.chatflowid, options.chatId)
// as the image is stored in the server, read the file and convert it to base64
const bf = 'data:' + upload.mime + ';base64,' + fileData.toString('base64')
@ -456,6 +460,437 @@ export const getPastChatHistoryImageMessages = async (
}
}
/**
* Gets MIME type from filename extension
*/
export const getMimeTypeFromFilename = (filename: string): string => {
const extension = filename.toLowerCase().split('.').pop()
const mimeTypes: { [key: string]: string } = {
png: 'image/png',
jpg: 'image/jpeg',
jpeg: 'image/jpeg',
gif: 'image/gif',
pdf: 'application/pdf',
txt: 'text/plain',
csv: 'text/csv',
json: 'application/json',
html: 'text/html',
xml: 'application/xml'
}
return mimeTypes[extension || ''] || 'application/octet-stream'
}
/**
* Gets artifact type from filename extension for UI rendering
*/
export const getArtifactTypeFromFilename = (filename: string): string => {
const extension = filename.toLowerCase().split('.').pop()
const artifactTypes: { [key: string]: string } = {
png: 'png',
jpg: 'jpeg',
jpeg: 'jpeg',
html: 'html',
htm: 'html',
md: 'markdown',
markdown: 'markdown',
json: 'json',
js: 'javascript',
javascript: 'javascript',
tex: 'latex',
latex: 'latex',
txt: 'text',
csv: 'text',
pdf: 'text'
}
return artifactTypes[extension || ''] || 'text'
}
/**
* Saves base64 image data to storage and returns file information
*/
export const saveBase64Image = async (
outputItem: any,
options: ICommonObject
): Promise<{ filePath: string; fileName: string; totalSize: number } | null> => {
try {
if (!outputItem.result) {
return null
}
// Extract base64 data and create buffer
const base64Data = outputItem.result
const imageBuffer = Buffer.from(base64Data, 'base64')
// Determine file extension and MIME type
const outputFormat = outputItem.output_format || 'png'
const fileName = `generated_image_${outputItem.id || Date.now()}.${outputFormat}`
const mimeType = outputFormat === 'png' ? 'image/png' : 'image/jpeg'
// Save the image using the existing storage utility
const { path, totalSize } = await addSingleFileToStorage(
mimeType,
imageBuffer,
fileName,
options.orgId,
options.chatflowid,
options.chatId
)
return { filePath: path, fileName, totalSize }
} catch (error) {
console.error('Error saving base64 image:', error)
return null
}
}
/**
* Saves Gemini inline image data to storage and returns file information
*/
export const saveGeminiInlineImage = async (
inlineItem: any,
options: ICommonObject
): Promise<{ filePath: string; fileName: string; totalSize: number } | null> => {
try {
if (!inlineItem.data || !inlineItem.mimeType) {
return null
}
// Extract base64 data and create buffer
const base64Data = inlineItem.data
const imageBuffer = Buffer.from(base64Data, 'base64')
// Determine file extension from MIME type
const mimeType = inlineItem.mimeType
let extension = 'png'
if (mimeType.includes('jpeg') || mimeType.includes('jpg')) {
extension = 'jpg'
} else if (mimeType.includes('png')) {
extension = 'png'
} else if (mimeType.includes('gif')) {
extension = 'gif'
} else if (mimeType.includes('webp')) {
extension = 'webp'
}
const fileName = `gemini_generated_image_${Date.now()}.${extension}`
// Save the image using the existing storage utility
const { path, totalSize } = await addSingleFileToStorage(
mimeType,
imageBuffer,
fileName,
options.orgId,
options.chatflowid,
options.chatId
)
return { filePath: path, fileName, totalSize }
} catch (error) {
console.error('Error saving Gemini inline image:', error)
return null
}
}
/**
* Downloads file content from container file citation
*/
export const downloadContainerFile = async (
containerId: string,
fileId: string,
filename: string,
modelNodeData: INodeData,
options: ICommonObject
): Promise<{ filePath: string; totalSize: number } | null> => {
try {
const credentialData = await getCredentialData(modelNodeData.credential ?? '', options)
const openAIApiKey = getCredentialParam('openAIApiKey', credentialData, modelNodeData)
if (!openAIApiKey) {
console.warn('No OpenAI API key available for downloading container file')
return null
}
// Download the file using OpenAI Container API
const response = await fetch(`https://api.openai.com/v1/containers/${containerId}/files/${fileId}/content`, {
method: 'GET',
headers: {
Accept: '*/*',
Authorization: `Bearer ${openAIApiKey}`
}
})
if (!response.ok) {
console.warn(
`Failed to download container file ${fileId} from container ${containerId}: ${response.status} ${response.statusText}`
)
return null
}
// Extract the binary data from the Response object
const data = await response.arrayBuffer()
const dataBuffer = Buffer.from(data)
const mimeType = getMimeTypeFromFilename(filename)
// Store the file using the same storage utility as OpenAIAssistant
const { path, totalSize } = await addSingleFileToStorage(
mimeType,
dataBuffer,
filename,
options.orgId,
options.chatflowid,
options.chatId
)
return { filePath: path, totalSize }
} catch (error) {
console.error('Error downloading container file:', error)
return null
}
}
/**
* Replace inlineData base64 with file references in the response content
*/
export const replaceInlineDataWithFileReferences = (
response: AIMessageChunk,
savedInlineImages: Array<{ filePath: string; fileName: string; mimeType: string }>
): void => {
// Check if content is an array
if (!Array.isArray(response.content)) {
return
}
// Replace base64 data with file references in response content
let savedImageIndex = 0
for (let i = 0; i < response.content.length; i++) {
const contentItem = response.content[i]
if (
typeof contentItem === 'object' &&
contentItem.type === 'inlineData' &&
contentItem.inlineData &&
savedImageIndex < savedInlineImages.length
) {
const savedImage = savedInlineImages[savedImageIndex]
// Replace with file reference
response.content[i] = {
type: 'stored-file',
name: savedImage.fileName,
mime: savedImage.mimeType,
path: savedImage.filePath
}
savedImageIndex++
}
}
// Clear the inlineData from response_metadata to avoid duplication
if (response.response_metadata?.inlineData) {
delete response.response_metadata.inlineData
}
}
/**
* Extracts artifacts from response metadata (both annotations and built-in tools)
*/
export const extractArtifactsFromResponse = async (
responseMetadata: any,
modelNodeData: INodeData,
options: ICommonObject
): Promise<{
artifacts: any[]
fileAnnotations: any[]
savedInlineImages?: Array<{ filePath: string; fileName: string; mimeType: string }>
}> => {
const artifacts: any[] = []
const fileAnnotations: any[] = []
const savedInlineImages: Array<{ filePath: string; fileName: string; mimeType: string }> = []
// Handle Gemini inline data (image generation)
if (responseMetadata?.inlineData && Array.isArray(responseMetadata.inlineData)) {
for (const inlineItem of responseMetadata.inlineData) {
if (inlineItem.type === 'gemini_inline_data' && inlineItem.data && inlineItem.mimeType) {
try {
const savedImageResult = await saveGeminiInlineImage(inlineItem, options)
if (savedImageResult) {
// Create artifact in the same format as other image artifacts
const fileType = getArtifactTypeFromFilename(savedImageResult.fileName)
artifacts.push({
type: fileType,
data: savedImageResult.filePath
})
// Track saved image for replacing base64 data in content
savedInlineImages.push({
filePath: savedImageResult.filePath,
fileName: savedImageResult.fileName,
mimeType: inlineItem.mimeType
})
}
} catch (error) {
console.error('Error processing Gemini inline image artifact:', error)
}
}
}
}
if (!responseMetadata?.output || !Array.isArray(responseMetadata.output)) {
return { artifacts, fileAnnotations, savedInlineImages: savedInlineImages.length > 0 ? savedInlineImages : undefined }
}
for (const outputItem of responseMetadata.output) {
// Handle container file citations from annotations
if (outputItem.type === 'message' && outputItem.content && Array.isArray(outputItem.content)) {
for (const contentItem of outputItem.content) {
if (contentItem.annotations && Array.isArray(contentItem.annotations)) {
for (const annotation of contentItem.annotations) {
if (annotation.type === 'container_file_citation' && annotation.file_id && annotation.filename) {
try {
// Download and store the file content
const downloadResult = await downloadContainerFile(
annotation.container_id,
annotation.file_id,
annotation.filename,
modelNodeData,
options
)
if (downloadResult) {
const fileType = getArtifactTypeFromFilename(annotation.filename)
if (fileType === 'png' || fileType === 'jpeg' || fileType === 'jpg') {
const artifact = {
type: fileType,
data: downloadResult.filePath
}
artifacts.push(artifact)
} else {
fileAnnotations.push({
filePath: downloadResult.filePath,
fileName: annotation.filename
})
}
}
} catch (error) {
console.error('Error processing annotation:', error)
}
}
}
}
}
}
// Handle built-in tool artifacts (like image generation)
if (outputItem.type === 'image_generation_call' && outputItem.result) {
try {
const savedImageResult = await saveBase64Image(outputItem, options)
if (savedImageResult) {
// Replace the base64 result with the file path in the response metadata
outputItem.result = savedImageResult.filePath
// Create artifact in the same format as other image artifacts
const fileType = getArtifactTypeFromFilename(savedImageResult.fileName)
artifacts.push({
type: fileType,
data: savedImageResult.filePath
})
}
} catch (error) {
console.error('Error processing image generation artifact:', error)
}
}
}
return { artifacts, fileAnnotations, savedInlineImages: savedInlineImages.length > 0 ? savedInlineImages : undefined }
}
/**
* Add image artifacts from previous assistant messages as user messages
* This allows the LLM to see and reference the generated images in the conversation
* Messages are marked with a special flag for later removal
*/
export const addImageArtifactsToMessages = async (messages: BaseMessageLike[], options: ICommonObject): Promise<void> => {
const imageExtensions = ['png', 'jpg', 'jpeg', 'gif', 'webp']
const messagesToInsert: Array<{ index: number; message: any }> = []
// Iterate through messages to find assistant messages with image artifacts
for (let i = 0; i < messages.length; i++) {
const message = messages[i] as any
// Check if this is an assistant message with artifacts
if (
(message.role === 'assistant' || message.role === 'ai') &&
message.additional_kwargs?.artifacts &&
Array.isArray(message.additional_kwargs.artifacts)
) {
const artifacts = message.additional_kwargs.artifacts
const imageArtifacts: Array<{ type: string; name: string; mime: string }> = []
// Extract image artifacts
for (const artifact of artifacts) {
if (artifact.type && artifact.data) {
// Check if this is an image artifact by file type
if (imageExtensions.includes(artifact.type.toLowerCase())) {
// Extract filename from the file path
const fileName = artifact.data.split('/').pop() || artifact.data
const mimeType = `image/${artifact.type.toLowerCase()}`
imageArtifacts.push({
type: 'stored-file',
name: fileName,
mime: mimeType
})
}
}
}
// If we found image artifacts, prepare to insert a user message after this assistant message
if (imageArtifacts.length > 0) {
// Check if the next message already contains these image artifacts to avoid duplicates
const nextMessage = messages[i + 1] as any
const shouldInsert =
!nextMessage ||
nextMessage.role !== 'user' ||
!Array.isArray(nextMessage.content) ||
!nextMessage.content.some(
(item: any) =>
(item.type === 'stored-file' || item.type === 'image_url') &&
imageArtifacts.some((artifact) => {
// Compare with and without FILE-STORAGE:: prefix
const artifactName = artifact.name.replace('FILE-STORAGE::', '')
const itemName = item.name?.replace('FILE-STORAGE::', '') || ''
return artifactName === itemName
})
)
if (shouldInsert) {
messagesToInsert.push({
index: i + 1,
message: {
role: 'user',
content: imageArtifacts,
_isTemporaryImageMessage: true // Mark for later removal
}
})
}
}
}
}
// Insert messages in reverse order to maintain correct indices
for (let i = messagesToInsert.length - 1; i >= 0; i--) {
const { index, message } = messagesToInsert[i]
messages.splice(index, 0, message)
}
// Convert stored-file references to base64 image_url format
if (messagesToInsert.length > 0) {
const { updatedMessages } = await processMessagesWithImages(messages, options)
// Replace the messages array content with the updated messages
messages.length = 0
messages.push(...updatedMessages)
}
}
/**
* Updates the flow state with new values
*/
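The diff view truncates before this helper's body. A minimal sketch of what a flow-state updater along these lines could look like — the name and shape are assumptions, not the actual implementation:

// Hypothetical sketch only: merge new values into the flow state,
// assuming state is a flat key/value object and later writes win
export const updateFlowState = (state: ICommonObject, updates: ICommonObject): ICommonObject => ({
    ...state,
    ...updates
})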

View File

@ -607,7 +607,12 @@ export class LangchainChatGoogleGenerativeAI
private client: GenerativeModel
get _isMultimodalModel() {
return this.model.includes('vision') || this.model.startsWith('gemini-1.5') || this.model.startsWith('gemini-2')
return (
this.model.includes('vision') ||
this.model.startsWith('gemini-1.5') ||
this.model.startsWith('gemini-2') ||
this.model.startsWith('gemini-3')
)
}
constructor(fields: GoogleGenerativeAIChatInput) {

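For illustration, the widened getter treats any model id containing 'vision' or starting with a gemini-1.5/2/3 prefix as multimodal; the ids below are assumed examples, restated as a standalone predicate:

// Sketch restating the getter's logic (model ids are placeholders)
const isMultimodalModel = (model: string): boolean =>
    model.includes('vision') || model.startsWith('gemini-1.5') || model.startsWith('gemini-2') || model.startsWith('gemini-3')

isMultimodalModel('gemini-3-pro-preview') // true -- fell through to false before this change
isMultimodalModel('text-bison') // false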
View File

@ -452,6 +452,7 @@ export function mapGenerateContentResultToChatResult(
const [candidate] = response.candidates
const { content: candidateContent, ...generationInfo } = candidate
let content: MessageContent | undefined
const inlineDataItems: any[] = []
if (Array.isArray(candidateContent?.parts) && candidateContent.parts.length === 1 && candidateContent.parts[0].text) {
content = candidateContent.parts[0].text
@ -472,6 +473,18 @@ export function mapGenerateContentResultToChatResult(
type: 'codeExecutionResult',
codeExecutionResult: p.codeExecutionResult
}
} else if ('inlineData' in p && p.inlineData) {
// Extract inline image data for processing by Agent
inlineDataItems.push({
type: 'gemini_inline_data',
mimeType: p.inlineData.mimeType,
data: p.inlineData.data
})
// Return the inline data as part of the content structure
return {
type: 'inlineData',
inlineData: p.inlineData
}
}
return p
})
@ -488,6 +501,12 @@ export function mapGenerateContentResultToChatResult(
text = block?.text ?? text
}
// Build response_metadata with inline data if present
const response_metadata: any = {}
if (inlineDataItems.length > 0) {
response_metadata.inlineData = inlineDataItems
}
const generation: ChatGeneration = {
text,
message: new AIMessage({
@ -502,7 +521,8 @@ export function mapGenerateContentResultToChatResult(
additional_kwargs: {
...generationInfo
},
usage_metadata: extra?.usageMetadata
usage_metadata: extra?.usageMetadata,
response_metadata: Object.keys(response_metadata).length > 0 ? response_metadata : undefined
}),
generationInfo
}
@ -533,6 +553,8 @@ export function convertResponseContentToChatGenerationChunk(
const [candidate] = response.candidates
const { content: candidateContent, ...generationInfo } = candidate
let content: MessageContent | undefined
const inlineDataItems: any[] = []
// Checks if some parts do not have text. If false, it means that the content is a string.
if (Array.isArray(candidateContent?.parts) && candidateContent.parts.every((p) => 'text' in p)) {
content = candidateContent.parts.map((p) => p.text).join('')
@ -553,6 +575,18 @@ export function convertResponseContentToChatGenerationChunk(
type: 'codeExecutionResult',
codeExecutionResult: p.codeExecutionResult
}
} else if ('inlineData' in p && p.inlineData) {
// Extract inline image data for processing by Agent
inlineDataItems.push({
type: 'gemini_inline_data',
mimeType: p.inlineData.mimeType,
data: p.inlineData.data
})
// Return the inline data as part of the content structure
return {
type: 'inlineData',
inlineData: p.inlineData
}
}
return p
})
@ -582,6 +616,12 @@ export function convertResponseContentToChatGenerationChunk(
)
}
// Build response_metadata with inline data if present
const response_metadata: any = {}
if (inlineDataItems.length > 0) {
response_metadata.inlineData = inlineDataItems
}
return new ChatGenerationChunk({
text,
message: new AIMessageChunk({
@ -591,7 +631,8 @@ export function convertResponseContentToChatGenerationChunk(
// Each chunk can have unique "generationInfo", and merging strategy is unclear,
// so leave blank for now.
additional_kwargs: {},
usage_metadata: extra.usageMetadata
usage_metadata: extra.usageMetadata,
response_metadata: Object.keys(response_metadata).length > 0 ? response_metadata : undefined
}),
generationInfo
})
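Both mapping functions now expose inline image parts through response_metadata.inlineData. A sketch of a downstream consumer under that assumption (the persistence step is left abstract, mirroring the inline-image handling at the top of this changeset; assumed to run inside an async handler):

// Shape matches the items pushed into inlineDataItems above
const inlineItems = (aiMessage.response_metadata?.inlineData ?? []) as Array<{ type: string; mimeType: string; data: string }>
for (const item of inlineItems) {
    if (item.type === 'gemini_inline_data') {
        // item.data is a base64 payload; decode it and hand it to the flow's storage layer
        const buffer = Buffer.from(item.data, 'base64')
        // ...persist `buffer` with item.mimeType, keeping only a file reference in the message
    }
}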

View File

@ -62,7 +62,6 @@ class MySQLRecordManager_RecordManager implements INode {
label: 'Namespace',
name: 'namespace',
type: 'string',
description: 'If not specified, chatflowid will be used',
additionalParams: true,
optional: true
},
@ -219,7 +218,16 @@ class MySQLRecordManager implements RecordManagerInterface {
unique key \`unique_key_namespace\` (\`key\`,
\`namespace\`));`)
const columns = [`updated_at`, `key`, `namespace`, `group_id`]
// Add doc_id column if it doesn't exist (migration for existing tables)
const checkColumn = await queryRunner.manager.query(
`SELECT COUNT(1) ColumnExists FROM INFORMATION_SCHEMA.COLUMNS
WHERE table_schema=DATABASE() AND table_name='${tableName}' AND column_name='doc_id';`
)
if (checkColumn[0].ColumnExists === 0) {
await queryRunner.manager.query(`ALTER TABLE \`${tableName}\` ADD COLUMN \`doc_id\` longtext;`)
}
const columns = [`updated_at`, `key`, `namespace`, `group_id`, `doc_id`]
for (const column of columns) {
// MySQL does not support 'IF NOT EXISTS' function for Index
const Check = await queryRunner.manager.query(
@ -261,7 +269,7 @@ class MySQLRecordManager implements RecordManagerInterface {
}
}
async update(keys: string[], updateOptions?: UpdateOptions): Promise<void> {
async update(keys: Array<{ uid: string; docId: string }> | string[], updateOptions?: UpdateOptions): Promise<void> {
if (keys.length === 0) {
return
}
@ -277,23 +285,23 @@ class MySQLRecordManager implements RecordManagerInterface {
throw new Error(`Time sync issue with database ${updatedAt} < ${timeAtLeast}`)
}
const groupIds = _groupIds ?? keys.map(() => null)
// Handle both new format (objects with uid and docId) and old format (strings)
const isNewFormat = keys.length > 0 && typeof keys[0] === 'object' && 'uid' in keys[0]
const keyStrings = isNewFormat ? (keys as Array<{ uid: string; docId: string }>).map((k) => k.uid) : (keys as string[])
const docIds = isNewFormat ? (keys as Array<{ uid: string; docId: string }>).map((k) => k.docId) : keys.map(() => null)
if (groupIds.length !== keys.length) {
throw new Error(`Number of keys (${keys.length}) does not match number of group_ids (${groupIds.length})`)
const groupIds = _groupIds ?? keyStrings.map(() => null)
if (groupIds.length !== keyStrings.length) {
throw new Error(`Number of keys (${keyStrings.length}) does not match number of group_ids (${groupIds.length})`)
}
const recordsToUpsert = keys.map((key, i) => [
key,
this.namespace,
updatedAt,
groupIds[i] ?? null // Ensure groupIds[i] is null if undefined
])
const recordsToUpsert = keyStrings.map((key, i) => [key, this.namespace, updatedAt, groupIds[i] ?? null, docIds[i] ?? null])
const query = `
INSERT INTO \`${tableName}\` (\`key\`, \`namespace\`, \`updated_at\`, \`group_id\`)
VALUES (?, ?, ?, ?)
ON DUPLICATE KEY UPDATE \`updated_at\` = VALUES(\`updated_at\`)`
INSERT INTO \`${tableName}\` (\`key\`, \`namespace\`, \`updated_at\`, \`group_id\`, \`doc_id\`)
VALUES (?, ?, ?, ?, ?)
ON DUPLICATE KEY UPDATE \`updated_at\` = VALUES(\`updated_at\`), \`doc_id\` = VALUES(\`doc_id\`)`
// To handle multiple files upsert
try {
@ -349,13 +357,13 @@ class MySQLRecordManager implements RecordManagerInterface {
}
}
async listKeys(options?: ListKeyOptions): Promise<string[]> {
async listKeys(options?: ListKeyOptions & { docId?: string }): Promise<string[]> {
const dataSource = await this.getDataSource()
const queryRunner = dataSource.createQueryRunner()
const tableName = this.sanitizeTableName(this.tableName)
try {
const { before, after, limit, groupIds } = options ?? {}
const { before, after, limit, groupIds, docId } = options ?? {}
let query = `SELECT \`key\` FROM \`${tableName}\` WHERE \`namespace\` = ?`
const values: (string | number | string[])[] = [this.namespace]
@ -382,6 +390,11 @@ class MySQLRecordManager implements RecordManagerInterface {
values.push(...groupIds.filter((gid): gid is string => gid !== null))
}
if (docId) {
query += ` AND \`doc_id\` = ?`
values.push(docId)
}
query += ';'
// Directly using try/catch with async/await for cleaner flow
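All three record managers (MySQL, Postgres, SQLite) now share this dual-format contract for update, and listKeys accepts an optional docId filter. A usage sketch with placeholder values:

// New format: uid + docId, so deletions can later be scoped per document
await recordManager.update(
    [{ uid: 'chunk-hash-1', docId: 'doc-123' }],
    { timeAtLeast: indexStartDt, groupIds: ['source-a'] }
)
// Legacy format (plain strings) still works; doc_id is stored as null
await recordManager.update(['chunk-hash-2'])
// Restrict listing to keys recorded for one document
const keys = await recordManager.listKeys({ docId: 'doc-123' })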

View File

@ -78,7 +78,6 @@ class PostgresRecordManager_RecordManager implements INode {
label: 'Namespace',
name: 'namespace',
type: 'string',
description: 'If not specified, chatflowid will be used',
additionalParams: true,
optional: true
},
@ -241,6 +240,19 @@ class PostgresRecordManager implements RecordManagerInterface {
CREATE INDEX IF NOT EXISTS namespace_index ON "${tableName}" (namespace);
CREATE INDEX IF NOT EXISTS group_id_index ON "${tableName}" (group_id);`)
// Add doc_id column if it doesn't exist (migration for existing tables)
await queryRunner.manager.query(`
DO $$
BEGIN
IF NOT EXISTS (
SELECT 1 FROM information_schema.columns
WHERE table_name = '${tableName}' AND column_name = 'doc_id'
) THEN
ALTER TABLE "${tableName}" ADD COLUMN doc_id TEXT;
CREATE INDEX IF NOT EXISTS doc_id_index ON "${tableName}" (doc_id);
END IF;
END $$;`)
await queryRunner.release()
} catch (e: any) {
// This error indicates that the table already exists
@ -286,7 +298,7 @@ class PostgresRecordManager implements RecordManagerInterface {
return `(${placeholders.join(', ')})`
}
async update(keys: string[], updateOptions?: UpdateOptions): Promise<void> {
async update(keys: Array<{ uid: string; docId: string }> | string[], updateOptions?: UpdateOptions): Promise<void> {
if (keys.length === 0) {
return
}
@ -302,17 +314,22 @@ class PostgresRecordManager implements RecordManagerInterface {
throw new Error(`Time sync issue with database ${updatedAt} < ${timeAtLeast}`)
}
const groupIds = _groupIds ?? keys.map(() => null)
// Handle both new format (objects with uid and docId) and old format (strings)
const isNewFormat = keys.length > 0 && typeof keys[0] === 'object' && 'uid' in keys[0]
const keyStrings = isNewFormat ? (keys as Array<{ uid: string; docId: string }>).map((k) => k.uid) : (keys as string[])
const docIds = isNewFormat ? (keys as Array<{ uid: string; docId: string }>).map((k) => k.docId) : keys.map(() => null)
if (groupIds.length !== keys.length) {
throw new Error(`Number of keys (${keys.length}) does not match number of group_ids (${groupIds.length})`)
const groupIds = _groupIds ?? keyStrings.map(() => null)
if (groupIds.length !== keyStrings.length) {
throw new Error(`Number of keys (${keyStrings.length}) does not match number of group_ids (${groupIds.length})`)
}
const recordsToUpsert = keys.map((key, i) => [key, this.namespace, updatedAt, groupIds[i]])
const recordsToUpsert = keyStrings.map((key, i) => [key, this.namespace, updatedAt, groupIds[i], docIds[i]])
const valuesPlaceholders = recordsToUpsert.map((_, j) => this.generatePlaceholderForRowAt(j, recordsToUpsert[0].length)).join(', ')
const query = `INSERT INTO "${tableName}" (key, namespace, updated_at, group_id) VALUES ${valuesPlaceholders} ON CONFLICT (key, namespace) DO UPDATE SET updated_at = EXCLUDED.updated_at;`
const query = `INSERT INTO "${tableName}" (key, namespace, updated_at, group_id, doc_id) VALUES ${valuesPlaceholders} ON CONFLICT (key, namespace) DO UPDATE SET updated_at = EXCLUDED.updated_at, doc_id = EXCLUDED.doc_id;`
try {
await queryRunner.manager.query(query, recordsToUpsert.flat())
await queryRunner.release()
@ -351,8 +368,8 @@ class PostgresRecordManager implements RecordManagerInterface {
}
}
async listKeys(options?: ListKeyOptions): Promise<string[]> {
const { before, after, limit, groupIds } = options ?? {}
async listKeys(options?: ListKeyOptions & { docId?: string }): Promise<string[]> {
const { before, after, limit, groupIds, docId } = options ?? {}
const tableName = this.sanitizeTableName(this.tableName)
let query = `SELECT key FROM "${tableName}" WHERE namespace = $1`
@ -383,6 +400,12 @@ class PostgresRecordManager implements RecordManagerInterface {
index += 1
}
if (docId) {
values.push(docId)
query += ` AND doc_id = $${index}`
index += 1
}
query += ';'
const dataSource = await this.getDataSource()

View File

@ -51,7 +51,6 @@ class SQLiteRecordManager_RecordManager implements INode {
label: 'Namespace',
name: 'namespace',
type: 'string',
description: 'If not specified, chatflowid will be used',
additionalParams: true,
optional: true
},
@ -198,6 +197,15 @@ CREATE INDEX IF NOT EXISTS key_index ON "${tableName}" (key);
CREATE INDEX IF NOT EXISTS namespace_index ON "${tableName}" (namespace);
CREATE INDEX IF NOT EXISTS group_id_index ON "${tableName}" (group_id);`)
// Add doc_id column if it doesn't exist (migration for existing tables)
const checkColumn = await queryRunner.manager.query(
`SELECT COUNT(*) as count FROM pragma_table_info('${tableName}') WHERE name='doc_id';`
)
if (checkColumn[0].count === 0) {
await queryRunner.manager.query(`ALTER TABLE "${tableName}" ADD COLUMN doc_id TEXT;`)
await queryRunner.manager.query(`CREATE INDEX IF NOT EXISTS doc_id_index ON "${tableName}" (doc_id);`)
}
await queryRunner.release()
} catch (e: any) {
// This error indicates that the table already exists
@ -228,7 +236,7 @@ CREATE INDEX IF NOT EXISTS group_id_index ON "${tableName}" (group_id);`)
}
}
async update(keys: string[], updateOptions?: UpdateOptions): Promise<void> {
async update(keys: Array<{ uid: string; docId: string }> | string[], updateOptions?: UpdateOptions): Promise<void> {
if (keys.length === 0) {
return
}
@ -243,23 +251,23 @@ CREATE INDEX IF NOT EXISTS group_id_index ON "${tableName}" (group_id);`)
throw new Error(`Time sync issue with database ${updatedAt} < ${timeAtLeast}`)
}
const groupIds = _groupIds ?? keys.map(() => null)
// Handle both new format (objects with uid and docId) and old format (strings)
const isNewFormat = keys.length > 0 && typeof keys[0] === 'object' && 'uid' in keys[0]
const keyStrings = isNewFormat ? (keys as Array<{ uid: string; docId: string }>).map((k) => k.uid) : (keys as string[])
const docIds = isNewFormat ? (keys as Array<{ uid: string; docId: string }>).map((k) => k.docId) : keys.map(() => null)
if (groupIds.length !== keys.length) {
throw new Error(`Number of keys (${keys.length}) does not match number of group_ids (${groupIds.length})`)
const groupIds = _groupIds ?? keyStrings.map(() => null)
if (groupIds.length !== keyStrings.length) {
throw new Error(`Number of keys (${keyStrings.length}) does not match number of group_ids (${groupIds.length})`)
}
const recordsToUpsert = keys.map((key, i) => [
key,
this.namespace,
updatedAt,
groupIds[i] ?? null // Ensure groupIds[i] is null if undefined
])
const recordsToUpsert = keyStrings.map((key, i) => [key, this.namespace, updatedAt, groupIds[i] ?? null, docIds[i] ?? null])
const query = `
INSERT INTO "${tableName}" (key, namespace, updated_at, group_id)
VALUES (?, ?, ?, ?)
ON CONFLICT (key, namespace) DO UPDATE SET updated_at = excluded.updated_at`
INSERT INTO "${tableName}" (key, namespace, updated_at, group_id, doc_id)
VALUES (?, ?, ?, ?, ?)
ON CONFLICT (key, namespace) DO UPDATE SET updated_at = excluded.updated_at, doc_id = excluded.doc_id`
try {
// To handle multiple files upsert
@ -314,8 +322,8 @@ CREATE INDEX IF NOT EXISTS group_id_index ON "${tableName}" (group_id);`)
}
}
async listKeys(options?: ListKeyOptions): Promise<string[]> {
const { before, after, limit, groupIds } = options ?? {}
async listKeys(options?: ListKeyOptions & { docId?: string }): Promise<string[]> {
const { before, after, limit, groupIds, docId } = options ?? {}
const tableName = this.sanitizeTableName(this.tableName)
let query = `SELECT key FROM "${tableName}" WHERE namespace = ?`
@ -344,6 +352,11 @@ CREATE INDEX IF NOT EXISTS group_id_index ON "${tableName}" (group_id);`)
values.push(...groupIds.filter((gid): gid is string => gid !== null))
}
if (docId) {
query += ` AND doc_id = ?`
values.push(docId)
}
query += ';'
const dataSource = await this.getDataSource()

View File

@ -84,11 +84,16 @@ class CustomFunction_Utilities implements INode {
const variables = await getVars(appDataSource, databaseEntities, nodeData, options)
const flow = {
input,
chatflowId: options.chatflowid,
sessionId: options.sessionId,
chatId: options.chatId,
rawOutput: options.rawOutput || '',
input
rawOutput: options.postProcessing?.rawOutput || '',
chatHistory: options.postProcessing?.chatHistory || [],
sourceDocuments: options.postProcessing?.sourceDocuments,
usedTools: options.postProcessing?.usedTools,
artifacts: options.postProcessing?.artifacts,
fileAnnotations: options.postProcessing?.fileAnnotations
}
let inputVars: ICommonObject = {}

View File

@ -186,7 +186,11 @@ class Chroma_VectorStores implements INode {
const vectorStoreName = collectionName
await recordManager.createSchema()
;(recordManager as any).namespace = (recordManager as any).namespace + '_' + vectorStoreName
const keys: string[] = await recordManager.listKeys({})
const filterKeys: ICommonObject = {}
if (options.docId) {
filterKeys.docId = options.docId
}
const keys: string[] = await recordManager.listKeys(filterKeys)
const chromaStore = new ChromaExtended(embeddings, obj)

View File

@ -198,7 +198,11 @@ class Elasticsearch_VectorStores implements INode {
const vectorStoreName = indexName
await recordManager.createSchema()
;(recordManager as any).namespace = (recordManager as any).namespace + '_' + vectorStoreName
const keys: string[] = await recordManager.listKeys({})
const filterKeys: ICommonObject = {}
if (options.docId) {
filterKeys.docId = options.docId
}
const keys: string[] = await recordManager.listKeys(filterKeys)
await vectorStore.delete({ ids: keys })
await recordManager.deleteKeys(keys)

View File

@ -212,7 +212,11 @@ class Pinecone_VectorStores implements INode {
const vectorStoreName = pineconeNamespace
await recordManager.createSchema()
;(recordManager as any).namespace = (recordManager as any).namespace + '_' + vectorStoreName
const keys: string[] = await recordManager.listKeys({})
const filterKeys: ICommonObject = {}
if (options.docId) {
filterKeys.docId = options.docId
}
const keys: string[] = await recordManager.listKeys(filterKeys)
await pineconeStore.delete({ ids: keys })
await recordManager.deleteKeys(keys)

View File

@ -49,7 +49,7 @@ class Postgres_VectorStores implements INode {
constructor() {
this.label = 'Postgres'
this.name = 'postgres'
this.version = 7.0
this.version = 7.1
this.type = 'Postgres'
this.icon = 'postgres.svg'
this.category = 'Vector Stores'
@ -173,6 +173,15 @@ class Postgres_VectorStores implements INode {
additionalParams: true,
optional: true
},
{
label: 'Upsert Batch Size',
name: 'batchSize',
type: 'number',
step: 1,
description: 'Upsert in batches of size N',
additionalParams: true,
optional: true
},
{
label: 'Additional Configuration',
name: 'additionalConfig',
@ -232,6 +241,7 @@ class Postgres_VectorStores implements INode {
const docs = nodeData.inputs?.document as Document[]
const recordManager = nodeData.inputs?.recordManager
const isFileUploadEnabled = nodeData.inputs?.fileUpload as boolean
const _batchSize = nodeData.inputs?.batchSize
const vectorStoreDriver: VectorStoreDriver = Postgres_VectorStores.getDriverFromConfig(nodeData, options)
const flattenDocs = docs && docs.length ? flatten(docs) : []
@ -265,7 +275,15 @@ class Postgres_VectorStores implements INode {
return res
} else {
await vectorStoreDriver.fromDocuments(finalDocs)
// Guard against zero or non-numeric batch sizes by falling back to a single upsert
const batchSize = _batchSize ? parseInt(_batchSize, 10) : 0
if (batchSize > 0) {
for (let i = 0; i < finalDocs.length; i += batchSize) {
const batch = finalDocs.slice(i, i + batchSize)
await vectorStoreDriver.fromDocuments(batch)
}
} else {
await vectorStoreDriver.fromDocuments(finalDocs)
}
return { numAdded: finalDocs.length, addedDocs: finalDocs }
}
@ -285,7 +303,11 @@ class Postgres_VectorStores implements INode {
const vectorStoreName = tableName
await recordManager.createSchema()
;(recordManager as any).namespace = (recordManager as any).namespace + '_' + vectorStoreName
const keys: string[] = await recordManager.listKeys({})
const filterKeys: ICommonObject = {}
if (options.docId) {
filterKeys.docId = options.docId
}
const keys: string[] = await recordManager.listKeys(filterKeys)
await vectorStore.delete({ ids: keys })
await recordManager.deleteKeys(keys)
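The batching loop above simply slices finalDocs into windows of batchSize; for example, 1,250 documents with a batch size of 500 yield batches of 500, 500, and 250. A quick sketch of the arithmetic:

// Returns the batch sizes the slicing loop above would produce
const batchSizes = (total: number, size: number): number[] => {
    const out: number[] = []
    for (let i = 0; i < total; i += size) out.push(Math.min(size, total - i))
    return out
}
batchSizes(1250, 500) // [500, 500, 250]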

View File

@ -5,6 +5,11 @@ import { TypeORMVectorStore, TypeORMVectorStoreArgs, TypeORMVectorStoreDocument
import { VectorStore } from '@langchain/core/vectorstores'
import { Document } from '@langchain/core/documents'
import { Pool } from 'pg'
import { v4 as uuid } from 'uuid'
type TypeORMAddDocumentOptions = {
ids?: string[]
}
export class TypeORMDriver extends VectorStoreDriver {
protected _postgresConnectionOptions: DataSourceOptions
@ -95,15 +100,45 @@ export class TypeORMDriver extends VectorStoreDriver {
try {
instance.appDataSource.getRepository(instance.documentEntity).delete(ids)
} catch (e) {
console.error('Failed to delete')
console.error('Failed to delete', e)
}
}
}
const baseAddVectorsFn = instance.addVectors.bind(instance)
instance.addVectors = async (
vectors: number[][],
documents: Document[],
documentOptions?: TypeORMAddDocumentOptions
): Promise<void> => {
const rows = vectors.map((embedding, idx) => {
const embeddingString = `[${embedding.join(',')}]`
const documentRow = {
id: documentOptions?.ids?.length ? documentOptions.ids[idx] : uuid(),
pageContent: documents[idx].pageContent,
embedding: embeddingString,
metadata: documents[idx].metadata
}
return documentRow
})
instance.addVectors = async (vectors, documents) => {
return baseAddVectorsFn(vectors, this.sanitizeDocuments(documents))
const documentRepository = instance.appDataSource.getRepository(instance.documentEntity)
const _batchSize = this.nodeData.inputs?.batchSize
const parsedBatchSize = _batchSize ? parseInt(_batchSize, 10) : NaN
const chunkSize = parsedBatchSize > 0 ? parsedBatchSize : 500 // default to 500 when unset or invalid
for (let i = 0; i < rows.length; i += chunkSize) {
const chunk = rows.slice(i, i + chunkSize)
try {
await documentRepository.save(chunk)
} catch (e) {
console.error(e)
throw new Error(`Error inserting: ${chunk[0].pageContent}`)
}
}
}
instance.addDocuments = async (documents: Document[], options?: { ids?: string[] }): Promise<void> => {
const texts = documents.map(({ pageContent }) => pageContent)
return (instance.addVectors as any)(await this.getEmbeddings().embedDocuments(texts), documents, options)
}
return instance
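The override serializes each embedding as a pgvector literal string before handing rows to the TypeORM repository. For a 3-dimensional vector, a row passed to documentRepository.save() looks roughly like this (values are made up):

// Example row shape (placeholder values)
const exampleRow = {
    id: 'a1b2c3d4-...', // uuid() unless documentOptions.ids supplies one
    pageContent: 'chunk text',
    embedding: '[0.12,-0.43,0.88]', // pgvector literal built via `[${embedding.join(',')}]`
    metadata: { docId: 'doc-123' }
}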

View File

@ -385,7 +385,11 @@ class Qdrant_VectorStores implements INode {
const vectorStoreName = collectionName
await recordManager.createSchema()
;(recordManager as any).namespace = (recordManager as any).namespace + '_' + vectorStoreName
const keys: string[] = await recordManager.listKeys({})
const filterKeys: ICommonObject = {}
if (options.docId) {
filterKeys.docId = options.docId
}
const keys: string[] = await recordManager.listKeys(filterKeys)
await vectorStore.delete({ ids: keys })
await recordManager.deleteKeys(keys)

View File

@ -197,7 +197,11 @@ class Supabase_VectorStores implements INode {
const vectorStoreName = tableName + '_' + queryName
await recordManager.createSchema()
;(recordManager as any).namespace = (recordManager as any).namespace + '_' + vectorStoreName
const keys: string[] = await recordManager.listKeys({})
const filterKeys: ICommonObject = {}
if (options.docId) {
filterKeys.docId = options.docId
}
const keys: string[] = await recordManager.listKeys(filterKeys)
await supabaseStore.delete({ ids: keys })
await recordManager.deleteKeys(keys)

View File

@ -187,7 +187,11 @@ class Upstash_VectorStores implements INode {
const vectorStoreName = UPSTASH_VECTOR_REST_URL
await recordManager.createSchema()
;(recordManager as any).namespace = (recordManager as any).namespace + '_' + vectorStoreName
const keys: string[] = await recordManager.listKeys({})
const filterKeys: ICommonObject = {}
if (options.docId) {
filterKeys.docId = options.docId
}
const keys: string[] = await recordManager.listKeys(filterKeys)
await upstashStore.delete({ ids: keys })
await recordManager.deleteKeys(keys)

View File

@ -252,7 +252,11 @@ class Weaviate_VectorStores implements INode {
const vectorStoreName = weaviateTextKey ? weaviateIndex + '_' + weaviateTextKey : weaviateIndex
await recordManager.createSchema()
;(recordManager as any).namespace = (recordManager as any).namespace + '_' + vectorStoreName
const keys: string[] = await recordManager.listKeys({})
const filterKeys: ICommonObject = {}
if (options.docId) {
filterKeys.docId = options.docId
}
const keys: string[] = await recordManager.listKeys(filterKeys)
await weaviateStore.delete({ ids: keys })
await recordManager.deleteKeys(keys)

View File

@ -8,6 +8,10 @@ import { IndexingResult } from './Interface'
type Metadata = Record<string, unknown>
export interface ExtendedRecordManagerInterface extends RecordManagerInterface {
update(keys: Array<{ uid: string; docId: string }> | string[], updateOptions?: Record<string, any>): Promise<void>
}
type StringOrDocFunc = string | ((doc: DocumentInterface) => string)
export interface HashedDocumentInterface extends DocumentInterface {
@ -207,7 +211,7 @@ export const _isBaseDocumentLoader = (arg: any): arg is BaseDocumentLoader => {
interface IndexArgs {
docsSource: BaseDocumentLoader | DocumentInterface[]
recordManager: RecordManagerInterface
recordManager: ExtendedRecordManagerInterface
vectorStore: VectorStore
options?: IndexOptions
}
@ -275,7 +279,7 @@ export async function index(args: IndexArgs): Promise<IndexingResult> {
const uids: string[] = []
const docsToIndex: DocumentInterface[] = []
const docsToUpdate: string[] = []
const docsToUpdate: Array<{ uid: string; docId: string }> = []
const seenDocs = new Set<string>()
hashedDocs.forEach((hashedDoc, i) => {
const docExists = batchExists[i]
@ -283,7 +287,7 @@ export async function index(args: IndexArgs): Promise<IndexingResult> {
if (forceUpdate) {
seenDocs.add(hashedDoc.uid)
} else {
docsToUpdate.push(hashedDoc.uid)
docsToUpdate.push({ uid: hashedDoc.uid, docId: hashedDoc.metadata.docId as string })
return
}
}
@ -308,7 +312,7 @@ export async function index(args: IndexArgs): Promise<IndexingResult> {
}
await recordManager.update(
hashedDocs.map((doc) => doc.uid),
hashedDocs.map((doc) => ({ uid: doc.uid, docId: doc.metadata.docId as string })),
{ timeAtLeast: indexStartDt, groupIds: sourceIds }
)

View File

@ -119,12 +119,12 @@
"lodash": "^4.17.21",
"moment": "^2.29.3",
"moment-timezone": "^0.5.34",
"multer": "^1.4.5-lts.1",
"multer": "^2.0.2",
"multer-cloud-storage": "^4.0.0",
"multer-s3": "^3.0.1",
"mysql2": "^3.11.3",
"nanoid": "3",
"nodemailer": "^6.9.14",
"nodemailer": "^7.0.7",
"openai": "^4.96.0",
"passport": "^0.7.0",
"passport-auth0": "^1.4.4",

View File

@ -37,7 +37,19 @@ export class UsageCacheManager {
if (process.env.MODE === MODE.QUEUE) {
let redisConfig: string | Record<string, any>
if (process.env.REDIS_URL) {
redisConfig = process.env.REDIS_URL
redisConfig = {
url: process.env.REDIS_URL,
socket: {
keepAlive:
process.env.REDIS_KEEP_ALIVE && !isNaN(parseInt(process.env.REDIS_KEEP_ALIVE, 10))
? parseInt(process.env.REDIS_KEEP_ALIVE, 10)
: undefined
},
pingInterval:
process.env.REDIS_KEEP_ALIVE && !isNaN(parseInt(process.env.REDIS_KEEP_ALIVE, 10))
? parseInt(process.env.REDIS_KEEP_ALIVE, 10)
: undefined
}
} else {
redisConfig = {
username: process.env.REDIS_USERNAME || undefined,
@ -48,8 +60,16 @@ export class UsageCacheManager {
tls: process.env.REDIS_TLS === 'true',
cert: process.env.REDIS_CERT ? Buffer.from(process.env.REDIS_CERT, 'base64') : undefined,
key: process.env.REDIS_KEY ? Buffer.from(process.env.REDIS_KEY, 'base64') : undefined,
ca: process.env.REDIS_CA ? Buffer.from(process.env.REDIS_CA, 'base64') : undefined
}
ca: process.env.REDIS_CA ? Buffer.from(process.env.REDIS_CA, 'base64') : undefined,
keepAlive:
process.env.REDIS_KEEP_ALIVE && !isNaN(parseInt(process.env.REDIS_KEEP_ALIVE, 10))
? parseInt(process.env.REDIS_KEEP_ALIVE, 10)
: undefined
},
pingInterval:
process.env.REDIS_KEEP_ALIVE && !isNaN(parseInt(process.env.REDIS_KEEP_ALIVE, 10))
? parseInt(process.env.REDIS_KEEP_ALIVE, 10)
: undefined
}
}
this.cache = createCache({

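The parse-or-undefined expression for REDIS_KEEP_ALIVE appears four times in this hunk; a small helper could centralize it. A sketch, not part of the change:

// Hypothetical helper; returns undefined when the env var is unset or non-numeric
const parseKeepAlive = (): number | undefined => {
    const raw = process.env.REDIS_KEEP_ALIVE
    if (!raw) return undefined
    const n = parseInt(raw, 10)
    return isNaN(n) ? undefined : n
}
// ...then: socket: { keepAlive: parseKeepAlive() }, pingInterval: parseKeepAlive()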
View File

@ -465,9 +465,10 @@ const insertIntoVectorStore = async (req: Request, res: Response, next: NextFunc
}
const subscriptionId = req.user?.activeOrganizationSubscriptionId || ''
const body = req.body
const isStrictSave = body.isStrictSave ?? false
const apiResponse = await documentStoreService.insertIntoVectorStoreMiddleware(
body,
false,
isStrictSave,
orgId,
workspaceId,
subscriptionId,
@ -513,7 +514,11 @@ const deleteVectorStoreFromStore = async (req: Request, res: Response, next: Nex
`Error: documentStoreController.deleteVectorStoreFromStore - workspaceId not provided!`
)
}
const apiResponse = await documentStoreService.deleteVectorStoreFromStore(req.params.storeId, workspaceId)
const apiResponse = await documentStoreService.deleteVectorStoreFromStore(
req.params.storeId,
workspaceId,
(req.query.docId as string) || undefined
)
return res.json(apiResponse)
} catch (error) {
next(error)

View File

@ -391,7 +391,7 @@ const deleteDocumentStoreFileChunk = async (storeId: string, docId: string, chun
}
}
const deleteVectorStoreFromStore = async (storeId: string, workspaceId: string) => {
const deleteVectorStoreFromStore = async (storeId: string, workspaceId: string, docId?: string) => {
try {
const appServer = getRunningExpressApp()
const componentNodes = appServer.nodesPool.componentNodes
@ -461,7 +461,7 @@ const deleteVectorStoreFromStore = async (storeId: string, workspaceId: string)
// Call the delete method of the vector store
if (vectorStoreObj.vectorStoreMethods.delete) {
await vectorStoreObj.vectorStoreMethods.delete(vStoreNodeData, idsToDelete, options)
await vectorStoreObj.vectorStoreMethods.delete(vStoreNodeData, idsToDelete, { ...options, docId })
}
} catch (error) {
throw new InternalFlowiseError(
@ -1157,6 +1157,18 @@ const updateVectorStoreConfigOnly = async (data: ICommonObject, workspaceId: str
)
}
}
/**
* Saves vector store configuration to the document store entity.
* Handles embedding, vector store, and record manager configurations.
*
* @example
* // Strict mode: Only save what's provided, clear the rest
* await saveVectorStoreConfig(ds, { storeId, embeddingName, embeddingConfig }, true, wsId)
*
* @example
* // Lenient mode: Reuse existing configs if not provided
* await saveVectorStoreConfig(ds, { storeId, vectorStoreName, vectorStoreConfig }, false, wsId)
*/
const saveVectorStoreConfig = async (appDataSource: DataSource, data: ICommonObject, isStrictSave = true, workspaceId: string) => {
try {
const entity = await appDataSource.getRepository(DocumentStore).findOneBy({
@ -1221,6 +1233,15 @@ const saveVectorStoreConfig = async (appDataSource: DataSource, data: ICommonObj
}
}
/**
* Inserts documents from document store into the configured vector store.
*
* Process:
* 1. Saves vector store configuration (embedding, vector store, record manager)
* 2. Sets document store status to UPSERTING
* 3. Performs the actual vector store upsert operation
* 4. Updates status to UPSERTED upon completion
*/
export const insertIntoVectorStore = async ({
appDataSource,
componentNodes,
@ -1231,19 +1252,16 @@ export const insertIntoVectorStore = async ({
workspaceId
}: IExecuteVectorStoreInsert) => {
try {
// Step 1: Save configuration based on isStrictSave mode
const entity = await saveVectorStoreConfig(appDataSource, data, isStrictSave, workspaceId)
// Step 2: Mark as UPSERTING before starting the operation
entity.status = DocumentStoreStatus.UPSERTING
await appDataSource.getRepository(DocumentStore).save(entity)
const indexResult = await _insertIntoVectorStoreWorkerThread(
appDataSource,
componentNodes,
telemetry,
data,
isStrictSave,
orgId,
workspaceId
)
// Step 3: Perform the actual vector store upsert
// Note: Configuration already saved above, worker thread just retrieves and uses it
const indexResult = await _insertIntoVectorStoreWorkerThread(appDataSource, componentNodes, telemetry, data, orgId, workspaceId)
return indexResult
} catch (error) {
throw new InternalFlowiseError(
@ -1308,12 +1326,18 @@ const _insertIntoVectorStoreWorkerThread = async (
componentNodes: IComponentNodes,
telemetry: Telemetry,
data: ICommonObject,
isStrictSave = true,
orgId: string,
workspaceId: string
) => {
try {
const entity = await saveVectorStoreConfig(appDataSource, data, isStrictSave, workspaceId)
// Configuration already saved by insertIntoVectorStore, just retrieve the entity
const entity = await appDataSource.getRepository(DocumentStore).findOneBy({
id: data.storeId,
workspaceId: workspaceId
})
if (!entity) {
throw new InternalFlowiseError(StatusCodes.NOT_FOUND, `Document store ${data.storeId} not found`)
}
let upsertHistory: Record<string, any> = {}
const chatflowid = data.storeId // fake chatflowid because this is not tied to any chatflow
@ -1350,7 +1374,10 @@ const _insertIntoVectorStoreWorkerThread = async (
const docs: Document[] = chunks.map((chunk: DocumentStoreFileChunk) => {
return new Document({
pageContent: chunk.pageContent,
metadata: JSON.parse(chunk.metadata)
metadata: {
...JSON.parse(chunk.metadata),
docId: chunk.docId
}
})
})
vStoreNodeData.inputs.document = docs
@ -1911,6 +1938,8 @@ const upsertDocStore = async (
recordManagerConfig
}
// Use isStrictSave: false to preserve existing configurations during upsert
// This allows the operation to reuse existing embedding/vector store/record manager configs
const res = await insertIntoVectorStore({
appDataSource,
componentNodes,

View File

@ -2122,7 +2122,62 @@ export const executeAgentFlow = async ({
// check if last agentFlowExecutedData.data.output contains the key "content"
const lastNodeOutput = agentFlowExecutedData[agentFlowExecutedData.length - 1].data?.output as ICommonObject | undefined
const content = (lastNodeOutput?.content as string) ?? ' '
let content = (lastNodeOutput?.content as string) ?? ' '
/* Check for post-processing settings */
let chatflowConfig: ICommonObject = {}
try {
if (chatflow.chatbotConfig) {
chatflowConfig = typeof chatflow.chatbotConfig === 'string' ? JSON.parse(chatflow.chatbotConfig) : chatflow.chatbotConfig
}
} catch (e) {
logger.error('[server]: Error parsing chatflow config:', e)
}
if (chatflowConfig?.postProcessing?.enabled === true && content) {
try {
const postProcessingFunction = JSON.parse(chatflowConfig?.postProcessing?.customFunction)
const nodeInstanceFilePath = componentNodes['customFunctionAgentflow'].filePath as string
const nodeModule = await import(nodeInstanceFilePath)
// Wrap the stored post-processing function as the custom function node's input
const nodeData = {
inputs: { customFunctionJavascriptFunction: postProcessingFunction }
}
const runtimeChatHistory = agentflowRuntime.chatHistory || []
const chatHistory = [...pastChatHistory, ...runtimeChatHistory]
const options: ICommonObject = {
chatflowid: chatflow.id,
sessionId,
chatId,
input: question || form,
postProcessing: {
rawOutput: content,
chatHistory: cloneDeep(chatHistory),
sourceDocuments: lastNodeOutput?.sourceDocuments ? cloneDeep(lastNodeOutput.sourceDocuments) : undefined,
usedTools: lastNodeOutput?.usedTools ? cloneDeep(lastNodeOutput.usedTools) : undefined,
artifacts: lastNodeOutput?.artifacts ? cloneDeep(lastNodeOutput.artifacts) : undefined,
fileAnnotations: lastNodeOutput?.fileAnnotations ? cloneDeep(lastNodeOutput.fileAnnotations) : undefined
},
appDataSource,
databaseEntities,
workspaceId,
orgId,
logger
}
const customFuncNodeInstance = new nodeModule.nodeClass()
const customFunctionResponse = await customFuncNodeInstance.run(nodeData, question || form, options)
const moderatedResponse = customFunctionResponse.output.content
if (typeof moderatedResponse === 'string') {
content = moderatedResponse
} else if (typeof moderatedResponse === 'object') {
content = '```json\n' + JSON.stringify(moderatedResponse, null, 2) + '\n```'
} else {
content = moderatedResponse
}
} catch (e) {
logger.error('[server]: Post Processing Error:', e)
}
}
// remove credentialId from agentFlowExecutedData
agentFlowExecutedData = agentFlowExecutedData.map((data) => _removeCredentialId(data))
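For reference, the stored customFunction is a JSON-encoded string containing a snippet like the sketch below, executed with $flow bound to the values passed via options.postProcessing (plus chatflowId, sessionId, chatId, and input). Illustrative only:

// Example post-processing function body (values come from options.postProcessing above)
const sources = $flow.sourceDocuments ?? []
return `${$flow.rawOutput}\n\n(based on ${sources.length} source document(s) and ${$flow.chatHistory.length} prior messages)`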

View File

@ -2,7 +2,7 @@ import { Request } from 'express'
import * as path from 'path'
import { DataSource } from 'typeorm'
import { v4 as uuidv4 } from 'uuid'
import { omit } from 'lodash'
import { omit, cloneDeep } from 'lodash'
import {
IFileUpload,
convertSpeechToText,
@ -817,7 +817,14 @@ export const executeFlow = async ({
sessionId,
chatId,
input: question,
rawOutput: resultText,
postProcessing: {
rawOutput: resultText,
chatHistory: cloneDeep(chatHistory),
sourceDocuments: result?.sourceDocuments ? cloneDeep(result.sourceDocuments) : undefined,
usedTools: result?.usedTools ? cloneDeep(result.usedTools) : undefined,
artifacts: result?.artifacts ? cloneDeep(result.artifacts) : undefined,
fileAnnotations: result?.fileAnnotations ? cloneDeep(result.fileAnnotations) : undefined
},
appDataSource,
databaseEntities,
workspaceId,

View File

@ -70,7 +70,7 @@ export const checkUsageLimit = async (
if (limit === -1) return
if (currentUsage > limit) {
throw new InternalFlowiseError(StatusCodes.TOO_MANY_REQUESTS, `Limit exceeded: ${type}`)
throw new InternalFlowiseError(StatusCodes.PAYMENT_REQUIRED, `Limit exceeded: ${type}`)
}
}
@ -135,7 +135,7 @@ export const checkPredictions = async (orgId: string, subscriptionId: string, us
if (predictionsLimit === -1) return
if (currentPredictions >= predictionsLimit) {
throw new InternalFlowiseError(StatusCodes.TOO_MANY_REQUESTS, 'Predictions limit exceeded')
throw new InternalFlowiseError(StatusCodes.PAYMENT_REQUIRED, 'Predictions limit exceeded')
}
return {
@ -161,7 +161,7 @@ export const checkStorage = async (orgId: string, subscriptionId: string, usageC
if (storageLimit === -1) return
if (currentStorageUsage >= storageLimit) {
throw new InternalFlowiseError(StatusCodes.TOO_MANY_REQUESTS, 'Storage limit exceeded')
throw new InternalFlowiseError(StatusCodes.PAYMENT_REQUIRED, 'Storage limit exceeded')
}
return {

View File

@ -22,7 +22,10 @@ const refreshLoader = (storeId) => client.post(`/document-store/refresh/${storeI
const insertIntoVectorStore = (body) => client.post(`/document-store/vectorstore/insert`, body)
const saveVectorStoreConfig = (body) => client.post(`/document-store/vectorstore/save`, body)
const updateVectorStoreConfig = (body) => client.post(`/document-store/vectorstore/update`, body)
const deleteVectorStoreDataFromStore = (storeId) => client.delete(`/document-store/vectorstore/${storeId}`)
const deleteVectorStoreDataFromStore = (storeId, docId) => {
const url = docId ? `/document-store/vectorstore/${storeId}?docId=${encodeURIComponent(docId)}` : `/document-store/vectorstore/${storeId}`
return client.delete(url)
}
const queryVectorStore = (body) => client.post(`/document-store/vectorstore/query`, body)
const getVectorStoreProviders = () => client.get('/document-store/components/vectorstore')
const getEmbeddingProviders = () => client.get('/document-store/components/embeddings')
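Callers can now scope the delete to a single document. A usage sketch with placeholder ids (documentStoreApi stands in for this module's default export):

// Clears all upserted data for the store
await documentStoreApi.deleteVectorStoreDataFromStore('store-id')
// Deletes only the vectors recorded for one document
await documentStoreApi.deleteVectorStoreDataFromStore('store-id', 'doc-123')
// -> DELETE /document-store/vectorstore/store-id?docId=doc-123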

View File

@ -10,6 +10,7 @@ const VerifyEmailPage = Loadable(lazy(() => import('@/views/auth/verify-email'))
const ForgotPasswordPage = Loadable(lazy(() => import('@/views/auth/forgotPassword')))
const ResetPasswordPage = Loadable(lazy(() => import('@/views/auth/resetPassword')))
const UnauthorizedPage = Loadable(lazy(() => import('@/views/auth/unauthorized')))
const RateLimitedPage = Loadable(lazy(() => import('@/views/auth/rateLimited')))
const OrganizationSetupPage = Loadable(lazy(() => import('@/views/organization/index')))
const LicenseExpiredPage = Loadable(lazy(() => import('@/views/auth/expired')))
@ -45,6 +46,10 @@ const AuthRoutes = {
path: '/unauthorized',
element: <UnauthorizedPage />
},
{
path: '/rate-limited',
element: <RateLimitedPage />
},
{
path: '/organization-setup',
element: <OrganizationSetupPage />

View File

@ -10,11 +10,29 @@ const ErrorContext = createContext()
export const ErrorProvider = ({ children }) => {
const [error, setError] = useState(null)
const [authRateLimitError, setAuthRateLimitError] = useState(null)
const navigate = useNavigate()
const handleError = async (err) => {
console.error(err)
if (err?.response?.status === 403) {
if (err?.response?.status === 429 && err?.response?.data?.type === 'authentication_rate_limit') {
setAuthRateLimitError("You're making a lot of requests. Please wait and try again later.")
} else if (err?.response?.status === 429 && err?.response?.data?.type !== 'authentication_rate_limit') {
const retryAfterHeader = err?.response?.headers?.['retry-after']
let retryAfter = 60 // Default in seconds
if (retryAfterHeader) {
const parsedSeconds = parseInt(retryAfterHeader, 10)
if (Number.isNaN(parsedSeconds)) {
const retryDate = new Date(retryAfterHeader)
if (!Number.isNaN(retryDate.getTime())) {
retryAfter = Math.max(0, Math.ceil((retryDate.getTime() - Date.now()) / 1000))
}
} else {
retryAfter = parsedSeconds
}
}
navigate('/rate-limited', { state: { retryAfter } })
} else if (err?.response?.status === 403) {
navigate('/unauthorized')
} else if (err?.response?.status === 401) {
if (ErrorMessage.INVALID_MISSING_TOKEN === err?.response?.data?.message) {
@ -44,7 +62,9 @@ export const ErrorProvider = ({ children }) => {
value={{
error,
setError,
handleError
handleError,
authRateLimitError,
setAuthRateLimitError
}}
>
{children}

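The handler accepts both forms of the Retry-After header that HTTP defines, delta-seconds and an HTTP-date. For example (illustrative values):

// 'retry-after: 120'                           -> retryAfter = 120
// 'retry-after: Wed, 01 Jan 2025 00:02:00 GMT' -> seconds from now until that date, floored at 0
// header missing or unparsable                 -> retryAfter = 60 (default)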
View File

@ -53,8 +53,7 @@ const CHATFLOW_CONFIGURATION_TABS = [
},
{
label: 'Post Processing',
id: 'postProcessing',
hideInAgentFlow: true
id: 'postProcessing'
}
]

View File

@ -4,8 +4,25 @@ import PropTypes from 'prop-types'
import { useSelector } from 'react-redux'
// material-ui
import { IconButton, Button, Box, Typography } from '@mui/material'
import { IconArrowsMaximize, IconBulb, IconX } from '@tabler/icons-react'
import {
IconButton,
Button,
Box,
Typography,
TableContainer,
Table,
TableHead,
TableBody,
TableRow,
TableCell,
Paper,
Accordion,
AccordionSummary,
AccordionDetails,
Card
} from '@mui/material'
import { IconArrowsMaximize, IconX } from '@tabler/icons-react'
import ExpandMoreIcon from '@mui/icons-material/ExpandMore'
import { useTheme } from '@mui/material/styles'
// Project import
@ -21,7 +38,11 @@ import useNotifier from '@/utils/useNotifier'
// API
import chatflowsApi from '@/api/chatflows'
const sampleFunction = `return $flow.rawOutput + " This is a post processed response!";`
const sampleFunction = `// Access chat history as a string
const chatHistory = JSON.stringify($flow.chatHistory, null, 2);
// Return a modified response
return $flow.rawOutput + " This is a post processed response!";`
const PostProcessing = ({ dialogProps }) => {
const dispatch = useDispatch()
@ -175,31 +196,105 @@ const PostProcessing = ({ dialogProps }) => {
/>
</div>
</Box>
<div
style={{
display: 'flex',
flexDirection: 'column',
borderRadius: 10,
background: '#d8f3dc',
padding: 10,
marginTop: 10
}}
>
<div
style={{
display: 'flex',
flexDirection: 'row',
alignItems: 'center',
paddingTop: 10
<Card sx={{ borderColor: theme.palette.primary[200] + 75, mt: 2, mb: 2 }} variant='outlined'>
<Accordion
disableGutters
sx={{
'&:before': {
display: 'none'
}
}}
>
<IconBulb size={30} color='#2d6a4f' />
<span style={{ color: '#2d6a4f', marginLeft: 10, fontWeight: 500 }}>
The following variables are available to use in the custom function:{' '}
<pre>$flow.rawOutput, $flow.input, $flow.chatflowId, $flow.sessionId, $flow.chatId</pre>
</span>
</div>
</div>
<AccordionSummary expandIcon={<ExpandMoreIcon />}>
<Typography>Available Variables</Typography>
</AccordionSummary>
<AccordionDetails sx={{ p: 0 }}>
<TableContainer component={Paper}>
<Table aria-label='available variables table'>
<TableHead>
<TableRow>
<TableCell sx={{ width: '30%' }}>Variable</TableCell>
<TableCell sx={{ width: '15%' }}>Type</TableCell>
<TableCell sx={{ width: '55%' }}>Description</TableCell>
</TableRow>
</TableHead>
<TableBody>
<TableRow>
<TableCell>
<code>$flow.rawOutput</code>
</TableCell>
<TableCell>string</TableCell>
<TableCell>The raw output produced by the flow, before post-processing</TableCell>
</TableRow>
<TableRow>
<TableCell>
<code>$flow.input</code>
</TableCell>
<TableCell>string</TableCell>
<TableCell>The user input message</TableCell>
</TableRow>
<TableRow>
<TableCell>
<code>$flow.chatHistory</code>
</TableCell>
<TableCell>array</TableCell>
<TableCell>Array of previous messages in the conversation</TableCell>
</TableRow>
<TableRow>
<TableCell>
<code>$flow.chatflowId</code>
</TableCell>
<TableCell>string</TableCell>
<TableCell>Unique identifier for the chatflow</TableCell>
</TableRow>
<TableRow>
<TableCell>
<code>$flow.sessionId</code>
</TableCell>
<TableCell>string</TableCell>
<TableCell>Current session identifier</TableCell>
</TableRow>
<TableRow>
<TableCell>
<code>$flow.chatId</code>
</TableCell>
<TableCell>string</TableCell>
<TableCell>Current chat identifier</TableCell>
</TableRow>
<TableRow>
<TableCell>
<code>$flow.sourceDocuments</code>
</TableCell>
<TableCell>array</TableCell>
<TableCell>Source documents used in retrieval (if applicable)</TableCell>
</TableRow>
<TableRow>
<TableCell>
<code>$flow.usedTools</code>
</TableCell>
<TableCell>array</TableCell>
<TableCell>List of tools used during execution</TableCell>
</TableRow>
<TableRow>
<TableCell>
<code>$flow.artifacts</code>
</TableCell>
<TableCell>array</TableCell>
<TableCell>List of artifacts generated during execution</TableCell>
</TableRow>
<TableRow>
<TableCell sx={{ borderBottom: 'none' }}>
<code>$flow.fileAnnotations</code>
</TableCell>
<TableCell sx={{ borderBottom: 'none' }}>array</TableCell>
<TableCell sx={{ borderBottom: 'none' }}>File annotations associated with the response</TableCell>
</TableRow>
</TableBody>
</Table>
</TableContainer>
</AccordionDetails>
</Accordion>
</Card>
<StyledButton
style={{ marginBottom: 10, marginTop: 10 }}
variant='contained'

View File

@ -150,6 +150,8 @@ const AgentFlowNode = ({ data }) => {
return <IconWorldWww size={14} color={'white'} />
case 'googleSearch':
return <IconBrandGoogle size={14} color={'white'} />
case 'codeExecution':
return <IconCode size={14} color={'white'} />
default:
return null
}

View File

@ -16,6 +16,7 @@ import accountApi from '@/api/account.api'
// Hooks
import useApi from '@/hooks/useApi'
import { useConfig } from '@/store/context/ConfigContext'
import { useError } from '@/store/context/ErrorContext'
// utils
import useNotifier from '@/utils/useNotifier'
@ -41,10 +42,13 @@ const ForgotPasswordPage = () => {
const [isLoading, setLoading] = useState(false)
const [responseMsg, setResponseMsg] = useState(undefined)
const { authRateLimitError, setAuthRateLimitError } = useError()
const forgotPasswordApi = useApi(accountApi.forgotPassword)
const sendResetRequest = async (event) => {
event.preventDefault()
setAuthRateLimitError(null)
const body = {
user: {
email: usernameVal
@ -54,6 +58,11 @@ const ForgotPasswordPage = () => {
await forgotPasswordApi.request(body)
}
useEffect(() => {
setAuthRateLimitError(null)
// eslint-disable-next-line react-hooks/exhaustive-deps
}, [setAuthRateLimitError])
useEffect(() => {
if (forgotPasswordApi.error) {
const errMessage =
@ -89,6 +98,11 @@ const ForgotPasswordPage = () => {
{responseMsg.msg}
</Alert>
)}
{authRateLimitError && (
<Alert icon={<IconExclamationCircle />} variant='filled' severity='error'>
{authRateLimitError}
</Alert>
)}
{responseMsg && responseMsg?.type !== 'error' && (
<Alert icon={<IconCircleCheck />} variant='filled' severity='success'>
{responseMsg.msg}

View File

@ -0,0 +1,51 @@
import { Box, Button, Stack, Typography } from '@mui/material'
import { Link, useLocation } from 'react-router-dom'
import unauthorizedSVG from '@/assets/images/unauthorized.svg'
import MainCard from '@/ui-component/cards/MainCard'
// ==============================|| RateLimitedPage ||============================== //
const RateLimitedPage = () => {
const location = useLocation()
const retryAfter = location.state?.retryAfter || 60
return (
<MainCard>
<Box
sx={{
display: 'flex',
justifyContent: 'center',
alignItems: 'center',
height: 'calc(100vh - 210px)'
}}
>
<Stack
sx={{
alignItems: 'center',
justifyContent: 'center',
maxWidth: '500px'
}}
flexDirection='column'
>
<Box sx={{ p: 2, height: 'auto' }}>
<img style={{ objectFit: 'cover', height: '20vh', width: 'auto' }} src={unauthorizedSVG} alt='rateLimitedSVG' />
</Box>
<Typography sx={{ mb: 2 }} variant='h4' component='div' fontWeight='bold'>
429 Too Many Requests
</Typography>
<Typography variant='body1' component='div' sx={{ mb: 2, textAlign: 'center' }}>
{`You have made too many requests in a short period of time. Please wait ${retryAfter}s before trying again.`}
</Typography>
<Link to='/'>
<Button variant='contained' color='primary'>
Back to Home
</Button>
</Link>
</Stack>
</Box>
</MainCard>
)
}
export default RateLimitedPage

View File

@ -18,6 +18,7 @@ import ssoApi from '@/api/sso'
// Hooks
import useApi from '@/hooks/useApi'
import { useConfig } from '@/store/context/ConfigContext'
import { useError } from '@/store/context/ErrorContext'
// utils
import useNotifier from '@/utils/useNotifier'
@ -111,7 +112,9 @@ const RegisterPage = () => {
const [loading, setLoading] = useState(false)
const [authError, setAuthError] = useState('')
const [successMsg, setSuccessMsg] = useState(undefined)
const [successMsg, setSuccessMsg] = useState('')
const { authRateLimitError, setAuthRateLimitError } = useError()
const registerApi = useApi(accountApi.registerAccount)
const ssoLoginApi = useApi(ssoApi.ssoLogin)
@ -120,6 +123,7 @@ const RegisterPage = () => {
const register = async (event) => {
event.preventDefault()
setAuthRateLimitError(null)
if (isEnterpriseLicensed) {
const result = RegisterEnterpriseUserSchema.safeParse({
username,
@ -192,6 +196,7 @@ const RegisterPage = () => {
}, [registerApi.error])
useEffect(() => {
setAuthRateLimitError(null)
if (!isOpenSource) {
getDefaultProvidersApi.request()
}
@ -274,6 +279,11 @@ const RegisterPage = () => {
)}
</Alert>
)}
{authRateLimitError && (
<Alert icon={<IconExclamationCircle />} variant='filled' severity='error'>
{authRateLimitError}
</Alert>
)}
{successMsg && (
<Alert icon={<IconCircleCheck />} variant='filled' severity='success'>
{successMsg}

View File

@ -1,4 +1,4 @@
import { useState } from 'react'
import { useEffect, useState } from 'react'
import { useDispatch } from 'react-redux'
import { Link, useNavigate, useSearchParams } from 'react-router-dom'
@ -19,6 +19,9 @@ import accountApi from '@/api/account.api'
import useNotifier from '@/utils/useNotifier'
import { validatePassword } from '@/utils/validation'
// Hooks
import { useError } from '@/store/context/ErrorContext'
// Icons
import { IconExclamationCircle, IconX } from '@tabler/icons-react'
@ -70,6 +73,8 @@ const ResetPasswordPage = () => {
const [loading, setLoading] = useState(false)
const [authErrors, setAuthErrors] = useState([])
const { authRateLimitError, setAuthRateLimitError } = useError()
const goLogin = () => {
navigate('/signin', { replace: true })
}
@ -78,6 +83,7 @@ const ResetPasswordPage = () => {
event.preventDefault()
const validationErrors = []
setAuthErrors([])
setAuthRateLimitError(null)
if (!tokenVal) {
validationErrors.push('Token cannot be left blank!')
}
@ -142,6 +148,11 @@ const ResetPasswordPage = () => {
}
}
useEffect(() => {
setAuthRateLimitError(null)
// eslint-disable-next-line react-hooks/exhaustive-deps
}, [])
return (
<>
<MainCard>
@ -155,6 +166,11 @@ const ResetPasswordPage = () => {
</ul>
</Alert>
)}
{authRateLimitError && (
<Alert icon={<IconExclamationCircle />} variant='filled' severity='error'>
{authRateLimitError}
</Alert>
)}
<Stack sx={{ gap: 1 }}>
<Typography variant='h1'>Reset Password</Typography>
<Typography variant='body2' sx={{ color: theme.palette.grey[600] }}>

View File

@ -14,6 +14,7 @@ import { Input } from '@/ui-component/input/Input'
// Hooks
import useApi from '@/hooks/useApi'
import { useConfig } from '@/store/context/ConfigContext'
import { useError } from '@/store/context/ErrorContext'
// API
import authApi from '@/api/auth'
@ -62,6 +63,8 @@ const SignInPage = () => {
const [showResendButton, setShowResendButton] = useState(false)
const [successMessage, setSuccessMessage] = useState('')
const { authRateLimitError, setAuthRateLimitError } = useError()
const loginApi = useApi(authApi.login)
const ssoLoginApi = useApi(ssoApi.ssoLogin)
const getDefaultProvidersApi = useApi(loginMethodApi.getDefaultLoginMethods)
@ -71,6 +74,7 @@ const SignInPage = () => {
const doLogin = (event) => {
event.preventDefault()
setAuthRateLimitError(null)
setLoading(true)
const body = {
email: usernameVal,
@ -92,11 +96,12 @@ const SignInPage = () => {
useEffect(() => {
store.dispatch(logoutSuccess())
setAuthRateLimitError(null)
if (!isOpenSource) {
getDefaultProvidersApi.request()
}
// eslint-disable-next-line react-hooks/exhaustive-deps
}, [])
}, [setAuthRateLimitError, isOpenSource])
useEffect(() => {
// Parse the "user" query parameter from the URL
@ -179,6 +184,11 @@ const SignInPage = () => {
{successMessage}
</Alert>
)}
{authRateLimitError && (
<Alert icon={<IconExclamationCircle />} variant='filled' severity='error'>
{authRateLimitError}
</Alert>
)}
{authError && (
<Alert icon={<IconExclamationCircle />} variant='filled' severity='error'>
{authError}

View File

@ -18,11 +18,15 @@ import {
TableContainer,
TableRow,
TableCell,
Checkbox,
FormControlLabel,
DialogActions
DialogActions,
Card,
Stack,
Link
} from '@mui/material'
import { useTheme } from '@mui/material/styles'
import ExpandMoreIcon from '@mui/icons-material/ExpandMore'
import SettingsIcon from '@mui/icons-material/Settings'
import { IconAlertTriangle } from '@tabler/icons-react'
import { TableViewOnly } from '@/ui-component/table/Table'
import { v4 as uuidv4 } from 'uuid'
@ -36,12 +40,13 @@ import { initNode } from '@/utils/genericHelper'
const DeleteDocStoreDialog = ({ show, dialogProps, onCancel, onDelete }) => {
const portalElement = document.getElementById('portal')
const theme = useTheme()
const [nodeConfigExpanded, setNodeConfigExpanded] = useState({})
const [removeFromVS, setRemoveFromVS] = useState(false)
const [vsFlowData, setVSFlowData] = useState([])
const [rmFlowData, setRMFlowData] = useState([])
const getSpecificNodeApi = useApi(nodesApi.getSpecificNode)
const getVectorStoreNodeApi = useApi(nodesApi.getSpecificNode)
const getRecordManagerNodeApi = useApi(nodesApi.getSpecificNode)
const handleAccordionChange = (nodeName) => (event, isExpanded) => {
const accordianNodes = { ...nodeConfigExpanded }
@ -52,42 +57,37 @@ const DeleteDocStoreDialog = ({ show, dialogProps, onCancel, onDelete }) => {
useEffect(() => {
if (dialogProps.recordManagerConfig) {
const nodeName = dialogProps.recordManagerConfig.name
if (nodeName) getSpecificNodeApi.request(nodeName)
if (nodeName) getRecordManagerNodeApi.request(nodeName)
}
if (dialogProps.vectorStoreConfig) {
const nodeName = dialogProps.vectorStoreConfig.name
if (nodeName) getSpecificNodeApi.request(nodeName)
}
if (dialogProps.vectorStoreConfig) {
const nodeName = dialogProps.vectorStoreConfig.name
if (nodeName) getVectorStoreNodeApi.request(nodeName)
}
return () => {
setNodeConfigExpanded({})
setRemoveFromVS(false)
setVSFlowData([])
setRMFlowData([])
}
// eslint-disable-next-line react-hooks/exhaustive-deps
}, [dialogProps])
// Process Vector Store node data
useEffect(() => {
if (getSpecificNodeApi.data) {
const nodeData = cloneDeep(initNode(getSpecificNodeApi.data, uuidv4()))
let config = 'vectorStoreConfig'
if (nodeData.category === 'Record Manager') config = 'recordManagerConfig'
if (getVectorStoreNodeApi.data && dialogProps.vectorStoreConfig) {
const nodeData = cloneDeep(initNode(getVectorStoreNodeApi.data, uuidv4()))
const paramValues = []
for (const inputName in dialogProps[config].config) {
for (const inputName in dialogProps.vectorStoreConfig.config) {
const inputParam = nodeData.inputParams.find((inp) => inp.name === inputName)
if (!inputParam) continue
if (inputParam.type === 'credential') continue
let paramValue = {}
const inputValue = dialogProps[config].config[inputName]
const inputValue = dialogProps.vectorStoreConfig.config[inputName]
if (!inputValue) continue
@@ -95,40 +95,71 @@ const DeleteDocStoreDialog = ({ show, dialogProps, onCancel, onDelete }) => {
continue
}
paramValue = {
paramValues.push({
label: inputParam?.label,
name: inputParam?.name,
type: inputParam?.type,
value: inputValue
}
paramValues.push(paramValue)
})
}
if (config === 'vectorStoreConfig') {
setVSFlowData([
{
label: nodeData.label,
name: nodeData.name,
category: nodeData.category,
id: nodeData.id,
paramValues
}
])
} else if (config === 'recordManagerConfig') {
setRMFlowData([
{
label: nodeData.label,
name: nodeData.name,
category: nodeData.category,
id: nodeData.id,
paramValues
}
])
}
setVSFlowData([
{
label: nodeData.label,
name: nodeData.name,
category: nodeData.category,
id: nodeData.id,
paramValues
}
])
}
// eslint-disable-next-line react-hooks/exhaustive-deps
}, [getSpecificNodeApi.data])
}, [getVectorStoreNodeApi.data])
// Process Record Manager node data
useEffect(() => {
if (getRecordManagerNodeApi.data && dialogProps.recordManagerConfig) {
const nodeData = cloneDeep(initNode(getRecordManagerNodeApi.data, uuidv4()))
const paramValues = []
for (const inputName in dialogProps.recordManagerConfig.config) {
const inputParam = nodeData.inputParams.find((inp) => inp.name === inputName)
if (!inputParam) continue
if (inputParam.type === 'credential') continue
const inputValue = dialogProps.recordManagerConfig.config[inputName]
if (!inputValue) continue
if (typeof inputValue === 'string' && inputValue.startsWith('{{') && inputValue.endsWith('}}')) {
continue
}
paramValues.push({
label: inputParam?.label,
name: inputParam?.name,
type: inputParam?.type,
value: inputValue
})
}
setRMFlowData([
{
label: nodeData.label,
name: nodeData.name,
category: nodeData.category,
id: nodeData.id,
paramValues
}
])
}
// eslint-disable-next-line react-hooks/exhaustive-deps
}, [getRecordManagerNodeApi.data])
const component = show ? (
<Dialog
@@ -142,91 +173,130 @@ const DeleteDocStoreDialog = ({ show, dialogProps, onCancel, onDelete }) => {
<DialogTitle sx={{ fontSize: '1rem', p: 3, pb: 0 }} id='alert-dialog-title'>
{dialogProps.title}
</DialogTitle>
<DialogContent sx={{ display: 'flex', flexDirection: 'column', gap: 2, maxHeight: '75vh', position: 'relative', px: 3, pb: 3 }}>
<DialogContent
sx={{
display: 'flex',
flexDirection: 'column',
gap: 2,
maxHeight: '75vh',
position: 'relative',
px: 3,
pb: 3,
overflow: 'auto'
}}
>
<span style={{ marginTop: '20px' }}>{dialogProps.description}</span>
{dialogProps.type === 'STORE' && dialogProps.recordManagerConfig && (
<FormControlLabel
control={<Checkbox checked={removeFromVS} onChange={(event) => setRemoveFromVS(event.target.checked)} />}
label='Remove data from vector store and record manager'
/>
{dialogProps.vectorStoreConfig && !dialogProps.recordManagerConfig && (
<div
style={{
display: 'flex',
flexDirection: 'row',
alignItems: 'center',
borderRadius: 10,
background: 'rgb(254,252,191)',
padding: 10
}}
>
<IconAlertTriangle size={70} color='orange' />
<span style={{ color: 'rgb(116,66,16)', marginLeft: 10 }}>
<strong>Note:</strong> Without a Record Manager configured, only the document chunks will be removed from the
document store. The actual vector embeddings in your vector store database will remain unchanged. To enable
automatic cleanup of vector store data, please configure a Record Manager.{' '}
<Link
href='https://docs.flowiseai.com/integrations/langchain/record-managers'
target='_blank'
rel='noopener noreferrer'
sx={{ fontWeight: 500, color: 'rgb(116,66,16)', textDecoration: 'underline' }}
>
Learn more
</Link>
</span>
</div>
)}
{removeFromVS && (
<div>
<TableContainer component={Paper}>
<Table sx={{ minWidth: 650 }} aria-label='simple table'>
<TableBody>
<TableRow sx={{ '& td': { border: 0 } }}>
<TableCell sx={{ pb: 0, pt: 0 }} colSpan={6}>
<Box>
{([...vsFlowData, ...rmFlowData] || []).map((node, index) => {
return (
<Accordion
expanded={nodeConfigExpanded[node.name] || true}
onChange={handleAccordionChange(node.name)}
key={index}
disableGutters
>
<AccordionSummary
expandIcon={<ExpandMoreIcon />}
aria-controls={`nodes-accordian-${node.name}`}
id={`nodes-accordian-header-${node.name}`}
{vsFlowData && vsFlowData.length > 0 && rmFlowData && rmFlowData.length > 0 && (
<Card sx={{ borderColor: theme.palette.primary[200] + 75, p: 2 }} variant='outlined'>
<Stack sx={{ mt: 1, mb: 2, ml: 1, alignItems: 'center' }} direction='row' spacing={2}>
<SettingsIcon />
<Typography variant='h4'>Configuration</Typography>
</Stack>
<Stack direction='column'>
<TableContainer component={Paper} sx={{ maxHeight: '400px', overflow: 'auto' }}>
<Table sx={{ minWidth: 650 }} aria-label='simple table'>
<TableBody>
<TableRow sx={{ '& td': { border: 0 } }}>
<TableCell sx={{ pb: 0, pt: 0 }} colSpan={6}>
<Box>
{([...vsFlowData, ...rmFlowData] || []).map((node, index) => {
return (
<Accordion
expanded={nodeConfigExpanded[node.name] || false}
onChange={handleAccordionChange(node.name)}
key={index}
disableGutters
>
<div
style={{ display: 'flex', flexDirection: 'row', alignItems: 'center' }}
<AccordionSummary
expandIcon={<ExpandMoreIcon />}
aria-controls={`nodes-accordian-${node.name}`}
id={`nodes-accordian-header-${node.name}`}
>
<div
style={{
width: 40,
height: 40,
marginRight: 10,
borderRadius: '50%',
backgroundColor: 'white'
display: 'flex',
flexDirection: 'row',
alignItems: 'center'
}}
>
<img
<div
style={{
width: '100%',
height: '100%',
padding: 7,
width: 40,
height: 40,
marginRight: 10,
borderRadius: '50%',
objectFit: 'contain'
backgroundColor: 'white'
}}
alt={node.name}
src={`${baseURL}/api/v1/node-icon/${node.name}`}
/>
>
<img
style={{
width: '100%',
height: '100%',
padding: 7,
borderRadius: '50%',
objectFit: 'contain'
}}
alt={node.name}
src={`${baseURL}/api/v1/node-icon/${node.name}`}
/>
</div>
<Typography variant='h5'>{node.label}</Typography>
</div>
<Typography variant='h5'>{node.label}</Typography>
</div>
</AccordionSummary>
<AccordionDetails>
{node.paramValues[0] && (
<TableViewOnly
sx={{ minWidth: 150 }}
rows={node.paramValues}
columns={Object.keys(node.paramValues[0])}
/>
)}
</AccordionDetails>
</Accordion>
)
})}
</Box>
</TableCell>
</TableRow>
</TableBody>
</Table>
</TableContainer>
<span style={{ marginTop: '30px', fontStyle: 'italic', color: '#b35702' }}>
* Only data that were upserted with Record Manager will be deleted from vector store
</span>
</div>
</AccordionSummary>
<AccordionDetails sx={{ p: 0 }}>
{node.paramValues[0] && (
<TableViewOnly
sx={{ minWidth: 150 }}
rows={node.paramValues}
columns={Object.keys(node.paramValues[0])}
/>
)}
</AccordionDetails>
</Accordion>
)
})}
</Box>
</TableCell>
</TableRow>
</TableBody>
</Table>
</TableContainer>
</Stack>
</Card>
)}
</DialogContent>
<DialogActions sx={{ pr: 3, pb: 3 }}>
<Button onClick={onCancel} color='primary'>
Cancel
</Button>
<Button variant='contained' onClick={() => onDelete(dialogProps.type, dialogProps.file, removeFromVS)} color='error'>
<Button variant='contained' onClick={() => onDelete(dialogProps.type, dialogProps.file)} color='error'>
Delete
</Button>
</DialogActions>
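
The heart of this dialog change is replacing the single shared getSpecificNodeApi with two dedicated useApi instances, so the vector store and record manager node definitions no longer race each other over one data slot. A condensed sketch of the resulting pattern, with names taken from the diff (useApi is the project's request/state wrapper):

// Two independent hook instances: each request resolves into its own
// `data`, so firing both no longer overwrites one response with the other.
const getVectorStoreNodeApi = useApi(nodesApi.getSpecificNode)
const getRecordManagerNodeApi = useApi(nodesApi.getSpecificNode)

useEffect(() => {
    if (dialogProps.recordManagerConfig?.name) {
        getRecordManagerNodeApi.request(dialogProps.recordManagerConfig.name)
    }
    if (dialogProps.vectorStoreConfig?.name) {
        getVectorStoreNodeApi.request(dialogProps.vectorStoreConfig.name)
    }
    // eslint-disable-next-line react-hooks/exhaustive-deps
}, [dialogProps])

// Each response is then turned into table rows in its own effect:
//   getVectorStoreNodeApi.data   -> setVSFlowData([...])
//   getRecordManagerNodeApi.data -> setRMFlowData([...])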

View File

@@ -186,19 +186,19 @@ const DocumentStoreDetails = () => {
setShowDocumentLoaderListDialog(true)
}
const deleteVectorStoreDataFromStore = async (storeId) => {
const deleteVectorStoreDataFromStore = async (storeId, docId) => {
try {
await documentsApi.deleteVectorStoreDataFromStore(storeId)
await documentsApi.deleteVectorStoreDataFromStore(storeId, docId)
} catch (error) {
console.error(error)
}
}
const onDocStoreDelete = async (type, file, removeFromVectorStore) => {
const onDocStoreDelete = async (type, file) => {
setBackdropLoading(true)
setShowDeleteDocStoreDialog(false)
if (type === 'STORE') {
if (removeFromVectorStore) {
if (documentStore.recordManagerConfig) {
await deleteVectorStoreDataFromStore(storeId)
}
try {
@@ -239,6 +239,9 @@ const DocumentStoreDetails = () => {
})
}
} else if (type === 'LOADER') {
if (documentStore.recordManagerConfig) {
await deleteVectorStoreDataFromStore(storeId, file.id)
}
try {
const deleteResp = await documentsApi.deleteLoaderFromStore(storeId, file.id)
setBackdropLoading(false)
@@ -280,9 +283,40 @@ const DocumentStoreDetails = () => {
}
const onLoaderDelete = (file, vectorStoreConfig, recordManagerConfig) => {
// Get the display name in the format "LoaderName (sourceName)"
const loaderName = file.loaderName || 'Unknown'
let sourceName = ''
// Prefer files.name when files array exists and has items
if (file.files && Array.isArray(file.files) && file.files.length > 0) {
sourceName = file.files.map((f) => f.name).join(', ')
} else if (file.source) {
// Fallback to source logic
if (typeof file.source === 'string' && file.source.includes('base64')) {
sourceName = getFileName(file.source)
} else if (typeof file.source === 'string' && file.source.startsWith('[') && file.source.endsWith(']')) {
sourceName = JSON.parse(file.source).join(', ')
} else if (typeof file.source === 'string') {
sourceName = file.source
}
}
const displayName = sourceName ? `${loaderName} (${sourceName})` : loaderName
let description = `Delete "${displayName}"? This will delete all the associated document chunks from the document store.`
if (
recordManagerConfig &&
vectorStoreConfig &&
Object.keys(recordManagerConfig).length > 0 &&
Object.keys(vectorStoreConfig).length > 0
) {
description = `Delete "${displayName}"? This will delete all the associated document chunks from the document store and remove the actual data from the vector store database.`
}
const props = {
title: `Delete`,
description: `Delete Loader ${file.loaderName} ? This will delete all the associated document chunks.`,
description,
vectorStoreConfig,
recordManagerConfig,
type: 'LOADER',
@@ -294,9 +328,20 @@ const DocumentStoreDetails = () => {
}
const onStoreDelete = (vectorStoreConfig, recordManagerConfig) => {
let description = `Delete Store ${getSpecificDocumentStore.data?.name}? This will delete all the associated loaders and document chunks from the document store.`
if (
recordManagerConfig &&
vectorStoreConfig &&
Object.keys(recordManagerConfig).length > 0 &&
Object.keys(vectorStoreConfig).length > 0
) {
description = `Delete Store ${getSpecificDocumentStore.data?.name}? This will delete all the associated loaders and document chunks from the document store, and remove the actual data from the vector store database.`
}
const props = {
title: `Delete`,
description: `Delete Store ${getSpecificDocumentStore.data?.name} ? This will delete all the associated loaders and document chunks.`,
description,
vectorStoreConfig,
recordManagerConfig,
type: 'STORE'
@@ -481,7 +526,10 @@ const DocumentStoreDetails = () => {
>
<MenuItem
disabled={documentStore?.totalChunks <= 0 || documentStore?.status === 'UPSERTING'}
onClick={() => showStoredChunks('all')}
onClick={() => {
handleClose()
showStoredChunks('all')
}}
disableRipple
>
<FileChunksIcon />
@@ -490,7 +538,10 @@ const DocumentStoreDetails = () => {
<Available permission={'documentStores:upsert-config'}>
<MenuItem
disabled={documentStore?.totalChunks <= 0 || documentStore?.status === 'UPSERTING'}
onClick={() => showVectorStore(documentStore.id)}
onClick={() => {
handleClose()
showVectorStore(documentStore.id)
}}
disableRipple
>
<NoteAddIcon />
@@ -499,7 +550,10 @@ const DocumentStoreDetails = () => {
</Available>
<MenuItem
disabled={documentStore?.totalChunks <= 0 || documentStore?.status !== 'UPSERTED'}
onClick={() => showVectorStoreQuery(documentStore.id)}
onClick={() => {
handleClose()
showVectorStoreQuery(documentStore.id)
}}
disableRipple
>
<SearchIcon />
@@ -518,7 +572,10 @@ const DocumentStoreDetails = () => {
</Available>
<Divider sx={{ my: 0.5 }} />
<MenuItem
onClick={() => onStoreDelete(documentStore.vectorStoreConfig, documentStore.recordManagerConfig)}
onClick={() => {
handleClose()
onStoreDelete(documentStore.vectorStoreConfig, documentStore.recordManagerConfig)
}}
disableRipple
>
<FileDeleteIcon />
@@ -756,20 +813,26 @@ function LoaderRow(props) {
setAnchorEl(null)
}
const formatSources = (files, source) => {
const formatSources = (files, source, loaderName) => {
let sourceName = ''
// Prefer files.name when files array exists and has items
if (files && Array.isArray(files) && files.length > 0) {
return files.map((file) => file.name).join(', ')
sourceName = files.map((file) => file.name).join(', ')
} else if (source && typeof source === 'string' && source.includes('base64')) {
// Fallback to original source logic
sourceName = getFileName(source)
} else if (source && typeof source === 'string' && source.startsWith('[') && source.endsWith(']')) {
sourceName = JSON.parse(source).join(', ')
} else if (source) {
sourceName = source
}
// Fallback to original source logic
if (source && typeof source === 'string' && source.includes('base64')) {
return getFileName(source)
// Return format: "LoaderName (sourceName)" or just "LoaderName" if no source
if (!sourceName) {
return loaderName || 'No source'
}
if (source && typeof source === 'string' && source.startsWith('[') && source.endsWith(']')) {
return JSON.parse(source).join(', ')
}
return source || 'No source'
return loaderName ? `${loaderName} (${sourceName})` : sourceName
}
return (
@@ -823,32 +886,62 @@ function LoaderRow(props) {
onClose={handleClose}
>
<Available permission={'documentStores:preview-process'}>
<MenuItem onClick={props.onEditClick} disableRipple>
<MenuItem
onClick={() => {
handleClose()
props.onEditClick()
}}
disableRipple
>
<FileEditIcon />
Preview & Process
</MenuItem>
</Available>
<Available permission={'documentStores:preview-process'}>
<MenuItem onClick={props.onViewChunksClick} disableRipple>
<MenuItem
onClick={() => {
handleClose()
props.onViewChunksClick()
}}
disableRipple
>
<FileChunksIcon />
View & Edit Chunks
</MenuItem>
</Available>
<Available permission={'documentStores:preview-process'}>
<MenuItem onClick={props.onChunkUpsert} disableRipple>
<MenuItem
onClick={() => {
handleClose()
props.onChunkUpsert()
}}
disableRipple
>
<NoteAddIcon />
Upsert Chunks
</MenuItem>
</Available>
<Available permission={'documentStores:preview-process'}>
<MenuItem onClick={props.onViewUpsertAPI} disableRipple>
<MenuItem
onClick={() => {
handleClose()
props.onViewUpsertAPI()
}}
disableRipple
>
<CodeIcon />
View API
</MenuItem>
</Available>
<Divider sx={{ my: 0.5 }} />
<Available permission={'documentStores:delete-loader'}>
<MenuItem onClick={props.onDeleteClick} disableRipple>
<MenuItem
onClick={() => {
handleClose()
props.onDeleteClick()
}}
disableRipple
>
<FileDeleteIcon />
Delete
</MenuItem>
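
Taken together, the deletion wiring in this file now works in two tiers: store-level deletes clean the entire vector store, while loader-level deletes pass the loader's id so only that document's vectors are removed (matching the new doc_id column in the record managers). A condensed sketch of the flow, based on the hunks above:

// Condensed view of the deletion flow (sketch, not the verbatim file):
const onDocStoreDelete = async (type, file) => {
    setBackdropLoading(true)
    setShowDeleteDocStoreDialog(false)
    if (type === 'STORE') {
        // Vector store cleanup only runs when a Record Manager tracked the upserts
        if (documentStore.recordManagerConfig) {
            await deleteVectorStoreDataFromStore(storeId) // whole store
        }
        // ...then delete the document store itself via documentsApi
    } else if (type === 'LOADER') {
        if (documentStore.recordManagerConfig) {
            await deleteVectorStoreDataFromStore(storeId, file.id) // scoped by doc_id
        }
        // ...then delete the loader and its chunks via documentsApi
    }
}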

View File

@@ -26,6 +26,7 @@ import useApi from '@/hooks/useApi'
import useConfirm from '@/hooks/useConfirm'
import useNotifier from '@/utils/useNotifier'
import { useAuth } from '@/hooks/useAuth'
import { getFileName } from '@/utils/genericHelper'
// store
import { closeSnackbar as closeSnackbarAction, enqueueSnackbar as enqueueSnackbarAction } from '@/store/actions'
@@ -76,6 +77,7 @@ const ShowStoredChunks = () => {
const [showExpandedChunkDialog, setShowExpandedChunkDialog] = useState(false)
const [expandedChunkDialogProps, setExpandedChunkDialogProps] = useState({})
const [fileNames, setFileNames] = useState([])
const [loaderDisplayName, setLoaderDisplayName] = useState('')
const chunkSelected = (chunkId) => {
const selectedChunk = documentChunks.find((chunk) => chunk.id === chunkId)
@@ -212,13 +214,32 @@ const ShowStoredChunks = () => {
setCurrentPage(data.currentPage)
setStart(data.currentPage * 50 - 49)
setEnd(data.currentPage * 50 > data.count ? data.count : data.currentPage * 50)
// Build the loader display name in format "LoaderName (sourceName)"
const loaderName = data.file?.loaderName || data.storeName || ''
let sourceName = ''
if (data.file?.files && data.file.files.length > 0) {
const fileNames = []
for (const attachedFile of data.file.files) {
fileNames.push(attachedFile.name)
}
setFileNames(fileNames)
sourceName = fileNames.join(', ')
} else if (data.file?.source) {
const source = data.file.source
if (typeof source === 'string' && source.includes('base64')) {
sourceName = getFileName(source)
} else if (typeof source === 'string' && source.startsWith('[') && source.endsWith(']')) {
sourceName = JSON.parse(source).join(', ')
} else if (typeof source === 'string') {
sourceName = source
}
}
// Set display name in format "LoaderName (sourceName)" or just "LoaderName"
const displayName = sourceName ? `${loaderName} (${sourceName})` : loaderName
setLoaderDisplayName(displayName)
}
// eslint-disable-next-line react-hooks/exhaustive-deps
@@ -234,7 +255,7 @@ const ShowStoredChunks = () => {
<ViewHeader
isBackButton={true}
search={false}
title={getChunksApi.data?.file?.loaderName || getChunksApi.data?.storeName}
title={loaderDisplayName}
description={getChunksApi.data?.file?.splitterName || getChunksApi.data?.description}
onBack={() => navigate(-1)}
></ViewHeader>
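
The "LoaderName (sourceName)" rule above is now the second copy of the same formatting logic (DocumentStoreDetails carries it too, and VectorStoreConfigure adds a third below). If it were ever consolidated into @/utils/genericHelper, a shared helper might look roughly like this; extracting it is a suggestion, not part of the diff, and getFileName is the existing helper:

// Possible shared helper consolidating the duplicated display-name logic.
// Preference order: attached file names > base64 source > JSON-array source > raw source.
export const getLoaderDisplayName = (loader) => {
    if (!loader) return ''
    const loaderName = loader.loaderName || 'Unknown'
    let sourceName = ''
    if (Array.isArray(loader.files) && loader.files.length > 0) {
        sourceName = loader.files.map((f) => f.name).join(', ')
    } else if (typeof loader.source === 'string' && loader.source) {
        if (loader.source.includes('base64')) {
            sourceName = getFileName(loader.source)
        } else if (loader.source.startsWith('[') && loader.source.endsWith(']')) {
            sourceName = JSON.parse(loader.source).join(', ')
        } else {
            sourceName = loader.source
        }
    }
    return sourceName ? `${loaderName} (${sourceName})` : loaderName
}

// getLoaderDisplayName({ loaderName: 'Pdf File', files: [{ name: 'report.pdf' }] })
// => 'Pdf File (report.pdf)'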

View File

@@ -40,7 +40,7 @@ import Storage from '@mui/icons-material/Storage'
import DynamicFeed from '@mui/icons-material/Filter1'
// utils
import { initNode, showHideInputParams } from '@/utils/genericHelper'
import { initNode, showHideInputParams, getFileName } from '@/utils/genericHelper'
import useNotifier from '@/utils/useNotifier'
// const
@@ -69,6 +69,7 @@ const VectorStoreConfigure = () => {
const [loading, setLoading] = useState(true)
const [documentStore, setDocumentStore] = useState({})
const [dialogProps, setDialogProps] = useState({})
const [currentLoader, setCurrentLoader] = useState(null)
const [showEmbeddingsListDialog, setShowEmbeddingsListDialog] = useState(false)
const [selectedEmbeddingsProvider, setSelectedEmbeddingsProvider] = useState({})
@@ -245,7 +246,8 @@ const VectorStoreConfigure = () => {
const prepareConfigData = () => {
const data = {
storeId: storeId,
docId: docId
docId: docId,
isStrictSave: true
}
// Set embedding config
if (selectedEmbeddingsProvider.inputs) {
@@ -353,6 +355,39 @@ const VectorStoreConfigure = () => {
return Object.keys(selectedEmbeddingsProvider).length === 0
}
const getLoaderDisplayName = (loader) => {
if (!loader) return ''
const loaderName = loader.loaderName || 'Unknown'
let sourceName = ''
// Prefer files.name when files array exists and has items
if (loader.files && Array.isArray(loader.files) && loader.files.length > 0) {
sourceName = loader.files.map((file) => file.name).join(', ')
} else if (loader.source) {
// Fallback to source logic
if (typeof loader.source === 'string' && loader.source.includes('base64')) {
sourceName = getFileName(loader.source)
} else if (typeof loader.source === 'string' && loader.source.startsWith('[') && loader.source.endsWith(']')) {
sourceName = JSON.parse(loader.source).join(', ')
} else if (typeof loader.source === 'string') {
sourceName = loader.source
}
}
// Return format: "LoaderName (sourceName)" or just "LoaderName" if no source
return sourceName ? `${loaderName} (${sourceName})` : loaderName
}
const getViewHeaderTitle = () => {
const storeName = getSpecificDocumentStoreApi.data?.name || ''
if (docId && currentLoader) {
const loaderName = getLoaderDisplayName(currentLoader)
return `${storeName} / ${loaderName}`
}
return storeName
}
useEffect(() => {
if (saveVectorStoreConfigApi.data) {
setLoading(false)
@@ -411,6 +446,15 @@ const VectorStoreConfigure = () => {
return
}
setDocumentStore(docStore)
// Find the current loader if docId is provided
if (docId && docStore.loaders) {
const loader = docStore.loaders.find((l) => l.id === docId)
if (loader) {
setCurrentLoader(loader)
}
}
if (docStore.embeddingConfig) {
getEmbeddingNodeDetailsApi.request(docStore.embeddingConfig.name)
}
@@ -473,7 +517,7 @@ const VectorStoreConfigure = () => {
<ViewHeader
isBackButton={true}
search={false}
title={getSpecificDocumentStoreApi.data?.name}
title={getViewHeaderTitle()}
description='Configure Embeddings, Vector Store and Record Manager'
onBack={() => navigate(-1)}
>
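
With currentLoader resolved, the view header now reads like a breadcrumb when a single loader is being configured. For example (store and loader names hypothetical):

// getViewHeaderTitle() with docId set and a matching loader found:
//   'Company Docs / Pdf File (report.pdf)'
// getViewHeaderTitle() at store level (no docId):
//   'Company Docs'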

View File

@@ -78,7 +78,7 @@ importers:
specifier: ^8.0.1
version: 8.0.3
kill-port:
specifier: ^2.0.1
specifier: 2.0.1
version: 2.0.1
lint-staged:
specifier: ^13.0.3
@@ -791,8 +791,8 @@ importers:
specifier: ^0.5.34
version: 0.5.45
multer:
specifier: ^1.4.5-lts.1
version: 1.4.5-lts.1
specifier: ^2.0.2
version: 2.0.2
multer-cloud-storage:
specifier: ^4.0.0
version: 4.0.0(encoding@0.1.13)
@@ -806,8 +806,8 @@ importers:
specifier: '3'
version: 3.3.7
nodemailer:
specifier: ^6.9.14
version: 6.9.15
specifier: ^7.0.7
version: 7.0.11
openai:
specifier: 4.96.0
version: 4.96.0(encoding@0.1.13)(ws@8.18.3(bufferutil@4.0.8)(utf-8-validate@6.0.4))(zod@3.22.4)
@@ -13830,6 +13830,10 @@ packages:
engines: { node: '>= 6.0.0' }
deprecated: Multer 1.x is impacted by a number of vulnerabilities, which have been patched in 2.x. You should upgrade to the latest 2.x version.
multer@2.0.2:
resolution: { integrity: sha512-u7f2xaZ/UG8oLXHvtF/oWTRvT44p9ecwBBqTwgJVq0+4BW1g8OW01TyMEGWBHbyMOYVHXslaut7qEQ1meATXgw== }
engines: { node: '>= 10.16.0' }
multicast-dns@7.2.5:
resolution: { integrity: sha512-2eznPJP8z2BFLX50tf0LuODrpINqP1RVIm/CObbTcBRITQgmC/TjcREF1NeTBzIcR5XO/ukWo+YHOjBbFwIupg== }
hasBin: true
@@ -14013,8 +14017,8 @@ packages:
node-releases@2.0.14:
resolution: { integrity: sha512-y10wOWt8yZpqXmOgRo77WaHEmhYQYGNA6y421PKsKYWEK8aW+cqAphborZDhqfyKrbZEN92CN1X2KbafY2s7Yw== }
nodemailer@6.9.15:
resolution: { integrity: sha512-AHf04ySLC6CIfuRtRiEYtGEXgRfa6INgWGluDhnxTZhHSKvrBu7lc1VVchQ0d8nPc4cFaZoPq8vkyNoZr0TpGQ== }
nodemailer@7.0.11:
resolution: { integrity: sha512-gnXhNRE0FNhD7wPSCGhdNh46Hs6nm+uTyg+Kq0cZukNQiYdnCsoQjodNP9BQVG9XrcK/v6/MgpAPBUFyzh9pvw== }
engines: { node: '>=6.0.0' }
nodemon@2.0.22:
@@ -36155,6 +36159,16 @@ snapshots:
type-is: 1.6.18
xtend: 4.0.2
multer@2.0.2:
dependencies:
append-field: 1.0.0
busboy: 1.6.0
concat-stream: 2.0.0
mkdirp: 0.5.6
object-assign: 4.1.1
type-is: 1.6.18
xtend: 4.0.2
multicast-dns@7.2.5:
dependencies:
dns-packet: 5.6.1
@@ -36350,7 +36364,7 @@ snapshots:
node-releases@2.0.14: {}
nodemailer@6.9.15: {}
nodemailer@7.0.11: {}
nodemon@2.0.22:
dependencies: