Compare commits

..

1 Commit

Author SHA1 Message Date
Henry 3820c463fa flowise@3.0.11 2025-11-15 23:09:25 +00:00
80 changed files with 894 additions and 2824 deletions


@@ -189,7 +189,7 @@ Deploy Flowise self-hosted in your existing infrastructure, we support various [
 - [Railway](https://docs.flowiseai.com/configuration/deployment/railway)
   [![Deploy on Railway](https://railway.app/button.svg)](https://railway.app/template/pn4G8S?referralCode=WVNPD9)
 - [Northflank](https://northflank.com/stacks/deploy-flowiseai)
   [![Deploy to Northflank](https://assets.northflank.com/deploy_to_northflank_smm_36700fb050.svg)](https://northflank.com/stacks/deploy-flowiseai)


@@ -1,38 +1,38 @@
 ### Responsible Disclosure Policy
 At Flowise, we prioritize security and continuously work to safeguard our systems. However, vulnerabilities can still exist. If you identify a security issue, please report it to us so we can address it promptly. Your cooperation helps us better protect our platform and users.
 ### Out of scope vulnerabilities
 - Clickjacking on pages without sensitive actions
 - CSRF on unauthenticated/logout/login pages
 - Attacks requiring MITM (Man-in-the-Middle) or physical device access
 - Social engineering attacks
 - Activities that cause service disruption (DoS)
 - Content spoofing and text injection without a valid attack vector
 - Email spoofing
 - Absence of DNSSEC, CAA, CSP headers
 - Missing Secure or HTTP-only flag on non-sensitive cookies
 - Deadlinks
 - User enumeration
 ### Reporting Guidelines
 - Submit your findings to https://github.com/FlowiseAI/Flowise/security
 - Provide clear details to help us reproduce and fix the issue quickly.
 ### Disclosure Guidelines
 - Do not publicly disclose vulnerabilities until we have assessed, resolved, and notified affected users.
 - If you plan to present your research (e.g., at a conference or in a blog), share a draft with us at least **30 days in advance** for review.
 - Avoid including:
   - Data from any Flowise customer projects
   - Flowise user/customer information
   - Details about Flowise employees, contractors, or partners
 ### Response to Reports
 - We will acknowledge your report within **5 business days** and provide an estimated resolution timeline.
 - Your report will be kept **confidential**, and your details will not be shared without your consent.
 We appreciate your efforts in helping us maintain a secure platform and look forward to working together to resolve any issues responsibly.


@@ -1,6 +1,6 @@
 {
     "name": "flowise",
-    "version": "3.0.11",
+    "version": "10",
     "private": true,
     "homepage": "https://flowiseai.com",
     "workspaces": [
@@ -51,7 +51,7 @@
         "eslint-plugin-react-hooks": "^4.6.0",
         "eslint-plugin-unused-imports": "^2.0.0",
         "husky": "^8.0.1",
-        "kill-port": "2.0.1",
+        "kill-port": "^2.0.1",
         "lint-staged": "^13.0.3",
         "prettier": "^2.7.1",
         "pretty-quick": "^3.1.3",


@@ -3,13 +3,6 @@
     {
         "name": "awsChatBedrock",
        "models": [
-            {
-                "label": "anthropic.claude-opus-4-5-20251101-v1:0",
-                "name": "anthropic.claude-opus-4-5-20251101-v1:0",
-                "description": "Claude 4.5 Opus",
-                "input_cost": 0.000005,
-                "output_cost": 0.000025
-            },
             {
                 "label": "anthropic.claude-sonnet-4-5-20250929-v1:0",
                 "name": "anthropic.claude-sonnet-4-5-20250929-v1:0",
@@ -322,12 +315,6 @@
     {
         "name": "azureChatOpenAI",
         "models": [
-            {
-                "label": "gpt-5.1",
-                "name": "gpt-5.1",
-                "input_cost": 0.00000125,
-                "output_cost": 0.00001
-            },
             {
                 "label": "gpt-5",
                 "name": "gpt-5",
@@ -512,13 +499,6 @@
     {
         "name": "chatAnthropic",
         "models": [
-            {
-                "label": "claude-opus-4-5",
-                "name": "claude-opus-4-5",
-                "description": "Claude 4.5 Opus",
-                "input_cost": 0.000005,
-                "output_cost": 0.000025
-            },
             {
                 "label": "claude-sonnet-4-5",
                 "name": "claude-sonnet-4-5",
@@ -641,18 +621,6 @@
     {
         "name": "chatGoogleGenerativeAI",
         "models": [
-            {
-                "label": "gemini-3-pro-preview",
-                "name": "gemini-3-pro-preview",
-                "input_cost": 0.00002,
-                "output_cost": 0.00012
-            },
-            {
-                "label": "gemini-3-pro-image-preview",
-                "name": "gemini-3-pro-image-preview",
-                "input_cost": 0.00002,
-                "output_cost": 0.00012
-            },
             {
                 "label": "gemini-2.5-pro",
                 "name": "gemini-2.5-pro",
@@ -665,12 +633,6 @@
                 "input_cost": 1.25e-6,
                 "output_cost": 0.00001
             },
-            {
-                "label": "gemini-2.5-flash-image",
-                "name": "gemini-2.5-flash-image",
-                "input_cost": 1.25e-6,
-                "output_cost": 0.00001
-            },
             {
                 "label": "gemini-2.5-flash-lite",
                 "name": "gemini-2.5-flash-lite",
@@ -723,12 +685,6 @@
     {
         "name": "chatGoogleVertexAI",
         "models": [
-            {
-                "label": "gemini-3-pro-preview",
-                "name": "gemini-3-pro-preview",
-                "input_cost": 0.00002,
-                "output_cost": 0.00012
-            },
             {
                 "label": "gemini-2.5-pro",
                 "name": "gemini-2.5-pro",
@@ -795,13 +751,6 @@
                 "input_cost": 1.25e-7,
                 "output_cost": 3.75e-7
             },
-            {
-                "label": "claude-opus-4-5@20251101",
-                "name": "claude-opus-4-5@20251101",
-                "description": "Claude 4.5 Opus",
-                "input_cost": 0.000005,
-                "output_cost": 0.000025
-            },
             {
                 "label": "claude-sonnet-4-5@20250929",
                 "name": "claude-sonnet-4-5@20250929",
@@ -1047,12 +996,6 @@
     {
         "name": "chatOpenAI",
         "models": [
-            {
-                "label": "gpt-5.1",
-                "name": "gpt-5.1",
-                "input_cost": 0.00000125,
-                "output_cost": 0.00001
-            },
             {
                 "label": "gpt-5",
                 "name": "gpt-5",


@@ -22,16 +22,15 @@ import zodToJsonSchema from 'zod-to-json-schema'
 import { getErrorMessage } from '../../../src/error'
 import { DataSource } from 'typeorm'
 import {
-    addImageArtifactsToMessages,
-    extractArtifactsFromResponse,
     getPastChatHistoryImageMessages,
     getUniqueImageMessages,
     processMessagesWithImages,
     replaceBase64ImagesWithFileReferences,
-    replaceInlineDataWithFileReferences,
     updateFlowState
 } from '../utils'
-import { convertMultiOptionsToStringArray, processTemplateVariables, configureStructuredOutput } from '../../../src/utils'
+import { convertMultiOptionsToStringArray, getCredentialData, getCredentialParam, processTemplateVariables } from '../../../src/utils'
+import { addSingleFileToStorage } from '../../../src/storageUtils'
+import fetch from 'node-fetch'

 interface ITool {
     agentSelectedTool: string
@@ -82,7 +81,7 @@ class Agent_Agentflow implements INode {
     constructor() {
         this.label = 'Agent'
         this.name = 'agentAgentflow'
-        this.version = 3.2
+        this.version = 2.2
         this.type = 'Agent'
         this.category = 'Agent Flows'
         this.description = 'Dynamically choose and utilize tools during runtime, enabling multi-step reasoning'
@@ -177,11 +176,6 @@
                     label: 'Google Search',
                     name: 'googleSearch',
                     description: 'Search real-time web content'
-                },
-                {
-                    label: 'Code Execution',
-                    name: 'codeExecution',
-                    description: 'Write and run Python code in a sandboxed environment'
                 }
             ],
             show: {
@@ -400,108 +394,6 @@
             ],
             default: 'userMessage'
         },
-        {
-            label: 'JSON Structured Output',
-            name: 'agentStructuredOutput',
-            description: 'Instruct the Agent to give output in a JSON structured schema',
-            type: 'array',
-            optional: true,
-            acceptVariable: true,
-            array: [
-                {
-                    label: 'Key',
-                    name: 'key',
-                    type: 'string'
-                },
-                {
-                    label: 'Type',
-                    name: 'type',
-                    type: 'options',
-                    options: [
-                        {
-                            label: 'String',
-                            name: 'string'
-                        },
-                        {
-                            label: 'String Array',
-                            name: 'stringArray'
-                        },
-                        {
-                            label: 'Number',
-                            name: 'number'
-                        },
-                        {
-                            label: 'Boolean',
-                            name: 'boolean'
-                        },
-                        {
-                            label: 'Enum',
-                            name: 'enum'
-                        },
-                        {
-                            label: 'JSON Array',
-                            name: 'jsonArray'
-                        }
-                    ]
-                },
-                {
-                    label: 'Enum Values',
-                    name: 'enumValues',
-                    type: 'string',
-                    placeholder: 'value1, value2, value3',
-                    description: 'Enum values. Separated by comma',
-                    optional: true,
-                    show: {
-                        'agentStructuredOutput[$index].type': 'enum'
-                    }
-                },
-                {
-                    label: 'JSON Schema',
-                    name: 'jsonSchema',
-                    type: 'code',
-                    placeholder: `{
-    "answer": {
-        "type": "string",
-        "description": "Value of the answer"
-    },
-    "reason": {
-        "type": "string",
-        "description": "Reason for the answer"
-    },
-    "optional": {
-        "type": "boolean"
-    },
-    "count": {
-        "type": "number"
-    },
-    "children": {
-        "type": "array",
-        "items": {
-            "type": "object",
-            "properties": {
-                "value": {
-                    "type": "string",
-                    "description": "Value of the children's answer"
-                }
-            }
-        }
-    }
-}`,
-                    description: 'JSON schema for the structured output',
-                    optional: true,
-                    hideCodeExecute: true,
-                    show: {
-                        'agentStructuredOutput[$index].type': 'jsonArray'
-                    }
-                },
-                {
-                    label: 'Description',
-                    name: 'description',
-                    type: 'string',
-                    placeholder: 'Description of the key'
-                }
-            ]
-        },
         {
             label: 'Update Flow State',
             name: 'agentUpdateState',
@@ -514,7 +406,8 @@
                 label: 'Key',
                 name: 'key',
                 type: 'asyncOptions',
-                loadMethod: 'listRuntimeStateKeys'
+                loadMethod: 'listRuntimeStateKeys',
+                freeSolo: true
             },
             {
                 label: 'Value',
@@ -877,7 +770,6 @@
         const memoryType = nodeData.inputs?.agentMemoryType as string
         const userMessage = nodeData.inputs?.agentUserMessage as string
         const _agentUpdateState = nodeData.inputs?.agentUpdateState
-        const _agentStructuredOutput = nodeData.inputs?.agentStructuredOutput
         const agentMessages = (nodeData.inputs?.agentMessages as unknown as ILLMMessage[]) ?? []

         // Extract runtime state and history
@@ -903,8 +795,6 @@
         const llmWithoutToolsBind = (await newLLMNodeInstance.init(newNodeData, '', options)) as BaseChatModel
         let llmNodeInstance = llmWithoutToolsBind

-        const isStructuredOutput = _agentStructuredOutput && Array.isArray(_agentStructuredOutput) && _agentStructuredOutput.length > 0
-
         const agentToolsBuiltInOpenAI = convertMultiOptionsToStringArray(nodeData.inputs?.agentToolsBuiltInOpenAI)
         if (agentToolsBuiltInOpenAI && agentToolsBuiltInOpenAI.length > 0) {
             for (const tool of agentToolsBuiltInOpenAI) {
@@ -1063,7 +953,7 @@
         // Initialize response and determine if streaming is possible
         let response: AIMessageChunk = new AIMessageChunk('')
         const isLastNode = options.isLastNode as boolean
-        const isStreamable = isLastNode && options.sseStreamer !== undefined && modelConfig?.streaming !== false && !isStructuredOutput
+        const isStreamable = isLastNode && options.sseStreamer !== undefined && modelConfig?.streaming !== false

         // Start analytics
         if (analyticHandlers && options.parentTraceIds) {
@@ -1071,6 +961,12 @@
             llmIds = await analyticHandlers.onLLMStart(llmLabel, messages, options.parentTraceIds)
         }

+        // Track execution time
+        const startTime = Date.now()
+
+        // Get initial response from LLM
+        const sseStreamer: IServerSideEventStreamer | undefined = options.sseStreamer
+
         // Handle tool calls with support for recursion
         let usedTools: IUsedTool[] = []
         let sourceDocuments: Array<any> = []
@@ -1083,24 +979,12 @@
         const messagesBeforeToolCalls = [...messages]
         let _toolCallMessages: BaseMessageLike[] = []

-        /**
-         * Add image artifacts from previous assistant responses as user messages
-         * Images are converted from FILE-STORAGE::<image_path> to base 64 image_url format
-         */
-        await addImageArtifactsToMessages(messages, options)
-
         // Check if this is hummanInput for tool calls
         const _humanInput = nodeData.inputs?.humanInput
         const humanInput: IHumanInput = typeof _humanInput === 'string' ? JSON.parse(_humanInput) : _humanInput
         const humanInputAction = options.humanInputAction
         const iterationContext = options.iterationContext

-        // Track execution time
-        const startTime = Date.now()
-
-        // Get initial response from LLM
-        const sseStreamer: IServerSideEventStreamer | undefined = options.sseStreamer
-
         if (humanInput) {
             if (humanInput.type !== 'proceed' && humanInput.type !== 'reject') {
                 throw new Error(`Invalid human input type. Expected 'proceed' or 'reject', but got '${humanInput.type}'`)
@@ -1118,8 +1002,7 @@
                 llmWithoutToolsBind,
                 isStreamable,
                 isLastNode,
-                iterationContext,
-                isStructuredOutput
+                iterationContext
             })

             response = result.response
@@ -1148,14 +1031,7 @@
             }
         } else {
             if (isStreamable) {
-                response = await this.handleStreamingResponse(
-                    sseStreamer,
-                    llmNodeInstance,
-                    messages,
-                    chatId,
-                    abortController,
-                    isStructuredOutput
-                )
+                response = await this.handleStreamingResponse(sseStreamer, llmNodeInstance, messages, chatId, abortController)
             } else {
                 response = await llmNodeInstance.invoke(messages, { signal: abortController?.signal })
             }
@@ -1177,8 +1053,7 @@
                 llmNodeInstance,
                 isStreamable,
                 isLastNode,
-                iterationContext,
-                isStructuredOutput
+                iterationContext
             })

             response = result.response
@@ -1205,20 +1080,11 @@
                     sseStreamer.streamArtifactsEvent(chatId, flatten(artifacts))
                 }
             }
-        } else if (!humanInput && !isStreamable && isLastNode && sseStreamer && !isStructuredOutput) {
+        } else if (!humanInput && !isStreamable && isLastNode && sseStreamer) {
             // Stream whole response back to UI if not streaming and no tool calls
-            // Skip this if structured output is enabled - it will be streamed after conversion
             let finalResponse = ''
             if (response.content && Array.isArray(response.content)) {
-                finalResponse = response.content
-                    .map((item: any) => {
-                        if ((item.text && !item.type) || (item.type === 'text' && item.text)) {
-                            return item.text
-                        }
-                        return ''
-                    })
-                    .filter((text: string) => text)
-                    .join('\n')
+                finalResponse = response.content.map((item: any) => item.text).join('\n')
             } else if (response.content && typeof response.content === 'string') {
                 finalResponse = response.content
             } else {
@@ -1247,53 +1113,9 @@
         // Prepare final response and output object
         let finalResponse = ''
         if (response.content && Array.isArray(response.content)) {
-            // Process items and concatenate consecutive text items
-            const processedParts: string[] = []
-            let currentTextBuffer = ''
-
-            for (const item of response.content) {
-                const itemAny = item as any
-                const isTextItem = (itemAny.text && !itemAny.type) || (itemAny.type === 'text' && itemAny.text)
-
-                if (isTextItem) {
-                    // Accumulate consecutive text items
-                    currentTextBuffer += itemAny.text
-                } else {
-                    // Flush accumulated text before processing other types
-                    if (currentTextBuffer) {
-                        processedParts.push(currentTextBuffer)
-                        currentTextBuffer = ''
-                    }
-
-                    // Process non-text items
-                    if (itemAny.type === 'executableCode' && itemAny.executableCode) {
-                        // Format executable code as a code block
-                        const language = itemAny.executableCode.language?.toLowerCase() || 'python'
-                        processedParts.push(`\n\`\`\`${language}\n${itemAny.executableCode.code}\n\`\`\`\n`)
-                    } else if (itemAny.type === 'codeExecutionResult' && itemAny.codeExecutionResult) {
-                        // Format code execution result
-                        const outcome = itemAny.codeExecutionResult.outcome || 'OUTCOME_OK'
-                        const output = itemAny.codeExecutionResult.output || ''
-                        if (outcome === 'OUTCOME_OK' && output) {
-                            processedParts.push(`**Code Output:**\n\`\`\`\n${output}\n\`\`\`\n`)
-                        } else if (outcome !== 'OUTCOME_OK') {
-                            processedParts.push(`**Code Execution Error:**\n\`\`\`\n${output}\n\`\`\`\n`)
-                        }
-                    }
-                }
-            }
-
-            // Flush any remaining text
-            if (currentTextBuffer) {
-                processedParts.push(currentTextBuffer)
-            }
-
-            finalResponse = processedParts.filter((text) => text).join('\n')
+            finalResponse = response.content.map((item: any) => item.text).join('\n')
         } else if (response.content && typeof response.content === 'string') {
             finalResponse = response.content
-        } else if (response.content === '') {
-            // Empty response content, this could happen when there is only image data
-            finalResponse = ''
         } else {
             finalResponse = JSON.stringify(response, null, 2)
         }
@@ -1309,13 +1131,10 @@
             }
         }

-        // Extract artifacts from annotations in response metadata and replace inline data
+        // Extract artifacts from annotations in response metadata
         if (response.response_metadata) {
-            const {
-                artifacts: extractedArtifacts,
-                fileAnnotations: extractedFileAnnotations,
-                savedInlineImages
-            } = await extractArtifactsFromResponse(response.response_metadata, newNodeData, options)
+            const { artifacts: extractedArtifacts, fileAnnotations: extractedFileAnnotations } =
+                await this.extractArtifactsFromResponse(response.response_metadata, newNodeData, options)

             if (extractedArtifacts.length > 0) {
                 artifacts = [...artifacts, ...extractedArtifacts]
@@ -1333,11 +1152,6 @@
                     sseStreamer.streamFileAnnotationsEvent(chatId, fileAnnotations)
                 }
             }
-
-            // Replace inlineData base64 with file references in the response
-            if (savedInlineImages && savedInlineImages.length > 0) {
-                replaceInlineDataWithFileReferences(response, savedInlineImages)
-            }
         }

         // Replace sandbox links with proper download URLs. Example: [Download the script](sandbox:/mnt/data/dummy_bar_graph.py)
@@ -1345,23 +1159,6 @@
             finalResponse = await this.processSandboxLinks(finalResponse, options.baseURL, options.chatflowid, chatId)
         }

-        // If is structured output, then invoke LLM again with structured output at the very end after all tool calls
-        if (isStructuredOutput) {
-            llmNodeInstance = configureStructuredOutput(llmNodeInstance, _agentStructuredOutput)
-            const prompt = 'Convert the following response to the structured output format: ' + finalResponse
-            response = await llmNodeInstance.invoke(prompt, { signal: abortController?.signal })
-
-            if (typeof response === 'object') {
-                finalResponse = '```json\n' + JSON.stringify(response, null, 2) + '\n```'
-            } else {
-                finalResponse = response
-            }
-
-            if (isLastNode && sseStreamer) {
-                sseStreamer.streamTokenEvent(chatId, finalResponse)
-            }
-        }
-
         const output = this.prepareOutputObject(
             response,
             availableTools,
@@ -1374,8 +1171,7 @@
             artifacts,
             additionalTokens,
             isWaitingForHumanInput,
-            fileAnnotations,
-            isStructuredOutput
+            fileAnnotations
         )

         // End analytics tracking
@@ -1396,15 +1192,9 @@
         // Process template variables in state
         newState = processTemplateVariables(newState, finalResponse)

-        /**
-         * Remove the temporarily added image artifact messages before storing
-         * This is to avoid storing the actual base64 data into database
-         */
-        const messagesToStore = messages.filter((msg: any) => !msg._isTemporaryImageMessage)
-
         // Replace the actual messages array with one that includes the file references for images instead of base64 data
         const messagesWithFileReferences = replaceBase64ImagesWithFileReferences(
-            messagesToStore,
+            messages,
             runtimeImageMessagesWithFileRef,
             pastImageMessagesWithFileRef
         )
@@ -1543,12 +1333,7 @@
         // Handle Gemini googleSearch tool
         if (groundingMetadata && groundingMetadata.webSearchQueries && Array.isArray(groundingMetadata.webSearchQueries)) {
             // Check for duplicates
-            const isDuplicate = builtInUsedTools.find(
-                (tool) =>
-                    tool.tool === 'googleSearch' &&
-                    JSON.stringify((tool.toolInput as any)?.queries) === JSON.stringify(groundingMetadata.webSearchQueries)
-            )
-            if (!isDuplicate) {
+            if (!builtInUsedTools.find((tool) => tool.tool === 'googleSearch')) {
                 builtInUsedTools.push({
                     tool: 'googleSearch',
                     toolInput: {
@@ -1562,12 +1347,7 @@
         // Handle Gemini urlContext tool
         if (urlContextMetadata && urlContextMetadata.urlMetadata && Array.isArray(urlContextMetadata.urlMetadata)) {
             // Check for duplicates
-            const isDuplicate = builtInUsedTools.find(
-                (tool) =>
-                    tool.tool === 'urlContext' &&
-                    JSON.stringify((tool.toolInput as any)?.urlMetadata) === JSON.stringify(urlContextMetadata.urlMetadata)
-            )
-            if (!isDuplicate) {
+            if (!builtInUsedTools.find((tool) => tool.tool === 'urlContext')) {
                 builtInUsedTools.push({
                     tool: 'urlContext',
                     toolInput: {
@@ -1578,55 +1358,47 @@
             }
         }

-        // Handle Gemini codeExecution tool
-        if (response.content && Array.isArray(response.content)) {
-            for (let i = 0; i < response.content.length; i++) {
-                const item = response.content[i]
-                if (item.type === 'executableCode' && item.executableCode) {
-                    const language = item.executableCode.language || 'PYTHON'
-                    const code = item.executableCode.code || ''
-                    let toolOutput = ''
-
-                    // Check for duplicates
-                    const isDuplicate = builtInUsedTools.find(
-                        (tool) =>
-                            tool.tool === 'codeExecution' &&
-                            (tool.toolInput as any)?.language === language &&
-                            (tool.toolInput as any)?.code === code
-                    )
-                    if (isDuplicate) {
-                        continue
-                    }
-
-                    // Check the next item for the output
-                    const nextItem = i + 1 < response.content.length ? response.content[i + 1] : null
-                    if (nextItem) {
-                        if (nextItem.type === 'codeExecutionResult' && nextItem.codeExecutionResult) {
-                            const outcome = nextItem.codeExecutionResult.outcome
-                            const output = nextItem.codeExecutionResult.output || ''
-                            toolOutput = outcome === 'OUTCOME_OK' ? output : `Error: ${output}`
-                        } else if (nextItem.type === 'inlineData') {
-                            toolOutput = 'Generated image data'
-                        }
-                    }
-
-                    builtInUsedTools.push({
-                        tool: 'codeExecution',
-                        toolInput: {
-                            language,
-                            code
-                        },
-                        toolOutput
-                    })
-                }
-            }
-        }
-
         return builtInUsedTools
     }

+    /**
+     * Saves base64 image data to storage and returns file information
+     */
+    private async saveBase64Image(
+        outputItem: any,
+        options: ICommonObject
+    ): Promise<{ filePath: string; fileName: string; totalSize: number } | null> {
+        try {
+            if (!outputItem.result) {
+                return null
+            }
+
+            // Extract base64 data and create buffer
+            const base64Data = outputItem.result
+            const imageBuffer = Buffer.from(base64Data, 'base64')
+
+            // Determine file extension and MIME type
+            const outputFormat = outputItem.output_format || 'png'
+            const fileName = `generated_image_${outputItem.id || Date.now()}.${outputFormat}`
+            const mimeType = outputFormat === 'png' ? 'image/png' : 'image/jpeg'
+
+            // Save the image using the existing storage utility
+            const { path, totalSize } = await addSingleFileToStorage(
+                mimeType,
+                imageBuffer,
+                fileName,
+                options.orgId,
+                options.chatflowid,
+                options.chatId
+            )
+
+            return { filePath: path, fileName, totalSize }
+        } catch (error) {
+            console.error('Error saving base64 image:', error)
+            return null
+        }
+    }
+
     /**
      * Handles memory management based on the specified memory type
      */
@ -1789,62 +1561,32 @@ class Agent_Agentflow implements INode {
llmNodeInstance: BaseChatModel, llmNodeInstance: BaseChatModel,
messages: BaseMessageLike[], messages: BaseMessageLike[],
chatId: string, chatId: string,
abortController: AbortController, abortController: AbortController
isStructuredOutput: boolean = false
): Promise<AIMessageChunk> { ): Promise<AIMessageChunk> {
let response = new AIMessageChunk('') let response = new AIMessageChunk('')
try { try {
            for await (const chunk of await llmNodeInstance.stream(messages, { signal: abortController?.signal })) {
-                if (sseStreamer && !isStructuredOutput) {
+                if (sseStreamer) {
                    let content = ''
-                    if (typeof chunk === 'string') {
-                        content = chunk
-                    } else if (Array.isArray(chunk.content) && chunk.content.length > 0) {
-                        content = chunk.content
-                            .map((item: any) => {
-                                if ((item.text && !item.type) || (item.type === 'text' && item.text)) {
-                                    return item.text
-                                } else if (item.type === 'executableCode' && item.executableCode) {
-                                    const language = item.executableCode.language?.toLowerCase() || 'python'
-                                    return `\n\`\`\`${language}\n${item.executableCode.code}\n\`\`\`\n`
-                                } else if (item.type === 'codeExecutionResult' && item.codeExecutionResult) {
-                                    const outcome = item.codeExecutionResult.outcome || 'OUTCOME_OK'
-                                    const output = item.codeExecutionResult.output || ''
-                                    if (outcome === 'OUTCOME_OK' && output) {
-                                        return `**Code Output:**\n\`\`\`\n${output}\n\`\`\`\n`
-                                    } else if (outcome !== 'OUTCOME_OK') {
-                                        return `**Code Execution Error:**\n\`\`\`\n${output}\n\`\`\`\n`
-                                    }
-                                }
-                                return ''
-                            })
-                            .filter((text: string) => text)
-                            .join('')
-                    } else if (chunk.content) {
+                    if (Array.isArray(chunk.content) && chunk.content.length > 0) {
+                        const contents = chunk.content as MessageContentText[]
+                        content = contents.map((item) => item.text).join('')
+                    } else {
                        content = chunk.content.toString()
                    }
                    sseStreamer.streamTokenEvent(chatId, content)
                }
-                const messageChunk = typeof chunk === 'string' ? new AIMessageChunk(chunk) : chunk
-                response = response.concat(messageChunk)
+                response = response.concat(chunk)
            }
        } catch (error) {
            console.error('Error during streaming:', error)
            throw error
        }

-        // Only convert to string if all content items are text (no inlineData or other special types)
        if (Array.isArray(response.content) && response.content.length > 0) {
-            const hasNonTextContent = response.content.some(
-                (item: any) => item.type === 'inlineData' || item.type === 'executableCode' || item.type === 'codeExecutionResult'
-            )
-            if (!hasNonTextContent) {
-                const responseContents = response.content as MessageContentText[]
-                response.content = responseContents.map((item) => item.text).join('')
-            }
+            const responseContents = response.content as MessageContentText[]
+            response.content = responseContents.map((item) => item.text).join('')
        }

        return response
    }
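Both sides of the streaming handler above reduce a chunk's `content` — which LangChain may deliver as a plain string or as an array of text parts — to a single string before emitting a token event. A minimal dependency-free sketch of that normalization (the `TextPart` type and function name are illustrative stand-ins, not identifiers from the Flowise codebase):

```typescript
// Hypothetical stand-in for LangChain's MessageContentText part
type TextPart = { type: 'text'; text: string }

// Normalize chunk content the way the streaming handler does:
// join the text of array parts, otherwise stringify the value.
function normalizeChunkContent(content: string | TextPart[]): string {
    if (Array.isArray(content) && content.length > 0) {
        return content.map((item) => item.text).join('')
    }
    return content.toString()
}
```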
@@ -1864,8 +1606,7 @@ class Agent_Agentflow implements INode {
        artifacts: any[],
        additionalTokens: number = 0,
        isWaitingForHumanInput: boolean = false,
-        fileAnnotations: any[] = [],
-        isStructuredOutput: boolean = false
+        fileAnnotations: any[] = []
    ): any {
        const output: any = {
            content: finalResponse,
@@ -1900,15 +1641,6 @@ class Agent_Agentflow implements INode {
            output.responseMetadata = response.response_metadata
        }

-        if (isStructuredOutput && typeof response === 'object') {
-            const structuredOutput = response as Record<string, any>
-            for (const key in structuredOutput) {
-                if (structuredOutput[key] !== undefined && structuredOutput[key] !== null) {
-                    output[key] = structuredOutput[key]
-                }
-            }
-        }
-
        // Add used tools, source documents and artifacts to output
        if (usedTools && usedTools.length > 0) {
            output.usedTools = flatten(usedTools)
@@ -1974,8 +1706,7 @@ class Agent_Agentflow implements INode {
        llmNodeInstance,
        isStreamable,
        isLastNode,
-        iterationContext,
-        isStructuredOutput = false
+        iterationContext
    }: {
        response: AIMessageChunk
        messages: BaseMessageLike[]
@@ -1989,7 +1720,6 @@ class Agent_Agentflow implements INode {
        isStreamable: boolean
        isLastNode: boolean
        iterationContext: ICommonObject
-        isStructuredOutput?: boolean
    }): Promise<{
        response: AIMessageChunk
        usedTools: IUsedTool[]
@@ -2069,9 +1799,7 @@ class Agent_Agentflow implements INode {
                const toolCallDetails = '```json\n' + JSON.stringify(toolCall, null, 2) + '\n```'
                const responseContent = response.content + `\nAttempting to use tool:\n${toolCallDetails}`
                response.content = responseContent
-                if (!isStructuredOutput) {
-                    sseStreamer?.streamTokenEvent(chatId, responseContent)
-                }
+                sseStreamer?.streamTokenEvent(chatId, responseContent)
                return { response, usedTools, sourceDocuments, artifacts, totalTokens, isWaitingForHumanInput: true }
            }
@@ -2177,7 +1905,7 @@ class Agent_Agentflow implements INode {
            const lastToolOutput = usedTools[0]?.toolOutput || ''
            const lastToolOutputString = typeof lastToolOutput === 'string' ? lastToolOutput : JSON.stringify(lastToolOutput, null, 2)

-            if (sseStreamer && !isStructuredOutput) {
+            if (sseStreamer) {
                sseStreamer.streamTokenEvent(chatId, lastToolOutputString)
            }
@@ -2206,19 +1934,12 @@ class Agent_Agentflow implements INode {
            let newResponse: AIMessageChunk
            if (isStreamable) {
-                newResponse = await this.handleStreamingResponse(
-                    sseStreamer,
-                    llmNodeInstance,
-                    messages,
-                    chatId,
-                    abortController,
-                    isStructuredOutput
-                )
+                newResponse = await this.handleStreamingResponse(sseStreamer, llmNodeInstance, messages, chatId, abortController)
            } else {
                newResponse = await llmNodeInstance.invoke(messages, { signal: abortController?.signal })

                // Stream non-streaming response if this is the last node
-                if (isLastNode && sseStreamer && !isStructuredOutput) {
+                if (isLastNode && sseStreamer) {
                    let responseContent = JSON.stringify(newResponse, null, 2)
                    if (typeof newResponse.content === 'string') {
                        responseContent = newResponse.content
@@ -2253,8 +1974,7 @@ class Agent_Agentflow implements INode {
                llmNodeInstance,
                isStreamable,
                isLastNode,
-                iterationContext,
-                isStructuredOutput
+                iterationContext
            })

            // Merge results from recursive tool calls
@@ -2285,8 +2005,7 @@ class Agent_Agentflow implements INode {
        llmWithoutToolsBind,
        isStreamable,
        isLastNode,
-        iterationContext,
-        isStructuredOutput = false
+        iterationContext
    }: {
        humanInput: IHumanInput
        humanInputAction: Record<string, any> | undefined
@@ -2301,7 +2020,6 @@ class Agent_Agentflow implements INode {
        isStreamable: boolean
        isLastNode: boolean
        iterationContext: ICommonObject
-        isStructuredOutput?: boolean
    }): Promise<{
        response: AIMessageChunk
        usedTools: IUsedTool[]
@@ -2504,7 +2222,7 @@ class Agent_Agentflow implements INode {
            const lastToolOutput = usedTools[0]?.toolOutput || ''
            const lastToolOutputString = typeof lastToolOutput === 'string' ? lastToolOutput : JSON.stringify(lastToolOutput, null, 2)

-            if (sseStreamer && !isStructuredOutput) {
+            if (sseStreamer) {
                sseStreamer.streamTokenEvent(chatId, lastToolOutputString)
            }
@@ -2535,19 +2253,12 @@ class Agent_Agentflow implements INode {
            }

            if (isStreamable) {
-                newResponse = await this.handleStreamingResponse(
-                    sseStreamer,
-                    llmNodeInstance,
-                    messages,
-                    chatId,
-                    abortController,
-                    isStructuredOutput
-                )
+                newResponse = await this.handleStreamingResponse(sseStreamer, llmNodeInstance, messages, chatId, abortController)
            } else {
                newResponse = await llmNodeInstance.invoke(messages, { signal: abortController?.signal })

                // Stream non-streaming response if this is the last node
-                if (isLastNode && sseStreamer && !isStructuredOutput) {
+                if (isLastNode && sseStreamer) {
                    let responseContent = JSON.stringify(newResponse, null, 2)
                    if (typeof newResponse.content === 'string') {
                        responseContent = newResponse.content
@@ -2582,8 +2293,7 @@ class Agent_Agentflow implements INode {
                llmNodeInstance,
                isStreamable,
                isLastNode,
-                iterationContext,
-                isStructuredOutput
+                iterationContext
            })

            // Merge results from recursive tool calls
@@ -2598,6 +2308,190 @@ class Agent_Agentflow implements INode {
        return { response: newResponse, usedTools, sourceDocuments, artifacts, totalTokens, isWaitingForHumanInput }
    }
/**
* Extracts artifacts from response metadata (both annotations and built-in tools)
*/
private async extractArtifactsFromResponse(
responseMetadata: any,
modelNodeData: INodeData,
options: ICommonObject
): Promise<{ artifacts: any[]; fileAnnotations: any[] }> {
const artifacts: any[] = []
const fileAnnotations: any[] = []
if (!responseMetadata?.output || !Array.isArray(responseMetadata.output)) {
return { artifacts, fileAnnotations }
}
for (const outputItem of responseMetadata.output) {
// Handle container file citations from annotations
if (outputItem.type === 'message' && outputItem.content && Array.isArray(outputItem.content)) {
for (const contentItem of outputItem.content) {
if (contentItem.annotations && Array.isArray(contentItem.annotations)) {
for (const annotation of contentItem.annotations) {
if (annotation.type === 'container_file_citation' && annotation.file_id && annotation.filename) {
try {
// Download and store the file content
const downloadResult = await this.downloadContainerFile(
annotation.container_id,
annotation.file_id,
annotation.filename,
modelNodeData,
options
)
if (downloadResult) {
const fileType = this.getArtifactTypeFromFilename(annotation.filename)
if (fileType === 'png' || fileType === 'jpeg' || fileType === 'jpg') {
const artifact = {
type: fileType,
data: downloadResult.filePath
}
artifacts.push(artifact)
} else {
fileAnnotations.push({
filePath: downloadResult.filePath,
fileName: annotation.filename
})
}
}
} catch (error) {
console.error('Error processing annotation:', error)
}
}
}
}
}
}
// Handle built-in tool artifacts (like image generation)
if (outputItem.type === 'image_generation_call' && outputItem.result) {
try {
const savedImageResult = await this.saveBase64Image(outputItem, options)
if (savedImageResult) {
// Replace the base64 result with the file path in the response metadata
outputItem.result = savedImageResult.filePath
// Create artifact in the same format as other image artifacts
const fileType = this.getArtifactTypeFromFilename(savedImageResult.fileName)
artifacts.push({
type: fileType,
data: savedImageResult.filePath
})
}
} catch (error) {
console.error('Error processing image generation artifact:', error)
}
}
}
return { artifacts, fileAnnotations }
}
/**
* Downloads file content from container file citation
*/
private async downloadContainerFile(
containerId: string,
fileId: string,
filename: string,
modelNodeData: INodeData,
options: ICommonObject
): Promise<{ filePath: string; totalSize: number } | null> {
try {
const credentialData = await getCredentialData(modelNodeData.credential ?? '', options)
const openAIApiKey = getCredentialParam('openAIApiKey', credentialData, modelNodeData)
if (!openAIApiKey) {
console.warn('No OpenAI API key available for downloading container file')
return null
}
// Download the file using OpenAI Container API
const response = await fetch(`https://api.openai.com/v1/containers/${containerId}/files/${fileId}/content`, {
method: 'GET',
headers: {
Accept: '*/*',
Authorization: `Bearer ${openAIApiKey}`
}
})
if (!response.ok) {
console.warn(
`Failed to download container file ${fileId} from container ${containerId}: ${response.status} ${response.statusText}`
)
return null
}
// Extract the binary data from the Response object
const data = await response.arrayBuffer()
const dataBuffer = Buffer.from(data)
const mimeType = this.getMimeTypeFromFilename(filename)
// Store the file using the same storage utility as OpenAIAssistant
const { path, totalSize } = await addSingleFileToStorage(
mimeType,
dataBuffer,
filename,
options.orgId,
options.chatflowid,
options.chatId
)
return { filePath: path, totalSize }
} catch (error) {
console.error('Error downloading container file:', error)
return null
}
}
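`downloadContainerFile` above fetches file content from OpenAI's container-files endpoint with a bearer-token header. Assembling the request URL and headers is pure string work and can be checked in isolation (a sketch mirroring the endpoint used above; the function name and key value are illustrative):

```typescript
// Build the request pieces for downloading a container file's content,
// matching the URL shape used in downloadContainerFile.
function containerFileRequest(containerId: string, fileId: string, apiKey: string) {
    return {
        url: `https://api.openai.com/v1/containers/${containerId}/files/${fileId}/content`,
        method: 'GET' as const,
        headers: { Accept: '*/*', Authorization: `Bearer ${apiKey}` }
    }
}
```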
/**
* Gets MIME type from filename extension
*/
private getMimeTypeFromFilename(filename: string): string {
const extension = filename.toLowerCase().split('.').pop()
const mimeTypes: { [key: string]: string } = {
png: 'image/png',
jpg: 'image/jpeg',
jpeg: 'image/jpeg',
gif: 'image/gif',
pdf: 'application/pdf',
txt: 'text/plain',
csv: 'text/csv',
json: 'application/json',
html: 'text/html',
xml: 'application/xml'
}
return mimeTypes[extension || ''] || 'application/octet-stream'
}
/**
* Gets artifact type from filename extension for UI rendering
*/
private getArtifactTypeFromFilename(filename: string): string {
const extension = filename.toLowerCase().split('.').pop()
const artifactTypes: { [key: string]: string } = {
png: 'png',
jpg: 'jpeg',
jpeg: 'jpeg',
html: 'html',
htm: 'html',
md: 'markdown',
markdown: 'markdown',
json: 'json',
js: 'javascript',
javascript: 'javascript',
tex: 'latex',
latex: 'latex',
txt: 'text',
csv: 'text',
pdf: 'text'
}
return artifactTypes[extension || ''] || 'text'
}
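Both helpers above (`getMimeTypeFromFilename` and `getArtifactTypeFromFilename`) key off the lowercased final extension and fall back to a default when the extension is unknown. The lookup pattern can be exercised on its own; this sketch mirrors a subset of the MIME table in the method:

```typescript
// Extension lookup with a default, mirroring getMimeTypeFromFilename:
// lowercase the name, take the last dot-separated segment, map it,
// and fall back to the generic binary type when unmapped.
function mimeTypeFor(filename: string): string {
    const extension = filename.toLowerCase().split('.').pop()
    const mimeTypes: { [key: string]: string } = {
        png: 'image/png',
        jpg: 'image/jpeg',
        jpeg: 'image/jpeg',
        csv: 'text/csv',
        json: 'application/json'
    }
    return mimeTypes[extension || ''] || 'application/octet-stream'
}
```

Note that a name without a dot yields itself from `.pop()` ("noext" → "noext"), which simply misses the table and takes the fallback — the same behavior as the methods above.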
    /**
     * Processes sandbox links in the response text and converts them to file annotations
     */


@@ -60,7 +60,7 @@ class CustomFunction_Agentflow implements INode {
    constructor() {
        this.label = 'Custom Function'
        this.name = 'customFunctionAgentflow'
-        this.version = 1.1
+        this.version = 1.0
        this.type = 'CustomFunction'
        this.category = 'Agent Flows'
        this.description = 'Execute custom function'
@@ -107,7 +107,8 @@ class CustomFunction_Agentflow implements INode {
                label: 'Key',
                name: 'key',
                type: 'asyncOptions',
-                loadMethod: 'listRuntimeStateKeys'
+                loadMethod: 'listRuntimeStateKeys',
+                freeSolo: true
            },
            {
                label: 'Value',
@@ -133,7 +134,7 @@ class CustomFunction_Agentflow implements INode {
    async run(nodeData: INodeData, input: string, options: ICommonObject): Promise<any> {
        const javascriptFunction = nodeData.inputs?.customFunctionJavascriptFunction as string
-        const functionInputVariables = (nodeData.inputs?.customFunctionInputVariables as ICustomFunctionInputVariables[]) ?? []
+        const functionInputVariables = nodeData.inputs?.customFunctionInputVariables as ICustomFunctionInputVariables[]
        const _customFunctionUpdateState = nodeData.inputs?.customFunctionUpdateState
        const state = options.agentflowRuntime?.state as ICommonObject
@@ -146,17 +147,11 @@ class CustomFunction_Agentflow implements INode {
        const variables = await getVars(appDataSource, databaseEntities, nodeData, options)

        const flow = {
-            input,
-            state,
            chatflowId: options.chatflowid,
            sessionId: options.sessionId,
            chatId: options.chatId,
-            rawOutput: options.postProcessing?.rawOutput || '',
-            chatHistory: options.postProcessing?.chatHistory || [],
-            sourceDocuments: options.postProcessing?.sourceDocuments,
-            usedTools: options.postProcessing?.usedTools,
-            artifacts: options.postProcessing?.artifacts,
-            fileAnnotations: options.postProcessing?.fileAnnotations
+            input,
+            state
        }

        // Create additional sandbox variables for custom function inputs
// Create additional sandbox variables for custom function inputs // Create additional sandbox variables for custom function inputs


@@ -30,7 +30,7 @@ class ExecuteFlow_Agentflow implements INode {
    constructor() {
        this.label = 'Execute Flow'
        this.name = 'executeFlowAgentflow'
-        this.version = 1.2
+        this.version = 1.1
        this.type = 'ExecuteFlow'
        this.category = 'Agent Flows'
        this.description = 'Execute another flow'
@@ -102,7 +102,8 @@ class ExecuteFlow_Agentflow implements INode {
                label: 'Key',
                name: 'key',
                type: 'asyncOptions',
-                loadMethod: 'listRuntimeStateKeys'
+                loadMethod: 'listRuntimeStateKeys',
+                freeSolo: true
            },
            {
                label: 'Value',


@@ -241,11 +241,8 @@ class HumanInput_Agentflow implements INode {
        if (isStreamable) {
            const sseStreamer: IServerSideEventStreamer = options.sseStreamer as IServerSideEventStreamer
            for await (const chunk of await llmNodeInstance.stream(messages)) {
-                const content = typeof chunk === 'string' ? chunk : chunk.content.toString()
-                sseStreamer.streamTokenEvent(chatId, content)
-                const messageChunk = typeof chunk === 'string' ? new AIMessageChunk(chunk) : chunk
-                response = response.concat(messageChunk)
+                sseStreamer.streamTokenEvent(chatId, chunk.content.toString())
+                response = response.concat(chunk)
            }
            humanInputDescription = response.content as string
        } else {


@@ -2,19 +2,17 @@ import { BaseChatModel } from '@langchain/core/language_models/chat_models'
 import { ICommonObject, IMessage, INode, INodeData, INodeOptionsValue, INodeParams, IServerSideEventStreamer } from '../../../src/Interface'
 import { AIMessageChunk, BaseMessageLike, MessageContentText } from '@langchain/core/messages'
 import { DEFAULT_SUMMARIZER_TEMPLATE } from '../prompt'
+import { z } from 'zod'
 import { AnalyticHandler } from '../../../src/handler'
-import { ILLMMessage } from '../Interface.Agentflow'
+import { ILLMMessage, IStructuredOutput } from '../Interface.Agentflow'
 import {
-    addImageArtifactsToMessages,
-    extractArtifactsFromResponse,
     getPastChatHistoryImageMessages,
     getUniqueImageMessages,
     processMessagesWithImages,
     replaceBase64ImagesWithFileReferences,
-    replaceInlineDataWithFileReferences,
     updateFlowState
 } from '../utils'
-import { processTemplateVariables, configureStructuredOutput } from '../../../src/utils'
+import { processTemplateVariables } from '../../../src/utils'
 import { flatten } from 'lodash'

 class LLM_Agentflow implements INode {
@@ -34,7 +32,7 @@ class LLM_Agentflow implements INode {
    constructor() {
        this.label = 'LLM'
        this.name = 'llmAgentflow'
-        this.version = 1.1
+        this.version = 1.0
        this.type = 'LLM'
        this.category = 'Agent Flows'
        this.description = 'Large language models to analyze user-provided inputs and generate responses'
@@ -290,7 +288,8 @@ class LLM_Agentflow implements INode {
                label: 'Key',
                name: 'key',
                type: 'asyncOptions',
-                loadMethod: 'listRuntimeStateKeys'
+                loadMethod: 'listRuntimeStateKeys',
+                freeSolo: true
            },
            {
                label: 'Value',
@@ -450,16 +449,10 @@ class LLM_Agentflow implements INode {
        }
        delete nodeData.inputs?.llmMessages

-        /**
-         * Add image artifacts from previous assistant responses as user messages
-         * Images are converted from FILE-STORAGE::<image_path> to base 64 image_url format
-         */
-        await addImageArtifactsToMessages(messages, options)
-
        // Configure structured output if specified
        const isStructuredOutput = _llmStructuredOutput && Array.isArray(_llmStructuredOutput) && _llmStructuredOutput.length > 0
        if (isStructuredOutput) {
-            llmNodeInstance = configureStructuredOutput(llmNodeInstance, _llmStructuredOutput)
+            llmNodeInstance = this.configureStructuredOutput(llmNodeInstance, _llmStructuredOutput)
        }

        // Initialize response and determine if streaming is possible
@@ -475,11 +468,9 @@ class LLM_Agentflow implements INode {
        // Track execution time
        const startTime = Date.now()
        const sseStreamer: IServerSideEventStreamer | undefined = options.sseStreamer

-        /*
-         * Invoke LLM
-         */
        if (isStreamable) {
            response = await this.handleStreamingResponse(sseStreamer, llmNodeInstance, messages, chatId, abortController)
        } else {
@@ -504,40 +495,6 @@ class LLM_Agentflow implements INode {
        const endTime = Date.now()
        const timeDelta = endTime - startTime

-        // Extract artifacts and file annotations from response metadata
-        let artifacts: any[] = []
-        let fileAnnotations: any[] = []
-        if (response.response_metadata) {
-            const {
-                artifacts: extractedArtifacts,
-                fileAnnotations: extractedFileAnnotations,
-                savedInlineImages
-            } = await extractArtifactsFromResponse(response.response_metadata, newNodeData, options)
-            if (extractedArtifacts.length > 0) {
-                artifacts = extractedArtifacts
-                // Stream artifacts if this is the last node
-                if (isLastNode && sseStreamer) {
-                    sseStreamer.streamArtifactsEvent(chatId, artifacts)
-                }
-            }
-            if (extractedFileAnnotations.length > 0) {
-                fileAnnotations = extractedFileAnnotations
-                // Stream file annotations if this is the last node
-                if (isLastNode && sseStreamer) {
-                    sseStreamer.streamFileAnnotationsEvent(chatId, fileAnnotations)
-                }
-            }
-            // Replace inlineData base64 with file references in the response
-            if (savedInlineImages && savedInlineImages.length > 0) {
-                replaceInlineDataWithFileReferences(response, savedInlineImages)
-            }
-        }
-
        // Update flow state if needed
        let newState = { ...state }
        if (_llmUpdateState && Array.isArray(_llmUpdateState) && _llmUpdateState.length > 0) {
@@ -557,22 +514,10 @@ class LLM_Agentflow implements INode {
            finalResponse = response.content.map((item: any) => item.text).join('\n')
        } else if (response.content && typeof response.content === 'string') {
            finalResponse = response.content
-        } else if (response.content === '') {
-            // Empty response content, this could happen when there is only image data
-            finalResponse = ''
        } else {
            finalResponse = JSON.stringify(response, null, 2)
        }

-        const output = this.prepareOutputObject(
-            response,
-            finalResponse,
-            startTime,
-            endTime,
-            timeDelta,
-            isStructuredOutput,
-            artifacts,
-            fileAnnotations
-        )
+        const output = this.prepareOutputObject(response, finalResponse, startTime, endTime, timeDelta, isStructuredOutput)

        // End analytics tracking
        if (analyticHandlers && llmIds) {
@@ -584,23 +529,12 @@ class LLM_Agentflow implements INode {
            this.sendStreamingEvents(options, chatId, response)
        }

-        // Stream file annotations if any were extracted
-        if (fileAnnotations.length > 0 && isLastNode && sseStreamer) {
-            sseStreamer.streamFileAnnotationsEvent(chatId, fileAnnotations)
-        }
-
        // Process template variables in state
        newState = processTemplateVariables(newState, finalResponse)

-        /**
-         * Remove the temporarily added image artifact messages before storing
-         * This is to avoid storing the actual base64 data into database
-         */
-        const messagesToStore = messages.filter((msg: any) => !msg._isTemporaryImageMessage)
-
        // Replace the actual messages array with one that includes the file references for images instead of base64 data
        const messagesWithFileReferences = replaceBase64ImagesWithFileReferences(
-            messagesToStore,
+            messages,
            runtimeImageMessagesWithFileRef,
            pastImageMessagesWithFileRef
        )
@@ -651,13 +585,7 @@ class LLM_Agentflow implements INode {
            {
                role: returnRole,
                content: finalResponse,
-                name: nodeData?.label ? nodeData?.label.toLowerCase().replace(/\s/g, '_').trim() : nodeData?.id,
-                ...(((artifacts && artifacts.length > 0) || (fileAnnotations && fileAnnotations.length > 0)) && {
-                    additional_kwargs: {
-                        ...(artifacts && artifacts.length > 0 && { artifacts }),
-                        ...(fileAnnotations && fileAnnotations.length > 0 && { fileAnnotations })
-                    }
-                })
+                name: nodeData?.label ? nodeData?.label.toLowerCase().replace(/\s/g, '_').trim() : nodeData?.id
            }
        ]
@@ -827,6 +755,59 @@ class LLM_Agentflow implements INode {
        }
    }
/**
* Configures structured output for the LLM
*/
private configureStructuredOutput(llmNodeInstance: BaseChatModel, llmStructuredOutput: IStructuredOutput[]): BaseChatModel {
try {
const zodObj: ICommonObject = {}
for (const sch of llmStructuredOutput) {
if (sch.type === 'string') {
zodObj[sch.key] = z.string().describe(sch.description || '')
} else if (sch.type === 'stringArray') {
zodObj[sch.key] = z.array(z.string()).describe(sch.description || '')
} else if (sch.type === 'number') {
zodObj[sch.key] = z.number().describe(sch.description || '')
} else if (sch.type === 'boolean') {
zodObj[sch.key] = z.boolean().describe(sch.description || '')
} else if (sch.type === 'enum') {
const enumValues = sch.enumValues?.split(',').map((item: string) => item.trim()) || []
zodObj[sch.key] = z
.enum(enumValues.length ? (enumValues as [string, ...string[]]) : ['default'])
.describe(sch.description || '')
} else if (sch.type === 'jsonArray') {
const jsonSchema = sch.jsonSchema
if (jsonSchema) {
try {
// Parse the JSON schema
const schemaObj = JSON.parse(jsonSchema)
// Create a Zod schema from the JSON schema
const itemSchema = this.createZodSchemaFromJSON(schemaObj)
// Create an array schema of the item schema
zodObj[sch.key] = z.array(itemSchema).describe(sch.description || '')
} catch (err) {
console.error(`Error parsing JSON schema for ${sch.key}:`, err)
// Fallback to generic array of records
zodObj[sch.key] = z.array(z.record(z.any())).describe(sch.description || '')
}
} else {
// If no schema provided, use generic array of records
zodObj[sch.key] = z.array(z.record(z.any())).describe(sch.description || '')
}
}
}
const structuredOutput = z.object(zodObj)
// @ts-ignore
return llmNodeInstance.withStructuredOutput(structuredOutput)
} catch (exception) {
console.error(exception)
return llmNodeInstance
}
}
    /**
     * Handles streaming response from the LLM
     */
@@ -843,20 +824,16 @@ class LLM_Agentflow implements INode {
            for await (const chunk of await llmNodeInstance.stream(messages, { signal: abortController?.signal })) {
                if (sseStreamer) {
                    let content = ''
-                    if (typeof chunk === 'string') {
-                        content = chunk
-                    } else if (Array.isArray(chunk.content) && chunk.content.length > 0) {
+                    if (Array.isArray(chunk.content) && chunk.content.length > 0) {
                        const contents = chunk.content as MessageContentText[]
                        content = contents.map((item) => item.text).join('')
-                    } else if (chunk.content) {
+                    } else {
                        content = chunk.content.toString()
                    }
                    sseStreamer.streamTokenEvent(chatId, content)
                }
-                const messageChunk = typeof chunk === 'string' ? new AIMessageChunk(chunk) : chunk
-                response = response.concat(messageChunk)
+                response = response.concat(chunk)
            }
        } catch (error) {
            console.error('Error during streaming:', error)
@@ -878,9 +855,7 @@ class LLM_Agentflow implements INode {
        startTime: number,
        endTime: number,
        timeDelta: number,
-        isStructuredOutput: boolean,
-        artifacts: any[] = [],
-        fileAnnotations: any[] = []
+        isStructuredOutput: boolean
    ): any {
        const output: any = {
            content: finalResponse,
@@ -899,10 +874,6 @@ class LLM_Agentflow implements INode {
            output.usageMetadata = response.usage_metadata
        }

-        if (response.response_metadata) {
-            output.responseMetadata = response.response_metadata
-        }
-
        if (isStructuredOutput && typeof response === 'object') {
            const structuredOutput = response as Record<string, any>
            for (const key in structuredOutput) {
@@ -912,14 +883,6 @@ class LLM_Agentflow implements INode {
            }
        }

-        if (artifacts && artifacts.length > 0) {
-            output.artifacts = flatten(artifacts)
-        }
-
-        if (fileAnnotations && fileAnnotations.length > 0) {
-            output.fileAnnotations = fileAnnotations
-        }
-
        return output
    }
@@ -944,6 +907,107 @@ class LLM_Agentflow implements INode {
        sseStreamer.streamEndEvent(chatId)
    }
/**
* Creates a Zod schema from a JSON schema object
* @param jsonSchema The JSON schema object
* @returns A Zod schema
*/
private createZodSchemaFromJSON(jsonSchema: any): z.ZodTypeAny {
// If the schema is an object with properties, create an object schema
if (typeof jsonSchema === 'object' && jsonSchema !== null) {
const schemaObj: Record<string, z.ZodTypeAny> = {}
// Process each property in the schema
for (const [key, value] of Object.entries(jsonSchema)) {
if (value === null) {
// Handle null values
schemaObj[key] = z.null()
} else if (typeof value === 'object' && !Array.isArray(value)) {
// Check if the property has a type definition
if ('type' in value) {
const type = value.type as string
const description = ('description' in value ? (value.description as string) : '') || ''
// Create the appropriate Zod type based on the type property
if (type === 'string') {
schemaObj[key] = z.string().describe(description)
} else if (type === 'number') {
schemaObj[key] = z.number().describe(description)
} else if (type === 'boolean') {
schemaObj[key] = z.boolean().describe(description)
} else if (type === 'array') {
// If it's an array type, check if items is defined
if ('items' in value && value.items) {
const itemSchema = this.createZodSchemaFromJSON(value.items)
schemaObj[key] = z.array(itemSchema).describe(description)
} else {
// Default to array of any if items not specified
schemaObj[key] = z.array(z.any()).describe(description)
}
} else if (type === 'object') {
// If it's an object type, check if properties is defined
if ('properties' in value && value.properties) {
const nestedSchema = this.createZodSchemaFromJSON(value.properties)
schemaObj[key] = nestedSchema.describe(description)
} else {
// Default to record of any if properties not specified
schemaObj[key] = z.record(z.any()).describe(description)
}
} else {
// Default to any for unknown types
schemaObj[key] = z.any().describe(description)
}
// Check if the property is optional
if ('optional' in value && value.optional === true) {
schemaObj[key] = schemaObj[key].optional()
}
} else if (Array.isArray(value)) {
// Array values without a type property
if (value.length > 0) {
// If the array has items, recursively create a schema for the first item
const itemSchema = this.createZodSchemaFromJSON(value[0])
schemaObj[key] = z.array(itemSchema)
} else {
// Empty array, allow any array
schemaObj[key] = z.array(z.any())
}
} else {
// It's a nested object without a type property, recursively create schema
schemaObj[key] = this.createZodSchemaFromJSON(value)
}
} else if (Array.isArray(value)) {
// Array values
if (value.length > 0) {
// If the array has items, recursively create a schema for the first item
const itemSchema = this.createZodSchemaFromJSON(value[0])
schemaObj[key] = z.array(itemSchema)
} else {
// Empty array, allow any array
schemaObj[key] = z.array(z.any())
}
} else {
// For primitive values (which shouldn't be in the schema directly)
// Use the corresponding Zod type
if (typeof value === 'string') {
schemaObj[key] = z.string()
} else if (typeof value === 'number') {
schemaObj[key] = z.number()
} else if (typeof value === 'boolean') {
schemaObj[key] = z.boolean()
} else {
schemaObj[key] = z.any()
}
}
}
return z.object(schemaObj)
}
// Fallback to any for unknown types
return z.any()
}
}
module.exports = { nodeClass: LLM_Agentflow }
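The `createZodSchemaFromJSON` helper added in this hunk recursively maps a JSON-schema-like object to Zod types. A dependency-free sketch of its per-property dispatch, using a hypothetical `describeSchema` that emits readable type descriptors instead of Zod validators (the real helper returns `z.object(...)`):

```typescript
// Hypothetical stand-in for createZodSchemaFromJSON: same dispatch over
// { type, items, optional }, but without the zod dependency.
function describeSchema(jsonSchema: Record<string, any>): Record<string, string> {
    const out: Record<string, string> = {}
    for (const [key, value] of Object.entries(jsonSchema)) {
        if (value !== null && typeof value === 'object' && !Array.isArray(value) && 'type' in value) {
            // e.g. { type: 'array', items: { type: 'string' }, optional: true }
            const base = value.type === 'array' ? `array<${value.items?.type ?? 'any'}>` : String(value.type)
            // 'optional: true' becomes .optional() in the Zod version
            out[key] = value.optional === true ? `${base}?` : base
        } else {
            // nested objects and unknown shapes fall through to z.any()-style handling
            out[key] = 'any'
        }
    }
    return out
}

console.log(describeSchema({ title: { type: 'string' }, tags: { type: 'array', items: { type: 'string' } }, draft: { type: 'boolean', optional: true } }))
```

The real implementation additionally recurses into nested `properties` and array `items`, producing `z.record(z.any())` or `z.array(z.any())` when those are missing.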


@@ -20,7 +20,7 @@ class Loop_Agentflow implements INode {
constructor() {
this.label = 'Loop'
this.name = 'loopAgentflow'
-this.version = 1.2
+this.version = 1.1
this.type = 'Loop'
this.category = 'Agent Flows'
this.description = 'Loop back to a previous node'
@@ -64,7 +64,8 @@ class Loop_Agentflow implements INode {
label: 'Key',
name: 'key',
type: 'asyncOptions',
-loadMethod: 'listRuntimeStateKeys'
+loadMethod: 'listRuntimeStateKeys',
+freeSolo: true
},
{
label: 'Value',
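The `freeSolo: true` flag added to the `Key` param here (and in the Retriever and Tool nodes below) presumably mirrors the MUI Autocomplete option of the same name: the dropdown is still populated by `listRuntimeStateKeys`, but the user can type a key that is not in the loaded list. The resulting param shape:

```typescript
// Param definition as it reads after this diff (shape only; the INodeParams
// typing and the Autocomplete-style semantics of freeSolo are assumptions).
const keyParam = {
    label: 'Key',
    name: 'key',
    type: 'asyncOptions',
    loadMethod: 'listRuntimeStateKeys',
    freeSolo: true // allow values outside the loaded options
}
```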


@@ -36,7 +36,7 @@ class Retriever_Agentflow implements INode {
constructor() {
this.label = 'Retriever'
this.name = 'retrieverAgentflow'
-this.version = 1.1
+this.version = 1.0
this.type = 'Retriever'
this.category = 'Agent Flows'
this.description = 'Retrieve information from vector database'
@@ -87,7 +87,8 @@ class Retriever_Agentflow implements INode {
label: 'Key',
name: 'key',
type: 'asyncOptions',
-loadMethod: 'listRuntimeStateKeys'
+loadMethod: 'listRuntimeStateKeys',
+freeSolo: true
},
{
label: 'Value',


@@ -29,7 +29,7 @@ class Tool_Agentflow implements INode {
constructor() {
this.label = 'Tool'
this.name = 'toolAgentflow'
-this.version = 1.2
+this.version = 1.1
this.type = 'Tool'
this.category = 'Agent Flows'
this.description = 'Tools allow LLM to interact with external systems'
@@ -80,7 +80,8 @@ class Tool_Agentflow implements INode {
label: 'Key',
name: 'key',
type: 'asyncOptions',
-loadMethod: 'listRuntimeStateKeys'
+loadMethod: 'listRuntimeStateKeys',
+freeSolo: true
},
{
label: 'Value',


@@ -1,11 +1,10 @@
-import { BaseMessage, MessageContentImageUrl, AIMessageChunk } from '@langchain/core/messages'
+import { BaseMessage, MessageContentImageUrl } from '@langchain/core/messages'
import { getImageUploads } from '../../src/multiModalUtils'
-import { addSingleFileToStorage, getFileFromStorage } from '../../src/storageUtils'
+import { getFileFromStorage } from '../../src/storageUtils'
-import { ICommonObject, IFileUpload, INodeData } from '../../src/Interface'
+import { ICommonObject, IFileUpload } from '../../src/Interface'
import { BaseMessageLike } from '@langchain/core/messages'
import { IFlowState } from './Interface.Agentflow'
-import { getCredentialData, getCredentialParam, handleEscapeCharacters, mapMimeTypeToInputField } from '../../src/utils'
+import { handleEscapeCharacters, mapMimeTypeToInputField } from '../../src/utils'
-import fetch from 'node-fetch'
export const addImagesToMessages = async (
options: ICommonObject,
@@ -19,8 +18,7 @@ export const addImagesToMessages = async (
for (const upload of imageUploads) {
let bf = upload.data
if (upload.type == 'stored-file') {
-const fileName = upload.name.replace(/^FILE-STORAGE::/, '')
-const contents = await getFileFromStorage(fileName, options.orgId, options.chatflowid, options.chatId)
+const contents = await getFileFromStorage(upload.name, options.orgId, options.chatflowid, options.chatId)
// as the image is stored in the server, read the file and convert it to base64
bf = 'data:' + upload.mime + ';base64,' + contents.toString('base64')
@@ -91,9 +89,8 @@ export const processMessagesWithImages = async (
if (item.type === 'stored-file' && item.name && item.mime.startsWith('image/')) {
hasImageReferences = true
try {
-const fileName = item.name.replace(/^FILE-STORAGE::/, '')
// Get file contents from storage
-const contents = await getFileFromStorage(fileName, options.orgId, options.chatflowid, options.chatId)
+const contents = await getFileFromStorage(item.name, options.orgId, options.chatflowid, options.chatId)
// Create base64 data URL
const base64Data = 'data:' + item.mime + ';base64,' + contents.toString('base64')
@@ -325,8 +322,7 @@ export const getPastChatHistoryImageMessages = async (
const imageContents: MessageContentImageUrl[] = []
for (const upload of uploads) {
if (upload.type === 'stored-file' && upload.mime.startsWith('image/')) {
-const fileName = upload.name.replace(/^FILE-STORAGE::/, '')
-const fileData = await getFileFromStorage(fileName, options.orgId, options.chatflowid, options.chatId)
+const fileData = await getFileFromStorage(upload.name, options.orgId, options.chatflowid, options.chatId)
// as the image is stored in the server, read the file and convert it to base64
const bf = 'data:' + upload.mime + ';base64,' + fileData.toString('base64')
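All three hunks above end by inlining the stored file as a base64 data URL. That construction, isolated as a sketch:

```typescript
// Build a data: URI from file bytes, as done for stored-file uploads above.
function toDataUrl(mime: string, contents: Buffer): string {
    return 'data:' + mime + ';base64,' + contents.toString('base64')
}

console.log(toDataUrl('image/png', Buffer.from('hi'))) // data:image/png;base64,aGk=
```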
@@ -460,437 +456,6 @@ export const getPastChatHistoryImageMessages = async (
}
}
/**
* Gets MIME type from filename extension
*/
export const getMimeTypeFromFilename = (filename: string): string => {
const extension = filename.toLowerCase().split('.').pop()
const mimeTypes: { [key: string]: string } = {
png: 'image/png',
jpg: 'image/jpeg',
jpeg: 'image/jpeg',
gif: 'image/gif',
pdf: 'application/pdf',
txt: 'text/plain',
csv: 'text/csv',
json: 'application/json',
html: 'text/html',
xml: 'application/xml'
}
return mimeTypes[extension || ''] || 'application/octet-stream'
}
/**
* Gets artifact type from filename extension for UI rendering
*/
export const getArtifactTypeFromFilename = (filename: string): string => {
const extension = filename.toLowerCase().split('.').pop()
const artifactTypes: { [key: string]: string } = {
png: 'png',
jpg: 'jpeg',
jpeg: 'jpeg',
html: 'html',
htm: 'html',
md: 'markdown',
markdown: 'markdown',
json: 'json',
js: 'javascript',
javascript: 'javascript',
tex: 'latex',
latex: 'latex',
txt: 'text',
csv: 'text',
pdf: 'text'
}
return artifactTypes[extension || ''] || 'text'
}
/**
* Saves base64 image data to storage and returns file information
*/
export const saveBase64Image = async (
outputItem: any,
options: ICommonObject
): Promise<{ filePath: string; fileName: string; totalSize: number } | null> => {
try {
if (!outputItem.result) {
return null
}
// Extract base64 data and create buffer
const base64Data = outputItem.result
const imageBuffer = Buffer.from(base64Data, 'base64')
// Determine file extension and MIME type
const outputFormat = outputItem.output_format || 'png'
const fileName = `generated_image_${outputItem.id || Date.now()}.${outputFormat}`
const mimeType = outputFormat === 'png' ? 'image/png' : 'image/jpeg'
// Save the image using the existing storage utility
const { path, totalSize } = await addSingleFileToStorage(
mimeType,
imageBuffer,
fileName,
options.orgId,
options.chatflowid,
options.chatId
)
return { filePath: path, fileName, totalSize }
} catch (error) {
console.error('Error saving base64 image:', error)
return null
}
}
/**
* Saves Gemini inline image data to storage and returns file information
*/
export const saveGeminiInlineImage = async (
inlineItem: any,
options: ICommonObject
): Promise<{ filePath: string; fileName: string; totalSize: number } | null> => {
try {
if (!inlineItem.data || !inlineItem.mimeType) {
return null
}
// Extract base64 data and create buffer
const base64Data = inlineItem.data
const imageBuffer = Buffer.from(base64Data, 'base64')
// Determine file extension from MIME type
const mimeType = inlineItem.mimeType
let extension = 'png'
if (mimeType.includes('jpeg') || mimeType.includes('jpg')) {
extension = 'jpg'
} else if (mimeType.includes('png')) {
extension = 'png'
} else if (mimeType.includes('gif')) {
extension = 'gif'
} else if (mimeType.includes('webp')) {
extension = 'webp'
}
const fileName = `gemini_generated_image_${Date.now()}.${extension}`
// Save the image using the existing storage utility
const { path, totalSize } = await addSingleFileToStorage(
mimeType,
imageBuffer,
fileName,
options.orgId,
options.chatflowid,
options.chatId
)
return { filePath: path, fileName, totalSize }
} catch (error) {
console.error('Error saving Gemini inline image:', error)
return null
}
}
/**
* Downloads file content from container file citation
*/
export const downloadContainerFile = async (
containerId: string,
fileId: string,
filename: string,
modelNodeData: INodeData,
options: ICommonObject
): Promise<{ filePath: string; totalSize: number } | null> => {
try {
const credentialData = await getCredentialData(modelNodeData.credential ?? '', options)
const openAIApiKey = getCredentialParam('openAIApiKey', credentialData, modelNodeData)
if (!openAIApiKey) {
console.warn('No OpenAI API key available for downloading container file')
return null
}
// Download the file using OpenAI Container API
const response = await fetch(`https://api.openai.com/v1/containers/${containerId}/files/${fileId}/content`, {
method: 'GET',
headers: {
Accept: '*/*',
Authorization: `Bearer ${openAIApiKey}`
}
})
if (!response.ok) {
console.warn(
`Failed to download container file ${fileId} from container ${containerId}: ${response.status} ${response.statusText}`
)
return null
}
// Extract the binary data from the Response object
const data = await response.arrayBuffer()
const dataBuffer = Buffer.from(data)
const mimeType = getMimeTypeFromFilename(filename)
// Store the file using the same storage utility as OpenAIAssistant
const { path, totalSize } = await addSingleFileToStorage(
mimeType,
dataBuffer,
filename,
options.orgId,
options.chatflowid,
options.chatId
)
return { filePath: path, totalSize }
} catch (error) {
console.error('Error downloading container file:', error)
return null
}
}
/**
* Replace inlineData base64 with file references in the response content
*/
export const replaceInlineDataWithFileReferences = (
response: AIMessageChunk,
savedInlineImages: Array<{ filePath: string; fileName: string; mimeType: string }>
): void => {
// Check if content is an array
if (!Array.isArray(response.content)) {
return
}
// Replace base64 data with file references in response content
let savedImageIndex = 0
for (let i = 0; i < response.content.length; i++) {
const contentItem = response.content[i]
if (
typeof contentItem === 'object' &&
contentItem.type === 'inlineData' &&
contentItem.inlineData &&
savedImageIndex < savedInlineImages.length
) {
const savedImage = savedInlineImages[savedImageIndex]
// Replace with file reference
response.content[i] = {
type: 'stored-file',
name: savedImage.fileName,
mime: savedImage.mimeType,
path: savedImage.filePath
}
savedImageIndex++
}
}
// Clear the inlineData from response_metadata to avoid duplication
if (response.response_metadata?.inlineData) {
delete response.response_metadata.inlineData
}
}
/**
* Extracts artifacts from response metadata (both annotations and built-in tools)
*/
export const extractArtifactsFromResponse = async (
responseMetadata: any,
modelNodeData: INodeData,
options: ICommonObject
): Promise<{
artifacts: any[]
fileAnnotations: any[]
savedInlineImages?: Array<{ filePath: string; fileName: string; mimeType: string }>
}> => {
const artifacts: any[] = []
const fileAnnotations: any[] = []
const savedInlineImages: Array<{ filePath: string; fileName: string; mimeType: string }> = []
// Handle Gemini inline data (image generation)
if (responseMetadata?.inlineData && Array.isArray(responseMetadata.inlineData)) {
for (const inlineItem of responseMetadata.inlineData) {
if (inlineItem.type === 'gemini_inline_data' && inlineItem.data && inlineItem.mimeType) {
try {
const savedImageResult = await saveGeminiInlineImage(inlineItem, options)
if (savedImageResult) {
// Create artifact in the same format as other image artifacts
const fileType = getArtifactTypeFromFilename(savedImageResult.fileName)
artifacts.push({
type: fileType,
data: savedImageResult.filePath
})
// Track saved image for replacing base64 data in content
savedInlineImages.push({
filePath: savedImageResult.filePath,
fileName: savedImageResult.fileName,
mimeType: inlineItem.mimeType
})
}
} catch (error) {
console.error('Error processing Gemini inline image artifact:', error)
}
}
}
}
if (!responseMetadata?.output || !Array.isArray(responseMetadata.output)) {
return { artifacts, fileAnnotations, savedInlineImages: savedInlineImages.length > 0 ? savedInlineImages : undefined }
}
for (const outputItem of responseMetadata.output) {
// Handle container file citations from annotations
if (outputItem.type === 'message' && outputItem.content && Array.isArray(outputItem.content)) {
for (const contentItem of outputItem.content) {
if (contentItem.annotations && Array.isArray(contentItem.annotations)) {
for (const annotation of contentItem.annotations) {
if (annotation.type === 'container_file_citation' && annotation.file_id && annotation.filename) {
try {
// Download and store the file content
const downloadResult = await downloadContainerFile(
annotation.container_id,
annotation.file_id,
annotation.filename,
modelNodeData,
options
)
if (downloadResult) {
const fileType = getArtifactTypeFromFilename(annotation.filename)
if (fileType === 'png' || fileType === 'jpeg' || fileType === 'jpg') {
const artifact = {
type: fileType,
data: downloadResult.filePath
}
artifacts.push(artifact)
} else {
fileAnnotations.push({
filePath: downloadResult.filePath,
fileName: annotation.filename
})
}
}
} catch (error) {
console.error('Error processing annotation:', error)
}
}
}
}
}
}
// Handle built-in tool artifacts (like image generation)
if (outputItem.type === 'image_generation_call' && outputItem.result) {
try {
const savedImageResult = await saveBase64Image(outputItem, options)
if (savedImageResult) {
// Replace the base64 result with the file path in the response metadata
outputItem.result = savedImageResult.filePath
// Create artifact in the same format as other image artifacts
const fileType = getArtifactTypeFromFilename(savedImageResult.fileName)
artifacts.push({
type: fileType,
data: savedImageResult.filePath
})
}
} catch (error) {
console.error('Error processing image generation artifact:', error)
}
}
}
return { artifacts, fileAnnotations, savedInlineImages: savedInlineImages.length > 0 ? savedInlineImages : undefined }
}
/**
* Add image artifacts from previous assistant messages as user messages
* This allows the LLM to see and reference the generated images in the conversation
* Messages are marked with a special flag for later removal
*/
export const addImageArtifactsToMessages = async (messages: BaseMessageLike[], options: ICommonObject): Promise<void> => {
const imageExtensions = ['png', 'jpg', 'jpeg', 'gif', 'webp']
const messagesToInsert: Array<{ index: number; message: any }> = []
// Iterate through messages to find assistant messages with image artifacts
for (let i = 0; i < messages.length; i++) {
const message = messages[i] as any
// Check if this is an assistant message with artifacts
if (
(message.role === 'assistant' || message.role === 'ai') &&
message.additional_kwargs?.artifacts &&
Array.isArray(message.additional_kwargs.artifacts)
) {
const artifacts = message.additional_kwargs.artifacts
const imageArtifacts: Array<{ type: string; name: string; mime: string }> = []
// Extract image artifacts
for (const artifact of artifacts) {
if (artifact.type && artifact.data) {
// Check if this is an image artifact by file type
if (imageExtensions.includes(artifact.type.toLowerCase())) {
// Extract filename from the file path
const fileName = artifact.data.split('/').pop() || artifact.data
const mimeType = `image/${artifact.type.toLowerCase()}`
imageArtifacts.push({
type: 'stored-file',
name: fileName,
mime: mimeType
})
}
}
}
// If we found image artifacts, prepare to insert a user message after this assistant message
if (imageArtifacts.length > 0) {
// Check if the next message already contains these image artifacts to avoid duplicates
const nextMessage = messages[i + 1] as any
const shouldInsert =
!nextMessage ||
nextMessage.role !== 'user' ||
!Array.isArray(nextMessage.content) ||
!nextMessage.content.some(
(item: any) =>
(item.type === 'stored-file' || item.type === 'image_url') &&
imageArtifacts.some((artifact) => {
// Compare with and without FILE-STORAGE:: prefix
const artifactName = artifact.name.replace('FILE-STORAGE::', '')
const itemName = item.name?.replace('FILE-STORAGE::', '') || ''
return artifactName === itemName
})
)
if (shouldInsert) {
messagesToInsert.push({
index: i + 1,
message: {
role: 'user',
content: imageArtifacts,
_isTemporaryImageMessage: true // Mark for later removal
}
})
}
}
}
}
// Insert messages in reverse order to maintain correct indices
for (let i = messagesToInsert.length - 1; i >= 0; i--) {
const { index, message } = messagesToInsert[i]
messages.splice(index, 0, message)
}
// Convert stored-file references to base64 image_url format
if (messagesToInsert.length > 0) {
const { updatedMessages } = await processMessagesWithImages(messages, options)
// Replace the messages array content with the updated messages
messages.length = 0
messages.push(...updatedMessages)
}
}
/**
* Updates the flow state with new values
*/


@@ -5,7 +5,7 @@ import { RunnableSequence } from '@langchain/core/runnables'
import { BaseChatModel } from '@langchain/core/language_models/chat_models'
import { ChatPromptTemplate, MessagesPlaceholder, HumanMessagePromptTemplate, PromptTemplate } from '@langchain/core/prompts'
import { formatToOpenAIToolMessages } from 'langchain/agents/format_scratchpad/openai_tools'
-import { getBaseClasses, transformBracesWithColon, convertChatHistoryToText, convertBaseMessagetoIMessage } from '../../../src/utils'
+import { getBaseClasses, transformBracesWithColon } from '../../../src/utils'
import { type ToolsAgentStep } from 'langchain/agents/openai/output_parser'
import {
FlowiseMemory,
@@ -23,10 +23,8 @@ import { Moderation, checkInputs, streamResponse } from '../../moderation/Modera
import { formatResponse } from '../../outputparsers/OutputParserHelpers'
import type { Document } from '@langchain/core/documents'
import { BaseRetriever } from '@langchain/core/retrievers'
-import { RESPONSE_TEMPLATE, REPHRASE_TEMPLATE } from '../../chains/ConversationalRetrievalQAChain/prompts'
+import { RESPONSE_TEMPLATE } from '../../chains/ConversationalRetrievalQAChain/prompts'
import { addImagesToMessages, llmSupportsVision } from '../../../src/multiModalUtils'
-import { StringOutputParser } from '@langchain/core/output_parsers'
-import { Tool } from '@langchain/core/tools'
class ConversationalRetrievalToolAgent_Agents implements INode {
label: string
@@ -44,7 +42,7 @@ class ConversationalRetrievalToolAgent_Agents implements INode {
constructor(fields?: { sessionId?: string }) {
this.label = 'Conversational Retrieval Tool Agent'
this.name = 'conversationalRetrievalToolAgent'
-this.author = 'niztal(falkor) and nikitas-novatix'
+this.author = 'niztal(falkor)'
this.version = 1.0
this.type = 'AgentExecutor'
this.category = 'Agents'
@@ -81,26 +79,6 @@ class ConversationalRetrievalToolAgent_Agents implements INode {
optional: true,
default: RESPONSE_TEMPLATE
},
-{
-label: 'Rephrase Prompt',
-name: 'rephrasePrompt',
-type: 'string',
-description: 'Using previous chat history, rephrase question into a standalone question',
-warning: 'Prompt must include input variables: {chat_history} and {question}',
-rows: 4,
-additionalParams: true,
-optional: true,
-default: REPHRASE_TEMPLATE
-},
-{
-label: 'Rephrase Model',
-name: 'rephraseModel',
-type: 'BaseChatModel',
-description:
-'Optional: Use a different (faster/cheaper) model for rephrasing. If not specified, uses the main Tool Calling Chat Model.',
-optional: true,
-additionalParams: true
-},
{
label: 'Input Moderation',
description: 'Detect text that could generate harmful output and prevent it from being sent to the language model',
@@ -125,9 +103,8 @@ class ConversationalRetrievalToolAgent_Agents implements INode {
this.sessionId = fields?.sessionId
}
-// The agent will be prepared in run() with the correct user message - it needs the actual runtime input for rephrasing
-async init(_nodeData: INodeData, _input: string, _options: ICommonObject): Promise<any> {
-return null
+async init(nodeData: INodeData, input: string, options: ICommonObject): Promise<any> {
+return prepareAgent(nodeData, options, { sessionId: this.sessionId, chatId: options.chatId, input })
}
async run(nodeData: INodeData, input: string, options: ICommonObject): Promise<string | ICommonObject> {
@@ -171,23 +148,6 @@ class ConversationalRetrievalToolAgent_Agents implements INode {
sseStreamer.streamUsedToolsEvent(chatId, res.usedTools)
usedTools = res.usedTools
}
-// If the tool is set to returnDirect, stream the output to the client
-if (res.usedTools && res.usedTools.length) {
-let inputTools = nodeData.inputs?.tools
-inputTools = flatten(inputTools)
-for (const tool of res.usedTools) {
-const inputTool = inputTools.find((inputTool: Tool) => inputTool.name === tool.tool)
-if (inputTool && (inputTool as any).returnDirect && shouldStreamResponse) {
-sseStreamer.streamTokenEvent(chatId, tool.toolOutput)
-// Prevent CustomChainHandler from streaming the same output again
-if (res.output === tool.toolOutput) {
-res.output = ''
-}
-}
-}
-}
-// The CustomChainHandler will send the stream end event
} else {
res = await executor.invoke({ input }, { callbacks: [loggerHandler, ...callbacks] })
if (res.sourceDocuments) {
@@ -250,11 +210,9 @@ const prepareAgent = async (
flowObj: { sessionId?: string; chatId?: string; input?: string }
) => {
const model = nodeData.inputs?.model as BaseChatModel
-const rephraseModel = (nodeData.inputs?.rephraseModel as BaseChatModel) || model // Use main model if not specified
const maxIterations = nodeData.inputs?.maxIterations as string
const memory = nodeData.inputs?.memory as FlowiseMemory
let systemMessage = nodeData.inputs?.systemMessage as string
-let rephrasePrompt = nodeData.inputs?.rephrasePrompt as string
let tools = nodeData.inputs?.tools
tools = flatten(tools)
const memoryKey = memory.memoryKey ? memory.memoryKey : 'chat_history'
@@ -262,9 +220,6 @@ const prepareAgent = async (
const vectorStoreRetriever = nodeData.inputs?.vectorStoreRetriever as BaseRetriever
systemMessage = transformBracesWithColon(systemMessage)
-if (rephrasePrompt) {
-rephrasePrompt = transformBracesWithColon(rephrasePrompt)
-}
const prompt = ChatPromptTemplate.fromMessages([
['system', systemMessage ? systemMessage : `You are a helpful AI assistant.`],
@@ -308,37 +263,6 @@ const prepareAgent = async (
const modelWithTools = model.bindTools(tools)
-// Function to get standalone question (either rephrased or original)
-const getStandaloneQuestion = async (input: string): Promise<string> => {
-// If no rephrase prompt, return the original input
-if (!rephrasePrompt) {
-return input
-}
-// Get chat history (use empty string if none)
-const messages = (await memory.getChatMessages(flowObj?.sessionId, true)) as BaseMessage[]
-const iMessages = convertBaseMessagetoIMessage(messages)
-const chatHistoryString = convertChatHistoryToText(iMessages)
-// Always rephrase to normalize/expand user queries for better retrieval
-try {
-const CONDENSE_QUESTION_PROMPT = PromptTemplate.fromTemplate(rephrasePrompt)
-const condenseQuestionChain = RunnableSequence.from([CONDENSE_QUESTION_PROMPT, rephraseModel, new StringOutputParser()])
-const res = await condenseQuestionChain.invoke({
-question: input,
-chat_history: chatHistoryString
-})
-return res
-} catch (error) {
-console.error('Error rephrasing question:', error)
-// On error, fall back to original input
-return input
-}
-}
-// Get standalone question before creating runnable
-const standaloneQuestion = await getStandaloneQuestion(flowObj?.input || '')
const runnableAgent = RunnableSequence.from([
{
[inputKey]: (i: { input: string; steps: ToolsAgentStep[] }) => i.input,
@@ -348,9 +272,7 @@ const prepareAgent = async (
return messages ?? []
},
context: async (i: { input: string; chatHistory?: string }) => {
-// Use the standalone question (rephrased or original) for retrieval
-const retrievalQuery = standaloneQuestion || i.input
-const relevantDocs = await vectorStoreRetriever.invoke(retrievalQuery)
+const relevantDocs = await vectorStoreRetriever.invoke(i.input)
const formattedDocs = formatDocs(relevantDocs)
return formattedDocs
}
@@ -373,6 +295,4 @@ const prepareAgent = async (
return executor
}
-module.exports = {
-nodeClass: ConversationalRetrievalToolAgent_Agents
-}
+module.exports = { nodeClass: ConversationalRetrievalToolAgent_Agents }
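The removed code condensed the follow-up question with chat history into a standalone question before retrieval. A dependency-free sketch of the prompt assembly (the template text here is an assumption modeled on LangChain's standard condense-question prompt, not necessarily Flowise's `REPHRASE_TEMPLATE`):

```typescript
// Hypothetical condense-question template; the real one lives in
// ConversationalRetrievalQAChain/prompts as REPHRASE_TEMPLATE.
const CONDENSE_TEMPLATE = `Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question.

Chat History:
{chat_history}
Follow Up Input: {question}
Standalone Question:`

// Fill the template the way PromptTemplate.fromTemplate would before
// invoking the rephrase model.
function buildRephrasePrompt(chatHistory: string, question: string): string {
    return CONDENSE_TEMPLATE.replace('{chat_history}', chatHistory).replace('{question}', question)
}
```

With the feature reverted, the retriever is invoked with the raw `i.input` instead of such a rephrased query.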


@@ -578,7 +578,7 @@
toolOutput
})
} catch (e) {
-await analyticHandlers.onToolError(toolIds, e)
+await analyticHandlers.onToolEnd(toolIds, e)
console.error('Error executing tool', e)
throw new Error(
`Error executing tool. Tool: ${tool.name}. Thread ID: ${threadId}. Run ID: ${runThreadId}`
@@ -703,7 +703,7 @@
toolOutput
})
} catch (e) {
-await analyticHandlers.onToolError(toolIds, e)
+await analyticHandlers.onToolEnd(toolIds, e)
console.error('Error executing tool', e)
clearInterval(timeout)
reject(
@@ -1096,7 +1096,7 @@ async function handleToolSubmission(params: ToolSubmissionParams): Promise<ToolS
toolOutput
})
} catch (e) {
-await analyticHandlers.onToolError(toolIds, e)
+await analyticHandlers.onToolEnd(toolIds, e)
console.error('Error executing tool', e)
throw new Error(`Error executing tool. Tool: ${tool.name}. Thread ID: ${threadId}. Run ID: ${runThreadId}`)
}


@@ -607,12 +607,7 @@ export class LangchainChatGoogleGenerativeAI
     private client: GenerativeModel

     get _isMultimodalModel() {
-        return (
-            this.model.includes('vision') ||
-            this.model.startsWith('gemini-1.5') ||
-            this.model.startsWith('gemini-2') ||
-            this.model.startsWith('gemini-3')
-        )
+        return this.model.includes('vision') || this.model.startsWith('gemini-1.5') || this.model.startsWith('gemini-2')
     }

     constructor(fields: GoogleGenerativeAIChatInput) {
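The multimodal check reverted above reduces to a simple model-name predicate. A minimal standalone sketch (hypothetical helper name, not part of the Flowise codebase) of the post-change behavior:

```typescript
// After this change, only model names containing 'vision' or starting with
// 'gemini-1.5' / 'gemini-2' are treated as multimodal; 'gemini-3' no longer matches.
function isMultimodalModel(model: string): boolean {
    return model.includes('vision') || model.startsWith('gemini-1.5') || model.startsWith('gemini-2')
}

console.log(isMultimodalModel('gemini-2.0-flash')) // true
console.log(isMultimodalModel('gemini-3-pro')) // false
```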


@@ -452,7 +452,6 @@ export function mapGenerateContentResultToChatResult(
     const [candidate] = response.candidates
     const { content: candidateContent, ...generationInfo } = candidate
     let content: MessageContent | undefined
-    const inlineDataItems: any[] = []

     if (Array.isArray(candidateContent?.parts) && candidateContent.parts.length === 1 && candidateContent.parts[0].text) {
         content = candidateContent.parts[0].text
@@ -473,18 +472,6 @@ export function mapGenerateContentResultToChatResult(
                     type: 'codeExecutionResult',
                     codeExecutionResult: p.codeExecutionResult
                 }
-            } else if ('inlineData' in p && p.inlineData) {
-                // Extract inline image data for processing by Agent
-                inlineDataItems.push({
-                    type: 'gemini_inline_data',
-                    mimeType: p.inlineData.mimeType,
-                    data: p.inlineData.data
-                })
-                // Return the inline data as part of the content structure
-                return {
-                    type: 'inlineData',
-                    inlineData: p.inlineData
-                }
             }
             return p
         })
@@ -501,12 +488,6 @@ export function mapGenerateContentResultToChatResult(
         text = block?.text ?? text
     }

-    // Build response_metadata with inline data if present
-    const response_metadata: any = {}
-    if (inlineDataItems.length > 0) {
-        response_metadata.inlineData = inlineDataItems
-    }
-
     const generation: ChatGeneration = {
         text,
         message: new AIMessage({
@@ -521,8 +502,7 @@ export function mapGenerateContentResultToChatResult(
             additional_kwargs: {
                 ...generationInfo
             },
-            usage_metadata: extra?.usageMetadata,
-            response_metadata: Object.keys(response_metadata).length > 0 ? response_metadata : undefined
+            usage_metadata: extra?.usageMetadata
         }),
         generationInfo
     }
@@ -553,8 +533,6 @@ export function convertResponseContentToChatGenerationChunk(
     const [candidate] = response.candidates
     const { content: candidateContent, ...generationInfo } = candidate
     let content: MessageContent | undefined
-    const inlineDataItems: any[] = []

     // Checks if some parts do not have text. If false, it means that the content is a string.
     if (Array.isArray(candidateContent?.parts) && candidateContent.parts.every((p) => 'text' in p)) {
         content = candidateContent.parts.map((p) => p.text).join('')
@@ -575,18 +553,6 @@ export function convertResponseContentToChatGenerationChunk(
                     type: 'codeExecutionResult',
                     codeExecutionResult: p.codeExecutionResult
                 }
-            } else if ('inlineData' in p && p.inlineData) {
-                // Extract inline image data for processing by Agent
-                inlineDataItems.push({
-                    type: 'gemini_inline_data',
-                    mimeType: p.inlineData.mimeType,
-                    data: p.inlineData.data
-                })
-                // Return the inline data as part of the content structure
-                return {
-                    type: 'inlineData',
-                    inlineData: p.inlineData
-                }
             }
             return p
         })
@@ -616,12 +582,6 @@ export function convertResponseContentToChatGenerationChunk(
         )
     }

-    // Build response_metadata with inline data if present
-    const response_metadata: any = {}
-    if (inlineDataItems.length > 0) {
-        response_metadata.inlineData = inlineDataItems
-    }
-
     return new ChatGenerationChunk({
         text,
         message: new AIMessageChunk({
@@ -631,8 +591,7 @@ export function convertResponseContentToChatGenerationChunk(
             // Each chunk can have unique "generationInfo", and merging strategy is unclear,
             // so leave blank for now.
             additional_kwargs: {},
-            usage_metadata: extra.usageMetadata,
-            response_metadata: Object.keys(response_metadata).length > 0 ? response_metadata : undefined
+            usage_metadata: extra.usageMetadata
         }),
         generationInfo
     })


@@ -41,17 +41,15 @@ class ChatHuggingFace_ChatModels implements INode {
                 label: 'Model',
                 name: 'model',
                 type: 'string',
-                description:
-                    'Model name (e.g., deepseek-ai/DeepSeek-V3.2-Exp:novita). If model includes provider (:) or using router endpoint, leave Endpoint blank.',
-                placeholder: 'deepseek-ai/DeepSeek-V3.2-Exp:novita'
+                description: 'If using own inference endpoint, leave this blank',
+                placeholder: 'gpt2'
             },
             {
                 label: 'Endpoint',
                 name: 'endpoint',
                 type: 'string',
                 placeholder: 'https://xyz.eu-west-1.aws.endpoints.huggingface.cloud/gpt2',
-                description:
-                    'Custom inference endpoint (optional). Not needed for models with providers (:) or router endpoints. Leave blank to use Inference Providers.',
+                description: 'Using your own inference endpoint',
                 optional: true
             },
             {
@@ -126,15 +124,6 @@ class ChatHuggingFace_ChatModels implements INode {
         const credentialData = await getCredentialData(nodeData.credential ?? '', options)
         const huggingFaceApiKey = getCredentialParam('huggingFaceApiKey', credentialData, nodeData)

-        if (!huggingFaceApiKey) {
-            console.error('[ChatHuggingFace] API key validation failed: No API key found')
-            throw new Error('HuggingFace API key is required. Please configure it in the credential settings.')
-        }
-
-        if (!huggingFaceApiKey.startsWith('hf_')) {
-            console.warn('[ChatHuggingFace] API key format warning: Key does not start with "hf_"')
-        }
-
         const obj: Partial<HFInput> = {
             model,
             apiKey: huggingFaceApiKey


@@ -56,9 +56,9 @@ export class HuggingFaceInference extends LLM implements HFInput {
         this.apiKey = fields?.apiKey ?? getEnvironmentVariable('HUGGINGFACEHUB_API_KEY')
         this.endpointUrl = fields?.endpointUrl
         this.includeCredentials = fields?.includeCredentials
-        if (!this.apiKey || this.apiKey.trim() === '') {
+        if (!this.apiKey) {
             throw new Error(
-                'Please set an API key for HuggingFace Hub. Either configure it in the credential settings in the UI, or set the environment variable HUGGINGFACEHUB_API_KEY.'
+                'Please set an API key for HuggingFace Hub in the environment variable HUGGINGFACEHUB_API_KEY or in the apiKey field of the HuggingFaceInference constructor.'
             )
         }
     }
@@ -68,21 +68,19 @@ export class HuggingFaceInference extends LLM implements HFInput {
     }

     invocationParams(options?: this['ParsedCallOptions']) {
-        // Return parameters compatible with chatCompletion API (OpenAI-compatible format)
-        const params: any = {
-            temperature: this.temperature,
-            max_tokens: this.maxTokens,
-            stop: options?.stop ?? this.stopSequences,
-            top_p: this.topP
-        }
-        // Include optional parameters if they are defined
-        if (this.topK !== undefined) {
-            params.top_k = this.topK
-        }
-        if (this.frequencyPenalty !== undefined) {
-            params.frequency_penalty = this.frequencyPenalty
-        }
-        return params
+        return {
+            model: this.model,
+            parameters: {
+                // make it behave similar to openai, returning only the generated text
+                return_full_text: false,
+                temperature: this.temperature,
+                max_new_tokens: this.maxTokens,
+                stop: options?.stop ?? this.stopSequences,
+                top_p: this.topP,
+                top_k: this.topK,
+                repetition_penalty: this.frequencyPenalty
+            }
+        }
     }

@@ -90,109 +88,51 @@ export class HuggingFaceInference extends LLM implements HFInput {
     async *_streamResponseChunks(
         prompt: string,
         options: this['ParsedCallOptions'],
         runManager?: CallbackManagerForLLMRun
     ): AsyncGenerator<GenerationChunk> {
-        try {
-            const client = await this._prepareHFInference()
-            const stream = await this.caller.call(async () =>
-                client.chatCompletionStream({
-                    model: this.model,
-                    messages: [{ role: 'user', content: prompt }],
-                    ...this.invocationParams(options)
-                })
-            )
-            for await (const chunk of stream) {
-                const token = chunk.choices[0]?.delta?.content || ''
-                if (token) {
-                    yield new GenerationChunk({ text: token, generationInfo: chunk })
-                    await runManager?.handleLLMNewToken(token)
-                }
-                // stream is done when finish_reason is set
-                if (chunk.choices[0]?.finish_reason) {
-                    yield new GenerationChunk({
-                        text: '',
-                        generationInfo: { finished: true }
-                    })
-                    break
-                }
-            }
-        } catch (error: any) {
-            console.error('[ChatHuggingFace] Error in _streamResponseChunks:', error)
-            // Provide more helpful error messages
-            if (error?.message?.includes('endpointUrl') || error?.message?.includes('third-party provider')) {
-                throw new Error(
-                    `Cannot use custom endpoint with model "${this.model}" that includes a provider. Please leave the Endpoint field blank in the UI. Original error: ${error.message}`
-                )
-            }
-            throw error
-        }
+        const hfi = await this._prepareHFInference()
+        const stream = await this.caller.call(async () =>
+            hfi.textGenerationStream({
+                ...this.invocationParams(options),
+                inputs: prompt
+            })
+        )
+        for await (const chunk of stream) {
+            const token = chunk.token.text
+            yield new GenerationChunk({ text: token, generationInfo: chunk })
+            await runManager?.handleLLMNewToken(token ?? '')
+            // stream is done
+            if (chunk.generated_text)
+                yield new GenerationChunk({
+                    text: '',
+                    generationInfo: { finished: true }
+                })
+        }
     }

     /** @ignore */
     async _call(prompt: string, options: this['ParsedCallOptions']): Promise<string> {
-        try {
-            const client = await this._prepareHFInference()
-            // Use chatCompletion for chat models (v4 supports conversational models via Inference Providers)
-            const args = {
-                model: this.model,
-                messages: [{ role: 'user', content: prompt }],
-                ...this.invocationParams(options)
-            }
-            const res = await this.caller.callWithOptions({ signal: options.signal }, client.chatCompletion.bind(client), args)
-            const content = res.choices[0]?.message?.content || ''
-            if (!content) {
-                console.error('[ChatHuggingFace] No content in response:', JSON.stringify(res))
-                throw new Error(`No content received from HuggingFace API. Response: ${JSON.stringify(res)}`)
-            }
-            return content
-        } catch (error: any) {
-            console.error('[ChatHuggingFace] Error in _call:', error.message)
-            // Provide more helpful error messages
-            if (error?.message?.includes('endpointUrl') || error?.message?.includes('third-party provider')) {
-                throw new Error(
-                    `Cannot use custom endpoint with model "${this.model}" that includes a provider. Please leave the Endpoint field blank in the UI. Original error: ${error.message}`
-                )
-            }
-            if (error?.message?.includes('Invalid username or password') || error?.message?.includes('authentication')) {
-                throw new Error(
-                    `HuggingFace API authentication failed. Please verify your API key is correct and starts with "hf_". Original error: ${error.message}`
-                )
-            }
-            throw error
-        }
+        const hfi = await this._prepareHFInference()
+        const args = { ...this.invocationParams(options), inputs: prompt }
+        const res = await this.caller.callWithOptions({ signal: options.signal }, hfi.textGeneration.bind(hfi), args)
+        return res.generated_text
     }

     /** @ignore */
     private async _prepareHFInference() {
-        if (!this.apiKey || this.apiKey.trim() === '') {
-            console.error('[ChatHuggingFace] API key validation failed: Empty or undefined')
-            throw new Error('HuggingFace API key is required. Please configure it in the credential settings.')
-        }
-
-        const { InferenceClient } = await HuggingFaceInference.imports()
-        // Use InferenceClient for chat models (works better with Inference Providers)
-        const client = new InferenceClient(this.apiKey)
-        // Don't override endpoint if model uses a provider (contains ':') or if endpoint is router-based
-        // When using Inference Providers, endpoint should be left blank - InferenceClient handles routing automatically
-        if (
-            this.endpointUrl &&
-            !this.model.includes(':') &&
-            !this.endpointUrl.includes('/v1/chat/completions') &&
-            !this.endpointUrl.includes('router.huggingface.co')
-        ) {
-            return client.endpoint(this.endpointUrl)
-        }
-        // Return client without endpoint override - InferenceClient will use Inference Providers automatically
-        return client
+        const { HfInference } = await HuggingFaceInference.imports()
+        const hfi = new HfInference(this.apiKey, {
+            includeCredentials: this.includeCredentials
+        })
+        return this.endpointUrl ? hfi.endpoint(this.endpointUrl) : hfi
     }

     /** @ignore */
     static async imports(): Promise<{
-        InferenceClient: typeof import('@huggingface/inference').InferenceClient
+        HfInference: typeof import('@huggingface/inference').HfInference
     }> {
         try {
-            const { InferenceClient } = await import('@huggingface/inference')
-            return { InferenceClient }
+            const { HfInference } = await import('@huggingface/inference')
+            return { HfInference }
         } catch (e) {
             throw new Error('Please install huggingface as a dependency with, e.g. `pnpm install @huggingface/inference`')
         }
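The `invocationParams` shape restored by the diff above nests generation settings under `parameters`, as HF's `textGeneration` API expects. A minimal standalone sketch (hypothetical helper name, not the actual class method):

```typescript
// Sketch of the restored parameter shape: textGeneration settings are nested
// under `parameters`, with return_full_text: false so only the generated
// text is returned (similar to OpenAI's behavior).
function textGenerationParams(opts: { temperature?: number; maxTokens?: number; stop?: string[] }) {
    return {
        parameters: {
            // make it behave similar to openai, returning only the generated text
            return_full_text: false,
            temperature: opts.temperature,
            max_new_tokens: opts.maxTokens,
            stop: opts.stop
        }
    }
}

const p = textGenerationParams({ temperature: 0.7, maxTokens: 64 })
console.log(p.parameters.max_new_tokens) // 64
```

By contrast, the reverted code above sent flat OpenAI-style fields (`max_tokens`, `frequency_penalty`) to `chatCompletion`.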


@@ -1,8 +1,7 @@
-import { ChatOpenAI as LangchainChatOpenAI, ChatOpenAIFields } from '@langchain/openai'
+import { ChatOpenAI, ChatOpenAIFields } from '@langchain/openai'
 import { BaseCache } from '@langchain/core/caches'
-import { ICommonObject, IMultiModalOption, INode, INodeData, INodeParams } from '../../../src/Interface'
+import { ICommonObject, INode, INodeData, INodeParams } from '../../../src/Interface'
 import { getBaseClasses, getCredentialData, getCredentialParam } from '../../../src/utils'
-import { ChatOpenRouter } from './FlowiseChatOpenRouter'

 class ChatOpenRouter_ChatModels implements INode {
     label: string
@@ -24,7 +23,7 @@ class ChatOpenRouter_ChatModels implements INode {
         this.icon = 'openRouter.svg'
         this.category = 'Chat Models'
         this.description = 'Wrapper around Open Router Inference API'
-        this.baseClasses = [this.type, ...getBaseClasses(LangchainChatOpenAI)]
+        this.baseClasses = [this.type, ...getBaseClasses(ChatOpenAI)]
         this.credential = {
             label: 'Connect Credential',
             name: 'credential',
@@ -115,40 +114,6 @@ class ChatOpenRouter_ChatModels implements INode {
                 type: 'json',
                 optional: true,
                 additionalParams: true
-            },
-            {
-                label: 'Allow Image Uploads',
-                name: 'allowImageUploads',
-                type: 'boolean',
-                description:
-                    'Allow image input. Refer to the <a href="https://docs.flowiseai.com/using-flowise/uploads#image" target="_blank">docs</a> for more details.',
-                default: false,
-                optional: true
-            },
-            {
-                label: 'Image Resolution',
-                description: 'This parameter controls the resolution in which the model views the image.',
-                name: 'imageResolution',
-                type: 'options',
-                options: [
-                    {
-                        label: 'Low',
-                        name: 'low'
-                    },
-                    {
-                        label: 'High',
-                        name: 'high'
-                    },
-                    {
-                        label: 'Auto',
-                        name: 'auto'
-                    }
-                ],
-                default: 'low',
-                optional: false,
-                show: {
-                    allowImageUploads: true
-                }
             }
         ]
     }
@@ -165,8 +130,6 @@ class ChatOpenRouter_ChatModels implements INode {
         const basePath = (nodeData.inputs?.basepath as string) || 'https://openrouter.ai/api/v1'
         const baseOptions = nodeData.inputs?.baseOptions
         const cache = nodeData.inputs?.cache as BaseCache
-        const allowImageUploads = nodeData.inputs?.allowImageUploads as boolean
-        const imageResolution = nodeData.inputs?.imageResolution as string

         const credentialData = await getCredentialData(nodeData.credential ?? '', options)
         const openRouterApiKey = getCredentialParam('openRouterApiKey', credentialData, nodeData)
@@ -192,7 +155,7 @@ class ChatOpenRouter_ChatModels implements INode {
             try {
                 parsedBaseOptions = typeof baseOptions === 'object' ? baseOptions : JSON.parse(baseOptions)
             } catch (exception) {
-                throw new Error("Invalid JSON in the ChatOpenRouter's BaseOptions: " + exception)
+                throw new Error("Invalid JSON in the ChatCerebras's BaseOptions: " + exception)
             }
         }
@@ -203,15 +166,7 @@ class ChatOpenRouter_ChatModels implements INode {
             }
         }

-        const multiModalOption: IMultiModalOption = {
-            image: {
-                allowImageUploads: allowImageUploads ?? false,
-                imageResolution
-            }
-        }
-
-        const model = new ChatOpenRouter(nodeData.id, obj)
-        model.setMultiModalOption(multiModalOption)
+        const model = new ChatOpenAI(obj)
         return model
     }
 }


@@ -1,29 +0,0 @@
-import { ChatOpenAI as LangchainChatOpenAI, ChatOpenAIFields } from '@langchain/openai'
-import { IMultiModalOption, IVisionChatModal } from '../../../src'
-
-export class ChatOpenRouter extends LangchainChatOpenAI implements IVisionChatModal {
-    configuredModel: string
-    configuredMaxToken?: number
-    multiModalOption: IMultiModalOption
-    id: string
-
-    constructor(id: string, fields?: ChatOpenAIFields) {
-        super(fields)
-        this.id = id
-        this.configuredModel = fields?.modelName ?? ''
-        this.configuredMaxToken = fields?.maxTokens
-    }
-
-    revertToOriginalModel(): void {
-        this.model = this.configuredModel
-        this.maxTokens = this.configuredMaxToken
-    }
-
-    setMultiModalOption(multiModalOption: IMultiModalOption): void {
-        this.multiModalOption = multiModalOption
-    }
-
-    setVisionModel(): void {
-        // pass - OpenRouter models don't need model switching
-    }
-}


@@ -27,6 +27,8 @@ type Element = {
 }

 export class UnstructuredLoader extends BaseDocumentLoader {
+    public filePath: string
+
     private apiUrl = process.env.UNSTRUCTURED_API_URL || 'https://api.unstructuredapp.io/general/v0/general'
     private apiKey: string | undefined = process.env.UNSTRUCTURED_API_KEY
@@ -136,7 +138,7 @@ export class UnstructuredLoader extends BaseDocumentLoader {
         })

         if (!response.ok) {
-            throw new Error(`Failed to partition file with error ${response.status} and message ${await response.text()}`)
+            throw new Error(`Failed to partition file ${this.filePath} with error ${response.status} and message ${await response.text()}`)
         }

         const elements = await response.json()


@@ -1,6 +1,3 @@
-/*
- * Uncomment this if you want to use the UnstructuredFolder to load a folder from the file system
 import { omit } from 'lodash'
 import { ICommonObject, INode, INodeData, INodeOutputsValue, INodeParams } from '../../../src/Interface'
 import {
@@ -519,4 +516,3 @@ class UnstructuredFolder_DocumentLoaders implements INode {
 }

 module.exports = { nodeClass: UnstructuredFolder_DocumentLoaders }
-*/


@@ -23,22 +23,24 @@ export class HuggingFaceInferenceEmbeddings extends Embeddings implements Huggin
         this.model = fields?.model ?? 'sentence-transformers/distilbert-base-nli-mean-tokens'
         this.apiKey = fields?.apiKey ?? getEnvironmentVariable('HUGGINGFACEHUB_API_KEY')
         this.endpoint = fields?.endpoint ?? ''
-        const hf = new HfInference(this.apiKey)
-        // v4 uses Inference Providers by default; only override if custom endpoint provided
-        this.client = this.endpoint ? hf.endpoint(this.endpoint) : hf
+        this.client = new HfInference(this.apiKey)
+        if (this.endpoint) this.client.endpoint(this.endpoint)
     }

     async _embed(texts: string[]): Promise<number[][]> {
         // replace newlines, which can negatively affect performance.
         const clean = texts.map((text) => text.replace(/\n/g, ' '))
+        const hf = new HfInference(this.apiKey)
         const obj: any = {
             inputs: clean
         }
-        if (!this.endpoint) {
+        if (this.endpoint) {
+            hf.endpoint(this.endpoint)
+        } else {
             obj.model = this.model
         }
-        const res = await this.caller.callWithOptions({}, this.client.featureExtraction.bind(this.client), obj)
+        const res = await this.caller.callWithOptions({}, hf.featureExtraction.bind(hf), obj)
         return res as number[][]
     }


@@ -78,8 +78,6 @@ export class HuggingFaceInference extends LLM implements HFInput {
     async _call(prompt: string, options: this['ParsedCallOptions']): Promise<string> {
         const { HfInference } = await HuggingFaceInference.imports()
         const hf = new HfInference(this.apiKey)
-        // v4 uses Inference Providers by default; only override if custom endpoint provided
-        const hfClient = this.endpoint ? hf.endpoint(this.endpoint) : hf
         const obj: any = {
             parameters: {
                 // make it behave similar to openai, returning only the generated text
@@ -92,10 +90,12 @@ export class HuggingFaceInference extends LLM implements HFInput {
             },
             inputs: prompt
         }
-        if (!this.endpoint) {
+        if (this.endpoint) {
+            hf.endpoint(this.endpoint)
+        } else {
             obj.model = this.model
         }
-        const res = await this.caller.callWithOptions({ signal: options.signal }, hfClient.textGeneration.bind(hfClient), obj)
+        const res = await this.caller.callWithOptions({ signal: options.signal }, hf.textGeneration.bind(hf), obj)
         return res.generated_text
     }


@@ -62,6 +62,7 @@ class MySQLRecordManager_RecordManager implements INode {
                 label: 'Namespace',
                 name: 'namespace',
                 type: 'string',
+                description: 'If not specified, chatflowid will be used',
                 additionalParams: true,
                 optional: true
             },
@@ -218,16 +219,7 @@ class MySQLRecordManager implements RecordManagerInterface {
                 unique key \`unique_key_namespace\` (\`key\`,
                 \`namespace\`));`)

-            // Add doc_id column if it doesn't exist (migration for existing tables)
-            const checkColumn = await queryRunner.manager.query(
-                `SELECT COUNT(1) ColumnExists FROM INFORMATION_SCHEMA.COLUMNS
-                WHERE table_schema=DATABASE() AND table_name='${tableName}' AND column_name='doc_id';`
-            )
-            if (checkColumn[0].ColumnExists === 0) {
-                await queryRunner.manager.query(`ALTER TABLE \`${tableName}\` ADD COLUMN \`doc_id\` longtext;`)
-            }
-
-            const columns = [`updated_at`, `key`, `namespace`, `group_id`, `doc_id`]
+            const columns = [`updated_at`, `key`, `namespace`, `group_id`]
             for (const column of columns) {
                 // MySQL does not support 'IF NOT EXISTS' function for Index
                 const Check = await queryRunner.manager.query(
@@ -269,7 +261,7 @@ class MySQLRecordManager implements RecordManagerInterface {
         }
     }

-    async update(keys: Array<{ uid: string; docId: string }> | string[], updateOptions?: UpdateOptions): Promise<void> {
+    async update(keys: string[], updateOptions?: UpdateOptions): Promise<void> {
         if (keys.length === 0) {
             return
         }
@@ -285,23 +277,23 @@ class MySQLRecordManager implements RecordManagerInterface {
             throw new Error(`Time sync issue with database ${updatedAt} < ${timeAtLeast}`)
         }

-        // Handle both new format (objects with uid and docId) and old format (strings)
-        const isNewFormat = keys.length > 0 && typeof keys[0] === 'object' && 'uid' in keys[0]
-        const keyStrings = isNewFormat ? (keys as Array<{ uid: string; docId: string }>).map((k) => k.uid) : (keys as string[])
-        const docIds = isNewFormat ? (keys as Array<{ uid: string; docId: string }>).map((k) => k.docId) : keys.map(() => null)
-
-        const groupIds = _groupIds ?? keyStrings.map(() => null)
-
-        if (groupIds.length !== keyStrings.length) {
-            throw new Error(`Number of keys (${keyStrings.length}) does not match number of group_ids (${groupIds.length})`)
+        const groupIds = _groupIds ?? keys.map(() => null)
+
+        if (groupIds.length !== keys.length) {
+            throw new Error(`Number of keys (${keys.length}) does not match number of group_ids (${groupIds.length})`)
         }

-        const recordsToUpsert = keyStrings.map((key, i) => [key, this.namespace, updatedAt, groupIds[i] ?? null, docIds[i] ?? null])
+        const recordsToUpsert = keys.map((key, i) => [
+            key,
+            this.namespace,
+            updatedAt,
+            groupIds[i] ?? null // Ensure groupIds[i] is null if undefined
+        ])

         const query = `
-            INSERT INTO \`${tableName}\` (\`key\`, \`namespace\`, \`updated_at\`, \`group_id\`, \`doc_id\`)
-            VALUES (?, ?, ?, ?, ?)
-            ON DUPLICATE KEY UPDATE \`updated_at\` = VALUES(\`updated_at\`), \`doc_id\` = VALUES(\`doc_id\`)`
+            INSERT INTO \`${tableName}\` (\`key\`, \`namespace\`, \`updated_at\`, \`group_id\`)
+            VALUES (?, ?, ?, ?)
+            ON DUPLICATE KEY UPDATE \`updated_at\` = VALUES(\`updated_at\`)`

         // To handle multiple files upsert
         try {
@@ -357,13 +349,13 @@ class MySQLRecordManager implements RecordManagerInterface {
         }
     }

-    async listKeys(options?: ListKeyOptions & { docId?: string }): Promise<string[]> {
+    async listKeys(options?: ListKeyOptions): Promise<string[]> {
         const dataSource = await this.getDataSource()
         const queryRunner = dataSource.createQueryRunner()
         const tableName = this.sanitizeTableName(this.tableName)

         try {
-            const { before, after, limit, groupIds, docId } = options ?? {}
+            const { before, after, limit, groupIds } = options ?? {}
             let query = `SELECT \`key\` FROM \`${tableName}\` WHERE \`namespace\` = ?`
             const values: (string | number | string[])[] = [this.namespace]
@@ -390,11 +382,6 @@ class MySQLRecordManager implements RecordManagerInterface {
                 values.push(...groupIds.filter((gid): gid is string => gid !== null))
             }

-            if (docId) {
-                query += ` AND \`doc_id\` = ?`
-                values.push(docId)
-            }
-
             query += ';'

             // Directly using try/catch with async/await for cleaner flow


@ -78,6 +78,7 @@ class PostgresRecordManager_RecordManager implements INode {
label: 'Namespace', label: 'Namespace',
name: 'namespace', name: 'namespace',
type: 'string', type: 'string',
description: 'If not specified, chatflowid will be used',
additionalParams: true, additionalParams: true,
optional: true optional: true
}, },
@ -240,19 +241,6 @@ class PostgresRecordManager implements RecordManagerInterface {
CREATE INDEX IF NOT EXISTS namespace_index ON "${tableName}" (namespace); CREATE INDEX IF NOT EXISTS namespace_index ON "${tableName}" (namespace);
CREATE INDEX IF NOT EXISTS group_id_index ON "${tableName}" (group_id);`) CREATE INDEX IF NOT EXISTS group_id_index ON "${tableName}" (group_id);`)
// Add doc_id column if it doesn't exist (migration for existing tables)
await queryRunner.manager.query(`
DO $$
BEGIN
IF NOT EXISTS (
SELECT 1 FROM information_schema.columns
WHERE table_name = '${tableName}' AND column_name = 'doc_id'
) THEN
ALTER TABLE "${tableName}" ADD COLUMN doc_id TEXT;
CREATE INDEX IF NOT EXISTS doc_id_index ON "${tableName}" (doc_id);
END IF;
END $$;`)
await queryRunner.release() await queryRunner.release()
} catch (e: any) { } catch (e: any) {
// This error indicates that the table already exists // This error indicates that the table already exists
@ -298,7 +286,7 @@ class PostgresRecordManager implements RecordManagerInterface {
return `(${placeholders.join(', ')})` return `(${placeholders.join(', ')})`
} }
async update(keys: Array<{ uid: string; docId: string }> | string[], updateOptions?: UpdateOptions): Promise<void> { async update(keys: string[], updateOptions?: UpdateOptions): Promise<void> {
        if (keys.length === 0) {
            return
        }
@@ -314,22 +302,17 @@ class PostgresRecordManager implements RecordManagerInterface {
            throw new Error(`Time sync issue with database ${updatedAt} < ${timeAtLeast}`)
        }

-        // Handle both new format (objects with uid and docId) and old format (strings)
-        const isNewFormat = keys.length > 0 && typeof keys[0] === 'object' && 'uid' in keys[0]
-        const keyStrings = isNewFormat ? (keys as Array<{ uid: string; docId: string }>).map((k) => k.uid) : (keys as string[])
-        const docIds = isNewFormat ? (keys as Array<{ uid: string; docId: string }>).map((k) => k.docId) : keys.map(() => null)
-        const groupIds = _groupIds ?? keyStrings.map(() => null)
+        const groupIds = _groupIds ?? keys.map(() => null)

-        if (groupIds.length !== keyStrings.length) {
-            throw new Error(`Number of keys (${keyStrings.length}) does not match number of group_ids ${groupIds.length})`)
+        if (groupIds.length !== keys.length) {
+            throw new Error(`Number of keys (${keys.length}) does not match number of group_ids ${groupIds.length})`)
        }

-        const recordsToUpsert = keyStrings.map((key, i) => [key, this.namespace, updatedAt, groupIds[i], docIds[i]])
+        const recordsToUpsert = keys.map((key, i) => [key, this.namespace, updatedAt, groupIds[i]])
        const valuesPlaceholders = recordsToUpsert.map((_, j) => this.generatePlaceholderForRowAt(j, recordsToUpsert[0].length)).join(', ')
-        const query = `INSERT INTO "${tableName}" (key, namespace, updated_at, group_id, doc_id) VALUES ${valuesPlaceholders} ON CONFLICT (key, namespace) DO UPDATE SET updated_at = EXCLUDED.updated_at, doc_id = EXCLUDED.doc_id;`
+        const query = `INSERT INTO "${tableName}" (key, namespace, updated_at, group_id) VALUES ${valuesPlaceholders} ON CONFLICT (key, namespace) DO UPDATE SET updated_at = EXCLUDED.updated_at;`
        try {
            await queryRunner.manager.query(query, recordsToUpsert.flat())
            await queryRunner.release()
@@ -368,8 +351,8 @@ class PostgresRecordManager implements RecordManagerInterface {
        }
    }

-    async listKeys(options?: ListKeyOptions & { docId?: string }): Promise<string[]> {
-        const { before, after, limit, groupIds, docId } = options ?? {}
+    async listKeys(options?: ListKeyOptions): Promise<string[]> {
+        const { before, after, limit, groupIds } = options ?? {}
        const tableName = this.sanitizeTableName(this.tableName)
        let query = `SELECT key FROM "${tableName}" WHERE namespace = $1`
@@ -400,12 +383,6 @@ class PostgresRecordManager implements RecordManagerInterface {
            index += 1
        }

-        if (docId) {
-            values.push(docId)
-            query += ` AND doc_id = $${index}`
-            index += 1
-        }
-
        query += ';'

        const dataSource = await this.getDataSource()
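The upsert above relies on a unique `(key, namespace)` constraint plus `ON CONFLICT ... DO UPDATE SET updated_at = EXCLUDED.updated_at`, so re-indexing the same key only refreshes its timestamp. A minimal, stdlib-only sketch of that pattern (using sqlite3 here rather than Postgres; the table layout mirrors the diff but is illustrative):

```python
import sqlite3

# In-memory database standing in for the record-manager table.
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE upsertion_records (
           key TEXT NOT NULL,
           namespace TEXT NOT NULL,
           updated_at REAL NOT NULL,
           group_id TEXT,
           UNIQUE (key, namespace)
       )"""
)

def update(keys, namespace, updated_at, group_ids=None):
    # Default group_ids to null per key, and validate lengths, as the node does.
    group_ids = group_ids or [None] * len(keys)
    if len(group_ids) != len(keys):
        raise ValueError(
            f"Number of keys ({len(keys)}) does not match number of group_ids ({len(group_ids)})"
        )
    records = [(k, namespace, updated_at, group_ids[i]) for i, k in enumerate(keys)]
    # Conflict on (key, namespace) only bumps updated_at; no duplicate rows.
    conn.executemany(
        """INSERT INTO upsertion_records (key, namespace, updated_at, group_id)
           VALUES (?, ?, ?, ?)
           ON CONFLICT (key, namespace) DO UPDATE SET updated_at = excluded.updated_at""",
        records,
    )

update(["a", "b"], "ns", 1.0)
update(["a"], "ns", 2.0)  # re-upsert: timestamp advances, row count stays at 2
rows = conn.execute("SELECT key, updated_at FROM upsertion_records ORDER BY key").fetchall()
print(rows)
```

SQLite spells the conflict source `excluded` while Postgres uses `EXCLUDED`; the mechanics are otherwise the same.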


@@ -51,6 +51,7 @@ class SQLiteRecordManager_RecordManager implements INode {
                label: 'Namespace',
                name: 'namespace',
                type: 'string',
+                description: 'If not specified, chatflowid will be used',
                additionalParams: true,
                optional: true
            },
@@ -197,15 +198,6 @@ CREATE INDEX IF NOT EXISTS key_index ON "${tableName}" (key);
CREATE INDEX IF NOT EXISTS namespace_index ON "${tableName}" (namespace);
CREATE INDEX IF NOT EXISTS group_id_index ON "${tableName}" (group_id);`)

-                // Add doc_id column if it doesn't exist (migration for existing tables)
-                const checkColumn = await queryRunner.manager.query(
-                    `SELECT COUNT(*) as count FROM pragma_table_info('${tableName}') WHERE name='doc_id';`
-                )
-                if (checkColumn[0].count === 0) {
-                    await queryRunner.manager.query(`ALTER TABLE "${tableName}" ADD COLUMN doc_id TEXT;`)
-                    await queryRunner.manager.query(`CREATE INDEX IF NOT EXISTS doc_id_index ON "${tableName}" (doc_id);`)
-                }
-
                await queryRunner.release()
            } catch (e: any) {
                // This error indicates that the table already exists
@@ -236,7 +228,7 @@ CREATE INDEX IF NOT EXISTS group_id_index ON "${tableName}" (group_id);`)
        }
    }

-    async update(keys: Array<{ uid: string; docId: string }> | string[], updateOptions?: UpdateOptions): Promise<void> {
+    async update(keys: string[], updateOptions?: UpdateOptions): Promise<void> {
        if (keys.length === 0) {
            return
        }
@@ -251,23 +243,23 @@ CREATE INDEX IF NOT EXISTS group_id_index ON "${tableName}" (group_id);`)
            throw new Error(`Time sync issue with database ${updatedAt} < ${timeAtLeast}`)
        }

-        // Handle both new format (objects with uid and docId) and old format (strings)
-        const isNewFormat = keys.length > 0 && typeof keys[0] === 'object' && 'uid' in keys[0]
-        const keyStrings = isNewFormat ? (keys as Array<{ uid: string; docId: string }>).map((k) => k.uid) : (keys as string[])
-        const docIds = isNewFormat ? (keys as Array<{ uid: string; docId: string }>).map((k) => k.docId) : keys.map(() => null)
-        const groupIds = _groupIds ?? keyStrings.map(() => null)
+        const groupIds = _groupIds ?? keys.map(() => null)

-        if (groupIds.length !== keyStrings.length) {
-            throw new Error(`Number of keys (${keyStrings.length}) does not match number of group_ids (${groupIds.length})`)
+        if (groupIds.length !== keys.length) {
+            throw new Error(`Number of keys (${keys.length}) does not match number of group_ids (${groupIds.length})`)
        }

-        const recordsToUpsert = keyStrings.map((key, i) => [key, this.namespace, updatedAt, groupIds[i] ?? null, docIds[i] ?? null])
+        const recordsToUpsert = keys.map((key, i) => [
+            key,
+            this.namespace,
+            updatedAt,
+            groupIds[i] ?? null // Ensure groupIds[i] is null if undefined
+        ])

        const query = `
-            INSERT INTO "${tableName}" (key, namespace, updated_at, group_id, doc_id)
-            VALUES (?, ?, ?, ?, ?)
-            ON CONFLICT (key, namespace) DO UPDATE SET updated_at = excluded.updated_at, doc_id = excluded.doc_id`
+            INSERT INTO "${tableName}" (key, namespace, updated_at, group_id)
+            VALUES (?, ?, ?, ?)
+            ON CONFLICT (key, namespace) DO UPDATE SET updated_at = excluded.updated_at`

        try {
            // To handle multiple files upsert
@@ -322,8 +314,8 @@ CREATE INDEX IF NOT EXISTS group_id_index ON "${tableName}" (group_id);`)
        }
    }

-    async listKeys(options?: ListKeyOptions & { docId?: string }): Promise<string[]> {
-        const { before, after, limit, groupIds, docId } = options ?? {}
+    async listKeys(options?: ListKeyOptions): Promise<string[]> {
+        const { before, after, limit, groupIds } = options ?? {}
        const tableName = this.sanitizeTableName(this.tableName)
        let query = `SELECT key FROM "${tableName}" WHERE namespace = ?`
@@ -352,11 +344,6 @@ CREATE INDEX IF NOT EXISTS group_id_index ON "${tableName}" (group_id);`)
            values.push(...groupIds.filter((gid): gid is string => gid !== null))
        }

-        if (docId) {
-            query += ` AND doc_id = ?`
-            values.push(docId)
-        }
-
        query += ';'

        const dataSource = await this.getDataSource()
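The migration removed above uses SQLite's `pragma_table_info` table-valued function to test for a column before issuing `ALTER TABLE ... ADD COLUMN`, which makes the schema change idempotent. A standalone sketch of that check (table and column names here are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE "records" (key TEXT, namespace TEXT)')

def ensure_column(conn, table, column, decl="TEXT"):
    # pragma_table_info lists one row per column; count rows matching the name.
    count = conn.execute(
        f"SELECT COUNT(*) FROM pragma_table_info('{table}') WHERE name = ?", (column,)
    ).fetchone()[0]
    if count == 0:
        conn.execute(f'ALTER TABLE "{table}" ADD COLUMN {column} {decl}')
        conn.execute(f'CREATE INDEX IF NOT EXISTS {column}_index ON "{table}" ({column})')

ensure_column(conn, "records", "doc_id")
ensure_column(conn, "records", "doc_id")  # safe to run again: no-op
cols = [row[1] for row in conn.execute("PRAGMA table_info('records')")]
print(cols)
```

Querying `pragma_table_info(...)` as a table requires SQLite 3.16 or newer; older versions would need `PRAGMA table_info` parsed row by row.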


@@ -136,17 +136,17 @@ class Custom_MCP implements INode {
        }

        let sandbox: ICommonObject = {}

-        const workspaceId = options?.searchOptions?.workspaceId?._value || options?.workspaceId
-
        if (mcpServerConfig.includes('$vars')) {
            const appDataSource = options.appDataSource as DataSource
            const databaseEntities = options.databaseEntities as IDatabaseEntity
-            // If options.workspaceId is not set, create a new options object with the workspaceId for getVars.
-            const optionsWithWorkspaceId = options.workspaceId ? options : { ...options, workspaceId }
-            const variables = await getVars(appDataSource, databaseEntities, nodeData, optionsWithWorkspaceId)
+            const variables = await getVars(appDataSource, databaseEntities, nodeData, options)
            sandbox['$vars'] = prepareSandboxVars(variables)
        }

+        const workspaceId = options?.searchOptions?.workspaceId?._value || options?.workspaceId
+
        let canonicalConfig
        try {
            canonicalConfig = JSON.parse(mcpServerConfig)


@@ -84,16 +84,11 @@ class CustomFunction_Utilities implements INode {
        const variables = await getVars(appDataSource, databaseEntities, nodeData, options)

        const flow = {
-            input,
            chatflowId: options.chatflowid,
            sessionId: options.sessionId,
            chatId: options.chatId,
-            rawOutput: options.postProcessing?.rawOutput || '',
-            chatHistory: options.postProcessing?.chatHistory || [],
-            sourceDocuments: options.postProcessing?.sourceDocuments,
-            usedTools: options.postProcessing?.usedTools,
-            artifacts: options.postProcessing?.artifacts,
-            fileAnnotations: options.postProcessing?.fileAnnotations
+            rawOutput: options.rawOutput || '',
+            input
        }

        let inputVars: ICommonObject = {}


@@ -186,11 +186,7 @@ class Chroma_VectorStores implements INode {
            const vectorStoreName = collectionName
            await recordManager.createSchema()
            ;(recordManager as any).namespace = (recordManager as any).namespace + '_' + vectorStoreName
-            const filterKeys: ICommonObject = {}
-            if (options.docId) {
-                filterKeys.docId = options.docId
-            }
-            const keys: string[] = await recordManager.listKeys(filterKeys)
+            const keys: string[] = await recordManager.listKeys({})

            const chromaStore = new ChromaExtended(embeddings, obj)


@@ -198,11 +198,7 @@ class Elasticsearch_VectorStores implements INode {
            const vectorStoreName = indexName
            await recordManager.createSchema()
            ;(recordManager as any).namespace = (recordManager as any).namespace + '_' + vectorStoreName
-            const filterKeys: ICommonObject = {}
-            if (options.docId) {
-                filterKeys.docId = options.docId
-            }
-            const keys: string[] = await recordManager.listKeys(filterKeys)
+            const keys: string[] = await recordManager.listKeys({})

            await vectorStore.delete({ ids: keys })
            await recordManager.deleteKeys(keys)


@@ -212,11 +212,7 @@ class Pinecone_VectorStores implements INode {
            const vectorStoreName = pineconeNamespace
            await recordManager.createSchema()
            ;(recordManager as any).namespace = (recordManager as any).namespace + '_' + vectorStoreName
-            const filterKeys: ICommonObject = {}
-            if (options.docId) {
-                filterKeys.docId = options.docId
-            }
-            const keys: string[] = await recordManager.listKeys(filterKeys)
+            const keys: string[] = await recordManager.listKeys({})

            await pineconeStore.delete({ ids: keys })
            await recordManager.deleteKeys(keys)
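The same three-step clean-up recurs across the vector-store nodes in this diff: scope the record manager's namespace to the store name, list every tracked key, then delete from both the vector store and the record manager so the two stay in sync. A minimal in-memory sketch of the flow (the `RecordManager` and `VectorStore` classes below are stand-ins, not Flowise APIs):

```python
class RecordManager:
    def __init__(self, namespace):
        self.namespace = namespace
        self.keys = {}

    def list_keys(self):
        return list(self.keys)

    def delete_keys(self, keys):
        for k in keys:
            self.keys.pop(k, None)

class VectorStore:
    def __init__(self):
        self.vectors = {}

    def delete(self, ids):
        for i in ids:
            self.vectors.pop(i, None)

rm = RecordManager(namespace="rm")
rm.namespace = rm.namespace + "_" + "my_index"   # scope bookkeeping to this store
rm.keys = {"k1": 1, "k2": 2}
store = VectorStore()
store.vectors = {"k1": [0.1], "k2": [0.2]}

keys = rm.list_keys()
store.delete(ids=keys)   # remove the embeddings
rm.delete_keys(keys)     # forget the bookkeeping rows
print(store.vectors, rm.keys)
```

Deleting from only one side would leave either orphaned vectors or stale record-manager rows that suppress future re-indexing.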


@@ -49,7 +49,7 @@ class Postgres_VectorStores implements INode {
    constructor() {
        this.label = 'Postgres'
        this.name = 'postgres'
-        this.version = 7.1
+        this.version = 7.0
        this.type = 'Postgres'
        this.icon = 'postgres.svg'
        this.category = 'Vector Stores'
@@ -173,15 +173,6 @@ class Postgres_VectorStores implements INode {
                additionalParams: true,
                optional: true
            },
-            {
-                label: 'Upsert Batch Size',
-                name: 'batchSize',
-                type: 'number',
-                step: 1,
-                description: 'Upsert in batches of size N',
-                additionalParams: true,
-                optional: true
-            },
            {
                label: 'Additional Configuration',
                name: 'additionalConfig',
@@ -241,7 +232,6 @@ class Postgres_VectorStores implements INode {
            const docs = nodeData.inputs?.document as Document[]
            const recordManager = nodeData.inputs?.recordManager
            const isFileUploadEnabled = nodeData.inputs?.fileUpload as boolean
-            const _batchSize = nodeData.inputs?.batchSize

            const vectorStoreDriver: VectorStoreDriver = Postgres_VectorStores.getDriverFromConfig(nodeData, options)
            const flattenDocs = docs && docs.length ? flatten(docs) : []
@@ -275,15 +265,7 @@ class Postgres_VectorStores implements INode {
                return res
            } else {
-                if (_batchSize) {
-                    const batchSize = parseInt(_batchSize, 10)
-                    for (let i = 0; i < finalDocs.length; i += batchSize) {
-                        const batch = finalDocs.slice(i, i + batchSize)
-                        await vectorStoreDriver.fromDocuments(batch)
-                    }
-                } else {
-                    await vectorStoreDriver.fromDocuments(finalDocs)
-                }
+                await vectorStoreDriver.fromDocuments(finalDocs)
                return { numAdded: finalDocs.length, addedDocs: finalDocs }
            }
@@ -303,11 +285,7 @@ class Postgres_VectorStores implements INode {
            const vectorStoreName = tableName
            await recordManager.createSchema()
            ;(recordManager as any).namespace = (recordManager as any).namespace + '_' + vectorStoreName
-            const filterKeys: ICommonObject = {}
-            if (options.docId) {
-                filterKeys.docId = options.docId
-            }
-            const keys: string[] = await recordManager.listKeys(filterKeys)
+            const keys: string[] = await recordManager.listKeys({})

            await vectorStore.delete({ ids: keys })
            await recordManager.deleteKeys(keys)
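The removed branch above upserts `finalDocs` in slices of `batchSize` so very large document sets do not hit the database in one insert. The slicing itself is plain index arithmetic; a small sketch:

```python
def batched(items, batch_size):
    # Yield consecutive slices of at most batch_size items.
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]

docs = list(range(7))  # stand-in for finalDocs
batches = list(batched(docs, 3))
print(batches)
```

Each batch would be passed to the driver's `fromDocuments` in turn; the last batch may be shorter than `batch_size`.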


@@ -5,11 +5,6 @@ import { TypeORMVectorStore, TypeORMVectorStoreArgs, TypeORMVectorStoreDocument
import { VectorStore } from '@langchain/core/vectorstores'
import { Document } from '@langchain/core/documents'
import { Pool } from 'pg'
-import { v4 as uuid } from 'uuid'
-
-type TypeORMAddDocumentOptions = {
-    ids?: string[]
-}

export class TypeORMDriver extends VectorStoreDriver {
    protected _postgresConnectionOptions: DataSourceOptions
@@ -100,45 +95,15 @@ export class TypeORMDriver extends VectorStoreDriver {
                try {
                    instance.appDataSource.getRepository(instance.documentEntity).delete(ids)
                } catch (e) {
-                    console.error('Failed to delete', e)
+                    console.error('Failed to delete')
                }
            }
        }

-        instance.addVectors = async (
-            vectors: number[][],
-            documents: Document[],
-            documentOptions?: TypeORMAddDocumentOptions
-        ): Promise<void> => {
-            const rows = vectors.map((embedding, idx) => {
-                const embeddingString = `[${embedding.join(',')}]`
-                const documentRow = {
-                    id: documentOptions?.ids?.length ? documentOptions.ids[idx] : uuid(),
-                    pageContent: documents[idx].pageContent,
-                    embedding: embeddingString,
-                    metadata: documents[idx].metadata
-                }
-                return documentRow
-            })
+        const baseAddVectorsFn = instance.addVectors.bind(instance)

-            const documentRepository = instance.appDataSource.getRepository(instance.documentEntity)
-            const _batchSize = this.nodeData.inputs?.batchSize
-            const chunkSize = _batchSize ? parseInt(_batchSize, 10) : 500
-            for (let i = 0; i < rows.length; i += chunkSize) {
-                const chunk = rows.slice(i, i + chunkSize)
-                try {
-                    await documentRepository.save(chunk)
-                } catch (e) {
-                    console.error(e)
-                    throw new Error(`Error inserting: ${chunk[0].pageContent}`)
-                }
-            }
-        }
-
-        instance.addDocuments = async (documents: Document[], options?: { ids?: string[] }): Promise<void> => {
-            const texts = documents.map(({ pageContent }) => pageContent)
-            return (instance.addVectors as any)(await this.getEmbeddings().embedDocuments(texts), documents, options)
+        instance.addVectors = async (vectors, documents) => {
+            return baseAddVectorsFn(vectors, this.sanitizeDocuments(documents))
        }

        return instance
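The removed `addVectors` override serialized each embedding as a pgvector-style text literal, `[v1,v2,...]`, assigned an id per row, and saved rows in chunks of up to 500. A sketch of that serialization plus chunking (row shape mirrors the diff; values are illustrative):

```python
import uuid

def to_rows(vectors, documents, ids=None):
    # Build one row per embedding; fall back to a random UUID when no id is given.
    rows = []
    for idx, embedding in enumerate(vectors):
        rows.append({
            "id": ids[idx] if ids else str(uuid.uuid4()),
            "pageContent": documents[idx],
            # pgvector accepts a bracketed, comma-separated text literal.
            "embedding": "[" + ",".join(str(v) for v in embedding) + "]",
        })
    return rows

rows = to_rows([[1.0, 2.0], [0.5, 0.25]], ["doc a", "doc b"], ids=["a", "b"])
chunk_size = 500
chunks = [rows[i:i + chunk_size] for i in range(0, len(rows), chunk_size)]
print(rows[0]["embedding"], len(chunks))
```

Each chunk would then be handed to the repository's `save` in one round trip, keeping insert statements bounded in size.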


@@ -385,11 +385,7 @@ class Qdrant_VectorStores implements INode {
            const vectorStoreName = collectionName
            await recordManager.createSchema()
            ;(recordManager as any).namespace = (recordManager as any).namespace + '_' + vectorStoreName
-            const filterKeys: ICommonObject = {}
-            if (options.docId) {
-                filterKeys.docId = options.docId
-            }
-            const keys: string[] = await recordManager.listKeys(filterKeys)
+            const keys: string[] = await recordManager.listKeys({})

            await vectorStore.delete({ ids: keys })
            await recordManager.deleteKeys(keys)


@@ -197,11 +197,7 @@ class Supabase_VectorStores implements INode {
            const vectorStoreName = tableName + '_' + queryName
            await recordManager.createSchema()
            ;(recordManager as any).namespace = (recordManager as any).namespace + '_' + vectorStoreName
-            const filterKeys: ICommonObject = {}
-            if (options.docId) {
-                filterKeys.docId = options.docId
-            }
-            const keys: string[] = await recordManager.listKeys(filterKeys)
+            const keys: string[] = await recordManager.listKeys({})

            await supabaseStore.delete({ ids: keys })
            await recordManager.deleteKeys(keys)


@@ -187,11 +187,7 @@ class Upstash_VectorStores implements INode {
            const vectorStoreName = UPSTASH_VECTOR_REST_URL
            await recordManager.createSchema()
            ;(recordManager as any).namespace = (recordManager as any).namespace + '_' + vectorStoreName
-            const filterKeys: ICommonObject = {}
-            if (options.docId) {
-                filterKeys.docId = options.docId
-            }
-            const keys: string[] = await recordManager.listKeys(filterKeys)
+            const keys: string[] = await recordManager.listKeys({})

            await upstashStore.delete({ ids: keys })
            await recordManager.deleteKeys(keys)


@@ -252,11 +252,7 @@ class Weaviate_VectorStores implements INode {
            const vectorStoreName = weaviateTextKey ? weaviateIndex + '_' + weaviateTextKey : weaviateIndex
            await recordManager.createSchema()
            ;(recordManager as any).namespace = (recordManager as any).namespace + '_' + vectorStoreName
-            const filterKeys: ICommonObject = {}
-            if (options.docId) {
-                filterKeys.docId = options.docId
-            }
-            const keys: string[] = await recordManager.listKeys(filterKeys)
+            const keys: string[] = await recordManager.listKeys({})

            await weaviateStore.delete({ ids: keys })
            await recordManager.deleteKeys(keys)


@@ -42,8 +42,7 @@
        "@google-ai/generativelanguage": "^2.5.0",
        "@google-cloud/storage": "^7.15.2",
        "@google/generative-ai": "^0.24.0",
-        "@grpc/grpc-js": "^1.10.10",
-        "@huggingface/inference": "^4.13.2",
+        "@huggingface/inference": "^2.6.1",
        "@langchain/anthropic": "0.3.33",
        "@langchain/aws": "^0.1.11",
        "@langchain/baidu-qianfan": "^0.1.0",
@@ -74,20 +73,6 @@
        "@modelcontextprotocol/server-slack": "^2025.1.17",
        "@notionhq/client": "^2.2.8",
        "@opensearch-project/opensearch": "^1.2.0",
-        "@opentelemetry/api": "1.9.0",
-        "@opentelemetry/auto-instrumentations-node": "^0.52.0",
-        "@opentelemetry/core": "1.27.0",
-        "@opentelemetry/exporter-metrics-otlp-grpc": "0.54.0",
-        "@opentelemetry/exporter-metrics-otlp-http": "0.54.0",
-        "@opentelemetry/exporter-metrics-otlp-proto": "0.54.0",
-        "@opentelemetry/exporter-trace-otlp-grpc": "0.54.0",
-        "@opentelemetry/exporter-trace-otlp-http": "0.54.0",
-        "@opentelemetry/exporter-trace-otlp-proto": "0.54.0",
-        "@opentelemetry/resources": "1.27.0",
-        "@opentelemetry/sdk-metrics": "1.27.0",
-        "@opentelemetry/sdk-node": "^0.54.0",
-        "@opentelemetry/sdk-trace-base": "1.27.0",
-        "@opentelemetry/semantic-conventions": "1.27.0",
        "@pinecone-database/pinecone": "4.0.0",
        "@qdrant/js-client-rest": "^1.9.0",
        "@stripe/agent-toolkit": "^0.1.20",


@@ -1774,7 +1774,7 @@ export class AnalyticHandler {
        }

        if (Object.prototype.hasOwnProperty.call(this.handlers, 'lunary')) {
-            const toolEventId: string = this.handlers['lunary'].toolEvent[returnIds['lunary'].toolEvent]
+            const toolEventId: string = this.handlers['lunary'].llmEvent[returnIds['lunary'].toolEvent]
            const monitor = this.handlers['lunary'].client

            if (monitor && toolEventId) {


@@ -8,10 +8,6 @@ import { IndexingResult } from './Interface'

type Metadata = Record<string, unknown>

-export interface ExtendedRecordManagerInterface extends RecordManagerInterface {
-    update(keys: Array<{ uid: string; docId: string }> | string[], updateOptions?: Record<string, any>): Promise<void>
-}
-
type StringOrDocFunc = string | ((doc: DocumentInterface) => string)

export interface HashedDocumentInterface extends DocumentInterface {
@@ -211,7 +207,7 @@ export const _isBaseDocumentLoader = (arg: any): arg is BaseDocumentLoader => {
interface IndexArgs {
    docsSource: BaseDocumentLoader | DocumentInterface[]
-    recordManager: ExtendedRecordManagerInterface
+    recordManager: RecordManagerInterface
    vectorStore: VectorStore
    options?: IndexOptions
}
@@ -279,7 +275,7 @@ export async function index(args: IndexArgs): Promise<IndexingResult> {
        const uids: string[] = []
        const docsToIndex: DocumentInterface[] = []
-        const docsToUpdate: Array<{ uid: string; docId: string }> = []
+        const docsToUpdate: string[] = []
        const seenDocs = new Set<string>()
        hashedDocs.forEach((hashedDoc, i) => {
            const docExists = batchExists[i]
@@ -287,7 +283,7 @@ export async function index(args: IndexArgs): Promise<IndexingResult> {
                if (forceUpdate) {
                    seenDocs.add(hashedDoc.uid)
                } else {
-                    docsToUpdate.push({ uid: hashedDoc.uid, docId: hashedDoc.metadata.docId as string })
+                    docsToUpdate.push(hashedDoc.uid)
                    return
                }
            }
@@ -312,7 +308,7 @@ export async function index(args: IndexArgs): Promise<IndexingResult> {
        }

        await recordManager.update(
-            hashedDocs.map((doc) => ({ uid: doc.uid, docId: doc.metadata.docId as string })),
+            hashedDocs.map((doc) => doc.uid),
            { timeAtLeast: indexStartDt, groupIds: sourceIds }
        )
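The `index()` function above hashes each document into a uid, skips uids the record manager already knows (they only get their timestamp refreshed via `update`), and embeds only the genuinely new ones; duplicates within the same batch are dropped via `seenDocs`. A condensed sketch of that bookkeeping (hashing and storage here are simplified stand-ins):

```python
import hashlib
import json

def uid(doc):
    # Content-addressed id: same document always hashes to the same uid.
    return hashlib.sha1(json.dumps(doc, sort_keys=True).encode()).hexdigest()

existing = set()  # stand-in for the record manager's exists() lookup

def index(docs):
    docs_to_index, docs_to_update, seen = [], [], set()
    for doc in docs:
        u = uid(doc)
        if u in seen:
            continue                   # duplicate inside this batch: skip
        if u in existing:
            docs_to_update.append(u)   # already indexed: just bump updated_at
        else:
            docs_to_index.append(doc)  # new: embed + upsert into the store
        seen.add(u)
    existing.update(uid(d) for d in docs_to_index)
    return len(docs_to_index), len(docs_to_update)

first = index([{"text": "a"}, {"text": "b"}, {"text": "a"}])
second = index([{"text": "a"}, {"text": "c"}])
print(first, second)
```

Re-running ingestion therefore costs embeddings only for changed or new content, which is the point of pairing a record manager with the vector store.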


@@ -8,7 +8,6 @@ import { cloneDeep, omit, get } from 'lodash'
import TurndownService from 'turndown'
import { DataSource, Equal } from 'typeorm'
import { ICommonObject, IDatabaseEntity, IFileUpload, IMessage, INodeData, IVariable, MessageContentImageUrl } from './Interface'
-import { BaseChatModel } from '@langchain/core/language_models/chat_models'
import { AES, enc } from 'crypto-js'
import { AIMessage, HumanMessage, BaseMessage } from '@langchain/core/messages'
import { Document } from '@langchain/core/documents'
@@ -1942,160 +1941,3 @@ export async function parseWithTypeConversion<T extends z.ZodTypeAny>(schema: T,
        throw e
    }
}

(The entire configureStructuredOutput / createZodSchemaFromJSON block below is removed by this commit.)
/**
* Configures structured output for the LLM using Zod schema
* @param {BaseChatModel} llmNodeInstance - The LLM instance to configure
* @param {any[]} structuredOutput - Array of structured output schema definitions
* @returns {BaseChatModel} - The configured LLM instance
*/
export const configureStructuredOutput = (llmNodeInstance: BaseChatModel, structuredOutput: any[]): BaseChatModel => {
try {
const zodObj: ICommonObject = {}
for (const sch of structuredOutput) {
if (sch.type === 'string') {
zodObj[sch.key] = z.string().describe(sch.description || '')
} else if (sch.type === 'stringArray') {
zodObj[sch.key] = z.array(z.string()).describe(sch.description || '')
} else if (sch.type === 'number') {
zodObj[sch.key] = z.number().describe(sch.description || '')
} else if (sch.type === 'boolean') {
zodObj[sch.key] = z.boolean().describe(sch.description || '')
} else if (sch.type === 'enum') {
const enumValues = sch.enumValues?.split(',').map((item: string) => item.trim()) || []
zodObj[sch.key] = z
.enum(enumValues.length ? (enumValues as [string, ...string[]]) : ['default'])
.describe(sch.description || '')
} else if (sch.type === 'jsonArray') {
const jsonSchema = sch.jsonSchema
if (jsonSchema) {
try {
// Parse the JSON schema
const schemaObj = JSON.parse(jsonSchema)
// Create a Zod schema from the JSON schema
const itemSchema = createZodSchemaFromJSON(schemaObj)
// Create an array schema of the item schema
zodObj[sch.key] = z.array(itemSchema).describe(sch.description || '')
} catch (err) {
console.error(`Error parsing JSON schema for ${sch.key}:`, err)
// Fallback to generic array of records
zodObj[sch.key] = z.array(z.record(z.any())).describe(sch.description || '')
}
} else {
// If no schema provided, use generic array of records
zodObj[sch.key] = z.array(z.record(z.any())).describe(sch.description || '')
}
}
}
const structuredOutputSchema = z.object(zodObj)
// @ts-ignore
return llmNodeInstance.withStructuredOutput(structuredOutputSchema)
} catch (exception) {
console.error(exception)
return llmNodeInstance
}
}
/**
* Creates a Zod schema from a JSON schema object
* @param {any} jsonSchema - The JSON schema object
* @returns {z.ZodTypeAny} - A Zod schema
*/
export const createZodSchemaFromJSON = (jsonSchema: any): z.ZodTypeAny => {
// If the schema is an object with properties, create an object schema
if (typeof jsonSchema === 'object' && jsonSchema !== null) {
const schemaObj: Record<string, z.ZodTypeAny> = {}
// Process each property in the schema
for (const [key, value] of Object.entries(jsonSchema)) {
if (value === null) {
// Handle null values
schemaObj[key] = z.null()
} else if (typeof value === 'object' && !Array.isArray(value)) {
// Check if the property has a type definition
if ('type' in value) {
const type = value.type as string
const description = ('description' in value ? (value.description as string) : '') || ''
// Create the appropriate Zod type based on the type property
if (type === 'string') {
schemaObj[key] = z.string().describe(description)
} else if (type === 'number') {
schemaObj[key] = z.number().describe(description)
} else if (type === 'boolean') {
schemaObj[key] = z.boolean().describe(description)
} else if (type === 'array') {
// If it's an array type, check if items is defined
if ('items' in value && value.items) {
const itemSchema = createZodSchemaFromJSON(value.items)
schemaObj[key] = z.array(itemSchema).describe(description)
} else {
// Default to array of any if items not specified
schemaObj[key] = z.array(z.any()).describe(description)
}
} else if (type === 'object') {
// If it's an object type, check if properties is defined
if ('properties' in value && value.properties) {
const nestedSchema = createZodSchemaFromJSON(value.properties)
schemaObj[key] = nestedSchema.describe(description)
} else {
// Default to record of any if properties not specified
schemaObj[key] = z.record(z.any()).describe(description)
}
} else {
// Default to any for unknown types
schemaObj[key] = z.any().describe(description)
}
// Check if the property is optional
if ('optional' in value && value.optional === true) {
schemaObj[key] = schemaObj[key].optional()
}
} else if (Array.isArray(value)) {
// Array values without a type property
if (value.length > 0) {
// If the array has items, recursively create a schema for the first item
const itemSchema = createZodSchemaFromJSON(value[0])
schemaObj[key] = z.array(itemSchema)
} else {
// Empty array, allow any array
schemaObj[key] = z.array(z.any())
}
} else {
// It's a nested object without a type property, recursively create schema
schemaObj[key] = createZodSchemaFromJSON(value)
}
} else if (Array.isArray(value)) {
// Array values
if (value.length > 0) {
// If the array has items, recursively create a schema for the first item
const itemSchema = createZodSchemaFromJSON(value[0])
schemaObj[key] = z.array(itemSchema)
} else {
// Empty array, allow any array
schemaObj[key] = z.array(z.any())
}
} else {
// For primitive values (which shouldn't be in the schema directly)
// Use the corresponding Zod type
if (typeof value === 'string') {
schemaObj[key] = z.string()
} else if (typeof value === 'number') {
schemaObj[key] = z.number()
} else if (typeof value === 'boolean') {
schemaObj[key] = z.boolean()
} else {
schemaObj[key] = z.any()
}
}
}
return z.object(schemaObj)
}
// Fallback to any for unknown types
return z.any()
}
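`createZodSchemaFromJSON` above walks a JSON-schema-like object and emits a Zod validator. A very reduced Python analog, covering only the string/number/boolean and `optional` cases from the original (plain `isinstance` checks stand in for Zod; the schema shape is illustrative):

```python
# Map schema type names to Python runtime types.
TYPES = {"string": str, "number": (int, float), "boolean": bool}

def make_validator(schema):
    # schema: {key: {"type": ..., "optional": bool}}
    def validate(obj):
        for key, spec in schema.items():
            if key not in obj:
                if spec.get("optional"):
                    continue
                return False  # required key missing
            if not isinstance(obj[key], TYPES[spec["type"]]):
                return False  # wrong runtime type
        return True
    return validate

check = make_validator({
    "title": {"type": "string"},
    "score": {"type": "number"},
    "draft": {"type": "boolean", "optional": True},
})
print(check({"title": "x", "score": 1}), check({"title": "x", "score": "high"}))
```

The original additionally handles arrays, nested objects, and enums by recursing into `items` and `properties`; the recursion pattern is the same, with each branch producing a child validator.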


@@ -66,7 +66,7 @@
        "@google-cloud/logging-winston": "^6.0.0",
        "@keyv/redis": "^4.2.0",
        "@oclif/core": "4.0.7",
-        "@opentelemetry/api": "1.9.0",
+        "@opentelemetry/api": "^1.3.0",
        "@opentelemetry/auto-instrumentations-node": "^0.52.0",
        "@opentelemetry/core": "1.27.0",
        "@opentelemetry/exporter-metrics-otlp-grpc": "0.54.0",
@@ -119,12 +119,12 @@
        "lodash": "^4.17.21",
        "moment": "^2.29.3",
        "moment-timezone": "^0.5.34",
-        "multer": "^2.0.2",
+        "multer": "^1.4.5-lts.1",
        "multer-cloud-storage": "^4.0.0",
        "multer-s3": "^3.0.1",
        "mysql2": "^3.11.3",
        "nanoid": "3",
-        "nodemailer": "^7.0.7",
+        "nodemailer": "^6.9.14",
        "openai": "^4.96.0",
        "passport": "^0.7.0",
        "passport-auth0": "^1.4.4",


@@ -37,19 +37,7 @@ export class UsageCacheManager {
 if (process.env.MODE === MODE.QUEUE) {
 let redisConfig: string | Record<string, any>
 if (process.env.REDIS_URL) {
-redisConfig = {
-url: process.env.REDIS_URL,
-socket: {
-keepAlive:
-process.env.REDIS_KEEP_ALIVE && !isNaN(parseInt(process.env.REDIS_KEEP_ALIVE, 10))
-? parseInt(process.env.REDIS_KEEP_ALIVE, 10)
-: undefined
-},
-pingInterval:
-process.env.REDIS_KEEP_ALIVE && !isNaN(parseInt(process.env.REDIS_KEEP_ALIVE, 10))
-? parseInt(process.env.REDIS_KEEP_ALIVE, 10)
-: undefined
-}
+redisConfig = process.env.REDIS_URL
 } else {
 redisConfig = {
 username: process.env.REDIS_USERNAME || undefined,
@@ -60,16 +48,8 @@ export class UsageCacheManager {
 tls: process.env.REDIS_TLS === 'true',
 cert: process.env.REDIS_CERT ? Buffer.from(process.env.REDIS_CERT, 'base64') : undefined,
 key: process.env.REDIS_KEY ? Buffer.from(process.env.REDIS_KEY, 'base64') : undefined,
-ca: process.env.REDIS_CA ? Buffer.from(process.env.REDIS_CA, 'base64') : undefined,
-keepAlive:
-process.env.REDIS_KEEP_ALIVE && !isNaN(parseInt(process.env.REDIS_KEEP_ALIVE, 10))
-? parseInt(process.env.REDIS_KEEP_ALIVE, 10)
-: undefined
-},
-pingInterval:
-process.env.REDIS_KEEP_ALIVE && !isNaN(parseInt(process.env.REDIS_KEEP_ALIVE, 10))
-? parseInt(process.env.REDIS_KEEP_ALIVE, 10)
-: undefined
+ca: process.env.REDIS_CA ? Buffer.from(process.env.REDIS_CA, 'base64') : undefined
+}
 }
 }
 this.cache = createCache({
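The hunk above collapses the `REDIS_URL` branch so the raw connection URL string is passed straight through instead of being wrapped in an options object with keep-alive settings. A minimal standalone sketch of the resulting selection logic — `buildRedisConfig` is a hypothetical helper for illustration, not a function in the codebase, and only the environment variables visible in the diff are used:

```typescript
// Hypothetical helper mirroring the new branch: a connection URL string wins;
// otherwise an options object is assembled from individual environment variables.
type RedisConfig = string | Record<string, unknown>

function buildRedisConfig(env: Record<string, string | undefined>): RedisConfig {
    if (env.REDIS_URL) {
        // New behavior: hand the URL to the client as-is (no socket/keepAlive wrapper)
        return env.REDIS_URL
    }
    return {
        username: env.REDIS_USERNAME || undefined,
        tls: env.REDIS_TLS === 'true',
        ca: env.REDIS_CA ? Buffer.from(env.REDIS_CA, 'base64') : undefined
    }
}
```

Note the trade-off implied by the diff: the `REDIS_KEEP_ALIVE`-driven `keepAlive`/`pingInterval` tuning is dropped in both branches.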

View File

@@ -465,10 +465,9 @@ const insertIntoVectorStore = async (req: Request, res: Response, next: NextFunc
 }
 const subscriptionId = req.user?.activeOrganizationSubscriptionId || ''
 const body = req.body
-const isStrictSave = body.isStrictSave ?? false
 const apiResponse = await documentStoreService.insertIntoVectorStoreMiddleware(
 body,
-isStrictSave,
+false,
 orgId,
 workspaceId,
 subscriptionId,
@@ -514,11 +513,7 @@ const deleteVectorStoreFromStore = async (req: Request, res: Response, next: Nex
 `Error: documentStoreController.deleteVectorStoreFromStore - workspaceId not provided!`
 )
 }
-const apiResponse = await documentStoreService.deleteVectorStoreFromStore(
-req.params.storeId,
-workspaceId,
-(req.query.docId as string) || undefined
-)
+const apiResponse = await documentStoreService.deleteVectorStoreFromStore(req.params.storeId, workspaceId)
 return res.json(apiResponse)
 } catch (error) {
 next(error)

View File

@@ -1,14 +0,0 @@
-import { MigrationInterface, QueryRunner } from 'typeorm'
-
-export class FixDocumentStoreFileChunkLongText1765000000000 implements MigrationInterface {
-    public async up(queryRunner: QueryRunner): Promise<void> {
-        await queryRunner.query(`ALTER TABLE \`document_store_file_chunk\` MODIFY \`pageContent\` LONGTEXT NOT NULL;`)
-        await queryRunner.query(`ALTER TABLE \`document_store_file_chunk\` MODIFY \`metadata\` LONGTEXT NULL;`)
-    }
-
-    public async down(queryRunner: QueryRunner): Promise<void> {
-        // WARNING: Reverting to TEXT may cause data loss if content exceeds the 64KB limit.
-        await queryRunner.query(`ALTER TABLE \`document_store_file_chunk\` MODIFY \`pageContent\` TEXT NOT NULL;`)
-        await queryRunner.query(`ALTER TABLE \`document_store_file_chunk\` MODIFY \`metadata\` TEXT NULL;`)
-    }
-}

View File

@@ -40,7 +40,6 @@ import { AddTextToSpeechToChatFlow1754986457485 } from './1754986457485-AddTextT
 import { ModifyChatflowType1755066758601 } from './1755066758601-ModifyChatflowType'
 import { AddTextToSpeechToChatFlow1759419231100 } from './1759419231100-AddTextToSpeechToChatFlow'
 import { AddChatFlowNameIndex1759424809984 } from './1759424809984-AddChatFlowNameIndex'
-import { FixDocumentStoreFileChunkLongText1765000000000 } from './1765000000000-FixDocumentStoreFileChunkLongText'
 import { AddAuthTables1720230151482 } from '../../../enterprise/database/migrations/mariadb/1720230151482-AddAuthTables'
 import { AddWorkspace1725437498242 } from '../../../enterprise/database/migrations/mariadb/1725437498242-AddWorkspace'
@@ -107,6 +106,5 @@ export const mariadbMigrations = [
 AddTextToSpeechToChatFlow1754986457485,
 ModifyChatflowType1755066758601,
 AddTextToSpeechToChatFlow1759419231100,
-AddChatFlowNameIndex1759424809984,
-FixDocumentStoreFileChunkLongText1765000000000
+AddChatFlowNameIndex1759424809984
 ]

View File

@@ -1,14 +0,0 @@
-import { MigrationInterface, QueryRunner } from 'typeorm'
-
-export class FixDocumentStoreFileChunkLongText1765000000000 implements MigrationInterface {
-    public async up(queryRunner: QueryRunner): Promise<void> {
-        await queryRunner.query(`ALTER TABLE \`document_store_file_chunk\` MODIFY \`pageContent\` LONGTEXT NOT NULL;`)
-        await queryRunner.query(`ALTER TABLE \`document_store_file_chunk\` MODIFY \`metadata\` LONGTEXT NULL;`)
-    }
-
-    public async down(queryRunner: QueryRunner): Promise<void> {
-        // WARNING: Reverting to TEXT may cause data loss if content exceeds the 64KB limit.
-        await queryRunner.query(`ALTER TABLE \`document_store_file_chunk\` MODIFY \`pageContent\` TEXT NOT NULL;`)
-        await queryRunner.query(`ALTER TABLE \`document_store_file_chunk\` MODIFY \`metadata\` TEXT NULL;`)
-    }
-}

View File

@@ -41,7 +41,6 @@ import { AddTextToSpeechToChatFlow1754986468397 } from './1754986468397-AddTextT
 import { ModifyChatflowType1755066758601 } from './1755066758601-ModifyChatflowType'
 import { AddTextToSpeechToChatFlow1759419216034 } from './1759419216034-AddTextToSpeechToChatFlow'
 import { AddChatFlowNameIndex1759424828558 } from './1759424828558-AddChatFlowNameIndex'
-import { FixDocumentStoreFileChunkLongText1765000000000 } from './1765000000000-FixDocumentStoreFileChunkLongText'
 import { AddAuthTables1720230151482 } from '../../../enterprise/database/migrations/mysql/1720230151482-AddAuthTables'
 import { AddWorkspace1720230151484 } from '../../../enterprise/database/migrations/mysql/1720230151484-AddWorkspace'
@@ -109,6 +108,5 @@ export const mysqlMigrations = [
 AddTextToSpeechToChatFlow1754986468397,
 ModifyChatflowType1755066758601,
 AddTextToSpeechToChatFlow1759419216034,
-AddChatFlowNameIndex1759424828558,
-FixDocumentStoreFileChunkLongText1765000000000
+AddChatFlowNameIndex1759424828558
 ]

View File

@@ -391,7 +391,7 @@ const deleteDocumentStoreFileChunk = async (storeId: string, docId: string, chun
 }
 }
-const deleteVectorStoreFromStore = async (storeId: string, workspaceId: string, docId?: string) => {
+const deleteVectorStoreFromStore = async (storeId: string, workspaceId: string) => {
 try {
 const appServer = getRunningExpressApp()
 const componentNodes = appServer.nodesPool.componentNodes
@@ -461,7 +461,7 @@ const deleteVectorStoreFromStore = async (storeId: string, workspaceId: string,
 // Call the delete method of the vector store
 if (vectorStoreObj.vectorStoreMethods.delete) {
-await vectorStoreObj.vectorStoreMethods.delete(vStoreNodeData, idsToDelete, { ...options, docId })
+await vectorStoreObj.vectorStoreMethods.delete(vStoreNodeData, idsToDelete, options)
 }
 } catch (error) {
 throw new InternalFlowiseError(
@@ -1157,18 +1157,6 @@ const updateVectorStoreConfigOnly = async (data: ICommonObject, workspaceId: str
 )
 }
 }
-/**
- * Saves vector store configuration to the document store entity.
- * Handles embedding, vector store, and record manager configurations.
- *
- * @example
- * // Strict mode: Only save what's provided, clear the rest
- * await saveVectorStoreConfig(ds, { storeId, embeddingName, embeddingConfig }, true, wsId)
- *
- * @example
- * // Lenient mode: Reuse existing configs if not provided
- * await saveVectorStoreConfig(ds, { storeId, vectorStoreName, vectorStoreConfig }, false, wsId)
- */
 const saveVectorStoreConfig = async (appDataSource: DataSource, data: ICommonObject, isStrictSave = true, workspaceId: string) => {
 try {
 const entity = await appDataSource.getRepository(DocumentStore).findOneBy({
@@ -1233,15 +1221,6 @@ const saveVectorStoreConfig = async (appDataSource: DataSource, data: ICommonObj
 }
 }
-/**
- * Inserts documents from document store into the configured vector store.
- *
- * Process:
- * 1. Saves vector store configuration (embedding, vector store, record manager)
- * 2. Sets document store status to UPSERTING
- * 3. Performs the actual vector store upsert operation
- * 4. Updates status to UPSERTED upon completion
- */
 export const insertIntoVectorStore = async ({
 appDataSource,
 componentNodes,
@@ -1252,16 +1231,19 @@ export const insertIntoVectorStore = async ({
 workspaceId
 }: IExecuteVectorStoreInsert) => {
 try {
-// Step 1: Save configuration based on isStrictSave mode
 const entity = await saveVectorStoreConfig(appDataSource, data, isStrictSave, workspaceId)
-// Step 2: Mark as UPSERTING before starting the operation
 entity.status = DocumentStoreStatus.UPSERTING
 await appDataSource.getRepository(DocumentStore).save(entity)
-// Step 3: Perform the actual vector store upsert
-// Note: Configuration already saved above, worker thread just retrieves and uses it
-const indexResult = await _insertIntoVectorStoreWorkerThread(appDataSource, componentNodes, telemetry, data, orgId, workspaceId)
+const indexResult = await _insertIntoVectorStoreWorkerThread(
+appDataSource,
+componentNodes,
+telemetry,
+data,
+isStrictSave,
+orgId,
+workspaceId
+)
 return indexResult
 } catch (error) {
 throw new InternalFlowiseError(
@@ -1326,18 +1308,12 @@ const _insertIntoVectorStoreWorkerThread = async (
 componentNodes: IComponentNodes,
 telemetry: Telemetry,
 data: ICommonObject,
+isStrictSave = true,
 orgId: string,
 workspaceId: string
 ) => {
 try {
-// Configuration already saved by insertIntoVectorStore, just retrieve the entity
-const entity = await appDataSource.getRepository(DocumentStore).findOneBy({
-id: data.storeId,
-workspaceId: workspaceId
-})
-if (!entity) {
-throw new InternalFlowiseError(StatusCodes.NOT_FOUND, `Document store ${data.storeId} not found`)
-}
+const entity = await saveVectorStoreConfig(appDataSource, data, isStrictSave, workspaceId)
 let upsertHistory: Record<string, any> = {}
 const chatflowid = data.storeId // fake chatflowid because this is not tied to any chatflow
@@ -1374,10 +1350,7 @@ const _insertIntoVectorStoreWorkerThread = async (
 const docs: Document[] = chunks.map((chunk: DocumentStoreFileChunk) => {
 return new Document({
 pageContent: chunk.pageContent,
-metadata: {
-...JSON.parse(chunk.metadata),
-docId: chunk.docId
-}
+metadata: JSON.parse(chunk.metadata)
 })
 })
 vStoreNodeData.inputs.document = docs
@@ -1938,8 +1911,6 @@ const upsertDocStore = async (
 recordManagerConfig
 }
-// Use isStrictSave: false to preserve existing configurations during upsert
-// This allows the operation to reuse existing embedding/vector store/record manager configs
 const res = await insertIntoVectorStore({
 appDataSource,
 componentNodes,

View File

@@ -2122,62 +2122,7 @@ export const executeAgentFlow = async ({
 // check if last agentFlowExecutedData.data.output contains the key "content"
 const lastNodeOutput = agentFlowExecutedData[agentFlowExecutedData.length - 1].data?.output as ICommonObject | undefined
-let content = (lastNodeOutput?.content as string) ?? ' '
-/* Check for post-processing settings */
-let chatflowConfig: ICommonObject = {}
-try {
-if (chatflow.chatbotConfig) {
-chatflowConfig = typeof chatflow.chatbotConfig === 'string' ? JSON.parse(chatflow.chatbotConfig) : chatflow.chatbotConfig
-}
-} catch (e) {
-logger.error('[server]: Error parsing chatflow config:', e)
-}
-if (chatflowConfig?.postProcessing?.enabled === true && content) {
-try {
-const postProcessingFunction = JSON.parse(chatflowConfig?.postProcessing?.customFunction)
-const nodeInstanceFilePath = componentNodes['customFunctionAgentflow'].filePath as string
-const nodeModule = await import(nodeInstanceFilePath)
-//set the outputs.output to EndingNode to prevent json escaping of content...
-const nodeData = {
-inputs: { customFunctionJavascriptFunction: postProcessingFunction }
-}
-const runtimeChatHistory = agentflowRuntime.chatHistory || []
-const chatHistory = [...pastChatHistory, ...runtimeChatHistory]
-const options: ICommonObject = {
-chatflowid: chatflow.id,
-sessionId,
-chatId,
-input: question || form,
-postProcessing: {
-rawOutput: content,
-chatHistory: cloneDeep(chatHistory),
-sourceDocuments: lastNodeOutput?.sourceDocuments ? cloneDeep(lastNodeOutput.sourceDocuments) : undefined,
-usedTools: lastNodeOutput?.usedTools ? cloneDeep(lastNodeOutput.usedTools) : undefined,
-artifacts: lastNodeOutput?.artifacts ? cloneDeep(lastNodeOutput.artifacts) : undefined,
-fileAnnotations: lastNodeOutput?.fileAnnotations ? cloneDeep(lastNodeOutput.fileAnnotations) : undefined
-},
-appDataSource,
-databaseEntities,
-workspaceId,
-orgId,
-logger
-}
-const customFuncNodeInstance = new nodeModule.nodeClass()
-const customFunctionResponse = await customFuncNodeInstance.run(nodeData, question || form, options)
-const moderatedResponse = customFunctionResponse.output.content
-if (typeof moderatedResponse === 'string') {
-content = moderatedResponse
-} else if (typeof moderatedResponse === 'object') {
-content = '```json\n' + JSON.stringify(moderatedResponse, null, 2) + '\n```'
-} else {
-content = moderatedResponse
-}
-} catch (e) {
-logger.error('[server]: Post Processing Error:', e)
-}
-}
+const content = (lastNodeOutput?.content as string) ?? ' '
 // remove credentialId from agentFlowExecutedData
 agentFlowExecutedData = agentFlowExecutedData.map((data) => _removeCredentialId(data))

View File

@@ -2,7 +2,7 @@ import { Request } from 'express'
 import * as path from 'path'
 import { DataSource } from 'typeorm'
 import { v4 as uuidv4 } from 'uuid'
-import { omit, cloneDeep } from 'lodash'
+import { omit } from 'lodash'
 import {
 IFileUpload,
 convertSpeechToText,
@@ -817,14 +817,7 @@ export const executeFlow = async ({
 sessionId,
 chatId,
 input: question,
-postProcessing: {
-rawOutput: resultText,
-chatHistory: cloneDeep(chatHistory),
-sourceDocuments: result?.sourceDocuments ? cloneDeep(result.sourceDocuments) : undefined,
-usedTools: result?.usedTools ? cloneDeep(result.usedTools) : undefined,
-artifacts: result?.artifacts ? cloneDeep(result.artifacts) : undefined,
-fileAnnotations: result?.fileAnnotations ? cloneDeep(result.fileAnnotations) : undefined
-},
+rawOutput: resultText,
 appDataSource,
 databaseEntities,
 workspaceId,

View File

@@ -70,7 +70,7 @@ export const checkUsageLimit = async (
 if (limit === -1) return
 if (currentUsage > limit) {
-throw new InternalFlowiseError(StatusCodes.PAYMENT_REQUIRED, `Limit exceeded: ${type}`)
+throw new InternalFlowiseError(StatusCodes.TOO_MANY_REQUESTS, `Limit exceeded: ${type}`)
 }
 }
@@ -135,7 +135,7 @@ export const checkPredictions = async (orgId: string, subscriptionId: string, us
 if (predictionsLimit === -1) return
 if (currentPredictions >= predictionsLimit) {
-throw new InternalFlowiseError(StatusCodes.PAYMENT_REQUIRED, 'Predictions limit exceeded')
+throw new InternalFlowiseError(StatusCodes.TOO_MANY_REQUESTS, 'Predictions limit exceeded')
 }
 return {
@@ -161,7 +161,7 @@ export const checkStorage = async (orgId: string, subscriptionId: string, usageC
 if (storageLimit === -1) return
 if (currentStorageUsage >= storageLimit) {
-throw new InternalFlowiseError(StatusCodes.PAYMENT_REQUIRED, 'Storage limit exceeded')
+throw new InternalFlowiseError(StatusCodes.TOO_MANY_REQUESTS, 'Storage limit exceeded')
 }
 return {

View File

@@ -22,10 +22,7 @@ const refreshLoader = (storeId) => client.post(`/document-store/refresh/${storeI
 const insertIntoVectorStore = (body) => client.post(`/document-store/vectorstore/insert`, body)
 const saveVectorStoreConfig = (body) => client.post(`/document-store/vectorstore/save`, body)
 const updateVectorStoreConfig = (body) => client.post(`/document-store/vectorstore/update`, body)
-const deleteVectorStoreDataFromStore = (storeId, docId) => {
-const url = docId ? `/document-store/vectorstore/${storeId}?docId=${docId}` : `/document-store/vectorstore/${storeId}`
-return client.delete(url)
-}
+const deleteVectorStoreDataFromStore = (storeId) => client.delete(`/document-store/vectorstore/${storeId}`)
 const queryVectorStore = (body) => client.post(`/document-store/vectorstore/query`, body)
 const getVectorStoreProviders = () => client.get('/document-store/components/vectorstore')
 const getEmbeddingProviders = () => client.get('/document-store/components/embeddings')

View File

@@ -10,7 +10,6 @@ const VerifyEmailPage = Loadable(lazy(() => import('@/views/auth/verify-email'))
 const ForgotPasswordPage = Loadable(lazy(() => import('@/views/auth/forgotPassword')))
 const ResetPasswordPage = Loadable(lazy(() => import('@/views/auth/resetPassword')))
 const UnauthorizedPage = Loadable(lazy(() => import('@/views/auth/unauthorized')))
-const RateLimitedPage = Loadable(lazy(() => import('@/views/auth/rateLimited')))
 const OrganizationSetupPage = Loadable(lazy(() => import('@/views/organization/index')))
 const LicenseExpiredPage = Loadable(lazy(() => import('@/views/auth/expired')))
@@ -46,10 +45,6 @@ const AuthRoutes = {
 path: '/unauthorized',
 element: <UnauthorizedPage />
 },
-{
-path: '/rate-limited',
-element: <RateLimitedPage />
-},
 {
 path: '/organization-setup',
 element: <OrganizationSetupPage />

View File

@@ -10,29 +10,11 @@ const ErrorContext = createContext()
 export const ErrorProvider = ({ children }) => {
 const [error, setError] = useState(null)
-const [authRateLimitError, setAuthRateLimitError] = useState(null)
 const navigate = useNavigate()
 const handleError = async (err) => {
 console.error(err)
-if (err?.response?.status === 429 && err?.response?.data?.type === 'authentication_rate_limit') {
-setAuthRateLimitError("You're making a lot of requests. Please wait and try again later.")
-} else if (err?.response?.status === 429 && err?.response?.data?.type !== 'authentication_rate_limit') {
-const retryAfterHeader = err?.response?.headers?.['retry-after']
-let retryAfter = 60 // Default in seconds
-if (retryAfterHeader) {
-const parsedSeconds = parseInt(retryAfterHeader, 10)
-if (Number.isNaN(parsedSeconds)) {
-const retryDate = new Date(retryAfterHeader)
-if (!Number.isNaN(retryDate.getTime())) {
-retryAfter = Math.max(0, Math.ceil((retryDate.getTime() - Date.now()) / 1000))
-}
-} else {
-retryAfter = parsedSeconds
-}
-}
-navigate('/rate-limited', { state: { retryAfter } })
-} else if (err?.response?.status === 403) {
+if (err?.response?.status === 403) {
 navigate('/unauthorized')
 } else if (err?.response?.status === 401) {
 if (ErrorMessage.INVALID_MISSING_TOKEN === err?.response?.data?.message) {
@@ -62,9 +44,7 @@ export const ErrorProvider = ({ children }) => {
 value={{
 error,
 setError,
-handleError,
-authRateLimitError,
-setAuthRateLimitError
+handleError
 }}
 >
 {children}
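The branch removed above parsed the standard `Retry-After` response header, which per HTTP semantics may carry either delta-seconds or an HTTP-date. Extracted as a standalone function for reference — `parseRetryAfter` is a hypothetical name, and the `nowMs` parameter is added here only to make the date arithmetic testable:

```typescript
// Parse a Retry-After header value into seconds, defaulting to 60.
// The header may carry delta-seconds (e.g. "120") or an HTTP-date.
function parseRetryAfter(header: string | undefined, nowMs: number = Date.now()): number {
    let retryAfter = 60 // default in seconds
    if (header) {
        const parsedSeconds = parseInt(header, 10)
        if (Number.isNaN(parsedSeconds)) {
            // Not a number: try to interpret the value as an HTTP-date
            const retryDate = new Date(header)
            if (!Number.isNaN(retryDate.getTime())) {
                retryAfter = Math.max(0, Math.ceil((retryDate.getTime() - nowMs) / 1000))
            }
        } else {
            retryAfter = parsedSeconds
        }
    }
    return retryAfter
}
```

The diff deletes this handling along with the `/rate-limited` route, so 429 responses now fall through to the generic error paths.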

View File

@@ -74,7 +74,7 @@ const StyledMenu = styled((props) => (
 }
 }))
-export default function FlowListMenu({ chatflow, isAgentCanvas, isAgentflowV2, setError, updateFlowsApi, currentPage, pageLimit }) {
+export default function FlowListMenu({ chatflow, isAgentCanvas, isAgentflowV2, setError, updateFlowsApi }) {
 const { confirm } = useConfirm()
 const dispatch = useDispatch()
 const updateChatflowApi = useApi(chatflowsApi.updateChatflow)
@@ -166,16 +166,10 @@ export default function FlowListMenu({ chatflow, isAgentCanvas, isAgentflowV2, s
 }
 try {
 await updateChatflowApi.request(chatflow.id, updateBody)
-const params = {
-page: currentPage,
-limit: pageLimit
-}
 if (isAgentCanvas && isAgentflowV2) {
-await updateFlowsApi.request('AGENTFLOW', params)
-} else if (isAgentCanvas) {
-await updateFlowsApi.request('MULTIAGENT', params)
+await updateFlowsApi.request('AGENTFLOW')
 } else {
-await updateFlowsApi.request(params)
+await updateFlowsApi.request(isAgentCanvas ? 'MULTIAGENT' : undefined)
 }
 } catch (error) {
 if (setError) setError(error)
@@ -215,15 +209,7 @@ export default function FlowListMenu({ chatflow, isAgentCanvas, isAgentflowV2, s
 }
 try {
 await updateChatflowApi.request(chatflow.id, updateBody)
-const params = {
-page: currentPage,
-limit: pageLimit
-}
-if (isAgentCanvas) {
-await updateFlowsApi.request('AGENTFLOW', params)
-} else {
-await updateFlowsApi.request(params)
-}
+await updateFlowsApi.request(isAgentCanvas ? 'AGENTFLOW' : undefined)
 } catch (error) {
 if (setError) setError(error)
 enqueueSnackbar({
@@ -255,16 +241,10 @@ export default function FlowListMenu({ chatflow, isAgentCanvas, isAgentflowV2, s
 if (isConfirmed) {
 try {
 await chatflowsApi.deleteChatflow(chatflow.id)
-const params = {
-page: currentPage,
-limit: pageLimit
-}
 if (isAgentCanvas && isAgentflowV2) {
-await updateFlowsApi.request('AGENTFLOW', params)
-} else if (isAgentCanvas) {
-await updateFlowsApi.request('MULTIAGENT', params)
+await updateFlowsApi.request('AGENTFLOW')
 } else {
-await updateFlowsApi.request(params)
+await updateFlowsApi.request(isAgentCanvas ? 'MULTIAGENT' : undefined)
 }
 } catch (error) {
 if (setError) setError(error)
@@ -474,7 +454,5 @@ FlowListMenu.propTypes = {
 isAgentCanvas: PropTypes.bool,
 isAgentflowV2: PropTypes.bool,
 setError: PropTypes.func,
-updateFlowsApi: PropTypes.object,
-currentPage: PropTypes.number,
-pageLimit: PropTypes.number
+updateFlowsApi: PropTypes.object
 }

View File

@@ -53,7 +53,8 @@ const CHATFLOW_CONFIGURATION_TABS = [
 },
 {
 label: 'Post Processing',
-id: 'postProcessing'
+id: 'postProcessing',
+hideInAgentFlow: true
 }
 ]

View File

@@ -16,11 +16,11 @@ import { useEditor, EditorContent } from '@tiptap/react'
 import Placeholder from '@tiptap/extension-placeholder'
 import { mergeAttributes } from '@tiptap/core'
 import StarterKit from '@tiptap/starter-kit'
+import Mention from '@tiptap/extension-mention'
 import CodeBlockLowlight from '@tiptap/extension-code-block-lowlight'
 import { common, createLowlight } from 'lowlight'
 import { suggestionOptions } from '@/ui-component/input/suggestionOption'
 import { getAvailableNodesForVariable } from '@/utils/genericHelper'
-import { CustomMention } from '@/utils/customMention'
 const lowlight = createLowlight(common)
@@ -78,7 +78,7 @@ const extensions = (availableNodesForVariable, availableState, acceptNodeOutputA
 StarterKit.configure({
 codeBlock: false
 }),
-CustomMention.configure({
+Mention.configure({
 HTMLAttributes: {
 class: 'variable'
 },

View File

@@ -4,25 +4,8 @@ import PropTypes from 'prop-types'
 import { useSelector } from 'react-redux'
 // material-ui
-import {
-IconButton,
-Button,
-Box,
-Typography,
-TableContainer,
-Table,
-TableHead,
-TableBody,
-TableRow,
-TableCell,
-Paper,
-Accordion,
-AccordionSummary,
-AccordionDetails,
-Card
-} from '@mui/material'
-import { IconArrowsMaximize, IconX } from '@tabler/icons-react'
-import ExpandMoreIcon from '@mui/icons-material/ExpandMore'
+import { IconButton, Button, Box, Typography } from '@mui/material'
+import { IconArrowsMaximize, IconBulb, IconX } from '@tabler/icons-react'
 import { useTheme } from '@mui/material/styles'
 // Project import
@@ -38,11 +21,7 @@ import useNotifier from '@/utils/useNotifier'
 // API
 import chatflowsApi from '@/api/chatflows'
-const sampleFunction = `// Access chat history as a string
-const chatHistory = JSON.stringify($flow.chatHistory, null, 2);
-// Return a modified response
-return $flow.rawOutput + " This is a post processed response!";`
+const sampleFunction = `return $flow.rawOutput + " This is a post processed response!";`
 const PostProcessing = ({ dialogProps }) => {
 const dispatch = useDispatch()
@@ -196,105 +175,31 @@ const PostProcessing = ({ dialogProps }) => {
 />
 </div>
 </Box>
-<Card sx={{ borderColor: theme.palette.primary[200] + 75, mt: 2, mb: 2 }} variant='outlined'>
-<Accordion
-disableGutters
-sx={{
-'&:before': {
-display: 'none'
-}
-}}
->
-<AccordionSummary expandIcon={<ExpandMoreIcon />}>
-<Typography>Available Variables</Typography>
-</AccordionSummary>
-<AccordionDetails sx={{ p: 0 }}>
-<TableContainer component={Paper}>
-<Table aria-label='available variables table'>
-<TableHead>
-<TableRow>
-<TableCell sx={{ width: '30%' }}>Variable</TableCell>
-<TableCell sx={{ width: '15%' }}>Type</TableCell>
-<TableCell sx={{ width: '55%' }}>Description</TableCell>
-</TableRow>
-</TableHead>
-<TableBody>
-<TableRow>
-<TableCell>
-<code>$flow.rawOutput</code>
-</TableCell>
-<TableCell>string</TableCell>
-<TableCell>The raw output response from the flow</TableCell>
-</TableRow>
-<TableRow>
-<TableCell>
-<code>$flow.input</code>
-</TableCell>
-<TableCell>string</TableCell>
-<TableCell>The user input message</TableCell>
-</TableRow>
-<TableRow>
-<TableCell>
-<code>$flow.chatHistory</code>
-</TableCell>
-<TableCell>array</TableCell>
-<TableCell>Array of previous messages in the conversation</TableCell>
-</TableRow>
-<TableRow>
-<TableCell>
-<code>$flow.chatflowId</code>
-</TableCell>
-<TableCell>string</TableCell>
-<TableCell>Unique identifier for the chatflow</TableCell>
-</TableRow>
-<TableRow>
-<TableCell>
-<code>$flow.sessionId</code>
-</TableCell>
-<TableCell>string</TableCell>
-<TableCell>Current session identifier</TableCell>
-</TableRow>
-<TableRow>
-<TableCell>
-<code>$flow.chatId</code>
-</TableCell>
-<TableCell>string</TableCell>
-<TableCell>Current chat identifier</TableCell>
-</TableRow>
-<TableRow>
-<TableCell>
-<code>$flow.sourceDocuments</code>
-</TableCell>
-<TableCell>array</TableCell>
-<TableCell>Source documents used in retrieval (if applicable)</TableCell>
-</TableRow>
-<TableRow>
-<TableCell>
-<code>$flow.usedTools</code>
-</TableCell>
-<TableCell>array</TableCell>
-<TableCell>List of tools used during execution</TableCell>
-</TableRow>
-<TableRow>
-<TableCell>
-<code>$flow.artifacts</code>
-</TableCell>
-<TableCell>array</TableCell>
-<TableCell>List of artifacts generated during execution</TableCell>
-</TableRow>
-<TableRow>
-<TableCell sx={{ borderBottom: 'none' }}>
-<code>$flow.fileAnnotations</code>
-</TableCell>
-<TableCell sx={{ borderBottom: 'none' }}>array</TableCell>
-<TableCell sx={{ borderBottom: 'none' }}>File annotations associated with the response</TableCell>
-</TableRow>
-</TableBody>
-</Table>
-</TableContainer>
-</AccordionDetails>
-</Accordion>
-</Card>
+<div
+style={{
+display: 'flex',
+flexDirection: 'column',
+borderRadius: 10,
+background: '#d8f3dc',
+padding: 10,
+marginTop: 10
+}}
+>
+<div
+style={{
+display: 'flex',
+flexDirection: 'row',
+alignItems: 'center',
+paddingTop: 10
+}}
+>
+<IconBulb size={30} color='#2d6a4f' />
+<span style={{ color: '#2d6a4f', marginLeft: 10, fontWeight: 500 }}>
+The following variables are available to use in the custom function:{' '}
+<pre>$flow.rawOutput, $flow.input, $flow.chatflowId, $flow.sessionId, $flow.chatId</pre>
+</span>
+</div>
+</div>
 <StyledButton
 style={{ marginBottom: 10, marginTop: 10 }}
 variant='contained'

View File

@@ -7,11 +7,11 @@ import { mergeAttributes } from '@tiptap/core'
import StarterKit from '@tiptap/starter-kit'
import { styled } from '@mui/material/styles'
import { Box } from '@mui/material'
+import Mention from '@tiptap/extension-mention'
import CodeBlockLowlight from '@tiptap/extension-code-block-lowlight'
import { common, createLowlight } from 'lowlight'
import { suggestionOptions } from './suggestionOption'
import { getAvailableNodesForVariable } from '@/utils/genericHelper'
-import { CustomMention } from '@/utils/customMention'
const lowlight = createLowlight(common)
@@ -20,7 +20,7 @@ const extensions = (availableNodesForVariable, availableState, acceptNodeOutputA
StarterKit.configure({
codeBlock: false
}),
-CustomMention.configure({
+Mention.configure({
HTMLAttributes: {
class: 'variable'
},

View File

@@ -112,7 +112,7 @@ export const suggestionOptions = (
category: 'Node Outputs'
})
-const structuredOutputs = nodeData?.inputs?.llmStructuredOutput ?? nodeData?.inputs?.agentStructuredOutput ?? []
+const structuredOutputs = nodeData?.inputs?.llmStructuredOutput ?? []
if (structuredOutputs && structuredOutputs.length > 0) {
structuredOutputs.forEach((item) => {
defaultItems.unshift({

View File

@@ -59,9 +59,7 @@ export const FlowListTable = ({
updateFlowsApi,
setError,
isAgentCanvas,
-isAgentflowV2,
-currentPage,
-pageLimit
+isAgentflowV2
}) => {
const { hasPermission } = useAuth()
const isActionsAvailable = isAgentCanvas
@@ -333,8 +331,6 @@
chatflow={row}
setError={setError}
updateFlowsApi={updateFlowsApi}
-currentPage={currentPage}
-pageLimit={pageLimit}
/>
</Stack>
</StyledTableCell>
@@ -359,7 +355,5 @@ FlowListTable.propTypes = {
updateFlowsApi: PropTypes.object,
setError: PropTypes.func,
isAgentCanvas: PropTypes.bool,
-isAgentflowV2: PropTypes.bool,
-currentPage: PropTypes.number,
-pageLimit: PropTypes.number
+isAgentflowV2: PropTypes.bool
}

View File

@@ -1,26 +0,0 @@
import Mention from '@tiptap/extension-mention'
import { PasteRule } from '@tiptap/core'
export const CustomMention = Mention.extend({
renderText({ node }) {
return `{{${node.attrs.label ?? node.attrs.id}}}`
},
addPasteRules() {
return [
new PasteRule({
find: /\{\{([^{}]+)\}\}/g,
handler: ({ match, chain, range }) => {
const label = match[1].trim()
if (label) {
chain()
.deleteRange(range)
.insertContentAt(range.from, {
type: this.name,
attrs: { id: label, label: label }
})
}
}
})
]
}
})

View File

@@ -325,8 +325,6 @@ const Agentflows = () => {
filterFunction={filterFlows}
updateFlowsApi={getAllAgentflows}
setError={setError}
-currentPage={currentPage}
-pageLimit={pageLimit}
/>
)}
{/* Pagination and Page Size Controls */}

View File

@@ -150,8 +150,6 @@ const AgentFlowNode = ({ data }) => {
return <IconWorldWww size={14} color={'white'} />
case 'googleSearch':
return <IconBrandGoogle size={14} color={'white'} />
-case 'codeExecution':
-return <IconCode size={14} color={'white'} />
default:
return null
}

View File

@@ -16,7 +16,6 @@ import accountApi from '@/api/account.api'
// Hooks
import useApi from '@/hooks/useApi'
import { useConfig } from '@/store/context/ConfigContext'
-import { useError } from '@/store/context/ErrorContext'
// utils
import useNotifier from '@/utils/useNotifier'
@@ -42,13 +41,10 @@ const ForgotPasswordPage = () => {
const [isLoading, setLoading] = useState(false)
const [responseMsg, setResponseMsg] = useState(undefined)
-const { authRateLimitError, setAuthRateLimitError } = useError()
const forgotPasswordApi = useApi(accountApi.forgotPassword)
const sendResetRequest = async (event) => {
event.preventDefault()
-setAuthRateLimitError(null)
const body = {
user: {
email: usernameVal
@@ -58,11 +54,6 @@
await forgotPasswordApi.request(body)
}
-useEffect(() => {
-setAuthRateLimitError(null)
-// eslint-disable-next-line react-hooks/exhaustive-deps
-}, [setAuthRateLimitError])
useEffect(() => {
if (forgotPasswordApi.error) {
const errMessage =
@@ -98,11 +89,6 @@
{responseMsg.msg}
</Alert>
)}
-{authRateLimitError && (
-<Alert icon={<IconExclamationCircle />} variant='filled' severity='error'>
-{authRateLimitError}
-</Alert>
-)}
{responseMsg && responseMsg?.type !== 'error' && (
<Alert icon={<IconCircleCheck />} variant='filled' severity='success'>
{responseMsg.msg}

View File

@@ -1,51 +0,0 @@
import { Box, Button, Stack, Typography } from '@mui/material'
import { Link, useLocation } from 'react-router-dom'
import unauthorizedSVG from '@/assets/images/unauthorized.svg'
import MainCard from '@/ui-component/cards/MainCard'
// ==============================|| RateLimitedPage ||============================== //
const RateLimitedPage = () => {
const location = useLocation()
const retryAfter = location.state?.retryAfter || 60
return (
<MainCard>
<Box
sx={{
display: 'flex',
justifyContent: 'center',
alignItems: 'center',
height: 'calc(100vh - 210px)'
}}
>
<Stack
sx={{
alignItems: 'center',
justifyContent: 'center',
maxWidth: '500px'
}}
flexDirection='column'
>
<Box sx={{ p: 2, height: 'auto' }}>
<img style={{ objectFit: 'cover', height: '20vh', width: 'auto' }} src={unauthorizedSVG} alt='rateLimitedSVG' />
</Box>
<Typography sx={{ mb: 2 }} variant='h4' component='div' fontWeight='bold'>
429 Too Many Requests
</Typography>
<Typography variant='body1' component='div' sx={{ mb: 2, textAlign: 'center' }}>
{`You have made too many requests in a short period of time. Please wait ${retryAfter}s before trying again.`}
</Typography>
<Link to='/'>
<Button variant='contained' color='primary'>
Back to Home
</Button>
</Link>
</Stack>
</Box>
</MainCard>
)
}
export default RateLimitedPage

View File

@@ -18,7 +18,6 @@ import ssoApi from '@/api/sso'
// Hooks
import useApi from '@/hooks/useApi'
import { useConfig } from '@/store/context/ConfigContext'
-import { useError } from '@/store/context/ErrorContext'
// utils
import useNotifier from '@/utils/useNotifier'
@@ -112,9 +111,7 @@
const [loading, setLoading] = useState(false)
const [authError, setAuthError] = useState('')
-const [successMsg, setSuccessMsg] = useState('')
+const [successMsg, setSuccessMsg] = useState(undefined)
-const { authRateLimitError, setAuthRateLimitError } = useError()
const registerApi = useApi(accountApi.registerAccount)
const ssoLoginApi = useApi(ssoApi.ssoLogin)
@@ -123,7 +120,6 @@
const register = async (event) => {
event.preventDefault()
-setAuthRateLimitError(null)
if (isEnterpriseLicensed) {
const result = RegisterEnterpriseUserSchema.safeParse({
username,
@@ -196,7 +192,6 @@
}, [registerApi.error])
useEffect(() => {
-setAuthRateLimitError(null)
if (!isOpenSource) {
getDefaultProvidersApi.request()
}
@@ -279,11 +274,6 @@
)}
</Alert>
)}
-{authRateLimitError && (
-<Alert icon={<IconExclamationCircle />} variant='filled' severity='error'>
-{authRateLimitError}
-</Alert>
-)}
{successMsg && (
<Alert icon={<IconCircleCheck />} variant='filled' severity='success'>
{successMsg}

View File

@@ -1,4 +1,4 @@
-import { useEffect, useState } from 'react'
+import { useState } from 'react'
import { useDispatch } from 'react-redux'
import { Link, useNavigate, useSearchParams } from 'react-router-dom'
@@ -19,9 +19,6 @@
import useNotifier from '@/utils/useNotifier'
import { validatePassword } from '@/utils/validation'
-// Hooks
-import { useError } from '@/store/context/ErrorContext'
// Icons
import { IconExclamationCircle, IconX } from '@tabler/icons-react'
@@ -73,8 +70,6 @@
const [loading, setLoading] = useState(false)
const [authErrors, setAuthErrors] = useState([])
-const { authRateLimitError, setAuthRateLimitError } = useError()
const goLogin = () => {
navigate('/signin', { replace: true })
}
@@ -83,7 +78,6 @@
event.preventDefault()
const validationErrors = []
setAuthErrors([])
-setAuthRateLimitError(null)
if (!tokenVal) {
validationErrors.push('Token cannot be left blank!')
}
@@ -148,11 +142,6 @@
}
}
-useEffect(() => {
-setAuthRateLimitError(null)
-// eslint-disable-next-line react-hooks/exhaustive-deps
-}, [])
return (
<>
<MainCard>
@@ -166,11 +155,6 @@
</ul>
</Alert>
)}
-{authRateLimitError && (
-<Alert icon={<IconExclamationCircle />} variant='filled' severity='error'>
-{authRateLimitError}
-</Alert>
-)}
<Stack sx={{ gap: 1 }}>
<Typography variant='h1'>Reset Password</Typography>
<Typography variant='body2' sx={{ color: theme.palette.grey[600] }}>

View File

@@ -14,7 +14,6 @@ import { Input } from '@/ui-component/input/Input'
// Hooks
import useApi from '@/hooks/useApi'
import { useConfig } from '@/store/context/ConfigContext'
-import { useError } from '@/store/context/ErrorContext'
// API
import authApi from '@/api/auth'
@@ -63,8 +62,6 @@
const [showResendButton, setShowResendButton] = useState(false)
const [successMessage, setSuccessMessage] = useState('')
-const { authRateLimitError, setAuthRateLimitError } = useError()
const loginApi = useApi(authApi.login)
const ssoLoginApi = useApi(ssoApi.ssoLogin)
const getDefaultProvidersApi = useApi(loginMethodApi.getDefaultLoginMethods)
@@ -74,7 +71,6 @@
const doLogin = (event) => {
event.preventDefault()
-setAuthRateLimitError(null)
setLoading(true)
const body = {
email: usernameVal,
@@ -96,12 +92,11 @@
useEffect(() => {
store.dispatch(logoutSuccess())
-setAuthRateLimitError(null)
if (!isOpenSource) {
getDefaultProvidersApi.request()
}
// eslint-disable-next-line react-hooks/exhaustive-deps
-}, [setAuthRateLimitError, isOpenSource])
+}, [])
useEffect(() => {
// Parse the "user" query parameter from the URL
@@ -184,11 +179,6 @@
{successMessage}
</Alert>
)}
-{authRateLimitError && (
-<Alert icon={<IconExclamationCircle />} variant='filled' severity='error'>
-{authRateLimitError}
-</Alert>
-)}
{authError && (
<Alert icon={<IconExclamationCircle />} variant='filled' severity='error'>
{authError}

View File

@@ -208,8 +208,6 @@ const Chatflows = () => {
filterFunction={filterFlows}
updateFlowsApi={getAllChatflowsApi}
setError={setError}
-currentPage={currentPage}
-pageLimit={pageLimit}
/>
)}
{/* Pagination and Page Size Controls */}

View File

@@ -18,15 +18,11 @@ import {
TableContainer,
TableRow,
TableCell,
-DialogActions,
-Card,
-Stack,
-Link
+Checkbox,
+FormControlLabel,
+DialogActions
} from '@mui/material'
-import { useTheme } from '@mui/material/styles'
import ExpandMoreIcon from '@mui/icons-material/ExpandMore'
-import SettingsIcon from '@mui/icons-material/Settings'
-import { IconAlertTriangle } from '@tabler/icons-react'
import { TableViewOnly } from '@/ui-component/table/Table'
import { v4 as uuidv4 } from 'uuid'
@@ -40,13 +36,12 @@ import { initNode } from '@/utils/genericHelper'
const DeleteDocStoreDialog = ({ show, dialogProps, onCancel, onDelete }) => {
const portalElement = document.getElementById('portal')
-const theme = useTheme()
const [nodeConfigExpanded, setNodeConfigExpanded] = useState({})
+const [removeFromVS, setRemoveFromVS] = useState(false)
const [vsFlowData, setVSFlowData] = useState([])
const [rmFlowData, setRMFlowData] = useState([])
-const getVectorStoreNodeApi = useApi(nodesApi.getSpecificNode)
-const getRecordManagerNodeApi = useApi(nodesApi.getSpecificNode)
+const getSpecificNodeApi = useApi(nodesApi.getSpecificNode)
const handleAccordionChange = (nodeName) => (event, isExpanded) => {
const accordianNodes = { ...nodeConfigExpanded }
@@ -57,37 +52,42 @@
useEffect(() => {
if (dialogProps.recordManagerConfig) {
const nodeName = dialogProps.recordManagerConfig.name
-if (nodeName) getRecordManagerNodeApi.request(nodeName)
+if (nodeName) getSpecificNodeApi.request(nodeName)
}
if (dialogProps.vectorStoreConfig) {
const nodeName = dialogProps.vectorStoreConfig.name
-if (nodeName) getVectorStoreNodeApi.request(nodeName)
+if (nodeName) getSpecificNodeApi.request(nodeName)
}
}
return () => {
setNodeConfigExpanded({})
+setRemoveFromVS(false)
setVSFlowData([])
setRMFlowData([])
}
// eslint-disable-next-line react-hooks/exhaustive-deps
}, [dialogProps])
-// Process Vector Store node data
useEffect(() => {
-if (getVectorStoreNodeApi.data && dialogProps.vectorStoreConfig) {
+if (getSpecificNodeApi.data) {
-const nodeData = cloneDeep(initNode(getVectorStoreNodeApi.data, uuidv4()))
+const nodeData = cloneDeep(initNode(getSpecificNodeApi.data, uuidv4()))
+let config = 'vectorStoreConfig'
+if (nodeData.category === 'Record Manager') config = 'recordManagerConfig'
const paramValues = []
-for (const inputName in dialogProps.vectorStoreConfig.config) {
+for (const inputName in dialogProps[config].config) {
const inputParam = nodeData.inputParams.find((inp) => inp.name === inputName)
if (!inputParam) continue
if (inputParam.type === 'credential') continue
-const inputValue = dialogProps.vectorStoreConfig.config[inputName]
+let paramValue = {}
+const inputValue = dialogProps[config].config[inputName]
if (!inputValue) continue
@@ -95,71 +95,40 @@
continue
}
-paramValues.push({
+paramValue = {
label: inputParam?.label,
name: inputParam?.name,
type: inputParam?.type,
value: inputValue
-})
+}
+paramValues.push(paramValue)
}
-setVSFlowData([
-{
-label: nodeData.label,
-name: nodeData.name,
-category: nodeData.category,
-id: nodeData.id,
-paramValues
-}
-])
+if (config === 'vectorStoreConfig') {
+setVSFlowData([
+{
+label: nodeData.label,
+name: nodeData.name,
+category: nodeData.category,
+id: nodeData.id,
+paramValues
+}
+])
+} else if (config === 'recordManagerConfig') {
+setRMFlowData([
+{
+label: nodeData.label,
+name: nodeData.name,
+category: nodeData.category,
+id: nodeData.id,
+paramValues
+}
+])
+}
}
// eslint-disable-next-line react-hooks/exhaustive-deps
-}, [getVectorStoreNodeApi.data])
+}, [getSpecificNodeApi.data])
// Process Record Manager node data
useEffect(() => {
if (getRecordManagerNodeApi.data && dialogProps.recordManagerConfig) {
const nodeData = cloneDeep(initNode(getRecordManagerNodeApi.data, uuidv4()))
const paramValues = []
for (const inputName in dialogProps.recordManagerConfig.config) {
const inputParam = nodeData.inputParams.find((inp) => inp.name === inputName)
if (!inputParam) continue
if (inputParam.type === 'credential') continue
const inputValue = dialogProps.recordManagerConfig.config[inputName]
if (!inputValue) continue
if (typeof inputValue === 'string' && inputValue.startsWith('{{') && inputValue.endsWith('}}')) {
continue
}
paramValues.push({
label: inputParam?.label,
name: inputParam?.name,
type: inputParam?.type,
value: inputValue
})
}
setRMFlowData([
{
label: nodeData.label,
name: nodeData.name,
category: nodeData.category,
id: nodeData.id,
paramValues
}
])
}
// eslint-disable-next-line react-hooks/exhaustive-deps
}, [getRecordManagerNodeApi.data])
const component = show ? (
<Dialog
@@ -173,130 +142,91 @@
<DialogTitle sx={{ fontSize: '1rem', p: 3, pb: 0 }} id='alert-dialog-title'>
{dialogProps.title}
</DialogTitle>
-<DialogContent
+<DialogContent sx={{ display: 'flex', flexDirection: 'column', gap: 2, maxHeight: '75vh', position: 'relative', px: 3, pb: 3 }}>
sx={{
display: 'flex',
flexDirection: 'column',
gap: 2,
maxHeight: '75vh',
position: 'relative',
px: 3,
pb: 3,
overflow: 'auto'
}}
>
<span style={{ marginTop: '20px' }}>{dialogProps.description}</span>
-{dialogProps.vectorStoreConfig && !dialogProps.recordManagerConfig && (
-<div
-style={{
-display: 'flex',
-flexDirection: 'row',
+{dialogProps.type === 'STORE' && dialogProps.recordManagerConfig && (
+<FormControlLabel
+control={<Checkbox checked={removeFromVS} onChange={(event) => setRemoveFromVS(event.target.checked)} />}
+label='Remove data from vector store and record manager'
+/>
alignItems: 'center',
borderRadius: 10,
background: 'rgb(254,252,191)',
padding: 10
}}
>
<IconAlertTriangle size={70} color='orange' />
<span style={{ color: 'rgb(116,66,16)', marginLeft: 10 }}>
<strong>Note:</strong> Without a Record Manager configured, only the document chunks will be removed from the
document store. The actual vector embeddings in your vector store database will remain unchanged. To enable
automatic cleanup of vector store data, please configure a Record Manager.{' '}
<Link
href='https://docs.flowiseai.com/integrations/langchain/record-managers'
target='_blank'
rel='noopener noreferrer'
sx={{ fontWeight: 500, color: 'rgb(116,66,16)', textDecoration: 'underline' }}
>
Learn more
</Link>
</span>
</div>
)}
{vsFlowData && vsFlowData.length > 0 && rmFlowData && rmFlowData.length > 0 && ( {removeFromVS && (
<Card sx={{ borderColor: theme.palette.primary[200] + 75, p: 2 }} variant='outlined'> <div>
<Stack sx={{ mt: 1, mb: 2, ml: 1, alignItems: 'center' }} direction='row' spacing={2}> <TableContainer component={Paper}>
<SettingsIcon /> <Table sx={{ minWidth: 650 }} aria-label='simple table'>
<Typography variant='h4'>Configuration</Typography> <TableBody>
</Stack> <TableRow sx={{ '& td': { border: 0 } }}>
<Stack direction='column'> <TableCell sx={{ pb: 0, pt: 0 }} colSpan={6}>
<TableContainer component={Paper} sx={{ maxHeight: '400px', overflow: 'auto' }}> <Box>
<Table sx={{ minWidth: 650 }} aria-label='simple table'> {([...vsFlowData, ...rmFlowData] || []).map((node, index) => {
<TableBody> return (
<TableRow sx={{ '& td': { border: 0 } }}> <Accordion
<TableCell sx={{ pb: 0, pt: 0 }} colSpan={6}> expanded={nodeConfigExpanded[node.name] || true}
<Box> onChange={handleAccordionChange(node.name)}
{([...vsFlowData, ...rmFlowData] || []).map((node, index) => { key={index}
return ( disableGutters
<Accordion >
expanded={nodeConfigExpanded[node.name] || false} <AccordionSummary
onChange={handleAccordionChange(node.name)} expandIcon={<ExpandMoreIcon />}
key={index} aria-controls={`nodes-accordian-${node.name}`}
disableGutters id={`nodes-accordian-header-${node.name}`}
> >
<AccordionSummary <div
expandIcon={<ExpandMoreIcon />} style={{ display: 'flex', flexDirection: 'row', alignItems: 'center' }}
aria-controls={`nodes-accordian-${node.name}`}
id={`nodes-accordian-header-${node.name}`}
> >
<div <div
style={{ style={{
display: 'flex', width: 40,
flexDirection: 'row', height: 40,
alignItems: 'center' marginRight: 10,
borderRadius: '50%',
backgroundColor: 'white'
}} }}
> >
<div <img
style={{ style={{
width: 40, width: '100%',
height: 40, height: '100%',
marginRight: 10, padding: 7,
borderRadius: '50%', borderRadius: '50%',
backgroundColor: 'white' objectFit: 'contain'
}} }}
> alt={node.name}
<img src={`${baseURL}/api/v1/node-icon/${node.name}`}
style={{
width: '100%',
height: '100%',
padding: 7,
borderRadius: '50%',
objectFit: 'contain'
}}
alt={node.name}
src={`${baseURL}/api/v1/node-icon/${node.name}`}
/>
</div>
<Typography variant='h5'>{node.label}</Typography>
</div>
</AccordionSummary>
<AccordionDetails sx={{ p: 0 }}>
{node.paramValues[0] && (
<TableViewOnly
sx={{ minWidth: 150 }}
rows={node.paramValues}
columns={Object.keys(node.paramValues[0])}
/> />
)} </div>
</AccordionDetails> <Typography variant='h5'>{node.label}</Typography>
</Accordion> </div>
) </AccordionSummary>
})} <AccordionDetails>
</Box> {node.paramValues[0] && (
</TableCell> <TableViewOnly
</TableRow> sx={{ minWidth: 150 }}
</TableBody> rows={node.paramValues}
</Table> columns={Object.keys(node.paramValues[0])}
</TableContainer> />
</Stack> )}
</Card> </AccordionDetails>
</Accordion>
)
})}
</Box>
</TableCell>
</TableRow>
</TableBody>
</Table>
</TableContainer>
<span style={{ marginTop: '30px', fontStyle: 'italic', color: '#b35702' }}>
* Only data that were upserted with Record Manager will be deleted from vector store
</span>
</div>
)} )}
</DialogContent>
<DialogActions sx={{ pr: 3, pb: 3 }}>
<Button onClick={onCancel} color='primary'>
Cancel
</Button>
-<Button variant='contained' onClick={() => onDelete(dialogProps.type, dialogProps.file)} color='error'>
+<Button variant='contained' onClick={() => onDelete(dialogProps.type, dialogProps.file, removeFromVS)} color='error'>
Delete
</Button>
</DialogActions>

View File

@@ -186,19 +186,19 @@ const DocumentStoreDetails = () => {
setShowDocumentLoaderListDialog(true)
}
-const deleteVectorStoreDataFromStore = async (storeId, docId) => {
+const deleteVectorStoreDataFromStore = async (storeId) => {
try {
-await documentsApi.deleteVectorStoreDataFromStore(storeId, docId)
+await documentsApi.deleteVectorStoreDataFromStore(storeId)
} catch (error) {
console.error(error)
}
}
-const onDocStoreDelete = async (type, file) => {
+const onDocStoreDelete = async (type, file, removeFromVectorStore) => {
setBackdropLoading(true)
setShowDeleteDocStoreDialog(false)
if (type === 'STORE') {
-if (documentStore.recordManagerConfig) {
+if (removeFromVectorStore) {
await deleteVectorStoreDataFromStore(storeId)
}
try {
@@ -239,9 +239,6 @@
})
}
} else if (type === 'LOADER') {
-if (documentStore.recordManagerConfig) {
-await deleteVectorStoreDataFromStore(storeId, file.id)
-}
try {
const deleteResp = await documentsApi.deleteLoaderFromStore(storeId, file.id)
setBackdropLoading(false)
@@ -283,40 +280,9 @@
}
const onLoaderDelete = (file, vectorStoreConfig, recordManagerConfig) => {
// Get the display name in the format "LoaderName (sourceName)"
const loaderName = file.loaderName || 'Unknown'
let sourceName = ''
// Prefer files.name when files array exists and has items
if (file.files && Array.isArray(file.files) && file.files.length > 0) {
sourceName = file.files.map((f) => f.name).join(', ')
} else if (file.source) {
// Fallback to source logic
if (typeof file.source === 'string' && file.source.includes('base64')) {
sourceName = getFileName(file.source)
} else if (typeof file.source === 'string' && file.source.startsWith('[') && file.source.endsWith(']')) {
sourceName = JSON.parse(file.source).join(', ')
} else if (typeof file.source === 'string') {
sourceName = file.source
}
}
const displayName = sourceName ? `${loaderName} (${sourceName})` : loaderName
let description = `Delete "${displayName}"? This will delete all the associated document chunks from the document store.`
if (
recordManagerConfig &&
vectorStoreConfig &&
Object.keys(recordManagerConfig).length > 0 &&
Object.keys(vectorStoreConfig).length > 0
) {
description = `Delete "${displayName}"? This will delete all the associated document chunks from the document store and remove the actual data from the vector store database.`
}
const props = {
title: `Delete`,
-description,
+description: `Delete Loader ${file.loaderName} ? This will delete all the associated document chunks.`,
vectorStoreConfig,
recordManagerConfig,
type: 'LOADER',
@@ -328,20 +294,9 @@
}
const onStoreDelete = (vectorStoreConfig, recordManagerConfig) => {
let description = `Delete Store ${getSpecificDocumentStore.data?.name}? This will delete all the associated loaders and document chunks from the document store.`
if (
recordManagerConfig &&
vectorStoreConfig &&
Object.keys(recordManagerConfig).length > 0 &&
Object.keys(vectorStoreConfig).length > 0
) {
description = `Delete Store ${getSpecificDocumentStore.data?.name}? This will delete all the associated loaders and document chunks from the document store, and remove the actual data from the vector store database.`
}
const props = {
title: `Delete`,
-description,
+description: `Delete Store ${getSpecificDocumentStore.data?.name} ? This will delete all the associated loaders and document chunks.`,
vectorStoreConfig,
recordManagerConfig,
type: 'STORE'
@@ -526,10 +481,7 @@
>
<MenuItem
disabled={documentStore?.totalChunks <= 0 || documentStore?.status === 'UPSERTING'}
-onClick={() => {
-handleClose()
-showStoredChunks('all')
-}}
+onClick={() => showStoredChunks('all')}
disableRipple
>
<FileChunksIcon />
@@ -538,10 +490,7 @@
<Available permission={'documentStores:upsert-config'}>
<MenuItem
disabled={documentStore?.totalChunks <= 0 || documentStore?.status === 'UPSERTING'}
-onClick={() => {
-handleClose()
-showVectorStore(documentStore.id)
-}}
+onClick={() => showVectorStore(documentStore.id)}
disableRipple
>
<NoteAddIcon />
@@ -550,10 +499,7 @@
</Available>
<MenuItem
disabled={documentStore?.totalChunks <= 0 || documentStore?.status !== 'UPSERTED'}
-onClick={() => {
-handleClose()
-showVectorStoreQuery(documentStore.id)
-}}
+onClick={() => showVectorStoreQuery(documentStore.id)}
disableRipple
>
<SearchIcon />
@ -572,10 +518,7 @@ const DocumentStoreDetails = () => {
</Available> </Available>
<Divider sx={{ my: 0.5 }} /> <Divider sx={{ my: 0.5 }} />
<MenuItem <MenuItem
onClick={() => { onClick={() => onStoreDelete(documentStore.vectorStoreConfig, documentStore.recordManagerConfig)}
handleClose()
onStoreDelete(documentStore.vectorStoreConfig, documentStore.recordManagerConfig)
}}
disableRipple disableRipple
> >
<FileDeleteIcon /> <FileDeleteIcon />
@@ -813,26 +756,20 @@ function LoaderRow(props) {
     setAnchorEl(null)
 }
-const formatSources = (files, source, loaderName) => {
-    let sourceName = ''
+const formatSources = (files, source) => {
     // Prefer files.name when files array exists and has items
     if (files && Array.isArray(files) && files.length > 0) {
-        sourceName = files.map((file) => file.name).join(', ')
-    } else if (source && typeof source === 'string' && source.includes('base64')) {
-        // Fallback to original source logic
-        sourceName = getFileName(source)
-    } else if (source && typeof source === 'string' && source.startsWith('[') && source.endsWith(']')) {
-        sourceName = JSON.parse(source).join(', ')
-    } else if (source) {
-        sourceName = source
+        return files.map((file) => file.name).join(', ')
     }
-    // Return format: "LoaderName (sourceName)" or just "LoaderName" if no source
-    if (!sourceName) {
-        return loaderName || 'No source'
+    // Fallback to original source logic
+    if (source && typeof source === 'string' && source.includes('base64')) {
+        return getFileName(source)
     }
-    return loaderName ? `${loaderName} (${sourceName})` : sourceName
+    if (source && typeof source === 'string' && source.startsWith('[') && source.endsWith(']')) {
+        return JSON.parse(source).join(', ')
+    }
+    return source || 'No source'
 }
 return (
@@ -886,62 +823,32 @@ function LoaderRow(props) {
     onClose={handleClose}
 >
 <Available permission={'documentStores:preview-process'}>
-    <MenuItem
-        onClick={() => {
-            handleClose()
-            props.onEditClick()
-        }}
-        disableRipple
-    >
+    <MenuItem onClick={props.onEditClick} disableRipple>
         <FileEditIcon />
         Preview & Process
     </MenuItem>
 </Available>
 <Available permission={'documentStores:preview-process'}>
-    <MenuItem
-        onClick={() => {
-            handleClose()
-            props.onViewChunksClick()
-        }}
-        disableRipple
-    >
+    <MenuItem onClick={props.onViewChunksClick} disableRipple>
         <FileChunksIcon />
         View & Edit Chunks
     </MenuItem>
 </Available>
 <Available permission={'documentStores:preview-process'}>
-    <MenuItem
-        onClick={() => {
-            handleClose()
-            props.onChunkUpsert()
-        }}
-        disableRipple
-    >
+    <MenuItem onClick={props.onChunkUpsert} disableRipple>
         <NoteAddIcon />
         Upsert Chunks
     </MenuItem>
 </Available>
 <Available permission={'documentStores:preview-process'}>
-    <MenuItem
-        onClick={() => {
-            handleClose()
-            props.onViewUpsertAPI()
-        }}
-        disableRipple
-    >
+    <MenuItem onClick={props.onViewUpsertAPI} disableRipple>
         <CodeIcon />
         View API
     </MenuItem>
 </Available>
 <Divider sx={{ my: 0.5 }} />
 <Available permission={'documentStores:delete-loader'}>
-    <MenuItem
-        onClick={() => {
-            handleClose()
-            props.onDeleteClick()
-        }}
-        disableRipple
-    >
+    <MenuItem onClick={props.onDeleteClick} disableRipple>
         <FileDeleteIcon />
         Delete
     </MenuItem>
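The simplified `formatSources` helper in the LoaderRow diff above can be exercised on its own. A minimal sketch of the retained (new-side) behavior, assuming a stubbed `getFileName` — the real implementation lives in `@/utils/genericHelper` and understands Flowise's base64 file encoding:

```javascript
// Hypothetical stub for illustration only: pull whatever follows
// "filename:" out of the source string.
const getFileName = (source) => {
    const match = /filename:([^,]+)/.exec(source)
    return match ? match[1] : 'file'
}

// Simplified source formatter: file list first, then base64 payloads,
// then JSON-array strings, then the raw source or a placeholder.
const formatSources = (files, source) => {
    // Prefer files[].name when the loader carries an explicit file list
    if (files && Array.isArray(files) && files.length > 0) {
        return files.map((file) => file.name).join(', ')
    }
    // base64 payloads are reduced to their embedded file name
    if (source && typeof source === 'string' && source.includes('base64')) {
        return getFileName(source)
    }
    // a JSON-array string becomes a comma-separated list
    if (source && typeof source === 'string' && source.startsWith('[') && source.endsWith(']')) {
        return JSON.parse(source).join(', ')
    }
    return source || 'No source'
}
```

Note the behavioral change versus the deleted version: the loader name is no longer prefixed, so the row shows only the source itself.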
@@ -26,7 +26,6 @@ import useApi from '@/hooks/useApi'
 import useConfirm from '@/hooks/useConfirm'
 import useNotifier from '@/utils/useNotifier'
 import { useAuth } from '@/hooks/useAuth'
-import { getFileName } from '@/utils/genericHelper'
 // store
 import { closeSnackbar as closeSnackbarAction, enqueueSnackbar as enqueueSnackbarAction } from '@/store/actions'
@@ -77,7 +76,6 @@ const ShowStoredChunks = () => {
     const [showExpandedChunkDialog, setShowExpandedChunkDialog] = useState(false)
     const [expandedChunkDialogProps, setExpandedChunkDialogProps] = useState({})
     const [fileNames, setFileNames] = useState([])
-    const [loaderDisplayName, setLoaderDisplayName] = useState('')
     const chunkSelected = (chunkId) => {
         const selectedChunk = documentChunks.find((chunk) => chunk.id === chunkId)
@@ -214,32 +212,13 @@ const ShowStoredChunks = () => {
     setCurrentPage(data.currentPage)
     setStart(data.currentPage * 50 - 49)
     setEnd(data.currentPage * 50 > data.count ? data.count : data.currentPage * 50)
-    // Build the loader display name in format "LoaderName (sourceName)"
-    const loaderName = data.file?.loaderName || data.storeName || ''
-    let sourceName = ''
     if (data.file?.files && data.file.files.length > 0) {
         const fileNames = []
         for (const attachedFile of data.file.files) {
             fileNames.push(attachedFile.name)
         }
         setFileNames(fileNames)
-        sourceName = fileNames.join(', ')
-    } else if (data.file?.source) {
-        const source = data.file.source
-        if (typeof source === 'string' && source.includes('base64')) {
-            sourceName = getFileName(source)
-        } else if (typeof source === 'string' && source.startsWith('[') && source.endsWith(']')) {
-            sourceName = JSON.parse(source).join(', ')
-        } else if (typeof source === 'string') {
-            sourceName = source
-        }
     }
-    // Set display name in format "LoaderName (sourceName)" or just "LoaderName"
-    const displayName = sourceName ? `${loaderName} (${sourceName})` : loaderName
-    setLoaderDisplayName(displayName)
 }
 // eslint-disable-next-line react-hooks/exhaustive-deps
@@ -255,7 +234,7 @@ const ShowStoredChunks = () => {
 <ViewHeader
     isBackButton={true}
     search={false}
-    title={loaderDisplayName}
+    title={getChunksApi.data?.file?.loaderName || getChunksApi.data?.storeName}
     description={getChunksApi.data?.file?.splitterName || getChunksApi.data?.description}
     onBack={() => navigate(-1)}
 ></ViewHeader>
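The `setStart`/`setEnd` context lines above encode ShowStoredChunks' fixed 50-per-page window. A sketch of that arithmetic, with the hard-coded 50 lifted into a named constant for clarity (the component itself inlines the number):

```javascript
// Fixed page size used by the chunk viewer
const PAGE_SIZE = 50

// Compute the 1-based "start - end of count" window for a given page
const pageWindow = (currentPage, count) => {
    const start = currentPage * PAGE_SIZE - (PAGE_SIZE - 1)
    const end = Math.min(currentPage * PAGE_SIZE, count)
    return { start, end }
}
```

For example, page 3 of 120 chunks yields the window 101–120, matching the ternary `data.currentPage * 50 > data.count ? data.count : data.currentPage * 50` in the diff.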
@@ -40,7 +40,7 @@ import Storage from '@mui/icons-material/Storage'
 import DynamicFeed from '@mui/icons-material/Filter1'
 // utils
-import { initNode, showHideInputParams, getFileName } from '@/utils/genericHelper'
+import { initNode, showHideInputParams } from '@/utils/genericHelper'
 import useNotifier from '@/utils/useNotifier'
 // const
@@ -69,7 +69,6 @@ const VectorStoreConfigure = () => {
     const [loading, setLoading] = useState(true)
     const [documentStore, setDocumentStore] = useState({})
     const [dialogProps, setDialogProps] = useState({})
-    const [currentLoader, setCurrentLoader] = useState(null)
     const [showEmbeddingsListDialog, setShowEmbeddingsListDialog] = useState(false)
     const [selectedEmbeddingsProvider, setSelectedEmbeddingsProvider] = useState({})
@@ -246,8 +245,7 @@ const VectorStoreConfigure = () => {
 const prepareConfigData = () => {
     const data = {
         storeId: storeId,
-        docId: docId,
-        isStrictSave: true
+        docId: docId
     }
     // Set embedding config
     if (selectedEmbeddingsProvider.inputs) {
@@ -355,39 +353,6 @@ const VectorStoreConfigure = () => {
     return Object.keys(selectedEmbeddingsProvider).length === 0
 }
-const getLoaderDisplayName = (loader) => {
-    if (!loader) return ''
-    const loaderName = loader.loaderName || 'Unknown'
-    let sourceName = ''
-    // Prefer files.name when files array exists and has items
-    if (loader.files && Array.isArray(loader.files) && loader.files.length > 0) {
-        sourceName = loader.files.map((file) => file.name).join(', ')
-    } else if (loader.source) {
-        // Fallback to source logic
-        if (typeof loader.source === 'string' && loader.source.includes('base64')) {
-            sourceName = getFileName(loader.source)
-        } else if (typeof loader.source === 'string' && loader.source.startsWith('[') && loader.source.endsWith(']')) {
-            sourceName = JSON.parse(loader.source).join(', ')
-        } else if (typeof loader.source === 'string') {
-            sourceName = loader.source
-        }
-    }
-    // Return format: "LoaderName (sourceName)" or just "LoaderName" if no source
-    return sourceName ? `${loaderName} (${sourceName})` : loaderName
-}
-const getViewHeaderTitle = () => {
-    const storeName = getSpecificDocumentStoreApi.data?.name || ''
-    if (docId && currentLoader) {
-        const loaderName = getLoaderDisplayName(currentLoader)
-        return `${storeName} / ${loaderName}`
-    }
-    return storeName
-}
 useEffect(() => {
     if (saveVectorStoreConfigApi.data) {
         setLoading(false)
@@ -446,15 +411,6 @@ const VectorStoreConfigure = () => {
         return
     }
     setDocumentStore(docStore)
-    // Find the current loader if docId is provided
-    if (docId && docStore.loaders) {
-        const loader = docStore.loaders.find((l) => l.id === docId)
-        if (loader) {
-            setCurrentLoader(loader)
-        }
-    }
     if (docStore.embeddingConfig) {
         getEmbeddingNodeDetailsApi.request(docStore.embeddingConfig.name)
     }
@@ -517,7 +473,7 @@ const VectorStoreConfigure = () => {
 <ViewHeader
     isBackButton={true}
     search={false}
-    title={getViewHeaderTitle()}
+    title={getSpecificDocumentStoreApi.data?.name}
     description='Configure Embeddings, Vector Store and Record Manager'
     onBack={() => navigate(-1)}
 >
@@ -21,8 +21,7 @@ import {
     useTheme,
     Typography,
     Button,
-    Drawer,
-    TableSortLabel
+    Drawer
 } from '@mui/material'
 // project imports
@@ -186,8 +185,6 @@ function ShowRoleRow(props) {
     const [openViewPermissionsDrawer, setOpenViewPermissionsDrawer] = useState(false)
     const [selectedRoleId, setSelectedRoleId] = useState('')
     const [assignedUsers, setAssignedUsers] = useState([])
-    const [order, setOrder] = useState('asc')
-    const [orderBy, setOrderBy] = useState('workspace')
     const theme = useTheme()
     const customization = useSelector((state) => state.customization)
@@ -199,38 +196,6 @@ function ShowRoleRow(props) {
     setSelectedRoleId(roleId)
 }
-const handleRequestSort = (property) => {
-    const isAsc = orderBy === property && order === 'asc'
-    setOrder(isAsc ? 'desc' : 'asc')
-    setOrderBy(property)
-}
-const sortedAssignedUsers = [...assignedUsers].sort((a, b) => {
-    let comparison = 0
-    if (orderBy === 'workspace') {
-        const workspaceA = (a.workspace?.name || '').toLowerCase()
-        const workspaceB = (b.workspace?.name || '').toLowerCase()
-        comparison = workspaceA.localeCompare(workspaceB)
-        if (comparison === 0) {
-            const userA = (a.user?.name || a.user?.email || '').toLowerCase()
-            const userB = (b.user?.name || b.user?.email || '').toLowerCase()
-            comparison = userA.localeCompare(userB)
-        }
-    } else if (orderBy === 'user') {
-        const userA = (a.user?.name || a.user?.email || '').toLowerCase()
-        const userB = (b.user?.name || b.user?.email || '').toLowerCase()
-        comparison = userA.localeCompare(userB)
-        if (comparison === 0) {
-            const workspaceA = (a.workspace?.name || '').toLowerCase()
-            const workspaceB = (b.workspace?.name || '').toLowerCase()
-            comparison = workspaceA.localeCompare(workspaceB)
-        }
-    }
-    return order === 'asc' ? comparison : -comparison
-})
 useEffect(() => {
     if (getAllUsersByRoleIdApi.data) {
         setAssignedUsers(getAllUsersByRoleIdApi.data)
@@ -238,14 +203,12 @@ function ShowRoleRow(props) {
 }, [getAllUsersByRoleIdApi.data])
 useEffect(() => {
-    if (openAssignedUsersDrawer && selectedRoleId) {
+    if (open && selectedRoleId) {
         getAllUsersByRoleIdApi.request(selectedRoleId)
     } else {
         setOpenAssignedUsersDrawer(false)
         setSelectedRoleId('')
         setAssignedUsers([])
-        setOrder('asc')
-        setOrderBy('workspace')
     }
     // eslint-disable-next-line react-hooks/exhaustive-deps
 }, [openAssignedUsersDrawer])
@@ -338,28 +301,12 @@ function ShowRoleRow(props) {
     }}
 >
 <TableRow>
-    <StyledTableCell sx={{ width: '50%' }}>
-        <TableSortLabel
-            active={orderBy === 'user'}
-            direction={orderBy === 'user' ? order : 'asc'}
-            onClick={() => handleRequestSort('user')}
-        >
-            User
-        </TableSortLabel>
-    </StyledTableCell>
-    <StyledTableCell sx={{ width: '50%' }}>
-        <TableSortLabel
-            active={orderBy === 'workspace'}
-            direction={orderBy === 'workspace' ? order : 'asc'}
-            onClick={() => handleRequestSort('workspace')}
-        >
-            Workspace
-        </TableSortLabel>
-    </StyledTableCell>
+    <StyledTableCell sx={{ width: '50%' }}>User</StyledTableCell>
+    <StyledTableCell sx={{ width: '50%' }}>Workspace</StyledTableCell>
 </TableRow>
 </TableHead>
 <TableBody>
-    {sortedAssignedUsers.map((item, index) => (
+    {assignedUsers.map((item, index) => (
     <TableRow key={index}>
         <StyledTableCell>{item.user.name || item.user.email}</StyledTableCell>
         <StyledTableCell>{item.workspace.name}</StyledTableCell>
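The `sortedAssignedUsers` block removed above is a two-key, case-insensitive comparator (sort by the active column, tie-break on the other). A condensed sketch of that logic, assuming the same row shape `{ user: { name, email }, workspace: { name } }`:

```javascript
// Compare two assigned-user rows by the active column (orderBy),
// falling back to the other column on ties; flip for descending order.
const compareAssigned = (a, b, orderBy, order) => {
    const userKey = (row) => (row.user?.name || row.user?.email || '').toLowerCase()
    const workspaceKey = (row) => (row.workspace?.name || '').toLowerCase()
    const [primary, secondary] = orderBy === 'user' ? [userKey, workspaceKey] : [workspaceKey, userKey]
    let comparison = primary(a).localeCompare(primary(b))
    if (comparison === 0) comparison = secondary(a).localeCompare(secondary(b))
    return order === 'asc' ? comparison : -comparison
}
```

The deleted component version spread the same comparison over two explicit branches; after this diff the table simply renders `assignedUsers` in API order.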
@@ -245,11 +245,7 @@ const Tools = () => {
     ))}
 </Box>
 ) : (
-    <ToolsTable
-        data={getAllToolsApi.data?.data?.filter(filterTools) || []}
-        isLoading={isLoading}
-        onSelect={edit}
-    />
+    <ToolsTable data={getAllToolsApi.data.data} isLoading={isLoading} onSelect={edit} />
 )}
 {/* Pagination and Page Size Controls */}
 <TablePagination currentPage={currentPage} limit={pageLimit} total={total} onChange={onChange} />
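The Tools hunk above drops a defensive data expression: optional chaining plus an empty-array fallback keeps the table stable while the API response is still `undefined`. A sketch of that pattern (`rowsFor` is a name invented here for illustration):

```javascript
// Return filtered rows, or an empty array when the API data
// has not arrived yet (apiData or apiData.data is undefined).
const rowsFor = (apiData, predicate) => apiData?.data?.filter(predicate) || []
```

The new-side one-liner accesses `getAllToolsApi.data.data` directly, so it relies on the surrounding ternary having already checked that the data exists.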
File diff suppressed because one or more lines are too long